Task-specific machines have been with us for a long time, and sometimes they go wrong. Who is to blame? It is worth pointing out that slaves and employees have been around even longer, so the liability rules here are quite predictable: a machine is just the agent of its master, owner, or user, no matter how “intelligent”.
What’s different about AI?
With AI, the machine develops and refines its own optimal way of performing a function. It does this, of course, within the limits of its available methods of action, the feedback it receives from taking those actions, and the efficiency of its optimisation software.
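A minimal sketch of what that refinement loop looks like may help. Everything here is invented for illustration (the action names, the reward values, the crude average-reward learner); the point is only that the machine's behaviour is bounded by exactly the three things listed above.

```python
import random

# Illustrative only: a machine that refines its own behaviour from feedback.
# Its scope is fixed in advance by three designer choices:
ACTIONS = ["route_a", "route_b", "route_c"]  # 1. its methods of action

def feedback(action):                        # 2. the feedback it receives
    true_value = {"route_a": 0.3, "route_b": 0.7, "route_c": 0.5}
    return true_value[action] + random.gauss(0, 0.1)

# 3. the optimisation software itself (here, a crude average-reward learner)
estimates = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

for step in range(1000):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(estimates, key=estimates.get)
    reward = feedback(action)
    counts[action] += 1
    # Incremental mean: the machine's own refinement of its behaviour.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the behaviour it converged on
```

Nothing outside those three designer-fixed boundaries can emerge, which is why the next step in the argument holds.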
Provided the system is well designed, the scope of potential harms can be worked out in advance. If the design is at fault, the designer is to blame. If the system is misused, the user is to blame.
But
Utility maximisation is not always the social optimum. A self-driving car might find the optimal route, but that route could still be ethically unacceptable: the fastest path might cut through a school zone, say. Where ethical choices are to be made, the designer must ensure that ethical optimisation is built in. Again, the designer is to blame. Being law-abiding may not be sufficient, because there is no comprehensive law covering every such choice.
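One way to read “ethical optimisation is built in” is that the designer's objective function must carry an explicit ethical term, not just raw utility. The sketch below is hypothetical throughout (the route names, the travel times, the penalty values, and the weight are all invented), but it shows how the raw optimum and the designed optimum can differ.

```python
# Illustrative: the fastest route stops being the chosen route once an
# ethical penalty is part of the objective the designer specifies.
routes = {
    "through_school_zone": {"minutes": 12, "ethical_penalty": 100},
    "main_road":           {"minutes": 15, "ethical_penalty": 0},
    "back_streets":        {"minutes": 14, "ethical_penalty": 20},
}

def utility_only(route):
    # Pure utility maximisation: minimise travel time alone.
    return routes[route]["minutes"]

def with_ethics(route, weight=1.0):
    # The designer's objective: travel time plus a weighted ethical cost.
    return routes[route]["minutes"] + weight * routes[route]["ethical_penalty"]

print(min(routes, key=utility_only))  # through_school_zone: the raw optimum
print(min(routes, key=with_ethics))   # main_road: the designer's optimum
```

The choice of penalty values and weight is itself an ethical judgement, which is exactly why the blame stays with the designer.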
But
Given enough power, some AI machines might, in effect, design themselves and therefore exhibit quite unpredictable emergent misbehaviours, perhaps even an ethical system of their own. Society should perhaps have some say in whether self-design is allowable and, if so, how much latitude there should be.