Artificial intelligence and liability insurance.
This short report gives a brief introduction to AI and then sets out some of the liability issues it raises.
AI simulations
Essentially, AI systems are structured in the following way. The computer is supplied with digital information, a set of statistical tools, and a set of purposes that the customer wants to pursue. Once data, methods and purposes are properly coded, the AI generates useful simulations of the data, which can then be used to assess the degree to which new situations meet the stated purposes. Missing data is effectively imputed from the overall model.
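As a purely illustrative sketch of that "data, methods, purposes" structure, the Python fragment below fits a simple statistical model to some invented coded records, imputes missing values from the rest of the data, and scores a new situation against the stated purpose. The library (scikit-learn), the figures and the outcome coding are assumptions made for the example, not a description of any particular AI product.

```python
# Minimal sketch only: data + statistical tools + a coded purpose.
# All figures and column meanings are invented for illustration.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Data: coded historical cases; NaN marks missing values.
X = np.array([[35, 1200, np.nan],
              [62,  800, 3],
              [47, np.nan, 1],
              [29, 1500, 2]])
y = np.array([1, 0, 1, 1])   # Purpose: predict a coded outcome of interest.

# Methods: impute missing data from the overall model, then fit a statistical tool.
model = make_pipeline(SimpleImputer(strategy="mean"), LogisticRegression())
model.fit(X, y)

# A new situation is scored against the stated purpose.
new_case = np.array([[51, np.nan, 2]])
print(model.predict_proba(new_case)[0, 1])   # estimated probability of the outcome
```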
A much simplified example helps set the scene for the discussion of legal liability.
Intended purpose: A liability claims manager wants to identify claims which are suited to making an immediate offer, and those which are best examined more closely.
Process: Having coded ten thousand claims of the relevant sort, the AI generates a statistically weighted set of factors associated with the factual outcome of each claim. There may be many different "best fits" in different parts of the variable space. The AI might be programmed to find the best of these, but might also miss it if directed away by chance or by bias.
Practical application: A new claim form is entered and the AI returns the probability that the claim would succeed if challenged in the usual ways and, in addition, the sort of responses that would maximise the likelihood of a successful defence. The likely saving from each intervention, weighted by the probability of achieving it, is compared with the intervention's likely cost. Some of the financial comparisons lead to very clear recommendations; others need further examination by the experienced claims manager.
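A back-of-the-envelope sketch of that financial comparison follows. The intervention names, probabilities, savings, costs and the decision threshold are all hypothetical, chosen only to show the shape of the calculation.

```python
# Hypothetical triage of a single new claim: which interventions are clearly worthwhile?
interventions = {
    # name: (probability the intervention produces the saving, size of saving, cost)
    "request further medical evidence": (0.30, 20_000, 2_500),
    "challenge causation":              (0.10, 50_000, 8_000),
    "make an immediate offer":          (1.00,      0,     0),
}

for name, (p_saving, saving, cost) in interventions.items():
    expected_net = p_saving * saving - cost          # probability-weighted saving less cost
    verdict = ("clear recommendation" if abs(expected_net) > 5_000
               else "refer to the claims manager")
    print(f"{name}: expected net benefit {expected_net:+,.0f} ({verdict})")
```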
Non-claims information may be added to the mix to see if the predictive power increases. Some of this information may seem irrelevant, e.g. the usual holiday destinations of people in the same post code as the claimant, but you don't know until you try it[1]. Holiday information might just add something to the model.
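One way to test whether such non-claims information adds anything is to compare the model's out-of-sample performance with and without the extra feature. The sketch below does this on randomly generated data with a hypothetical "holiday_code" column; it assumes scikit-learn and is not meant to suggest the feature really is predictive.

```python
# Sketch: does adding a non-claims feature improve predictive power?
# The data are randomly generated; "holiday_code" is a hypothetical extra feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1_000
claims_features = rng.normal(size=(n, 5))                       # coded claims information
holiday_code = rng.integers(0, 10, size=(n, 1)).astype(float)   # the extra feature
y = (claims_features[:, 0] + rng.normal(size=n) > 0).astype(int)

base = cross_val_score(LogisticRegression(), claims_features, y, cv=5).mean()
extended = cross_val_score(LogisticRegression(),
                           np.hstack([claims_features, holiday_code]), y, cv=5).mean()
print(f"accuracy without holiday data: {base:.3f}, with holiday data: {extended:.3f}")
```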
The data may include information on age and race, but the claims manager would have no idea whether these factors had been important. Allegations of discrimination would, at first sight, be deniable.
Can the simulation be “understood”?
The AI simulation is in no way deterministic, but by deliberately and systematically biasing the inputs in small increments, the effect on the AI's outputs can be observed. This systematic biasing (more usually known as sensitivity testing) could be performed by the same AI code or by an external programme. The output graphs from this process then provide some degree of interpretability. For example, it may be found that claim success for long-tail disease in men depends on claimant age, with a rapid transition from high success below the age of 79 to low success at older ages.
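A minimal sketch of such sensitivity testing is given below. The fitted simulation is replaced by a stand-in function with an invented transition at age 79, so the numbers mean nothing in themselves; in practice the real, opaque model would be probed in the same one-variable-at-a-time way and the results plotted.

```python
# Sketch of "systematic incremental biasing" (sensitivity testing) of an opaque model.
import math

def fitted_simulation(age):
    # Stand-in for the real AI simulation: probability of claim success given age.
    # The sharp transition near age 79 is invented purely to mirror the example above.
    return 1.0 / (1.0 + math.exp(0.8 * (age - 79)))

for age in range(60, 96, 5):       # nudge one input while holding everything else fixed
    p = fitted_simulation(age)
    print(f"age {age}: predicted probability of claim success {p:.2f}")
```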
Some risks in this example.
Obviously, the claims manager would not want the simulation to fall into the hands of the claims-making industry. If it did, claim forms would very quickly include more of the factors that suggest prompt settlement. Also obviously, the claims industry could deliberately bias a defendant's data set: by flooding a given insurer with claims information of a certain type, the data from which it learns is skewed.
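The flooding effect can be shown with a toy calculation: if a coordinated block of inflated claims is added to the data an insurer learns from, the benchmarks it derives shift accordingly. All figures below are invented.

```python
# Toy illustration of data-set biasing by flooding; every number is invented.
import numpy as np

rng = np.random.default_rng(1)
genuine = rng.normal(loc=10_000, scale=2_000, size=500)    # typical claim values
flood   = rng.normal(loc=30_000, scale=1_000, size=2_000)  # coordinated inflated claims

print(f"settlement benchmark before flooding: {genuine.mean():,.0f}")
print(f"settlement benchmark after flooding:  {np.concatenate([genuine, flood]).mean():,.0f}")
```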
Risks more generally: The foreseeable risks with AI include the following: data can be unrepresentative or significantly incomplete; statistical tools may themselves be biased; and intended purposes can be programmed badly or may simply be the wrong ones. Hardware can malfunction.
Given internet connectivity, AI information sets may include the outputs of other information-gathering systems[2] and even of other AI systems, including both information and its interpretation. Some of these inputs may be appropriate, but some may not. Some may be obtained by accident. Some may be obtained by the AI system itself.
Other examples of AI applications
Applications of AI include optimised goods delivery services, identification of key epigenetic changes in a study of gene expression, self-drive functions (collision avoidance, self-parking, route finding, etc.), ordering stock for a warehouse, the number of hot dogs to supply to a football ground, traffic planning, influencing the electorate, the balance of an insurance property-casualty portfolio, the recipe used by a coffee machine…
The determinism problem, the common law and insurance
Applicable ratio? In general, if compensation is to be paid, the product must be defective; advice, actions and omissions must be negligent. In the absence of new liability laws, AI-related liability rests with the manufacturer, the owner or the user[3]. However, AI systems learn. The manufacturer designs the capacity to learn but does not provide all of the real-time "education". Errors in education could cause harm. While strict liability applies to the manufacturer, the educator, or the permitted real-time actions, may be subject to tests such as foreseeability and control. Perhaps, in future, the educator should also be subject to strict liability?
Relevant questions include: What exactly is the product? When does a strict liability standard apply? When does negligence apply?
The determinism problem: The common law is essentially deterministic in nature. Cause and effect. Control. Knowledge and its assessment by reason. Adaptation to new information. True or false. With experience, designers, operators and learned judges alike learn to weigh up what is reasonably to be expected of a given risk situation and what is reasonably expected of the person taking due care for his neighbour. The scope or latitude to be applied is also discovered by deterministic reasoning.
AI is not deterministic. Information is not interpreted in a reasonable way but in a way that is prioritised by the programmed purposes. Changing the information does not lead to predictable changes in the outputs. Sensitivity testing gives different results for each dataset, and different results again at each of the many "best fits" of the simulation.
What this means is that AI decisions may not be legally foreseeable, may not fall within the scope of reasonable views of the information, and may not be repeatable even when given the same starting point to work from.
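The repeatability point can be demonstrated with a toy experiment: fit the same kind of model to the same data several times, changing only the random initialisation of the fit, and the prediction for the same new case can change. The sketch assumes scikit-learn and randomly generated data.

```python
# Sketch: same data, same method, different "best fits" and different answers.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)     # a deliberately awkward target
new_case = rng.normal(size=(1, 4))

for seed in range(3):                       # only the random initialisation changes
    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=seed)
    model.fit(X, y)
    p = model.predict_proba(new_case)[0, 1]
    print(f"initialisation {seed}: predicted probability {p:.2f}")
```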
The attribution problem: So how would a judge at common law tell which kind of error occurred? Was it the hardware, the software, the information, the order in which the information was presented, the purposes, the acquisition of other information, or the way in which other AI systems and people reacted to the AI's choice? Perhaps the educator introduced a bias intentionally, or perhaps accidentally. Different "experts" would offer their views, but none of them would actually know about the instant case, as each AI simulation is unique. You cannot be an expert when the event is a one-off. The expert may be lucky enough to spot a blatant error, but this cannot be expected. The attribution problem in a non-deterministic event is a new challenge for the common law.
One solution for deciding attribution: One possible answer would seem to be that the history of the AI system should be reproduced precisely in a computer simulation, and then again and again with simulated defects added, until the system makes the same choices that led to the error behind the claim. This should be repeated, with variations in the order of events, until the legally proximate cause of the error becomes apparent. To enable this, a complete log of the AI programming, the information supplied, the feedback from the outside world and the hardware components would need to be available. This may seem a lot to ask when the device is an AI-enabled coffee machine, but may be proportionate for a super-tanker navigation system or a flight control centre.
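A minimal sketch of the kind of append-only event log such a replay would depend on is shown below. The field names and event kinds are hypothetical, and a real system would also need to capture software versions, random seeds and hardware state.

```python
# Sketch of an append-only event log to support replay-based attribution.
# Field names and event kinds are hypothetical.
import json
import time

class AuditLog:
    def __init__(self, path):
        self.path = path

    def record(self, kind, payload):
        # kind might be "programming", "data_in", "feedback", "hardware" or "decision"
        entry = {"t": time.time(), "kind": kind, "payload": payload}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

log = AuditLog("ai_system_events.jsonl")
log.record("data_in", {"source": "sensor_7", "value": 0.42})
log.record("decision", {"action": "reduce_speed", "score": 0.91})

# Replay: read the events back in order (optionally with simulated defects injected)
# until the system reproduces the choices that led to the claim.
with open("ai_system_events.jsonl") as f:
    events = [json.loads(line) for line in f]
print(len(events), "events logged")
```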
The justice problem: Even if the simulation identifies the legally proximate cause, the question of justice remains unclear. The defect may be as simple as a bias introduced by the modern-day equivalent of a localised floating point error[4]. A bug. In most self-checking systems an obvious error triggers a return-to-safety decision and a report is sent to "the creator" to prompt a repair. For an AI system, the opportunity offered by the error could actually lead to a better optimisation, and the defect is consequently amplified out of proportion to the blameworthiness of those responsible for it. Would it make sense to assign liability to the bug? If not, how would you apportion it? To what extent was the error amplified? Was this purely a result of fixed software (strict liability), or did the educator encourage the amplification (negligence), or is the resulting combination one of strict liability, in which case the educator is exposed to an unforeseen liability standard, and so is his insurer?
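A toy illustration of that amplification is sketched below. It is not a model of any real optimiser: it simply shows a tiny numerical defect being rewarded by feedback and growing instead of being corrected.

```python
# Toy sketch: a tiny defect that feedback rewards grows instead of being corrected.
error = 1e-7        # the initial defect: a localised rounding error
signal = 1.0        # the quantity the system is "optimising"

for step in range(1, 41):
    signal += error          # the defect leaks into the optimisation target
    error *= 1.8             # feedback treats the defect as an improvement, so it grows
    if step % 10 == 0:
        print(f"step {step}: accumulated distortion {signal - 1.0:.6g}")
```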
Insurance problems: If you were insuring the person who installed the floating point maths system, how much premium would you ask for? Would you also need to run a simulation in order to work out what your exposure was?
Correlated losses such as multiple-vehicle collisions, erroneous bank transactions and industrial plant malfunctions are not too far-fetched in an AI-enabled world. Perhaps an AI-enabled correlation spotter is the way forward?
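A correlation spotter need not be exotic: even a simple correlation matrix across daily losses by line of business would flag a shared, AI-driven shock. The sketch below uses randomly generated loss streams with an injected common shock; all figures are invented.

```python
# Sketch of a simple correlation spotter across lines of business; all data invented.
import numpy as np

rng = np.random.default_rng(3)
days = 365
common_shock = 5 * rng.normal(size=days)            # e.g. a shared software update gone wrong
motor   = rng.normal(100, 10, days) + common_shock
banking = rng.normal(200, 20, days) + common_shock
plant   = rng.normal(50, 5, days)                   # independent line for comparison

losses = np.vstack([motor, banking, plant])
print(np.round(np.corrcoef(losses), 2))             # high off-diagonal values flag correlated losses
```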
Regulators and politicians
The liability regime for AI has been identified as problematic[5]. It is not immediately clear that the practicalities of insurance are much in the minds of politicians when they make their pronouncements. One proposal is to assign strict liability in cases where AI was the key decision factor in the damage being caused, but also to reduce the culpability where the educator has biased the AI system in some material way. How could this actually be measured?
Insurers should engage with such thinking if a workable insurance offering is to be developed.
Summary
AI systems are useful both in decision support and in decision automation. The signs are that they will be adopted very widely in both high-risk and low-risk situations. However, AI systems are not deterministic: given the same data and the same starting conditions, different outcomes are possible. It is often not possible to understand why a given response was made.
Liability for advice, acts and omissions is currently judged deterministically, with some allowance made for what would be regarded as falling within the scope of reasonable behaviour.
In the event that the AI system can be shown to have caused damage, it may not be possible to find the proximate cause or to justly assign culpability to any particular error or properly working design feature. This difficulty may lead to specific AI liability regulation. If so, the question is then whether such regulation enables a viable insurance regime. Since it is highly unlikely that all developers of AI systems will have the same proportionate access to risk capital, commercial insurance would be necessary if diversity of supply is to be enabled.
[1] Some kinds of holidays are a favourite purchase for those making exaggerated claims.
[2] The AI system should include information curation especially if the simulation is to be commercialised.
[3] In these days of “up-cycling” the AI system may well find itself deployed in situations that were not foreseen. Insurers should consider excluding liability for re-cycled AI except where the AI system/simulation is deliberately sold as a ready-made generic building block.
[4] Of course floating point errors were a thing of the 1970/80s and couldn’t possibly occur now, but…
[5] One example of this is European Parliament 2015/2103(INL), published May 2016: Draft Report with recommendations to the Commission on Civil Law Rules on Robotics.