Emerging risks have, by definition, always been a challenge for companies and the insurance industry. Being new, there’s relatively little data with which to assess the probability of such risks materialising and the severity and frequency of ill-effects if they do.
As companies and industries change ever more quickly, the pace of innovation increases, and so, arguably, does the rate at which new risks emerge. History teaches us that the products, trends and technologies we assume to be safe and adopt as part of our everyday lives are not without their risks. For example, while the toxic properties of asbestos are now well known, it was once considered a ‘wonder material’.
Businesses could be healthier and more sustainable if they could identify significant new liability risks sooner – in other words, if they knew what the next asbestos would be.
Unfortunately, the identification and quantification of emerging risks by businesses, regulators and insurers has not always been particularly well informed, robust or transparent. Warning signs often prompt subjective responses rooted in habitual risk aversion or even risk denial. It would help if the basis for emerging risk decisions were explicit and testable.
Emerging risks are commonplace in environmental and personal injury liability insurance. Aside from asbestos-related injuries, well-known examples include hand-arm vibration syndrome (HAVS) and noise-induced hearing loss (NIHL). Both produced losses well in advance of loss projection curves based on historical data – the losses arrived sooner and were far greater than expected – making them harder for insurers to manage.
False positives are also commonplace, e.g. Y2K and BPA. Yet in a sense, if a warning sign leads to the appropriate response, there is no such thing as a false positive. In the early stages, risk managers would be wise to make changes which can either be reversed or boosted as new information is gathered. Exploring insurance options makes sense for the generator of the risk; new wordings and conditional reserving make sense for insurers.
Fortunately, new disease types, such as prion disease and secondary Raynaud’s, are actually very rare. Most emerging liability risks concern newly recognised causes of well-known injuries such as deafness, dementia, heart disease and allergies. Identification of new causes is much easier if the kinds of outcomes you are worried about are already known.
Different language
In order to understand and quantify emerging liability risks, insurers sometimes borrow the language of natural catastrophe modelling, which is widely adopted across the industry. This is not always helpful, however, because the two types of risk differ in many ways.
Natural catastrophes are time-bound, mitigated according to building codes and highly localised. The foreseeable location and severity of natural events is apparent in geographical features (such as un-eroded rock faces and flood plains), in the assets at risk (e.g. property values) and in biological responses (e.g. the age of trees). It makes some sense to talk about return periods.
Liability emerging risks, on the other hand, have a different set of characteristics:
- Once established by precedent, liability exposure persists year on year until the supply of losses is exhausted.
- The cause and effect may be separated by an unknown time delay, perhaps decades, e.g. bladder cancer.
- Liability may be retrospective and influenced by changes in expert or legal consensus, e.g. noise-induced hearing loss.
- Injury may be transmitted to future generations, e.g. epigenetic toxins.
- Hazard exposure changes with the take-up of new technologies, driven by market forces, e.g. hand-held power tools.
- Resilience changes with lifestyle, e.g. obesity-related diabetes.
- Mitigation varies with medical skill, the availability of care and social responses such as welfare.
In short, data on the losses themselves and the factors that govern the eventual loss profile may take years to establish. To make quantification of such risks even more challenging, liability emerging risks do not recur – they are one-offs.
Risk scenarios
So how are emerging liability risks to be identified and evaluated?
This task is challenging, in part, because scientific expert opinion is remarkably conservative. New evidence or insights from scientific research take a long time to become consensus, especially if the old ‘insights’ have gained populist support, e.g. the belief that eating butter causes heart disease. But expert consensus may be needed to establish legal causation, for example, so this conservatism means there will be time to act. Legal precedents can also be based on misunderstandings, e.g. Dryden v Johnson Matthey.
By understanding the weak points in expert consensus and case law, the opportunity for new insights to emerge can be assessed and the most significant opportunities for change identified. It may even be possible to invest in preserving useful principles (e.g. the de minimis principle) and to spend money on research to reduce uncertainties (e.g. does low-level silica dust exposure cause lung cancer in non-smokers?).
There is usually plenty of warning, however. For example, hand-arm vibration syndrome, which first came to light as a result of chainsaw use, was identified 14 years before the first compensation payment for industrial use was made in the UK. The idea that it was restricted to chainsaw use in cold weather was obviously flawed.
A lot of clues lie in the scientific literature. By mining peer-reviewed scientific journals and official debate, it is potentially possible to identify the next generation of catastrophe liability risks for businesses far in advance of the warnings provided by screening claims activity.
Science literature
Mining the world’s science literature abstracts for odds ratios, relative risks and “caused by” statements is an option, but without understanding the methodology behind each summary estimate, the data miner ends up pooling ideas and numerical values that cannot be meaningfully pooled. As a result, data miners flag up many more false positives than is useful.
Even worse, the freely available abstracts very often do not accurately represent the data contained in the subscription-only body of the text. For instance, there could be 20 null estimates of risk in the body of the paper, but the one positive result is entered into the abstract. While this helps generate interest in the paper, it does not provide a fair reflection of what has been learned. Data mining of research abstracts could be useful as an indicator of a topic worth following up, but not much else. Expert assessment is always needed.
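To illustrate the kind of naive abstract mining described above, here is a minimal sketch in Python. The abstract snippets, regular expression and threshold are hypothetical and purely illustrative; the point is that a miner working only on text has no view of study design, dose, confounding or confidence intervals, so almost every estimate above 1.0 looks like a ‘finding’.

```python
import re

# Hypothetical abstract snippets -- illustrative text only, not real study summaries.
abstracts = [
    "Exposure to substance X was associated with bladder cancer (OR 1.8, 95% CI 0.9-3.6).",
    "No association was found between X and dementia (relative risk 1.02, 95% CI 0.85-1.22).",
    "Shift work and heart disease: adjusted odds ratio 1.4 (95% CI 1.1-1.8) in a small case-control study.",
]

# Crude pattern for "OR 1.8" / "odds ratio 1.4" / "relative risk 1.02" style statements.
ESTIMATE = re.compile(
    r"\b(?:odds ratio|relative risk|OR|RR)\b\s*[:=]?\s*(\d+\.\d+)",
    re.IGNORECASE,
)

for text in abstracts:
    for match in ESTIMATE.finditer(text):
        value = float(match.group(1))
        # A naive miner flags anything above 1.0, with no view of confidence
        # intervals, study design, dose or confounding -- hence the flood of
        # false positives described above.
        if value > 1.0:
            print(f"flagged estimate {value}: {text[:60]}...")
```

Note that the second snippet, a null result whose confidence interval spans 1.0, is still flagged – exactly the kind of signal that expert assessment would discard.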
As a contemporary example, BPA is often cited as a potential human toxin with a vast array of possible associated harms, many of which bear no mechanistic relationship with each other and, worse, are logically inconsistent. The data comes mostly from poorly designed studies in humans and from experiments at unrepresentative doses in rodents. Counting the number of such publications per year might be thought to indicate a trend towards expert consensus. It doesn’t, but it does indicate the potential for populist responses, including ill-advised regulatory intervention. In an age of populism, such interventions should indeed be reckoned with, but these are business risks as opposed to personal injury liability risks.
To understand why BPA is currently a false positive, you need to know that BPA is frequently ingested with food and that human studies rarely correct for food quantity or timing. Certainly there is more BPA in people who have a history of poor diet and excessive eating and who now, rather unsurprisingly, have health problems. In addition, humans break down BPA very quickly, so the timing of excessive food intake is critical to understanding any association of ill-health with BPA measured in blood or urine samples. Control for timing is rarely attempted. Finally, rats do not readily break down BPA, so extrapolating from massive doses in rats to representative doses in humans is a highly speculative exercise. Unsurprisingly, after taking into account the weaknesses of published work, the world’s foremost risk assessment authorities express very little concern. Despite these and other rather obvious methodological flaws, uninterpretable BPA research continues to flourish and garner public concern. It is a media-friendly topic: ubiquitous exposure to the substance, highly uncertain data on generic causation, little consumer choice but to be exposed, multiple non-specific outcomes, expert inconsistency, big business versus lone victim – typical ingredients for story generation.
To draw useful insights from the science literature, it needs to be properly understood. Data mining without expertise generates false positives.
Evaluation
For those emerging risks which seem more likely than not to become real losses, the question then is: how big? If the answer is big enough, more detailed questions can be resolved. Sometimes the potential loss is so big that action needs to be taken even if the rational case for successful claims is marginal.
If correctly assessed, science studies provide data which can be used to calculate how big an emerging liability problem could be. In an analogous way, public health authorities usually make their estimates of the rate of new harm being done by reference to information in the same scientific publications that draw attention to the new problem. With some modification, liability insurers can use the same data combined with data from other studies to estimate frequency and severity of liability loss.
For example, if the emerging causation knowledge relates to a well-known injury, scientific study tells you the likely age profile of the injured, the likely impairment profile, the effectiveness of medicine, the cost of medical support and the time evolution from symptoms to handicap, and sometimes indicates the latency between exposure to the hazard and manifestation. These factors relate to the severity and timing of loss.
In addition, science very often tells you how many relevant diagnoses there are in any year, how many people are exposed to the hazard and to what level and, if there is a breach of duty, how probable it is that the breach would be found in the history of those who have the injury or disease. If there is a specific sign that the injury was caused by that hazard, science tells you how often such a sign is found in the injured, and so the probability of making out specific causation can be addressed. These factors relate to the frequency of a good liability claim among those who are injured.
By combining the data in bespoke deterministic models, the potential size and variance of a given liability exposure can be estimated. Where data is uncertain, so is the size estimate.
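As a rough sketch of what such a bespoke deterministic calculation might look like (all names and input values below are hypothetical placeholders, not figures from any actual model), annual frequency can be built up from the exposed population, the diagnosis rate and the probabilities of establishing breach and specific causation, then combined with an average cost per successful claim:

```python
# Minimal deterministic sketch of an emerging liability exposure estimate.
# All inputs below are hypothetical placeholders for illustration only.

inputs = {
    "exposed_population": 500_000,    # people exposed to the hazard
    "annual_diagnosis_rate": 0.0004,  # relevant diagnoses per exposed person per year
    "p_breach_of_duty": 0.30,         # probability a breach is found in the claimant's history
    "p_specific_causation": 0.25,     # probability a hazard-specific sign supports causation
    "p_claim_brought": 0.50,          # share of good claims actually pursued
    "avg_claim_cost": 120_000.0,      # damages plus defence costs per successful claim (GBP)
}


def annual_exposure(p: dict) -> dict:
    """Combine frequency and severity factors into an annual loss estimate."""
    diagnoses = p["exposed_population"] * p["annual_diagnosis_rate"]
    good_claims = (
        diagnoses
        * p["p_breach_of_duty"]
        * p["p_specific_causation"]
        * p["p_claim_brought"]
    )
    return {
        "expected_diagnoses": diagnoses,
        "expected_claims": good_claims,
        "expected_annual_loss": good_claims * p["avg_claim_cost"],
    }


if __name__ == "__main__":
    for key, value in annual_exposure(inputs).items():
        print(f"{key}: {value:,.1f}")
```

With central estimates like these, the sketch yields roughly 200 expected diagnoses, seven or eight viable claims and an expected annual loss of the order of £0.9m; the value of the exercise lies less in the point estimate than in seeing which inputs drive it.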
In short, science enables predictive pricing. While such modelling has been used by insurers for many years to quantify natural catastrophes, the application of such modelling for emerging liability risks is less commonplace.
Business enablement
Re: Liability (Oxford) Ltd has begun circulating mathematical models of specific emerging liability scenarios.
The quality assessment of science publications is now a standardised practice. By restricting data inputs to so-called “best evidence”, as opposed to evidence diluted by countless poor and logically inconsistent studies, the models provide justifiable estimates of the liability exposure. Models vary by jurisdiction to account for variation in hazard exposures, legal systems, health care systems, resilience and so on.
Users are free to experiment with input values of their own choosing, which helps them understand how sensitive the result is to the reasonable range of data inputs.
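Continuing the hypothetical sketch above, one simple way to explore that sensitivity is a one-at-a-time sweep: vary each input across an illustrative low/central/high range while holding the others at their central values, and watch how the annual loss estimate moves.

```python
# One-at-a-time sensitivity sweep over the hypothetical model sketched earlier
# (reuses `inputs` and `annual_exposure` from that sketch).
# The low/central/high ranges below are illustrative assumptions only.

ranges = {
    "annual_diagnosis_rate": (0.0002, 0.0004, 0.0008),
    "p_specific_causation": (0.10, 0.25, 0.40),
    "avg_claim_cost": (80_000.0, 120_000.0, 200_000.0),
}

for name, scenario_values in ranges.items():
    losses = []
    for value in scenario_values:
        scenario = dict(inputs, **{name: value})  # override one input, keep the rest central
        losses.append(annual_exposure(scenario)["expected_annual_loss"])
    low, central, high = losses
    print(f"{name}: low {low:,.0f} / central {central:,.0f} / high {high:,.0f}")
```

Inputs whose plausible range swings the result by a large factor are the ones worth spending research money on, which connects back to the earlier point about reducing uncertainties.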
Analysis of the literature also provides realistic estimates of how much longer it would take for generic and specific causation to be sufficiently evidenced.
What can’t be modelled are the time taken for experts to agree a new consensus (such is the political nature of scientific expertise) and the degree of interest that would be shown by the claims-making industry. There is a balance to be struck between risk and reward. A perfectly sound case on generic causation, specific causation and breach may never be brought, but it is best to be prepared.
Of course, a model is not a complete answer; it is just a guide. But the use of such predictive modelling can help businesses become more comfortable with uncertainty – thereby enabling them to develop and launch new products.
Contrary to some expectations, earlier and more accurate identification of certain risks would not lead to the exclusion of such risks from insurance policies – instead it should make it more feasible for these risks to be insured.
In turn, we might see the development of more named peril insurance policies, such as for electronic cigarettes, processed meat, shift work etc. Policies could hopefully become more specific, better informed, and far more tailored to companies’ specific risk profiles.
_______