Dementia science has developed some remarkably sophisticated tools with the aims of understanding the course of the underlying neurodegeneration and finding opportunities to prevent or slow it. Despite valiant efforts to portray the findings in the light of a favoured disease model, the factual results do not fit. Interventions that should have worked have not. Among the early promises was that negligent, and therefore preventable, triggers of dementia would be identified. Traumatic brain injury and sports concussion were presumed to be among them. The purpose of this paper is to explain the important factual results, what really does link brain trauma and dementia, and why some people develop dementia while others do not. The explanations are obvious in retrospect and provide a strong narrative basis for expert testimony in dementia liability claims. At present, and increasingly clearly, the science indicates no causal link between either brain injury or concussion and subsequent dementia.
All kinds of insurers will at some point need to risk-assess AI systems where these influence their commercial policyholders. AI systems automatically generate selected information outcomes intended as advice or for direct control of processes. Who is liable for any attributable damage? The question applies now, or will soon apply, wherever information processing is routine. The answer: liability is vicarious. Whoever authorises the use of the AI system is strictly liable. This is a direct consequence of the way in which AI systems work.

Understanding AI Systems

An AI system is software which takes the place once solely occupied by meticulous analytical logic programming. It provides an automated decision-making process between the prompt for an information response and the output of that response. However, unlike conventional programming, AI systems are probabilistic, not analytical. Decisions are weighted according to the probability with which they simulate successful outcomes in data.
England began lockdown 2.0 on 5 November. Much has been learned since September, when starting a lockdown would have saved many lives. Beginning on 5 November, lockdown 2.0 came rather late in the day, and even before it has ended it is quite clear that lockdown 3.0 should be expected. This note is based on analysis of public data. Interventions definitely work. The problem is doing the right thing at the right time.
Many parts of the world have experienced a period when infection-status testing became reliable and meaningful. However, the expected success of the much-awaited vaccines, now about to be approved, will inevitably create testing uncertainty, provide greater opportunities for false claims and create new costs for liability insurers. Regulators should consider requiring double testing. This would not only protect citizens from unwarranted restrictions of personal freedom and associated costs but would also create reasonable certainty of facts at common law. A limitation period of three years will create ample opportunities for claims supported by doubtful evidence.
Addiction is not new. Drug trade wars have been fought. Legislation passed. Empires funded. Social ills disguised, profits made, careers progressed, lawyers enriched, jails filled, politicians acclaimed, lives ruined. Fundamental to addiction is that humans are strongly adapted to both habit formation and habit reinforcement, whether these be physical habits such as how to walk or kick a football, social habits such as preferring to speak with people who have the same interests, cognitive biases such as selecting evidence which supports our view, or political biases such as liberalism or conservatism. These are all, to some extent, habits. Addictive behaviour is indicative of particularly strong habit reinforcement. Addiction is built upon our neurological habit-forming processes, our desire for pleasure, our capacity to prefer perverse arguments, our need for social conformity (or the reverse), and, lest we forget, highly unpleasant withdrawal effects. Understandably, given the machine
IARC on Glyphosate – what to do when a mistake is made?

The Governing Council of the International Agency for Research on Cancer (IARC[1]) is meeting[2] today and tomorrow. Glyphosate is not listed on the published agenda, but much of the conversation will be about the hotly disputed decision[3] that glyphosate is ‘probably carcinogenic to humans’ (Group 2A). Was it the right finding? Why was so much of the animal experimentation evidence deemed unsuitable for consideration? Is it ethical to make pronouncements of any sort if there is no published evidence of how often IARC decisions are wrong? How should scientific expert opinion be held to account? Who underwrites the effect of mistakes? Is it ethical not to take responsibility for mistakes?

Holding IARC to account

It seems obvious in hindsight that institutions of all kinds, whether commercial or public, should publish an account of how accurate their published findings are[4]. In time, false positives, false negatives, true positives
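The accountability suggested here amounts to keeping a running confusion matrix of findings against eventual outcomes. A minimal sketch of the idea in Python, using invented counts (none of these figures relate to IARC):

```python
# Hypothetical accountability ledger for an institution's published findings.
# Each finding is eventually judged right or wrong; all counts are invented.
findings = {
    "true_positive": 40,   # declared carcinogenic, later confirmed
    "false_positive": 10,  # declared carcinogenic, later refuted
    "false_negative": 5,   # declared safe, later shown harmful
    "true_negative": 45,   # declared safe, confirmed safe
}

def positive_predictive_value(f: dict) -> float:
    """Of the positive pronouncements, what fraction proved correct?"""
    return f["true_positive"] / (f["true_positive"] + f["false_positive"])

def accuracy(f: dict) -> float:
    """Fraction of all pronouncements that proved correct."""
    correct = f["true_positive"] + f["true_negative"]
    return correct / sum(f.values())

print(positive_predictive_value(findings))  # 0.8
print(accuracy(findings))                   # 0.85
```

Published regularly, two numbers of this kind would let users of an institution's findings weigh each new pronouncement by its track record.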
Just occasionally someone asks if everything we believe about causation-related science is wrong. This time, the use of the t-test is the cause of doubt. In the interpretation of rat lab results, animal experimenters use the t-test. The t-test, when used as originally designed, compares the means of two data distributions. The standard deviation of each distribution is first reduced to the standard error of the mean (SEM), and the mean and SEM of one distribution are compared with those of the other. If p < 0.05 it is pronounced that the two distributions are probably different. So, when comparing control animals with those dosed with a toxin, the t-test is used to detect the likelihood that the toxin did anything. The reason for doubt is that SEM comparisons are only valid for true means. A single result, e.g. 4 out of 90 rats developed lung cancer, is not a mean. Despite this, experimental scientists use the t-test to decide if 4/90 is different from 5/90. In fact the same experiment, when repeated, h
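The point can be checked directly. A short Python sketch, implementing Fisher's exact test (an appropriate exact test for comparing two tumour counts) from first principles, shows that 4/90 versus 5/90 provides essentially no evidence of any difference:

```python
from math import comb

def fisher_exact_p(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def prob(k: int) -> float:
        # hypergeometric probability of k 'successes' falling in row 1
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # sum the probabilities of all tables at least as extreme as the observed one
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs + 1e-12)

# 4/90 tumour-bearing rats in controls versus 5/90 in dosed animals
p = fisher_exact_p(4, 86, 5, 85)
print(round(p, 3))  # well above 0.05: no evidence of a difference
```

An exact test on the counts, rather than a t-test on a pretend mean, is the statistically honest comparison for data of this kind.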
Glyphosate meta-analysis and non-Hodgkin lymphoma

Does glyphosate cause non-Hodgkin lymphoma (NHL)? Observing that different studies in herbicide application workers give very different results, the authors of a recent[1] meta-analysis have proposed a new approach. By choosing those results which correspond with the highest exposure in each study, and ignoring all the rest, it might be possible to detect a causal association, if there is one. Since the meta-risk ratio by itself would have no meaning unless the high exposures were very similar, the main task is to show that the meta-risk ratio is statistically significant. Significance would suggest that a causal association was possible, even if no one could tell how strong the association was. Significance testing in the biological sciences centres on whether or not the 95% confidence interval (95% CI) includes 1.0. This is a convention, not universally agreed with. In the recent paper th
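The convention is mechanical to apply. A brief Python sketch, with illustrative numbers only (these are not values from the meta-analysis), shows the usual calculation from a risk ratio and the standard error of its logarithm:

```python
from math import exp, log

def risk_ratio_ci(rr: float, se_log_rr: float, z: float = 1.96) -> tuple:
    """95% confidence interval for a risk ratio, given the SE of log(RR)."""
    return (exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr))

# Illustrative values only, not results from the paper under discussion
rr, se = 1.41, 0.12
lo, hi = risk_ratio_ci(rr, se)
significant = not (lo <= 1.0 <= hi)  # 'significant' if the CI excludes 1.0
print(round(lo, 2), round(hi, 2), significant)
```

Note that the calculation says nothing about whether pooling only the highest-exposure results was a sound design choice; it only tests whether the pooled ratio is distinguishable from 1.0.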
Tech Plus reminder: robot toxicologist

Data mining and machine learning look set to revolutionise knowledge management in routine business processes[1]. Can they be used to assist with the identification and evaluation of new liability risks? The Robotic Toxicologist report presents an expert-based analysis of a liability-related work recently published by Allianz[2] et al. The Allianz et al. report demonstrated some very appealing underlying capabilities. However, it is clear there are areas where targeted expert assistance could lead to improvements in toxicological insight, relevance and liability meaning-making. Insurers wishing to develop or acquire robotic tools could include such targeted expertise in their development and evaluation projects and, if the tools are adopted, in the management of machine outputs.

The Robotic Toxicologist Report

Could an automated search machine usefully identify injury outcomes associated with three of the chemicals in nail varnish? There were 15 overt “
In these days of machine learning solutions for business optimisation, one key question is whether machines can usefully pick out emerging liability risks.

The Robot Toxicologist

The “Toxic Trio” as a case study

Wouldn’t it be wonderful if a machine could read all of the world’s science literature, decide which substance would trigger new liability exposures, say how much this would cost and who should pay? After more than 10 years of development work, the recent marketing document[1] from Allianz illustrates how far along this path one particular robot has travelled. UK liability insurers read the Allianz report and asked: ‘Is it better than tossing a coin?’ 51% is seen as the minimum requirement for authorising reserves, for example. The task was to compare the fifteen substantial findings in the report (in the context of nail varnish) with the written views of expert toxicology committees produced over several decades. Is this a fair test? One of the key features of
Vibration White Finger: a modelling case study

An epidemiology-based approach to liability ENID modelling has been developed and applied[1]. While based on the same concepts, in practice each scenario-specific ENID model is mathematically unique. This note describes the approach using the example of vibration white finger (VWF[2]). The results agree, within tolerance, with official data.

Brief background

Long-term exposure to high-intensity vibration leads to a predisposition to episodes of finger blanching. In severe cases there is loss of dexterity. Cause and severity of VWF are both cumulative in nature. A typical presentation is illustrated below. The cause of these symptoms is an autonomic[3] constriction of the blood vessels supplying parts of the hands[4]. Episodes of finger blanching may be provoked by vibration, cold weather and wetting with water. A similar effect is seen in Raynaud’s phenomenon (RP), which is of constitutional origin. This initially gave rise to uncertainty
Processed meat and colorectal cancer claims – Yes, No, How Big?

Summary

It has been proposed that processed meat consumption causes colorectal cancer. Since there have been no UK insurance claims for this, and the proposal has not been put to proof at common law, it is described as a potential liability ENID[1], or emerging liability risk. The background is briefly described below. As part of the necessary[2] evaluation response, a software model of the potential UK personal injury liability exposure has been developed. The model allows the user to test various technical and legal “what ifs” and to perform sensitivity analyses. A generic causation assessment tool is included in the software and is based on UK legal precedents. The normal medical response to colorectal cancer includes surgery. Such surgery entails a risk of serious infection and the consequential need for antibiotics. Given the increased rates of antimicrobial resistance (AMR), some bowel surgery cases will die as a result
De minimis – a practical step

In a nutshell, I propose that the explicit recognition of a two-stage test of de minimis may lead to clarity of fact finding in difficult cases such as marginal noise-induced hearing loss and minor neck sprain. Further, by focusing on the first-stage test, the significance of marginal exposures to risk can be resolved objectively.

The conventional two-stage test

The concept of de minimis is fairly straightforward to assess in cases of broken limbs, burns and other ‘impact’ events. Indeed, it is so straightforward that the assessor and opposing parties may not make explicit that they are using a two-stage test. In general, when assessing injury: the first stage is to assess whether the state of the claimant after the event was/is ‘probably different’ to their state immediately prior to the event. Has anything changed? ‘Probably different’ is properly assessed on the balance of probabilities. If yes, the second stage is to assess whether the difference is su
Artificial intelligence and liability insurance

After a brief introduction to AI, some of the liability issues are introduced in this brief report.

AI simulations

Essentially, AI systems are structured in the following way. The computer is supplied with digital information, coded with a set of statistical tools and a set of purposes that the customer would like to work with. Once data, methods and purposes are properly coded, the AI generates useful simulations of the data and can be used to identify the degree to which new situations meet the set of purposes. Missing data is effectively imputed from the overall model. A much simplified example helps set the scene for the discussion of legal liability. Intended purpose: a liability claims manager wants to identify claims which are suited to making an immediate offer, and those which are best examined more closely. Process: having coded ten thousand claims of the relevant sort, the AI generates a statistically weighted s
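The triage example can be made concrete. Below is a toy Python sketch with invented feature names and weights standing in for whatever weighting an AI would learn from the ten thousand coded claims; it shows only the shape of the decision, not any real system:

```python
# Hypothetical claims triage: score each claim from weighted features,
# then route high-scoring claims to an immediate offer.
# Feature names and weights are invented for illustration.

FEATURE_WEIGHTS = {
    "clear_liability": 2.0,      # breach of duty admitted or obvious
    "low_quantum": 1.5,          # claimed amount below a routine threshold
    "complete_evidence": 1.0,    # medical report and records supplied
    "disputed_causation": -2.5,  # causation contested: needs close review
}

def triage_score(claim: dict) -> float:
    """Sum the weights of the features present on the claim."""
    return sum(w for feature, w in FEATURE_WEIGHTS.items() if claim.get(feature))

def route(claim: dict, threshold: float = 2.0) -> str:
    """Route the claim according to its weighted score."""
    return "immediate offer" if triage_score(claim) >= threshold else "examine closely"

claim = {"clear_liability": True, "low_quantum": True, "complete_evidence": True}
print(route(claim))  # immediate offer
```

The liability question raised earlier is visible even in this toy: the weights, the threshold and the routing rule are all choices made by whoever authorises the system, not by the claimant or the software vendor.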
Does mobile phone radiation cause cancer?

Principal findings

Following detailed lab experiments, researchers at the US National Toxicology Program (NTP[1]) have published[2] data which provide evidence of a statistically significant association between high exposure to radio-frequency EMFs (RF EMFs) and malignant schwannoma of the heart in male rats. The evidence revealed thus far is consistent with a causal explanation. Extrapolating to human disease is, however, unclear, as this depends on the mechanism behind the association. It is hoped that in the final report NTP will be as specific about this mechanism as they can be. As it is, there are signs in the data that the mechanism may be unusual, so much so that a new challenge could be made to the norms of cancer compensation. The final NTP report is due before the winter of 2018. As a draft for peer review, the NTP report may not be quoted.

A challenge to the norms of cancer compensation

The courts, and therefore insurers, nearly alwa
Managing liability ENIDs using Radar

Radar

The Radar service provides an expert view of science-based changes to liability insurance exposure and provides quantitative estimates of that exposure. Some examples from the Radar database are appended here: American football and brain disease, LED lights and eye injury, wood dust and nasal cancer, and 3D printing.

History

The Radar service was first developed under contract to ABI[1] as a collective work on emerging liability risks. Radar was then commercialised by Re: Liability (Oxford) Ltd[2]. As ABI evolved, the service was passed to liability insurers to take up individually. Liability emerging risks are now more often referred to as liability ENIDs[3], for example in Solvency II guidance.

Aim

The aim is to inform judgement of factors that could drive changes to liability insurance exposure and to estimate the resulting size of that change.

Method

This follows the familiar method of identify, evaluate and take action. Identificat
Glyphosate

Glyphosate[1] (GP) was introduced to the market in the early 1970s. It is a widely approved and widely used herbicide. If it caused injury during foreseeable use then liability insurance would be involved. Given both professional and amateur usage, the foreseeable uses would be many, and non-compliant standards of use would be unsurprising. Generic and specific causation would be the key tests of liability.

News

Recent interest in causation includes requests to regulators to label foods as containing traces of glyphosate on the grounds that it may cause cancer. This would have a significant impact on sales of food and of glyphosate. It follows on from a 2015 decision by the International Agency for Research on Cancer (IARC) to classify glyphosate as a probable human carcinogen; Californian law requires that IARC decisions be adopted[2]. The DeWayne Johnson (aged 46) claim for non-Hodgkin lymphoma (NHL) recently[3] reached a key step in awarding damages. The claimant made
Mania from cured meat?

New research from the USA identifies an association between the consumption of cured meat products and mania. The evidence comes from both human medical cases and from studies of rats. Mania is a component of bipolar disorder and is managed by medication such as lithium. It is a recurrent problem often associated with poor decision-making. Liability for first-party loss caused by legally defective cured meat products could extend to the financial decisions made during an episode of mania. Third parties might include the victims of road traffic accidents.

What did the research show?

Compared with controls, a history of eating cured meat preparations was significantly associated with being in the mania group (mean age 34, 66% female); adjusted odds ratio = 3.49 (2.24–5.45). There was no significant association with undercooked meat, raw meat, undercooked fish or raw fish. A history of eating cured meat preparations was not associated with a diagnosis of schizophrenia
Modelling of emerging liability risks can facilitate product launches, improve insurance policies and improve financial sustainability

Emerging risks have, by definition, always been a challenge for companies and the insurance industry. Being new, there is relatively little data with which to assess the probability of such risks materialising, or the severity and frequency of ill effects if they do. As companies and industries change ever more quickly, the pace of innovation also increases, and so, arguably, does the emergence of new risks. History teaches us that the products, trends and technologies we assume to be safe and adopt as part of our everyday lives are not without their risks. For example, while the toxic properties of asbestos are now well known, it was once considered a ‘wonder material’. Businesses could be healthier and more sustainable if they could identify significant new liability risks sooner – in other words, if they knew what the next asbestos would be. Unfo
Lockton have very kindly posted an article on the problem of liability ENIDs. The article is aimed at insurers, but also at professional insurance buyers, who are their primary audience.
The current recipe for profit-making populist sport rests on the observation that fans identify with the players. Player-fan ‘relationships’ are manufactured and tuned to appeal to the paying public. It’s all part of the business of making money from sport-related entertainment. Almost by definition, then, brain injury in populist contact sports such as ice hockey and American football is a cause for public concern. Those in charge of sport could benefit from the sympathy generated by claims of degenerative brain disease. By inflaming public opinion, they recruit the public as the voice of the injured player, and so the sentimental bond is strengthened. This will be good for the sentimental, player-focused business model. But will it be good for the players? One result of the clamour to speak out for the player is the development of blood testing for evidence of possible brain injury. If a player knew they were risking a non-trivial degenerative brain disease in exchange for manu
This study has many high-quality characteristics, including high participation rates, a prospective design, and objective data on TBI, dementia and several medical confounders. It can be found here: Jesse R Fann et al., Lancet Psychiatry, http://dx.doi.org/10.1016/S2215-0366(18)30065-8. The study includes 126,734 dementia cases in the analysis; 6,374 had experienced TBI. A potentially powerful study. The data clearly show an increased risk of dementia diagnosis within the 2 years following TBI. However, there is no variation in risk between 4 and 14 years after TBI, suggesting a completely uniform acceleration, which would be hard to explain. There is also decreasing risk as age increases, yet older people are more vulnerable to dementia. This also is hard to explain. The authors call for more to be done to prevent TBI. Dementia is a growing problem. Motor and sports insurers would be especially sensitive to this issue if it turned out that TBI was a legal cause of dementia. Especiall
The judgment may be found here. The Court has decided that symptomless sensitisation is an injury. But there may have been some error.

Background

IgE sensitisation to platinum salts was detected by a routine skin prick test in five employees. Occupational exposure was the probable cause. The employer admitted breach of statutory H&S duty. Sensitisation is a necessary precursor to allergic reaction, but a person may remain free of symptoms for years and may never have an allergic reaction. What the Court seems not to have been advised, though, is that in a high proportion of cases sensitisation becomes undetectable if further contact with the allergen is avoided or remains below a certain level. There were no symptoms despite the employees working in the same environment which had caused this biological change. It is implausible that all five became sensitised only moments before the test. More likely, they were sensitised at some point between tests and had continued to be exposed to platinum salts until t
Liability ENID modelling has recently been added to our standard Radar service (ENID = Event not in Data).

What it is

For a given ENID scenario, the new modelling creates an annual liability loss distribution curve for a given jurisdiction, e.g. the UK. It then apportions the mean loss across industry codes. Where relevant, the time development of that loss is also estimated from latency periods and from likely dates of knowledge. The time-is-money value of reserving for the loss can then be estimated, and the potential opportunity cost of delayed reserving can be calculated. Given that emerging risks arise from science studies, it should be no surprise that the same studies can be analysed to give estimates of the attributable frequency of loss. Other work is then used to assess how many of these attributable losses could make a reasonable liability claim. For example, if half the attributable cases could prove a breach of duty then the claims frequency is at most half the attributable
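The shape of the calculation can be sketched in a few lines of Python. All parameter values below are invented for illustration; a real ENID model would fit frequency and severity to the underlying science and claims data:

```python
# Hypothetical ENID sketch: simulate an annual liability loss distribution
# from an attributable frequency, a breach-of-duty fraction and a severity curve.
import random

random.seed(1)  # reproducible illustration

ATTRIBUTABLE_FREQ = 200   # attributable cases per year in the jurisdiction (invented)
BREACH_FRACTION = 0.5     # share of cases that could also prove breach of duty
MU, SIGMA = 10.0, 0.8     # lognormal severity parameters (invented)

def simulate_annual_loss() -> float:
    """One simulated year: a random claim count times lognormal severities."""
    # each attributable case becomes a claim only if breach of duty can be proved
    n_claims = sum(1 for _ in range(ATTRIBUTABLE_FREQ)
                   if random.random() < BREACH_FRACTION)
    return sum(random.lognormvariate(MU, SIGMA) for _ in range(n_claims))

# Build the annual loss distribution and read off summary statistics
losses = sorted(simulate_annual_loss() for _ in range(1000))
mean_loss = sum(losses) / len(losses)
p95_loss = losses[int(0.95 * len(losses))]
```

In the service itself the mean of this distribution would then be apportioned across industry codes and developed over time, giving the reserving figures described above.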