From Talcum to Glyphosate: The Multi-Billion Dollar Battle to Define 'Risk' 

April 2018

Activists are recasting risk in a way that can only damage industry

Last August, a California jury awarded plaintiff Eva Echeverria a total of $417m in compensatory and punitive damages in a lawsuit against Johnson & Johnson (J&J). She argued that the company should have warned consumers about studies that had found an inconclusive correlation between talcum powder and ovarian cancer.

The case followed three other lawsuits that ended with the company owing over $300m in damages for failing to inform its customers of the alleged risk. The Echeverria verdict was recently overturned, but J&J still faces talc-related lawsuits from 4,800 plaintiffs in the US.

In Europe, glyphosate came very close to being banned last December, based on a single report from the International Agency for Research on Cancer (Iarc) declaring it ‘probably carcinogenic’. As Reuters reported, it is the only assessment to suggest that glyphosate might be a human carcinogen, after decades of research and testing culminating in hundreds of studies and reports.

What is going on? Both Monsanto and J&J feel that they have science on their side. J&J argues that no link between talc and an increased risk of ovarian cancer has been proven, citing studies that show no association between them. One of these followed women over a period of more than 24 years. The California jury, on the other hand, felt that even though no link had ever been proven, J&J should have warned consumers about the possibility of a risk.

Who defines risk?

The battle, then, is not about scientific facts. It is over the definition of risk and who gets to decide what counts as a risk. Increasingly, companies need to prove that their products will not cause harm under any circumstances. This ‘better safe than sorry’ definition of risk is also called the ‘precautionary principle’. For executives, it is uncharted territory, and treacherous at that.

Technological advances and improved scientific models make it easier than ever to find potential risk. To take just one example: where we formerly detected substances at parts per hundred, we can now detect them at parts per billion. This makes it possible to find ‘hazardous’ substances almost anywhere.
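
To put those detection limits in perspective, here is a minimal arithmetic sketch in Python. The figures are illustrative only; the original cites no specific instruments or samples:

```python
# Illustrative arithmetic only: how far the detection floor has dropped.
parts_per_hundred = 1 / 1e2   # 1 part in 100: the old detection floor
parts_per_billion = 1 / 1e9   # 1 part in 1,000,000,000: the modern floor

# The gain in sensitivity is seven orders of magnitude.
print(f"Sensitivity gain: {parts_per_hundred / parts_per_billion:,.0f}x")
# -> Sensitivity gain: 10,000,000x

# In a 1 kg sample, parts-per-billion sensitivity means detecting a microgram.
sample_grams = 1_000
print(f"Detectable trace: {sample_grams * parts_per_billion:g} g")
# -> Detectable trace: 1e-06 g
```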

Once we have identified a substance, we turn to scientific models to predict how hazardous it might be. For many substances, we assess risk through the so-called linear no-threshold (LNT) model, which extrapolates mortality linearly from the death toll observed at a given concentration. If, say, concentration X kills ten people out of 1,000, then according to the model, one-tenth of that concentration will kill one person out of 1,000, one-hundredth will kill one in 10,000, and so on.

The tricky thing is that the model assumes that at any exposure, however negligible, the substance will be lethal for someone somewhere. This renders the model of great strategic interest to activists.

Take a minor health risk with a low mortality rate, or a substance found only in very small concentrations. Now calculate the death toll for the population of the EU (500 million) or even the whole world (over 7 billion). Suddenly, thousands of lives hang in the balance.
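
To make that accounting concrete, here is a minimal sketch in Python of the linear extrapolation described above. The numbers are the purely illustrative ones from the earlier example (concentration X killing ten people per 1,000), not data from any real study:

```python
def lnt_predicted_deaths(dose_fraction: float,
                         deaths_at_full_dose: float,
                         reference_population: int,
                         target_population: int) -> float:
    """Linear no-threshold extrapolation: mortality scales in direct
    proportion to dose, with no threshold below which risk is zero."""
    mortality_at_full_dose = deaths_at_full_dose / reference_population
    return mortality_at_full_dose * dose_fraction * target_population

# Concentration X kills 10 people out of 1,000 (the example above).
# At one-hundredth of X, LNT predicts 1 death per 10,000 people...
print(lnt_predicted_deaths(0.01, 10, 1_000, 10_000))        # 1.0

# ...which, projected onto the EU's ~500 million people, becomes
# 50,000 predicted deaths from a vanishingly small exposure.
print(lnt_predicted_deaths(0.01, 10, 1_000, 500_000_000))   # 50000.0
```

The assumption of no threshold is what does the work: however small the dose fraction, the predicted death toll never reaches zero, and multiplying by a large enough population always yields an alarming number.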

Whether the LNT model is accurate is far from certain. But it is certainly politically useful: for a motivated activist group or researcher, it is not all that hard to ‘predict’ a few thousand dead Europeans or Americans per year.

Exploiting the gaps

Journalists and politicians rarely know the assumptions behind the models they rely on to make headlines – and policy. Few are aware of the peculiar accounting that the LNT model allows. That, however, does not stop the model from generating headlines and policies around the world.

Add to that the fact that it is notoriously hard to assess ‘long tail’ risks like climate change, endocrine disruption and nanotechnology, as well as small, remote risks like the ‘cocktail effects’ of chemicals, and it is easy enough to argue that they might have effects. After all, how can you prove that they will not? If you can imagine a potential hazard, it exists.

The mainstream media has embraced this precautionary principle and seems to accept that societies should regulate health and environmental risks if there is the mere possibility they exist. As a result, simply imagining a risk can become indistinguishable from creating it.

Take talc in the case of J&J. Somebody did a study on it and found no conclusive link between talc and cancer. But the fact that it was studied in the first place surely indicates that a scientist was worried? Does that not mean the public should worry too? After all, the conclusion of any study might be that it ‘cannot exclude that a risk exists’.

Alarmist headlines about small or imagined risks can shift public perception overnight and may adversely affect corporate reputations and stock prices. They also often spark a political response, which usually includes a grant for a research group to study and evaluate the risk. So now we have more studies looking for risk. Should we worry more?

Science neutral?

It would be naive to assume that science is immune to this battle to define risk. Scientists, especially in controversial fields, are practically forced to choose between adopting the precautionary principle and being sidelined.

Already we see the toxicology field splitting into two factions. In endocrine-disruption research, some researchers align more closely with activists on certain issues than with conventional academics. Similar developments are visible in climate change, where scientists defend their precautionary models and openly advocate aggressive policies.

The feedback loop of activism, media, policy and science keeps getting stronger and stronger. In this dynamic, corporations face difficult, but critically important, decisions.

Will you wait to act until there is a scientific consensus on the risk associated with your product? Doing so opens you up to attacks by consumer watchdogs, activist groups and lawyers. They will accuse you of corporate irresponsibility and demand to know why you did not act sooner.

Or do you accept acting on risks that have not been proven? Will you hurt your business by putting warnings on your product about risks that might not exist?

That will probably not sit well with the shareholders and the board. Worse, putting a warning on a product might make you an even more attractive target for legal action by activists and tort lawyers. ‘Damned if you do, damned if you don’t’, as the old saying goes.

Black swans

In The Black Swan: The Impact of the Highly Improbable, author Nassim Taleb explains that our models for understanding and measuring the world are woefully inadequate. He argues that statistical models underestimate the occurrence of ‘black swans’ – sudden, shocking, low-probability, high-impact events like the financial crisis or the terrorist attacks of 9/11.

His findings might well apply to Wall Street. But in fields like health and environmental hazards, we are reaching the point where overestimating the occurrence of black swans is a much greater problem than ignoring the possibility of their existence.

At that point, we are basing decisions on emotion, not on science and facts. When politics becomes ‘post-truth’, we rightly deplore that. When it happens in science, we should not accept it either.