If You Think Modelling the Future is Dangerous, Try Modelling the Past
When bovine spongiform encephalopathy (BSE) hit the U.K. in the 1980s, computer modelling predicted that without intervention new variant Creutzfeldt-Jakob disease (vCJD) would kill tens of thousands of people. The U.K.’s response to BSE was to slaughter millions of cows as a way of halting the potential spread of the disease into the human population. In the end, about 3,000 people have died of CJD within the last 30 years, and of these only 178 can be attributed to vCJD.
Was the modelling useful?
Surely yes! The modelling saved thousands of lives due to its timely prediction of the impact of BSE on the U.K.’s population if we didn’t do something. And luckily, we did do something, and because we did something there were only a couple of hundred deaths from vCJD… although the cows weren’t so lucky.
Welcome to the world of the counterfactual model and predicting what might happen if we ‘do nothing’.
Fast forward to COVID-19 and we found ourselves yet again in a situation where computer modelling was predicting thousands of deaths from a new disease unless we did something. And yet again, these predictions became part of the rationale for a series of major national interventions to curb the threat from this new disease, only this time the cows got off lightly and it was the humans who suffered.
Of course, it’s much more likely that the modelling predictions were wildly inaccurate, but it’s easy to see why they carried such sway in times of uncertainty. Their apparent sophistication, and general impenetrability to all but a select few, together with the fact that their outputs were promoted by learned individuals with strings of letters after their names, created the illusion of providing powerful future knowledge… an objective crystal ball that should underpin difficult policy decisions. The problem is that, setting aside the fact that some models have major technical problems, they suffer from two much more basic and fundamental problems that should limit their influence.
The first is the obvious fact that our knowledge of biology is incomplete, especially that of a novel disease, and so all models of such complex biology will also be incomplete. We may be aware of some gaps, and so make educated guesses in an effort to plug them, but we cannot know where all these gaps are, or even that a given gap exists at all. We may be completely ignorant of key relationships or important variables in the system. There may well be important ‘unknown unknowns’ in the biology of which we are blissfully ignorant and so, by definition, things we cannot model. It therefore falls to the modeller to fill the gaps (that they are aware of) and to be the arbiter of ‘truth’ in the disease biology. As such, these models are built upon the assumptions of the modeller, and although many assumptions may be non-contentious and generally agreed on, this is not true for all of them, especially when it comes to modelling complex biological systems where knowledge and data may be sparse, inconclusive, or even contradictory. It is up to the judgement of the modeller to decide what to include in their model, and this judgement will reflect the modeller’s own opinions and biases. This in turn will affect the predictions of the model itself.
So, far from being objective ‘crystal balls’, models reflect the prejudices, knowledge, ignorance, and preconceptions of the modeller. Meaning that regardless of who builds them, and regardless of how many wonderful graphs and pictures they spit out, unless their predictions are actually tested and found to have some element of truth, such computer models are effectively the codified opinions of the modeller. And opinion is the weakest form of clinical evidence.
The second important point is that counterfactual models don’t tell us what we should do, only what might happen if we don’t.
Something that was striking about the use of models in the COVID-19 briefings was the fact that they were not used to make predictions of what would happen but to describe what wouldn’t. This was because, just as with vCJD, models were used to paint a picture of a future, caused by ‘doing nothing’, which we were going to avoid by ‘doing something’, and as such they were not predictions of the outcomes of our actions, but of our inaction. And just like when we killed all the cows, we certainly did something to avoid the futures predicted by these pieces of computer code: lockdowns, masks, screens, school and business closures, social distancing… the whole COVID-19 dance. Having ‘done something’ we then saw that none of the modelling predictions came true and so, using the argument I used above for vCJD, concluded that by ‘doing something’ we successfully avoided the thousands of deaths that would have resulted from ‘doing nothing’. Time for drinks at Number 10!
Such fallacious circular reasoning seems to abound when it comes to the use of these kinds of models. First, we assume that such counterfactual models have some level of truth in prediction, even though the models and their predictions are untested and unvalidated. Second, we assume that in ‘doing something’ we are actually avoiding the outcome predicted in the ‘doing nothing’ scenarios, even if there is no relationship between the proposed ‘doing something’ and what is codified in the model. Finally, if the ‘doing something’ results in real-world outcomes that are better than those predicted by the ‘doing nothing’ modelling, we take this as evidence to implicitly validate the modelling predictions we used to justify ‘doing something’ in the first place. Counterfactual modelling would therefore appear to be unique amongst scientific disciplines because it creates unproven hypotheses that we do not want to explore, encourages the use of interventions that it does not explicitly predict will be effective, and makes predictions that are confirmed by not testing them. This is called ‘following The Science’ by policy makers.
The fact is that counterfactual models of COVID-19 are not a rationale for lockdown or any other intervention because they do not predict the impact of these interventions. They model NO intervention; it is policy makers and bureaucrats who decide what should be done. It is the very fact that such models make such dire predictions which provides the strong incentive to not test them by doing nothing. In fact, if one thinks about it, there is a perverse incentive for counterfactual models and modellers to paint the worst possible picture. After all, nothing substantial happening is no justification for action, whereas the more dire the predicted outcomes, the more likely we are to do something to avoid them and the greater the claim we can make of lives ‘saved’ as a result. Indeed, it turns out that’s why SAGE didn’t bother modelling anything other than the reasonable worst case.
That said, counterfactual models and modellers are still making predictions and so all we need to do to test them is to do nothing and wait and see what happens; we turn the counterfactual into the factual. Which is essentially what happened after ‘Freedom Day’ and then during Christmas 2021 when the dire predictions of the modellers were ignored and in both cases the scenarios of thousands of deaths and hospitals overwhelmed failed to come to pass. The COVID-19 models were proven to be wrong, and if they were wrong then, then they were always wrong.
Predicting the Past
There is an old saw that says “prediction, especially about the future, is hazardous”, and it is certainly the case that the modelling predictions of COVID-19 futures have been truly hazardous for us all. But as we try to put COVID-19 behind us, a new form of counterfactual prediction is emerging which might be even more hazardous, and that is using models to predict what might have happened in the past.
When Professor Neil Ferguson stated last year that if the national lockdown had been instituted even a week earlier “we would have reduced the final death toll by at least a half”, one assumes he was making this statement based on the output of a model. In a similar vein, modellers at Imperial have also argued that Sweden should have adopted lockdowns to save lives and that the COVID-19 vaccines have saved 20 million people from an untimely death. We have also seen modelling papers being published that support use of lockdowns over focused shielding efforts by predicting what would have happened if we had made these more modest interventions. We can expect more and more of the same.
Unlike the ‘do nothing’ models of counterfactual predictions, such re-imaginings of the past aim to produce new counterfactuals based on different assumptions about what could have been done. Obviously, such models also suffer from the problem of the incompleteness of biological knowledge, but they have an even more fundamental scientific issue and that is that the ‘predictions’ they make are intrinsically untestable. After all, we cannot go back in time and see if the modellers were right about what would have happened if we had done something differently. So, any claim that these models are uncovering a scientific truth or proving (or disproving) anything is false. Because what defines science as a practice is the very fact that hypotheses can be tested, and their validity or invalidity determined… beautiful theories can be slain by ugly facts. It doesn’t matter how much science goes into a model, if the predictions and simulations it produces are untestable then they are, and will always be, just an opinion, a point of philosophy, an article of faith. They don’t ‘prove’ anything.
There is also huge potential for circularity and confirmation bias in developing models of the past. Imagine that you wished to model the impact lockdown had on SARS-CoV-2 spread in the pandemic: how would you do this? One way would be to assume (because of less social mixing) that there was less transmission in lockdown, and so adjust the ‘R’ number to be lower in lockdown than without. Lo and behold, when we now model lockdown vs. no lockdown, lockdown produces a better result due to a more rapid decline in infections. But this is completely circular: we assumed lockdown reduced transmission and our model then shows less transmission. Likewise for any other intervention (pharmaceutical or not), the temptation is to assume its level of effectiveness… and then model its effectiveness. Similarly, for approaches that the modeller does not like (for example, focused protection), assumptions about lack of effectiveness are also made… neatly demonstrating that more modest interventions would not have been as effective. Such circular arguments in these modelling efforts may not be as obvious as my example here, but it’s easy to see how the assumptions and biases (whether explicit or implicit) of the modeller can be baked into the model well before they even think of hitting the ‘run’ button.
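The circularity described above can be made concrete with a toy projection. The sketch below is purely illustrative: the function name, the R values and the generation count are all hypothetical choices made for this example, not figures from any real COVID-19 model. The point is that the modeller's assumed R number is the only thing separating the two scenarios, so the "finding" that lockdown works is baked in before the model runs.

```python
# Toy illustration of the circular argument: the assumed R number
# IS the conclusion. All numbers here are hypothetical.

def simulate_infections(r_number, generations=10, seed_cases=1000.0):
    """Project cumulative infections, assuming each case causes
    r_number new cases in the next generation."""
    cases = seed_cases
    total = 0.0
    for _ in range(generations):
        total += cases
        cases *= r_number  # the modeller's assumption drives everything
    return total

# The modeller assumes lockdown lowers R (say, from 3.0 to 0.9)...
no_lockdown = simulate_infections(r_number=3.0)
lockdown = simulate_infections(r_number=0.9)

# ...and the model duly "shows" that lockdown works.
assert lockdown < no_lockdown
```

Nothing in this code tests whether lockdown actually lowers transmission; the comparison merely echoes the assumption fed in, which is exactly the circularity the article describes.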
As academic exercises, the views of modellers about what could have happened – both about the future and the past – are perhaps interesting, but the danger lies in the way they’re used by policy makers. Just as counterfactual predictions were used to justify ‘doing something’ during the pandemic, so these retrospective re-imaginings of the past are used to validate that the ‘something done’ was the right course of action. However, unlike modelling the future, which is hard enough to test, there is absolutely no way to demonstrate that such modelling output is either right or wrong because, in the absence of a time machine, we cannot go back and test their predictions. There are no ‘Freedom Days’ – which provide us with a crude way of testing the dire predictions of future modellers – just an endless series of imagined what-ifs. The reality is that such models are more akin to the computer renditions of historical places, or the fantastical simulations of distant planets found in films and games, which exist as binary code but are not anywhere we can visit and explore… they look solid on the screen, but their foundations are not real and their walls are pixel thin.
The trouble is that when it comes to COVID-19 the outputs from such modelling efforts can be pounced upon and reported as ‘Scientific Truth’, especially if they support the perceived wisdom or accepted narrative. So, we intend to spend many hours, and lots of money, trying to understand our responses to the pandemic, but when interrogating the role of modelling and modellers we allow the predictions of the models themselves to become the ‘evidence’ of their validity. Evidence of alternative histories avoided, whether old predictions of a future that did not happen or new predictions of a different past. Evidence that we did the right thing or should have done it harder or faster or longer. Evidence that other approaches would have been much worse. Evidence that costly, unsafe, intrusive, and ineffective interventions worked for COVID-19 and so should be used again in the future.
Predicting the future is indeed hazardous, but it might turn out that when it comes to COVID-19 predicting the past is far, far more dangerous.
George Santayana is the pseudonym of an executive working in the pharmaceutical industry. Thanks to Mildred for critical reading and comments on this article.
To join in with the discussion please make a donation to The Daily Sceptic.
Profanity and abuse will be removed and may lead to a permanent ban.
Well done Mildred.
I recommend the writings of the economist John Kay on the credibility of modelling.
https://committees.parliament.uk/oralevidence/464/pdf/
Quote
Models are only useful to provide an understanding of the factors at play.
Quote
A model that focuses on the key parameters is a lot more useful than a more complicated one that tries to bring in everything.
Quote
The trouble is that when it comes to COVID-19 the outputs from such modelling efforts can be pounced upon and reported as ‘Scientific Truth’, especially if they support the perceived wisdom or accepted narrative.
To DS editorial –
We need a concise piece on epistemology.
Please produce content relating to how we arrive at the truth.
It will hopefully discuss the scientific method and the different layers of credibility of evidence involved in published, peer-reviewed science.
Computer models are very, very low in this hierarchy of credibility of evidence.
But we must establish a common language and a common understanding across the community of sceptics.
A cracking article but there is an omission.
“The first is the obvious fact that our knowledge of biology is incomplete”
“The second important point is that counterfactual models don’t tell us what we should do, only what might happen if we don’t.”
And the third is ‘the modeller.’
In Neil pantsdown Ferguson we had the man with a track record that would have embarrassed Stalin. This is the guy who has scored 100% in one area of his career – turning out models that are so wrong they would embarrass a Tory government and for sheer corrupt audacity rank alongside the predictions of Al Gore.
I do feel this should be considered whenever anybody is foolish enough to think that consulting a crystal ball gazer such as pantsdown has any merit.
If our politicians had the faintest grasp of common sense then they should have questioned how someone with two physics degrees, who was using schoolboy-level coding to produce wildly inaccurate forecasts, could be allowed to describe themselves as an epidemiologist and be given the time of day.
I wrote to my young Tory MP at length on the 2nd day of lockdown in March 2020 and described in detail the uselessness of Ferguson over 20 years of modelling. As an ex-dairy farmer, I was aware of his BSE, F&M and Swine Fever predictions and demonstrated that he was a charlatan who had fooled four different PMs including Boris.
Ferguson was driven by the fame, the finance to Imperial and his ridiculous wish to be recognised when he is in fact a sad nerd who uses the wrong code and then interprets the results incorrectly. Where are the checks and balances in the Government – we are governed by intellectual pygmies.
I’m reminded of the ‘elephant trumpet’ joke. The short version goes:
A man starts a new job in a big office. All goes well up to lunchtime, when all of a sudden a guy comes parading through the office blowing a trumpet loudly. The man asks the person sitting next to him:
What’s with the trumpet guy?
The other person replies: That’s to scare off the elephants.
But there aren’t any elephants!
Yes I know — it works really well.
And the 82 year old who applies to be a lumberjack’s mate. The lumberjack suggests the guy might be getting on a bit. The old guy explains his years of felling trees, including the last five years spent in the Sahara Forest.
“You mean Desert.”
“Ach, now!”
We can look back and test the predictions to the extent that other countries and states in the USA did much less in the way of restrictions e.g. Sweden and South Dakota. I remember reports published on this site that showed no correlation between the stringency of restrictions and outcomes in terms of deaths attributed to COVID. My assumption is that all the restrictions were ineffective and any attempt to suggest otherwise using modelling is a fraudulent exercise.
Computer Models faking the future
If anyone needs someone to talk to we meet every Sunday.
Stand in the Park Sundays 10.30am to 11.30am
Make friends & keep sane
From 1st January 2023
Elms Field (near Everyman Cinema and play area)
Wokingham RG40 2FE
“Our knowledge of biology is incomplete”.
We know the square root of F. all.
And our knowledge of the immune system is even less.
Even worse is misunderstanding it – see Sir Macfarlane Burnet (incidentally a Nobel Prize winner in immunology/virology).
No matter though, our brilliant genetic engineers will rescue us from all known pathogens.
Queue up for your regular mRNA therapies. Idiots.
It will be the end of humanity.
“Hell is empty and all the devils are here”. (Been reading a bit of Shakespeare).
There are more things in heaven and earth, Sforzesca, than are dreamt of in your philosophy.
Yep, as the bard didn’t put it, we know the square root of fuck all.
But if the proles knew the true ignorance of those in power and their chosen experts, there’d be anarchy! ANARCHY, I TELL YA! IT’D BE HELL ON EARTH!
…or maybe people would just stop believing in White Knights, and learn a bit of self-reliance…
Great article. It goes without saying that the weaknesses, biases and circular reasoning identified here are just the same as those afflicting climate modelling (particularly in ‘hindcasting’), and that those models are abused by academics and policy makers for the same reasons.
But you get the same response if you go against the climate change narrative. “Did you know that Arctic sea ice is increasing?” “Ohh what conspiracy sites have you been reading again Rob?”.
It would be very amusing if it wasn’t so worrying how accepting many people seem to be…
‘Report 9’, the Imperial College London paper famous for forecasting half a million deaths, was published on 16 Mar 2020. It was demonstrably wrong on the day it was published. The ‘Results’ section (page 6) opens with this paragraph: In the (unlikely) absence of any control measures or spontaneous changes in individual behaviour, we would expect a peak in mortality (daily deaths) to occur after approximately 3 months (Figure 1A). In such scenarios, given an estimated R0 of 2.4, we predict 81% of the GB and US populations would be infected over the course of the epidemic. Epidemic timings are approximate given the limitations of surveillance data in both countries: The epidemic is predicted to be broader in the US than in GB and to peak slightly later. This is due to the larger geographic scale of the US, resulting in more distinct localised epidemics across states (Figure 1B) than seen across GB. The higher peak in mortality in GB is due to the smaller size of the country and its older population compared with the US. In total, in an unmitigated epidemic, we would predict approximately 510,000 deaths in GB and 2.2 million in the US, not accounting for…
Absolutely superb and many thanks.
Let’s get this straight: all that charlatan Ferguson did was to ‘model’ a gigantic chain letter passing around the UK, and assume that 0.9% of recipients died.
That’s it. It was garbage, and anyone with a handful of brain cells should have realised that back in March 2020. The man is a fraud. Only the numerically illiterate could have been taken in by it.
Back then I knocked up my own ‘model’ in a single Excel spreadsheet – the concept depicted by Ferguson really is that simple. Just for fun I started playing around with the variables, to see how they affected the outputs. An absolutely fundamental input, which affected the outputs hugely, was the proportion of the population that was vulnerable to infection. But Ferguson assumed 100% – clearly nonsense, as should have been understood at the time.
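The commenter's spreadsheet experiment can be sketched in a few lines. Everything below is a hypothetical reconstruction for illustration: the function name, the 81% attack rate and the 0.9% fatality rate are example values (the latter taken from the comment above), not figures from any actual model. It shows why the susceptible fraction is such a sensitive input: deaths scale with it linearly.

```python
# Hypothetical back-of-envelope projection: deaths scale directly
# with the assumed susceptible fraction. Illustrative numbers only.

UK_POPULATION = 67_000_000
INFECTION_FATALITY_RATE = 0.009  # the 0.9% figure mentioned above

def projected_deaths(susceptible_fraction, attack_rate=0.81):
    """Deaths if attack_rate of the susceptible pool gets infected."""
    infected = UK_POPULATION * susceptible_fraction * attack_rate
    return infected * INFECTION_FATALITY_RATE

# Assume everyone is susceptible (as the commenter says was done):
deaths_all = projected_deaths(1.0)      # roughly 488,000
# Assume half the population has some prior immunity:
deaths_half = projected_deaths(0.5)     # exactly half as many
```

Because the relationship is linear, halving the assumed susceptible fraction halves the projected death toll, which is the commenter's point about how much this single unvalidated input drives the headline number.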
Yes, models have lots of pitfalls, as laid out in this article.
But Ferguson et al were frauds and charlatans, or worse, and they were dealing with politicians and public ‘health experts’ who were frauds and charlatans, or worse. Most had agendas which are even now only slowly being revealed.
It’s important to keep a clear head about this.
Enjoyable read. But the article is wrong in one aspect: Ferguson et al didn’t just model counterfactuals and let politicians decide what to do to prevent a catastrophe. They modelled different scenarios based on different sets of countermeasures with an assumed effectiveness at infection reduction. Which boils down to prescribing policy indirectly, because had politicians chosen to do something other than implement the maximum-safety-for-everyone set of NPIs, they would have been slaughtered by the MSM and the opposition for killing people for the economy (which actually happened whenever a restriction was about to be lifted in the UK; DeSantis was and is also accused of this, but he seems to be less of a spineless jelly than our former PM).
Another important aspect of the COVID modelling was that it was assumed measures would have no detrimental side effects and were free to implement (or rather, implementing them would be cheaper than what would have happened without them).
BTW – I assume that picture of Ferguson is photoshopped?
On the subject of modelling, consider: Global impact of the first year of COVID-19 vaccination: a mathematical modelling study, published in The Lancet Infectious Diseases, and reported on the Australian ABC as: COVID vaccines saved 20 million lives in the first year, but we could have done better, scientists say.
The authors are affiliated with the MRC Centre for Global Infectious Disease Analysis, Imperial College London…and, just by coincidence…principal investigators with the MRC Centre for Global Infectious Disease Analysis, Imperial College London include Neil Ferguson and Roy Anderson…
Neil Ferguson is the lead author on Imperial College Report 9, published in March 2020, which inferred Covid-19 was similar to the ‘1918 influenza pandemic’, and recommended ‘suppression’ of ‘the virus’ (aka lockdown/restrictions) “until a vaccine becomes available”. It wasn’t disclosed on Report 9 that Neil Ferguson is funded by the Bill & Melinda Gates Foundation, arguably the biggest promoter of vaccine products in the world.
Ferguson et al’s Report 9 influenced the Doherty modelling in Australia, which put us into lockdown in March 2020.
The Lancet Infectious Diseases article prepared by Imperial College authors says “COVID-19 vaccination has substantially altered the course of the pandemic, saving tens of millions of lives globally. However, inadequate access to vaccines in low-income countries has limited the impact in these settings, reinforcing the need for global vaccine equity and coverage”.
They would say that wouldn’t they…
Re Neil Ferguson being funded by the Bill & Melinda Gates Foundation… In November 2020, I submitted a BMJ rapid response asking Who are the members of SAGE? There must be transparency and accountability for coronavirus policy. In December 2020, The BMJ published the article: Covid-19: SAGE members’ interests published by government 10 months into pandemic. This BMJ article includes a link to the SAGE COVID-19 Register of Participants’ Interests, which notes interesting information about Neil Ferguson which wasn’t disclosed in his Imperial College Report 9, e.g. he’s Principal Investigator, Bill and Melinda Gates Foundation and Gavi, the Vaccine Alliance Grant – Vaccine Impact Modelling Consortium…etc… Considering Ferguson et al’s Report 9 recommended ‘suppression’ (aka lockdown/restrictions) of global society pending the arrival of a vaccine, it would have been useful for it to be disclosed at the time that Ferguson was funded by the Bill and Melinda Gates Foundation, arguably the world’s biggest promoter of vaccine products. By the way… The article mentioned in my previous comment, published in The Lancet Infectious Diseases, was funded by the Schmidt Science Fellowship in partnership with the Rhodes Trust; WHO; UK Medical Research Council, Gavi, the Vaccine Alliance; Bill & Melinda Gates Foundation; National Institute for Health Research; and Community Jameel. Is there anyone in the medical and scientific establishment not in…