The UKHSA’s ‘Evidence’ for Covid Restrictions is a Complete Mess

You may recall that we undertook to review the 100 models forming the backbone of the UKHSA’s latest offering: the mapping review of available evidence. Remember, UKHSA did not extract or appraise the evidence because it does not have the resources. This drew expressions of mirth among our readers. We agree it’s a bad joke – a very bad one – considering this ‘evidence’ is what the UKHSA says justified restrictions that led to stories such as Pippa Merrick’s, which unfortunately are not the exception. Earlier versions of the justification were a bad joke, too. What follows is no better.

Diligently, as promised, we downloaded the 100 papers defined as “models” by UKHSA (please do not ask Hugo Keith KC what is meant by that term).

Of each, we are asking the following questions:

  • What is the non-pharmaceutical intervention (NPI) being assessed (e.g. is it an NPI, and is it defined and described?) and in what setting? (e.g. community, hospital, homes etc.)
  • What is the source for the effect estimate? (to model its effects, you need a source of data, i.e., what does it do?)
  • What is the size of the effect? (such as risk reduction of SARS-CoV-2 infection)
  • What is the case definition? (how did they define a case of COVID-19?)

Straightforward, we thought. 

Anything but, we are finding out. 

First of all, the papers are full of jargon, as they are mainly written by mathematicians, or at least that is what they say they are. Secondly, most of them come to the same conclusion: lockdown harder, do as I say, or you or your auntie (or both of you) will die.

The most disconcerting answers we are getting are those to the second question: what is the source for the effect estimate?

In a classical model, you start by describing the problem – in this case, the number of cases and complications in a population, transmission patterns and perhaps age breakdown. If the second part is about how to stop or slow down the spread, hospitalisations, deaths and so on, then to model the ‘how to’ credibly you need facts about what the intervention you are modelling (say, distancing) is supposed to achieve: introduced in this or that setting, it is likely to diminish the risk of infection by Z%. The numerical estimate for Z should be surrounded by a range of probabilities (confidence intervals), giving the boundaries X and Y within which the observed effect on SARS-CoV-2 infection is likely to lie around your point estimate Z. So you take Z and stick it in your model to see what effect Z would have, and then you can use X and Y to play ‘what if’.
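The ‘what if’ step can be sketched in a few lines of code. This is purely illustrative: the baseline reproduction number, the effect estimate Z and its bounds X and Y below are invented numbers, not figures from any study or from the models under review.

```python
# Illustrative only: Z, X, Y and the reproduction number are invented.

def projected_infections(baseline_r: float, reduction: float,
                         generations: int, seed_cases: float = 100.0) -> float:
    """Project case numbers after a given number of transmission
    generations, applying a fractional risk reduction (Z) to the
    baseline reproduction number."""
    effective_r = baseline_r * (1.0 - reduction)
    return seed_cases * effective_r ** generations

# Point estimate Z with hypothetical confidence bounds X and Y.
z_point, z_lower, z_upper = 0.30, 0.15, 0.45

for label, z in [("X (lower)", z_lower), ("Z (point)", z_point), ("Y (upper)", z_upper)]:
    cases = projected_infections(baseline_r=1.5, reduction=z, generations=10)
    print(f"{label}: assumed reduction {z:.0%} -> projected cases {cases:,.0f}")
```

Running the model across the interval rather than the point estimate alone is what turns a single projection into the ‘what if’ exercise: the spread between the X and Y runs shows how sensitive the output is to the uncertainty in the effect estimate – which is why an unsourced Z makes the whole exercise moot.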

The crucial word is ‘credible’ because these models (are they projections, scenarios, predictions, or scenarios upon which predictions can be projected – ask Hugo Keith KC for a simple answer) have been used to change people’s lives. Or maybe some of them were retrofits to justify something already done by the Robert Maxwell school of ethics.

Credible would mean an estimate from one or preferably more well-designed studies with a protocol and clear case definitions. As the focus is the U.K., the data should come from the U.K. or at least a similar setting.

Well, here is an example of the sources of ‘parameters’ used in one quite well-publicised model:

Of the 11 assumptions underlying the model, eight are unsourced; one comes from a systematic review without an infectious case definition, one from an economic model, and one from a case-control study.

Extraordinary, you will say: this seems to be the universal method known as BOPSAT (a Bunch Of People Sitting Around a Table). Yes, it is, except that the model, in fact, was about mass community testing for SARS-CoV-2 by lateral flow devices (LFDs) with not a shred of non-pharmaceutical interventions in sight. LFDs are tests, not interventions that can slow or stop the spread of anything.

And these are some of the minor problems we face, so it takes time. Perhaps we should ask Mr. Keith for help?

Dr. Carl Heneghan is the Oxford Professor of Evidence Based Medicine and Dr. Tom Jefferson is an epidemiologist based in Rome who works with Professor Heneghan on the Cochrane Collaboration. This article was first published on their Substack, Trust The Evidence, which you can subscribe to here.

20 Comments
James.M
2 years ago

Society is being ruled (ruined) by geeks who crunch numbers without a shred of common sense as to whether their interpretation is valid. While science is being debased by garbage-in, garbage-out computer modelling, the politicians lap it up because they can say their policy decisions are based on ‘the science’. Whether it is climate change or viral pandemics, as long as science is controlled by vested interests it matters not a jot that credible scientists don’t get to express their opinion.

huxleypiggles
2 years ago
Reply to  James.M

A more realistic interpretation of “modelling” is that those asked to produce the models are given very strong nods and winks, along with appropriate brown envelopes, indicating the sort of results that are required.

JeremyP99
2 years ago
Reply to  James.M

Scientism. The dogma of the Technocrat. Relationship with the real world? None

zebedee
2 years ago
Reply to  James.M

I think you’ll find that a lot of the epidemiological models are merely Garbage Out. At least I’ve read a lot of papers that are.

Paramaniac
2 years ago
Reply to  James.M

Casinos make money because they win 51% of the time and the punters win 49% of the time. Over long periods they clean up because of this simple equation.
If computer models were even slightly accurate at predicting the future, then some bright spark would have applied them to the biggest and potentially most lucrative casino of all, the stock market, and become the richest person on earth.
The fact that they haven’t done this tells you all you need to know about computer models.

The Real Engineer
2 years ago
Reply to  Paramaniac

Whilst your example is interesting to consider, you are not considering a “real or proper” computer model. In many areas of Engineering we use computer models, for example to design a bridge. We need to calculate the forces in many items of the structure under many conditions of loading. The answer must be tolerably accurate because failure would be disastrous, and very expensive. To make the model we use several inputs: basic mechanics, material strengths, expected loads, etc. Once modelled, we build the structure and then fix sensors to all the parts which have been calculated and measure the actual values. Then, if they are not exact, go back and adjust the model until they are. The adjustments are not arbitrary constants to make it right; they must be based on the inputs only. A few rounds of this on various projects and you have a trusted model, which is known to be accurate. Actually the most accurate are probably for electronics design, but they are highly technical, hence the simple description above. Models of the kind you describe are not built like this; they are largely guesswork, and there is never any “go back and adjust the model” to…
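The predict-measure-adjust loop this comment describes can be sketched in miniature. The linear bridge-deflection model, the stiffness parameter and all numbers below are hypothetical, chosen only to show the shape of the calibration: the quantity being adjusted is a physical input, not an arbitrary fudge factor.

```python
# Hypothetical example: calibrating a physical input (stiffness) so a
# simple structural model reproduces a sensor measurement.

def predicted_deflection(load_kn: float, stiffness_kn_per_mm: float) -> float:
    """Idealised linear model: deflection (mm) = load / stiffness."""
    return load_kn / stiffness_kn_per_mm

def calibrate_stiffness(load_kn: float, measured_mm: float,
                        lo: float = 10.0, hi: float = 500.0,
                        tol: float = 1e-6) -> float:
    """Bisect on the stiffness input until the model matches the
    measured deflection (deflection falls as stiffness rises)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if predicted_deflection(load_kn, mid) > measured_mm:
            lo = mid  # too flexible: prediction overshoots the sensor
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Sensor reads 2.0 mm under a 100 kN test load.
calibrated = calibrate_stiffness(load_kn=100.0, measured_mm=2.0)
print(f"calibrated stiffness: {calibrated:.2f} kN/mm")
```

The point the comment makes survives in the sketch: what gets adjusted is itself one of the model’s physical inputs, so the calibrated model stays interpretable rather than becoming a curve-fit.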

iconoclast
2 years ago

Sadly, The Real Engineer, as attractive as it may seem, an engineering approach is a specific, extremely narrow application of reductionist science which cannot work outside the tight confines of engineering.

I explained why here and here.

Reductionism always fails in the face of complexity.

Imagine what would happen to your bridge design if you could not test it thoroughly before putting it to use.

And look at what happened to the ‘Blade of Light’ footbridge in London connecting St Paul’s and the north side of the Thames to the Tate Modern on the south side. The oscillations caused by footfall and resonance were completely unexpected despite all the design and testing and ‘feedback loops’.

Engineering is a limited tool and its lessons cannot be applied universally to overcome complexity in all its manifestations.

iconoclast
2 years ago

In the context of “real computer models”, I presume you are aware that it is impossible to prove that anything but the most simplistic of software applications will work as intended?

So the software you use to prove your designs cannot itself be proven to be entirely reliable.

In software engineering, software – particularly complex software – is constantly being maintained to eliminate deviant performance – ‘bugs’ – as they are discovered, first in testing and then later during live use.

Let us hope that no bridge design will ever fail because of the computer software used to model it.

But as your examples demonstrate, even the bridge in the modern day is not put into live use without some form of monitoring and feedback to ensure it is operating within its expected parameters of operation.

The Real Engineer
2 years ago
Reply to  iconoclast

I assume you know nothing of Engineering, or software. Taking software first, one only gets software that has bugs because it has not been tested properly. This is not eased by poor computer coding languages like C or C++, which have no useful type checking, leading to stupid errors. However, programmers like them because anything can be an integer on one line and the same variable, say, text on the next. Software testing, in my experience of 40 years, takes at least 10 times as long as writing it. Thus no one can be bothered! “Let the customers find the bugs” is widely the mantra. My bridge example was deliberately much simplified, and your understanding is therefore misled. The point of structural design is that we have been doing it since Brunel, and understand every first principle of the job. We do not monitor every design for compliance with the model because we have vast experience of the modelling and it is essentially completely correct. Now to confines and reductionism. If one wishes to produce a predictive model, it is essential that the situation to be modelled is completely defined from a scientific point of view. I suggest that your…

iconoclast
2 years ago

“I assume you know nothing of Engineering, or software. Taking software first, one only gets software that has bugs because it has not been tested properly.”

Sadly you are wrong. When I taught Masters students in Software Engineering, program proving was problematic and it remains so. It is impossible to prove that any software will work as it was designed to do.

It is similarly impossible to thoroughly test most software. That is why even the software people use on their computers and on their phones is constantly being updated to correct ‘bugs’.

If you knew anything about software you would know that and I would not be wasting my time now educating you.

Sadly your understanding of software, science and engineering is blinkered at best and the views you express are sadly wrong.

But who am I to tell you. And despite my doing so I fear it will not make any difference.

Sforzesca
2 years ago

Would the treatment of patients be better if computers did not exist?
Discuss.

No codes, no programmes, no models, no algorithms, no instant and universal protocols, medical bureaucrats having to do hard graft, no BLAST sequencing, no models of “virus”, no genetic engineers – and no mRNA jabs.

On the other side, hard graft in understanding more about how and why the immune system works, how “disease” actually spreads and which treatments work and doctors being able to use their own initiative without being beholden to health management consultants.

The Inquiry has been captured and is beholden to the computer which results in the mantra of “things would have been better if we’d locked down sooner and harder.”
The next manufactured pandemic = interesting times ahead.

zebedee
2 years ago
Reply to  Sforzesca

Basic modelling says that flattening the curve results in a longer epidemic with lower acquired immunity, so when you lift measures you just get another wave.
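This claim can be illustrated with a toy discrete-time SIR model. All parameters here are invented for illustration, not fitted to any real epidemic: transmission is halved for the first 150 days, then the measures are lifted.

```python
# Toy SIR model, illustrative parameters only: suppressing transmission
# flattens the curve but leaves more susceptibles, so lifting measures
# allows a further wave.

def run_sir(beta_schedule, gamma=0.1, days=400, n=1_000_000.0, i0=100.0):
    """Discrete-time SIR. Returns the daily prevalence curve and the
    final attack rate (fraction of the population ever infected)."""
    s, i, r = n - i0, i0, 0.0
    prevalence = []
    for day in range(days):
        beta = beta_schedule(day)
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        prevalence.append(i)
    return prevalence, (r + i) / n

unmitigated, attack_a = run_sir(lambda day: 0.25)
# Halve transmission for 150 days, then lift the measures.
mitigated, attack_b = run_sir(lambda day: 0.125 if day < 150 else 0.25)

print(f"peak prevalence, no measures:     {max(unmitigated):,.0f}")
print(f"peak prevalence while suppressed: {max(mitigated[:150]):,.0f}")
print(f"peak prevalence after lifting:    {max(mitigated[150:]):,.0f}")
print(f"final attack rates:               {attack_a:.0%} vs {attack_b:.0%}")
```

In this sketch the suppressed run has a far lower peak while measures last, but because little immunity has accumulated, lifting them produces a second wave and a final attack rate close to the unmitigated one – exactly the commenter’s point.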

JohnnyDownes
2 years ago

‘Dear patient,
Due to the rise in COVID cases in our area. We must encourage you to wear a mask when coming to the surgery. Help us to protect all of us’
That’s a text message I received yesterday.
All this bloody nonsense is starting again, they haven’t learned a damn thing.

Jabba the Hut
2 years ago
Reply to  JohnnyDownes

Would you want to see a ‘doctor’ who is that scientifically illiterate?

The Real Engineer
2 years ago
Reply to  Jabba the Hut

All the guff like masks comes from the DoH. They are clueless and should be sacked. We cannot afford them, and they have all the power over Practice Managers, who simply parrot the circulars in case of sanctions for not doing so.

iconoclast
2 years ago

There is a fundamental scientific reason why modelling does not work: complexity. Classical reductionist science fails when confronted by complexity. Classical reductionism is exemplified by the classical “The” scientific method – testing x against y whilst [theoretically] holding all other variables invariant throughout the testing. The “The” scientific method fails abysmally when all other variables cannot be held invariant. It has only ever worked in specific cases in the “exact” sciences like inorganic chemistry and in some branches of physics.

The problems complexity brings are manifold. It is impossible to test to try to establish a theory, because the results will include results which do not conform to the theory even if they include results which do – this is typical in the biological sciences, which commonly have many variables in living organisms that cannot be identified, let alone controlled and held invariant. This means most science theories are invalidated [falsified] by all the aberrant results occurring alongside the conforming results – and it only takes one – the one white raven or the one black swan.

Another problem is that it is impossible to predict outcomes. One can at best only say what might be the probability of a particular…

DevonBlueBoy
2 years ago
Reply to  iconoclast

JK Galbraith, an economist of some repute, said there are only two types of forecasters: those who are wrong and those who don’t know they are wrong.
Another, earlier example of the dangers of modelling was the financial crisis of 2007/8.
How on earth could the mental incompetents who are supposedly in charge of the country be unaware of these recent examples of the uselessness of modelling in general – and, in Ferguson’s case, of gross errors in his previous utterings – and still choose such a useless tool?
Never mind: if it all goes wrong we’ll only screw up the economy and f**k up the education of a generation of children.

The Real Engineer
2 years ago
Reply to  iconoclast

Iconoclast, please read my post on models. Whilst you are correct to talk about these “social” models, Engineering ones are different and you depend on them a lot. The computer in front of you was designed using models of unbelievable complexity to make the chips, the PCB etc. It is not complexity that is the problem in a general way; it is that the models (particularly climate models, LOL) are not adjusted or written to produce accurate results via a feedback loop from reality. Simple!

iconoclast
2 years ago

Hi, it might help if you review what I wrote in my immediately prior comment. I am not addressing “social models”. I am addressing every branch of the sciences in which classical reductionism does not work and indeed cannot work. Forecasting weather is a science and not a “social model”. It is also not like a human-manufactured complex machine. The biological sciences are not “social models”. These are sciences in which the variables cannot be controlled to carry out reductionist experiments.

Drug trials are a typical example. They are carried out on living organisms. No two are the same. [A possible exception is the artificially maintained and cultured single ‘pure’ cell line, which is never found in nature.] Every ‘test’ in the trial is on one example from a heterogeneous population – no two examples of which are identical. In contrast, classical reductionism relies upon homogeneity.

A motor car engine is complex. An aircraft is complex. But your feedback loops are from tests carried out on identical examples. Each example is manufactured to tight quality control standards to ensure each machine conforms as closely to manufacturing specifications as modern science and engineering can ensure. The testing carried out on…

DevonBlueBoy
2 years ago

Another excellent skewering of the Rona approved narrative by Messrs Heneghan and Jefferson. Long may you continue to pull aside the veil of secrecy. The line about the ‘Robert Maxwell School of Ethics’ sums it up perfectly.