Bad data is driving fear of a second wave of Covid-19

The recent spike in Covid-19 “cases” is very different to the original outbreak, says James Ferguson of MacroStrategy Partnership. The government needs to calm down.

Covid swab test: “We’re putting too much faith in these tests” (Image credit: © Getty Images)

Covid-19 cases are on the rise again, particularly amongst the young. The “rule of six” is now nationwide. Localised lockdowns are in place up north. And even martial-law-style 10pm curfews are being considered. So-called “pillar 2” “cases” from the wider community – where “cases” is a misnomer for positive test results – have doubled as a share of tests in the last two weeks, from about 2% to roughly 4%. Is this the long-awaited second wave? And once all these infected young northerners visit their grandfathers (elderly men, especially those already ill, are particularly vulnerable), will it turn into an increase in deaths too?

This seems to be what our politicians fear – hence their aggressive response. However, they are making some major mistakes. One is the policy direction itself: the correct response is not a lockdown of the general population, but shielding of the vulnerable. The other is misinterpreting the data. “Positives” are not the same as “cases” (the latter require both symptoms and a doctor’s diagnosis).

This presents a puzzle because, as you can see from the chart on page 23 (which runs to 12 September), although positives have leapt sevenfold from their June-July low, deaths over the same period have fallen by two-thirds.

The epidemiological rule of thumb is that the number of (unseen) infections will be roughly ten times the number of diagnosed cases, which in turn will be roughly ten times the number of deaths. Working backwards from deaths, and taking account of the three-week course of the illness, we would expect the UK to have about 2,500 diagnosed cases, of which half would be hospitalised (the government currently reports just 972 hospital patients). That would also suggest about 25,000 current infections, or disease prevalence (ie, the proportion of the population that has it) of under 0.04%.
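
As a rough back-of-envelope sketch of that chain of reasoning – assuming, for illustration, roughly 250 deaths over the relevant three-week window (the figure implied by about 2,500 cases) and a UK population of about 67 million, neither of which is stated above:

# Back-of-envelope version of the 10x rule of thumb described above.
# Assumptions for illustration: ~250 deaths over the relevant three-week
# window (implied by ~2,500 cases) and a UK population of ~67 million.
deaths = 250
cases = deaths * 10            # diagnosed cases ~ 10x deaths -> ~2,500
infections = cases * 10        # unseen infections ~ 10x cases -> ~25,000
population = 67_000_000
prevalence = infections / population
print(f"Implied prevalence: {prevalence:.3%}")  # roughly 0.037%, ie under 0.04%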

Yet the pillar 2 community-testing scheme is now finding 4% positives. In other words, the test regime seems to be finding 100 times more positives than the combined data on diagnosed cases, hospital admissions and deaths would suggest. That raises the question: how many of these are false positives?

Why false positives really matter

It might surprise you to learn that a positive medical test result often – indeed, usually – does not mean that an asymptomatic patient has the disease. If the false positive rate (FPR) is the same as the prevalence of the disease, then roughly half of all positives will be false (assuming the test catches nearly every true case). And the lower the prevalence relative to the FPR, the less reliable a positive result becomes. Even doctors don’t tend to grasp the full implications of that.

In 1982, a researcher called David Eddy presented physicians with a diagnostic puzzle. Women aged 40 undergo routine screening for breast cancer, where the prevalence of the disease is 1%. The mammogram test has a false negative rate of 20% and an FPR of 10%. What is the probability that a woman with a positive test has breast cancer? The correct answer is just 7.5%.

In each batch of 100,000 tests, 800 true cases (80% of the 1,000 women who actually have breast cancer) will be picked up. But so too will 9,900 false positives (10% of the 99,000 healthy women). So the chance of having breast cancer if you test positive is still only about 7.5% (800/10,700). Yet 95 out of 100 doctors gave answers in the range of 70%-80%, roughly ten times the correct figure.
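
The same arithmetic, in a minimal sketch (the function below is an illustration rather than anything from Eddy's study; it simply re-runs the calculation above and then checks the earlier claim about prevalence matching the FPR):

# Share of positive tests that are genuine, given the disease prevalence,
# the test's sensitivity (1 minus the false negative rate) and its FPR.
def positive_predictive_value(prevalence, sensitivity, fpr):
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * fpr
    return true_positives / (true_positives + false_positives)

# Eddy's mammogram puzzle: 1% prevalence, 80% sensitivity, 10% FPR.
print(positive_predictive_value(0.01, 0.80, 0.10))  # ~0.075, ie 7.5%

# When prevalence equals the FPR and the test catches nearly every true
# case, roughly half of all positives are false.
print(positive_predictive_value(0.01, 1.00, 0.01))  # ~0.50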

Not a whole lot has changed since that study.


James Ferguson qualified with an MA (Hons) in economics from Edinburgh University in 1985. For the last 21 years he has had a high-powered career in institutional stock broking, specialising in equities, working for Nomura, Robert Fleming, SBC Warburg, Dresdner Kleinwort Wasserstein and Mitsubishi Securities.