We are doing the wrong test

March 31, 2020
CEO Davies considers possibilities

This isn't really a sentence that early-2019 me would have foreseen myself writing, but here we are. I wish to propose a new paradigm for addressing novel global pandemics: serological (blood) testing for immunity. Such a standard is proactive rather than reactive, entailing immediate mass surveillance for protective immunity. It also addresses scenarios in which the (apparently) new pathogen may have spread faster than is immediately apparent. Early, robust data of this nature would help develop better healthcare-logistical, social, and economic responses to nascent pandemics.

In a nutshell: if you only look for current evidence of infection, as is being done with PCR testing for SARS-CoV-2 infection, there is no possibility for determining, let alone predicting, an emergent non-susceptible population. And I'd argue that it is identification of this non-susceptible (immune) population that is the key to assessing, containing, and ultimately effectively managing a pandemic. In this article I'll be emphasizing some modelling, and I want to be very clear that while modelling is a fantastically sophisticated and useful tool, it is only as good as the data inputs available. Having non-susceptible population data is the key to turning this model into an actionable recommendation.

This idea was first prompted by last week's preprint of a recent study out of Oxford (as referenced in the Financial Times, International Edition) on the spread of SARS-CoV-2, the virus responsible for COVID-19. The study presents a contrarian and fascinating hypothesis. It suggests that, depending on how confident we are in certain COVID-19 variables, many scenarios remain possible for how long the virus has been circulating in the population and how far the epidemic may have progressed. This has profound implications for the actual frequency of serious disease associated with SARS-CoV-2 and, perhaps even more importantly, bears on how close we might be to the end of this pandemic.

José Lourenço and team’s work, led by immunologist Paul Klenerman and epidemiologist Sunetra Gupta, took an atypical approach to analyzing the spread of the disease. Diagnostic testing for COVID-19, and attempts to establish its prevalence by dividing positive results by the total number of people tested, has been hamstrung by widely disparate availability and implementation of testing. In other words, the denominator of this prevalence equation is useless.

Instead, the Oxford team focused on the earliest mortality data (first fifteen days since the first death) in two countries, the UK and Italy. This decision was based on the assumption that deaths and autopsies are well-reported events, and that after day fifteen, local control measures may start to impact the death rate.

The team then back-calculated from this mortality data, using classical mathematical models of epidemic spread, viz. the basic reproduction number of the virus (R0: how many people one infected individual will, on average, infect), the proportion of the population at risk of severe disease (ρ), the probability of death, the infectious period, and the time from infection to death. This enabled estimation of the date at which SARS-CoV-2 was introduced into each community studied. They then performed a forward calculation from that date, using R0, to estimate how many people have subsequently been infected.
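To make the back-calculation concrete, here is a minimal sketch of the kind of model involved. This is our illustration, not the Oxford team's actual code: a simple discrete-time SIR forward simulation in which cumulative deaths trail new infections by an assumed lag, so that a seed date can be slid backwards until the simulated death curve matches the observed one. All parameter values in the example are illustrative.

```python
# Hedged sketch (not the Oxford team's code): forward-simulate cumulative
# deaths from an introduction date, using R0, the infectious period, the
# at-risk proportion rho, and the probability of death among the at-risk.
def simulate_deaths(pop, r0, infectious_period, rho, p_death,
                    days, lag_to_death=17, seed=1):
    beta = r0 / infectious_period    # transmission rate per day
    gamma = 1.0 / infectious_period  # recovery rate per day
    s, i = pop - seed, float(seed)   # susceptible, infectious
    new_infections = []
    for _ in range(days):
        inf = beta * s * i / pop     # new infections this day
        rec = gamma * i              # recoveries this day
        s, i = s - inf, i + inf - rec
        new_infections.append(inf)
    # Deaths trail infection by an assumed lag (lag_to_death days):
    # a fraction rho of infections are at risk, of whom p_death die.
    cum, total = [], 0.0
    for t in range(days):
        if t >= lag_to_death:
            total += rho * p_death * new_infections[t - lag_to_death]
        cum.append(total)
    return cum
```

Sliding the introduction date earlier shifts the whole death curve left, which is how a match against the first fifteen days of observed deaths pins down a likely Day 0.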

The results were surprising, even across a large range of values for both R0 and ρ. Looking just at the conservative end of this sensitivity analysis, in the UK the time of introduction/start of transmission was predicted to be over a month before the first confirmed death, by which time many thousands of individuals would already have been infected. And by as early as mid-March, 36-40% of the UK population could have been exposed. Obviously, if true, this means that the percentage of infected individuals developing serious disease is much lower than currently believed. And, most controversially, the remaining susceptible population is possibly smaller than feared.

Obviously, these results are tendentious, and they have already been misinterpreted in the press as irresponsibly speaking against social distancing and related practices (which remain, currently, the safest strategy and in fact fit the postulated model by effectively reducing R0). But the best thing about these results is that the hypothesis they generate is readily falsifiable, an often-neglected hallmark of modern science! Testing large numbers of people for the presence of a pre-existing immune response to a particular pathogen is technically much easier than the PCR-type methods used to detect active infection. Encouragingly, as reported by Denise Roland in the Wall Street Journal, the UK government is actively acquiring 3.5M of these tests and may begin widely disseminating them as early as the beginning of April. As Paul Klenerman commented to me, "Hopefully we can stop modelling and actually look at data soon!"

Our Dark Horse team does not often venture into the field of epidemiology, but we do have a significant in-house quantitative modelling platform. Typically, we utilize that platform in conjunction with our expertise in the field of cell & gene therapy. However, after a conversation with Paul Klenerman, we decided to build a similar model for the state of California. Our model, below, can similarly suggest a state-specific likely Day 0 and show how a likely remaining susceptible population can be estimated.

Model demonstrating trajectory of immune vs susceptible individuals

Our results, based on 10,000-iteration Monte Carlo simulations, are just as intriguing for this locale as the Oxford group's were for the UK and Italy. Illustrated here is an example where R0 is given a mean value of 2.5 and a standard deviation of 0.025, the infectious period (σ in the Oxford group's lexicon) a mean of 4.5 days and a standard deviation of 1.0 days, and the proportion of the population at risk of severe disease (ρ in the Oxford group's lexicon) a fixed value of 1%.
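The Monte Carlo layer can be sketched as follows. This is our illustration of the approach, not Dark Horse's actual platform: each iteration draws R0 and σ from the stated distributions, runs a simple discrete-time SIR forward from introduction, and records the susceptible fraction at a chosen day. The population figure and the floor on σ are assumptions made for the sketch.

```python
import random

# Hedged Monte Carlo sketch: propagate uncertainty in R0 and the
# infectious period sigma into the remaining susceptible fraction.
def susceptible_fraction(r0, sigma, pop=39.5e6, days=91, seed=1):
    """Susceptible fraction after `days` days from introduction."""
    beta, gamma = r0 / sigma, 1.0 / sigma
    s, i = pop - seed, float(seed)
    for _ in range(days):
        inf = beta * s * i / pop
        rec = gamma * i
        s, i = s - inf, i + inf - rec
    return s / pop

def monte_carlo(n=10_000, seed=0):
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        r0 = rng.gauss(2.5, 0.025)             # R0: mean 2.5, sd 0.025
        sigma = max(rng.gauss(4.5, 1.0), 1.0)  # sigma: mean 4.5 d, floored
        draws.append(susceptible_fraction(r0, sigma))
    return draws
```

Plotting the resulting draws as a cloud of trajectories, rather than a single curve, is what produces the spread of black lines described in the figure below.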

First, we verify the accuracy of our death model, evidenced in the lower graph where you'll see the first fifteen days' death numbers (purple boxes), starting at Day 64 of the year 2020, against the expectations of the model (the cloud of black lines). Lost in the resolution of this plot is a result equivalent to Oxford's: many thousands were likely infected long before the first death, or even the first identified case, since 1.0 on this Y-axis represents the 39.5 million population of California. What is not lost is the rapid decline in susceptible individuals: in this example, it approaches 0.5 of the population right about now, as we stand today at Day 91 of 2020.

However, to channel Paul Klenerman again: while fascinating, this is just modelling. The critical need is to gather real-world data on the 'No longer susceptible' green plot in this figure, because it is impossible to predict the susceptible or immune population from the currently infected population alone.

Hence my recommendation that we move ahead with wide-ranging serological screening as quickly as possible. (Yesterday's unveiling of Abbott's rapid test, while a useful next step, is still a step towards identifying those who are currently infected rather than those who are immune.) Once we know the local and global SARS-CoV-2 seroprevalence, we can collectively adapt our models and tighten up our region-by-region expectations of when social distancing has run its course. I believe that, when the next novel pathogen emerges, mass screening for pre-existing protective immunity should be developed rapidly and prioritized equally with detection of the actual disease. A stitch in time might save nine.
