Journalists and Scientists – Working Together on COVID-19

William Hanage and Marc Lipsitch

 

The profusion of information about the growing COVID-19 outbreak presents challenges for reporters and for the scientists they talk to when researching their stories. Good reporting (and good science) has to distinguish legitimate sources of information from no end of rumor, half-truth, financially motivated promotion of snake-oil remedies, and politically motivated propaganda. In keeping track of the outbreak, we have become aware of how hard this is for even the most energetic and well-motivated scientists and journalists, given the firehose of information available from both traditional sources (public health authorities, journals) and newer ones (preprints, blogs).

To help with this, we think reporters should distinguish between at least three levels of information they get from scientists:
a) what we know is true;
b) what we think is true: fact-based assessments that also depend on inference, extrapolation, or educated interpretation, and that reflect an individual's view of what is most likely going on;
and c) opinions and speculation.

In category a) are facts such as: this infection is caused by a beta-coronavirus; the initial genome sequences of the virus were very similar; human-to-human transmission happens frequently; the numbers of reported cases in various locations; and the like. Multiple lines of evidence, including peer-reviewed scientific studies and reports from public health authorities, support these as facts.

In category b) is the vast majority of what we would like to know about the epidemic: the true number of cases in any location; the extent of community transmission outside China, or the fraction of cases that are spreading undetected; the true proportion of infections that are mild, asymptomatic, or subclinical; and the degree to which pre-symptomatic cases can transmit (for which no systematic data exist). On these topics, experts can give opinions informed by their understanding of other infectious diseases, by the implications of available data (for example, inferring unreported imported cases from differences in reported imports among countries with similar travel volumes from infected areas), or by information they have heard about and trust but which has not yet been publicly released. This category also includes projections of the likely long-term trajectory of the epidemic. These views benefit from the expert judgment of the scientists who hold them, and they are worth reporting, but they should be distinguished from hard facts.

In category c) are many other issues for which the current evidence is exceedingly limited, such as the effect of extreme social distancing on slowing the epidemic, or which will never truly be settled by data, such as questions about the motivations of governments and health authorities. It's not that these questions don't matter; it's that they are not accessible to science right now and may never be.

~~~~~~~

At their best, scientists and reporters are trying to do many of the same things: providing accurate information and interpreting it, but for different audiences and on different time scales. Beyond keeping in mind the three sorts of information that scientists can offer, how else can we ensure that we are doing this well? We think several principles can help.

    1. Seek diverse sources of information. Because no one has digested everything about the state of the epidemic, different experts will know different things and see different holes in our reasoning. This advice applies to scientists as well as journalists: the best scientists, especially in a setting like this one where the representativeness and accuracy of data are necessarily uncertain, will consult colleagues and ask them to find weaknesses in their work before sharing the work more broadly.

 

    2. Slow down, a little. We are all on a deadline of some sort, and under pressure to avoid being scooped. Someone on Twitter* recently pointed out that facts about this epidemic that have held up for a few days are far more reliable than the latest “facts” that have just come out, which may be erroneous, or unrepresentative and thus misleading. We have to balance this caution against the need to share our work promptly.

 

    3. Distinguish between whether something ever happens and whether it is happening at a frequency that matters. A good example is the question of pre-symptomatic transmission. If it occurs frequently, it will make control measures that target sick people (isolation, treatment, and contact tracing) less effective. It is very likely that pre-symptomatic transmission happens at some frequency, but the evidence is very limited at present. Knowing that it happens sometimes is of little use; we desperately need evidence on how often it happens. The same is true for infected travelers escaping detection. Of course this will happen, for many reasons. Again, the question is how often it happens, and whether it leads to the establishment of local transmission.

Emergencies like this put extreme pressure on both scientists and journalists to be first with the news, and the attention economy we now inhabit, exacerbated by social media, creates perverse incentives that may provide short-term rewards for those willing to accept lower standards. Good reporting should acknowledge this risk, avoid contributing to it, and rapidly correct falsehoods when they become clear. We have a common responsibility to protect public health. The virus does not read news articles and doesn't care about Twitter.

 

*Please note that we do not necessarily recommend “someone on Twitter” as an adequate source. Nevertheless, we think the point stands.