DEEP VZN, dangerous information and disparities in research ethics

Knowledge about how diseases spread has done a great deal to improve human welfare over the years, protecting human life and preventing an enormous amount of human misery.

So, the idea that we shouldn’t pursue knowledge that could advance our understanding of pathogens might be a little confusing. But this is what one scientist is advocating. Kevin Esvelt, a biology professor at MIT, has argued that a new $125 million USAID-funded project should be drastically changed, because the knowledge it aims to produce is likely to be more dangerous than beneficial.

The project, DEEP VZN, is a five-year initiative to find and characterize viruses in nature that pose a risk of spillover into humans. Researchers will take samples from animals in areas at high risk of viral spillover and use DNA sequencing to search the samples for novel pathogens. Detected pathogens will then be analyzed to determine which of them pose the biggest risk to humans and classified according to their level of risk.

It's easy to see why someone might want to do this research. If we identify pandemic-capable viruses, we could study them and develop vaccines to have ready to go if an outbreak ever occurs, potentially stopping it in its tracks. We could also increase targeted surveillance for the pathogen, making it more likely that we would detect it quickly. There are other ways this research could potentially be beneficial. And there are also reasons to doubt that these potential benefits are all that large. If you’re interested in hearing all the reasons for and against this research, then I recommend listening to this conversation between Esvelt and Rob Reid or reading this great Vox article by Kelsey Piper. But for this post it’s enough to know that Esvelt’s biggest concern relates to the last part of this project – identifying the riskiest viruses and publishing the ranked results.

Esvelt argues that running tests to determine whether a pathogen has the potential to cause a pandemic essentially amounts to the testing of weapons of mass destruction. This might sound dramatic, but he notes that thousands of people have the ability to construct viruses from synthetic DNA[1] and that viruses can be weaponized. It’s worth noting that the US CDC lists several viruses as possible biological weapons. If a weaponized virus were deployed in a coordinated way, it could cause a pandemic (or multiple pandemics) far worse than COVID-19, with potentially catastrophic outcomes.

Esvelt isn’t suggesting that the knowledge that would be gained in this project couldn’t be beneficial. Instead, he’s arguing that the same knowledge could be used to cause harm, and that the magnitude of this harm, combined with the likelihood of it occurring, outweighs the potential benefits.

Research that has the potential to cause harm as well as benefit – dual use research – has previously received attention in the biomedical science community. In 2007 the National Science Advisory Board for Biosecurity developed a report on oversight of dual use research. In particular, so-called gain-of-function research – which aims to enhance the function of dangerous pathogens, for example by making them more transmissible or more deadly – has been a focus of discussion. Gain-of-function research on particularly dangerous pathogens was the subject of a moratorium on funding from 2014 to 2017. It’s easy to see how this type of research might be dangerous: in these experiments scientists are creating something dangerous, which poses an immediate threat if it isn’t safely contained. It’s not so immediately obvious how merely identifying dangerous pathogens could be harmful. Rather than creating a dangerous pathogen – a physical (if microscopic) thing – this research just creates information. But in a world where thousands of people can turn that dangerous information into the dangerous thing, the risk posed by this research might be many times larger than the risk of traditional gain-of-function research.

The funding moratorium was met with fierce objection from some scientists, who argued that restricting scientific freedom was against the ethos of science and the free inquiry necessary for scientific progress. However, the scientific community accepts other limitations on science. Since the development of the Nuremberg Code, scientists conducting research on human subjects have had to ensure that their research meets strict ethics requirements. The US National Research Act of 1974 led to the creation of the Belmont Report and to the system of institutional review boards (IRBs). Now all research with human subjects must be reviewed and approved by an IRB before it can start. Although people might complain about the paperwork and bureaucracy of the IRB system, it seems completely uncontroversial that the freedom of scientific inquiry should be limited to prevent harm to people. We should then ask why this limitation shouldn’t also apply to other types of scientific research.

Someone who isn’t convinced that we ought to restrict the research scientists conduct might argue that human subject research is different, as it involves researchers imposing direct harms on individuals, rather than just creating knowledge that *could* be used by people other than the researchers to cause harm. But one of the major ethical concerns in human subject research is the protection of privacy, which is primarily about ensuring that third parties don’t get access to information they could use to harm participants. Similarly, it’s recognized that big data research should ensure that people other than the researchers can’t combine data sets or data tools to breach privacy. Here the worry is that harm could come about through other people combining the results of different research projects.

These considerations of privacy and data highlight another disparity between current approaches to the ethics of human subject research and the ethics of other scientific research. In human subject research it’s expected that access to data will be restricted; it won’t be freely available to anyone who might want it. This goes against the increasing move towards open science, including open access to data, methods, and results of analysis. But the need to prevent the harms that come from breaching individuals’ privacy is uncontroversial, and so this limitation on openness is accepted.

It seems plausible, then, that we should restrict access to other types of research data for the purpose of preventing harm. It could be possible to gain the benefits of research like DEEP VZN by allowing it to proceed but restricting access to the results, rather than making them freely available. Access to results could be restricted to scientists working at known research sites. Governments (or other institutions) could even implement a system of vetting scientists to further ensure the good intent of those accessing the results.

This attempt to get the best of both worlds – the benefits of scientific research without the risks – might, like most compromises, leave everyone slightly dissatisfied. Many scientists see transparency and the free flow of information as fundamentally important to scientific progress. Restricting access to scientific methods and results is antithetical to the open science movement and may be seen as unacceptable gatekeeping[2]. But, as mentioned, if we accept that there should be limitations on scientific openness in human subject research, we should accept that limitations on openness might be appropriate for other types of research too.

On the other hand, those who, like Esvelt, are worried about the risks of this research will probably have concerns about the ability to keep information confidential, especially when those accessing it come from a culture that so highly values openness.

But even though it will probably leave all parties slightly dissatisfied, the option of reducing transparency and openness in science should be explored, not just for this project but for potentially risky research more broadly. Even if this doesn’t eliminate all the risk from such research, that’s no reason to abandon the option of reducing transparency and making it harder for those who might want to misuse the information to obtain it. We shouldn’t fall into the trap of failing with abandon.

In general, we should be choosing options that reduce risk. With DEEP VZN, I think the clearest argument Esvelt makes against continuing the project in its current form is that we have other options for reducing pandemic risk – options that don’t carry potentially catastrophic risks but seem likely to provide robust benefits. These include better surveillance for novel infections in humans living in areas with high spillover risk, the development of vaccines for prototype pathogens of different viral families, and investment in advanced PPE. As biosecurity researcher Jonas Sandbrink suggests in this podcast, we should be evaluating the impact of research with reference to its risks as well as its benefits, so that less risky research providing the same benefit is favoured over riskier research.
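To make that principle slightly more concrete, here is one minimal way of framing it (my own illustrative formalization, not Esvelt’s or Sandbrink’s): compare research options by their expected benefit net of their expected harm, where the harm term includes deliberate misuse.

$$\text{Net expected value} \approx p_{\text{benefit}} \cdot B \;-\; p_{\text{misuse}} \cdot H$$

On this framing, when two options offer a similar expected benefit $p_{\text{benefit}} \cdot B$, we should prefer the one with the smaller expected harm $p_{\text{misuse}} \cdot H$ – and an option whose plausible $H$ is catastrophic needs either an enormous benefit or a vanishingly small $p_{\text{misuse}}$ to be worth choosing over safer alternatives that deliver much the same benefit.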

DEEP VZN provides a good example of how research governance structures haven’t kept up with changes in technology and society. Although there has always been a risk of scientific knowledge being misused, today a greater number of people can access and use that knowledge, increasing the chance that someone decides to use it to cause harm. At the same time, the magnitude of damage they could inflict has dramatically increased, thanks to technological advances and our increasingly connected world. We need to adapt our understanding of research ethics to account for this risk and develop governance structures that steer us towards pursuing the research that is most likely to help us protect life rather than bring about misery.



[1] Esvelt provided the rationale for this in this congressional testimony. He uses data from the OECD database on graduates in different disciplines and assumes that 20% of graduates in the life sciences have the ability to construct a virus. It’s worth noting that all of the following viruses have been constructed from synthetic DNA: poliovirus, the 1918 pandemic flu strain, horsepox (a close relative of smallpox), and SARS-CoV-2.

[2] James Smith and Jonas Sandbrink have a great paper on the interactions between open science and biosecurity.