The COVID-19 months have seriously challenged our use and understanding of science in the public arena. Public health officials have called upon decision-makers to “believe the science” and to “follow the data,” but that’s not actually easy to do. Health officials themselves have lacked standards regarding when scientific information is dispositive and when it isn’t.
A few elements of the virus conversation demonstrate the complexity and confusion of it all.
Early on, officials did not recommend masks for all, and now they do. “Seriously people- STOP BUYING MASKS!” Surgeon General Jerome Adams tweeted on February 29. “They are NOT effective in preventing general public from catching #Coronavirus, but if healthcare providers can’t get them to care for sick patients, it puts them and our communities at risk!”
Adams reversed course on April 3, saying during a task force press briefing that evidence had grown in favor of broader mask use. Just four days before, he had reiterated to Fox News that masks aren’t effective. The guidance about masks wasn’t, “Masks are somewhat effective but not perfect, so don’t buy them until we have enough for everybody.” It was that masks were not effective, until suddenly they were.
An evolution of that sort isn’t inherently troubling, but it is worth asking how, exactly, masks went from “NOT” effective to effective. Adams said there was new evidence. Authorities clearly have to determine at some point whether new evidence overthrows old, and how that affects guidances, but that threshold is far from clear. Is it two new peer-reviewed studies, five, or nine? Or is it something else altogether?
Officials treat studies as the main answer to questions about the virus, but they haven’t set visible standards as to which studies are valid and which are problematic. Nor have they been transparent enough in their public statements about which studies they use.
This week brought news of serious problems with the composition of the Lancet’s recent hydroxychloroquine study, which had already made the rounds for two weeks and been adopted as a new arrow in the quiver of the drug’s skeptics.
The Lancet published this proviso on Wednesday: “We are issuing an Expression of Concern to alert readers to the fact that serious scientific questions have been brought to our attention. We will update this notice as soon as we have further information.”
In other words, don’t trust our study yet.
But once scientists cast their studies out into the ether, they are going to be cited as authoritative, if not by public health officials, then by partisan talking heads. Even where they aren’t cited, they have apparent influence.
The World Health Organization put a pause on its own hydroxychloroquine study several days after the Lancet’s study was published, “because of concerns raised about the safety of the drug.” It has since reinstated its study of hydroxychloroquine but has not explained why either decision was made.
Dr. Anthony Fauci recently announced that scientific data show hydroxychloroquine really isn’t an effective treatment. His announcement came on May 27, a few days after the Lancet published its study. Fauci didn’t cite the Lancet study as contributing to his conclusion, but he also didn’t cite any other specific studies.
Studies drive public health guidances on the virus, though it is rarely clear which ones, or to what extent. Given the studies’ frequency and depth, few people have the time or capacity to dig them up and evaluate them. Making that information more available would inspire more confidence in the legitimacy of those guidances, especially as they evolve.