When the USNS Comfort first arrived in New York Harbor, it was seen as a symbol of hope and stability in a city that desperately needed both. Now, the massive 894-foot military hospital ship, which discharged its last patient on Sunday, represents a calamity that never struck and predictions that never became reality.
The Trump administration sent the Comfort to New York City at a time when New York officials worried that the city’s healthcare system would be overrun with COVID-19 patients and unable to treat anyone else. At one point, the city’s hospitals were admitting nearly 2,000 coronavirus patients a day, and the death toll spiked to more than 800 per day. This was no ordinary crisis, and it clearly required an extraordinary response. But then the city’s curve began to flatten, ICU admissions steadily decreased, and the Navy announced the Comfort’s departure after the ship had treated only 182 patients.
New York’s fight against COVID-19 is far from over. But it has become far more manageable than the scientific models originally predicted. The Comfort’s impending departure is only the latest proof of that.
A similar story has been unfolding all across the country: The doomsday scenarios many people envisioned simply did not come to pass. The result has been growing frustration over the models we were given and the policies they produced.
Some of this frustration is well-founded. Many of the curve-fitting models we used relied on international data that did not accurately represent our situation in the U.S., and as a result, the projections were wildly off-base. In Washington, D.C., for example, the popular IHME model revised the city’s projected peak from mid-May to mid-April — a huge discrepancy that seems to have been the result of bad math, not just updated information.
However, a lot of the frustration toward the coronavirus models is misplaced. Back in February and early March, we knew very little about COVID-19, how to identify it, or how to treat it. It’s easy to look back at the original projections in hindsight and dismiss them. We know a lot more now than we did a few months ago, and many of the preventive measures we’ve taken have worked.
Our projections reflected the little that we did know, and as we’ve learned more about this virus, those projections have changed.
This means that the models are working exactly as they should. Scientific models are not crystal balls; they merely project outcomes based on certain input parameters. In this case, the parameters (what we knew about the coronavirus and when) were extremely limited. Italy, South Korea, and China were our primary sources of information at the beginning of the U.S. outbreak, and each of these sources was in some way flawed. China didn’t provide accurate data (and still hasn’t), Italy’s population is much older and more susceptible to the virus, and so on.
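To see how sensitive such projections are to their inputs, consider a toy simulation. This is a minimal sketch, not any model actually used by public health officials, and every parameter value in it (population size, infectious period, the reproduction numbers tested) is an illustrative assumption — but it shows how a modest change in one assumed input can swing a projected peak dramatically.

```python
# Toy discrete-time SIR (Susceptible-Infected-Recovered) simulation.
# Purely illustrative: parameter values are assumptions, not real estimates.

def sir_peak_infected(r0, days=365, population=1_000_000, infectious_period=10):
    """Run a daily SIR model and return the peak fraction of the
    population infected at the same time."""
    gamma = 1.0 / infectious_period   # daily recovery rate
    beta = r0 * gamma                 # daily transmission rate implied by R0
    s, i, r = population - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak / population

# Small differences in the assumed reproduction number R0 — exactly the
# kind of quantity that was highly uncertain in February — produce very
# different projected peaks.
for r0 in (1.5, 2.5, 3.5):
    print(f"R0={r0}: peak {sir_peak_infected(r0):.1%} of population infected at once")
```

Early in an outbreak, inputs like the reproduction number are estimated from noisy foreign data, so the wide spread of outputs above is the expected behavior of a model fed uncertain parameters, not evidence that the model is broken.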
So it shouldn’t have been surprising when our original predictions fell flat. Indeed, it’s a good thing they did! If our original projection had been spot-on and 2% of the U.S. population had caught the virus, we would have been in a world of hurt. But that didn’t happen for two reasons — one, we changed our societal behavior, and two, we learned more about this virus, how it spreads, and how to treat it.
Still, there are many who look at the original models and our subsequent actions and conclude that this has all been an overreaction. And there may indeed be some truth to that. But that is something we won’t be able to definitively conclude until we have more information and more time to study the virus and evaluate our response to it. So for right now, we’re stuck with the limited and imperfect tools we have, including the coronavirus models.