
The COVID-19 pandemic will be remembered most for the immense loss of life and health it caused, but also for some famous forecasts gone wrong.

Simulations at the pandemic's start by the University of Minnesota and the Minnesota Department of Health predicted that 74,000 COVID-19 deaths by August 2020 could be reduced to 50,000 through mitigations that limited social contact. But as August 2021 approaches, the state's toll is nearing 7,700.

Meanwhile, the Institute for Health Metrics and Evaluation (IHME) at the University of Washington predicted 60,000 COVID-19 deaths nationwide by last August. That total was quickly eclipsed and has surged beyond 600,000 this summer.

Computer modeling has improved, emerging during the pandemic from internal use by health care providers to a level of public scrutiny akin to TV weather forecasts. But in many ways it is still judged by the early misfires and by the way politicians ranging from Gov. Tim Walz to President Donald Trump used initial modeling to influence the public health response to COVID-19.

Hesitancy about wearing masks, and opposition to stay-at-home orders and business restrictions, have roots in Walz's initial use of the U model that severely overestimated the COVID-19 death toll in Minnesota, said Michael Osterholm, director of the University of Minnesota's Center for Infectious Disease Research and Policy. When death and hospitalization tolls in the first wave fell far short of estimates, critics questioned whether subsequent restrictions were even needed, he said.

"They used that as fodder for showing how public health was not being honest to the public with those numbers. And that called into question a lot of credible things that were done," Osterholm said.

More knowledge of the coronavirus that causes COVID-19 has resulted in sharper forecasts, though.

Mayo Clinic's projections initially underestimated the severity of the fall pandemic surge, perhaps because its model overcounted people with immunity from prior infection. But the model was designed to learn over time, and it gave Walz precise short-term forecasts for deciding when to impose and ease restrictions.

IHME was criticized for its low projection of 60,000 deaths that Trump used to advocate for loosening state restrictions. Its initial estimate of 301 COVID-19 deaths in Minnesota by Aug. 4, 2020, was below the actual count of 1,654 as well. But its prediction in June 2020 of 1,797 COVID-19 deaths in Minnesota by Oct. 1, 2020, came close to the actual 2,036 fatalities. And its Oct. 29 forecast of 6,084 deaths by Feb. 1, 2021, was close to the actual 6,202.

The early U model was maligned for its death estimates, but state leaders said its purpose was misunderstood and that it provided important proof that preserving hospital capacity would reduce COVID-19 deaths.

"That really sharpened the focus on surge planning and ICU beds," State Health Commissioner Jan Malcolm said.

Confused the public

Walz warned in a March 25, 2020, public announcement of Minnesota's first stay-at-home order that the U model was based on limited information and subject to change, but he also said, "If we just let this thing run its course, and did nothing, upwards of 74,000 Minnesotans could be killed."

The wide disparity in early projections confused the public and gave politicians a choice of scientific models to back responses they favored in the pandemic, said Andy Slavitt, a former senior federal health official who lives in Edina and served on President Joe Biden's COVID-19 advisory team.

Trump promoted the low death count in the IHME estimate at the same time as he issued a "Liberate Minnesota" tweet, alleging that the state's pandemic restrictions were too severe and needed to be lifted given the improving COVID-19 forecasts.

"The president went out and declared early victory," said Slavitt, author of "Preventable: The Inside Story of How Leadership Failures, Politics, and Selfishness Doomed the U.S. Coronavirus Response." "When Donald Trump went model shopping and found a model that said there were only going to be 60,000 deaths, people who said there were going to be substantially more deaths got ridiculed."

In hindsight, one problem was a failure to explain the nuances of the different models. Malcolm said neither she nor the governor adequately explained that the U model was intended to assess scenarios for slowing the spread of COVID-19, not to make ongoing forecasts the way the IHME and other models did.

"People were fixated on, 'Oh my God, it says we're going to have X thousand deaths by X date,' which was not the point," Malcolm said. "The U model kind of got mischaracterized."

Some early assumptions about COVID-19 were wrong, though, and affected the accuracy of the first models. Using infection and illness rates out of Italy and China, the U predicted that COVID-19 would overwhelm hospitals, leaving Minnesotans without lifesaving ventilators and increasing the death toll, said Eva Enns, a leader of the U's modeling team.

In reality, many more people had asymptomatic cases, so the model overestimated the share of patients who needed hospital beds and their average lengths of stay, she said. "We have since learned there is a very large asymptomatic population," Enns said.

Projections improved

Second and third versions of the U model also overestimated COVID-19 deaths, but the low ends of their statistical ranges came a little closer to reality. One scenario in the second model predicted 21,800 deaths over 12 months, with a range from 9,900 to 36,000.

Mayo leaders in spring 2020 reached out to Walz and offered what they believed would be more reliable estimates. Mayo's machine-learning model, programmed to review the latest data and weigh all likely possibilities, was by design conservative, so its predictions were often more modest than what actually occurred, but it proved adept at warning about shifts in the pandemic.

"It started out for me as, 'Can we even do anything? Are we going to be able to do any better than a few people getting together over drinks and guessing?' " said Curtis Storlie, a statistician who created the Mayo model. "Because honestly I thought it was a fool's errand to be able to predict this with any degree of certainty. But we evolved past that."

Decisions made by Mayo in response to its model included suspending non-urgent surgeries in Arizona ahead of a summer 2020 wave and diverting nurses last fall to hospitals in western Wisconsin before they were overwhelmed.

The Sept. 30 version of the model predicted the fall COVID-19 surge in Minnesota but saw it peaking in mid-October with no more than 500 hospitalizations. The wave instead lasted longer, consuming more than 1,400 hospital beds at its peak in mid-November.

Walz said Mayo's short-term forecasts about hospital needs were accurate "almost to the person" this spring when he ended a mask mandate earlier than some expected.

"It was really good at predicting hospital bed usage," he said. "It's why I was speaking with a lot of confidence in late March and early April."

The variation in early COVID-19 projections also concerned Nicholas Reich, an infectious disease modeler at the University of Massachusetts Amherst who worked with the Centers for Disease Control and Prevention to aggregate estimates of influenza activity into a single FluSight forecast. His research group did the same with COVID-19, creating the Ensemble predictor to see if the average of all models offered a more reliable forecast than any single one.

"You just have to sort of trust that the different perspectives and different data sources these models are using will kind of balance each other out," Reich said.

Studies by Ensemble and Mayo modelers validated the accuracy of their approaches, but the key takeaway from both groups is that the forecasts are reliable only when looking about four weeks ahead.

Osterholm said short-term forecasts such as the Ensemble are a responsible use of modeling, but he called long-term models "scientific incompetency," even with recent improvements in their methods, because there are too many unknown variables, such as whether people wear masks properly, for their predictions to be valid.

The uncertainties leave most long-term forecasts with wide error margins that limit their utility, he added.

"It's like saying I lost something and I know it's somewhere between here and St. Cloud," he said.

Osterholm predicted without modeling in March 2020 that the U.S. would see 480,000 COVID-19 deaths within six to 12 months. The nation reached that mark 11 months later. But he incorrectly predicted that the more infectious alpha variant would cause a severe nationwide COVID-19 wave this spring, as it did in Europe. He said that surge also fooled the models, with the variant surprisingly spreading in two states with comparatively high mask-wearing and vaccination rates.

"It was Minnesota and Michigan and that was it," he said. "We can't even explain it in retrospect, using the best virology that we have."

State leaders promised a fourth update to the U model, but none was released. State health economist Stefan Gildemeister said an updated COVID-19 forecast without a clear purpose would have been a distraction last fall amid efforts to slow the second pandemic wave and buy time until vaccines were available. Privately, though, the state's simulations of pandemic growth, based on varying levels of vaccine distribution, variant spread and social activity, tracked much closer to the actual pandemic last fall and this spring.

"We can't set aside a hammer just because we hit ourselves on the thumb once," Gildemeister said. "It's a critical tool for the job."

Jeremy Olson • 612-673-7744