The case for rational expectations in COVID-19 modeling

The biologist Carl Bergstrom recently gave an interview to the Guardian on the topic of “bullshit”. In it, the interviewer asked Bergstrom about the shortcomings of existing epidemiological models as well as their use (and misuse) by political decision makers.

[Guardian] If you had the ability to arm every person with one tool – a statistical tool or scientific concept – to help them understand and contextualize scientific information as we look to the future of this pandemic, what would it be?

[Bergstrom] I would like people to understand that there are interactions between the models we make, the science we do and the way that we behave. The models that we make influence the decisions that we take individually and as a society, which then feed back into the models and the models often don’t treat that part explicitly. Once you put a model out there that then creates changes in behavior that pull you out of the domain that the model was trying to model in the first place. We have to be very attuned to that as we try to use the models for guiding policy.

In the context of the coronavirus, the problem was this: early models, such as the one from Imperial College London, predicted that between 1.1 and 2.2 million Americans could die from COVID-19, depending on the severity of mitigation efforts. This eye-popping number jolted political decision makers (Trump, Congress, the governors, etc.) into action: they locked down schools and businesses and issued stay-at-home orders. The media publicity around the study probably scared many people into taking social distancing measures much more seriously. All of this probably helped slow the spread of the disease, so much so that the same researchers had to revise their predictions downward only weeks later.

That is, publishing the initial predictions changed people’s behavior, which rendered those predictions obsolete.

Bergstrom seems to say that the problem here is with the general public. They don’t understand that the models rely on behavioral assumptions which no longer hold once people learn about the models’ predictions and adjust their actions accordingly.

But, with apologies to Shakespeare: The fault, dear Bergstrom, is not in the general public, but in your models!

The problem with those epidemiological models (at least with the SIR type of model) is that some of their key parameters (such as the basic reproduction number R0, for instance) depend, in various ways, on people’s expectations about the future path of the disease. If you don’t take that into account, your predictions will be way off.
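To see why this matters, here is a minimal sketch of a discrete-time SIR model in which the contact rate falls as perceived risk rises. The behavioral feedback term (`fear`) and all parameter values are my own illustrative assumptions, not part of the Imperial model or of the textbook SIR model:

```python
def sir_with_behavior(beta0, gamma, fear, days, i0=1e-4):
    """Discrete-time SIR in which the effective contact rate falls as
    current prevalence rises. The `fear` feedback term is a purely
    illustrative assumption."""
    S, I, R = 1.0 - i0, i0, 0.0
    prevalence = []
    for _ in range(days):
        beta = beta0 / (1.0 + fear * I)  # contacts drop when infections are visible
        new_inf = beta * S * I
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        prevalence.append(I)
    return prevalence

# Same disease parameters (R0 = beta0/gamma = 4), with and without
# a behavioral response to rising infections.
naive = sir_with_behavior(beta0=0.4, gamma=0.1, fear=0.0, days=365)
reactive = sir_with_behavior(beta0=0.4, gamma=0.1, fear=500.0, days=365)
print(f"peak prevalence, no response:   {max(naive):.3f}")
print(f"peak prevalence, with response: {max(reactive):.3f}")
```

A model fitted while people ignore the disease, then used to predict a world in which people react to its own headline numbers, is effectively using the wrong `beta` throughout.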

And way off they were! Here’s the summary of a statistical evaluation of a model similar to the one used in the Imperial study:

In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state (Figure 1)

The ability of the model to make accurate predictions decreases with an increasing amount of data (Figure 2)

You might say that prediction is not the point of those models. Maybe their only purpose is to produce scary headlines that make people listen to the experts. But that is a weird proposition. If experts want the general public to take them more seriously, making wildly erroneous predictions seems like a bad strategy.

So how are we going to take people’s expectations into account in epidemiological models? Let’s see.

March: Imperial predicts 2 million deaths. Government imposes lockdown. People are scared and stay at home.

April: Imperial revises its model, now predicts 50,000 deaths. Government partially re-opens the economy. People cautiously start going out again.

May: Imperial revises its model, now predicts 200,000 deaths. Government re-imposes some lockdown measures. People are scared again.

June: Imperial revises its model, now predicts 75,000 deaths. Government opens up again. People relax again.

And so on until we have converged to a situation in which the number of deaths Imperial predicts is consistent with the government’s (and the people’s) expectations and actions.
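The back-and-forth above can be read as a fixed-point search: the published prediction shapes behavior, and behavior determines the realized outcome; an equilibrium is a forecast that survives its own publication. The sketch below uses made-up functional forms and numbers purely for illustration. Notably, the naive update (next prediction = last realization) overshoots back and forth just like the monthly sequence above, so the sketch damps the update by averaging:

```python
def deaths_given_caution(caution):
    """Hypothetical: more caution (a number in [0, 1]) means fewer deaths.
    Functional form and scale are purely illustrative."""
    return 2_000_000 * (1.0 - caution) ** 2

def caution_given_prediction(predicted_deaths):
    """Hypothetical behavioral response: scarier headline numbers
    induce more caution, capped below a full lockdown."""
    return min(0.95, predicted_deaths / 2_500_000)

prediction = 2_000_000.0  # initial Imperial-style headline figure
for _ in range(100):
    realized = deaths_given_caution(caution_given_prediction(prediction))
    if abs(realized - prediction) < 1.0:
        break  # the forecast is now self-confirming
    # A naive update (prediction = realized) oscillates like the monthly
    # sequence above; averaging damps the oscillation so it converges.
    prediction = 0.5 * (prediction + realized)

print(f"equilibrium forecast: about {prediction:,.0f} deaths")
```

At the fixed point the prediction no longer changes behavior in a way that falsifies it, which is exactly the consistency condition described next.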

Such a situation is what economists call a rational expectations equilibrium. I think that trying to model people’s expectations in a consistent way would improve the usefulness of epidemiological models. This is, of course, a tall order. But perhaps if economists, statisticians, and epidemiologists put their heads together, we could move in this direction.

2 thoughts on “The case for rational expectations in COVID-19 modeling”

  1. So could you call the back and forth between the predictions of the models a “rational expectations equilibrium”? And is one of the goals of such models to make this adjustment process as fast as possible?

    This makes me think of the cobweb theorem, but also of how difficult it must be to incorporate outside variables (ones not directly part of the feedback loop, but with an immediate impact). Hmm, what could be one?

    Probably local shortages of medical equipment, or, on the positive side, the discovery of a potential vaccine.

    Thanks anyway, that was quite interesting!

  2. Pingback: Some descriptive COVID-19 regressions | Graz Economics Blog
