Why our models are models of models and what that means for the current debate about the future of macroeconomics

In the latest issue of the “Oxford Review of Economic Policy”, Simon Wren-Lewis has written an interesting contribution on the shortcomings of contemporary macroeconomic models. In his article, he argues that the “microfoundations hegemony” is among the core problems holding back progress. I want to add an argument to this debate which lends further support to the incipient collapse of this dogma.

Historically, the idea of basing macroeconomic models on explicit microfoundations originated in the 1970s, leading to the demise of old-style Keynesian models that relied heavily on ad-hoc restrictions such as a constant aggregate savings rate. With the famous Lucas critique declaring that ad-hoc restrictions cannot be considered invariant to changes in economic policy, a methodological standard came to dominate the profession which demands explicit microfoundations as a precondition for proper macroeconomic modelling. The following points are central to this doctrine (a stylized illustration of the Lucas critique follows the list):

I. Explicit microfoundations are needed to make models “robust” to the Lucas-critique.

II. Explicit microfoundations provide the basis for “checking” the internal consistency of the underlying thought.

III. As a precondition for being certain about I) and II), the microfoundations have to be expressed in the precise language of mathematics.
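
Before going on, it may help to have a concrete picture of what the Lucas critique asserts. The following is a stylized, textbook-style example of my own, not taken from Wren-Lewis's article:

```latex
% A stylized Lucas-critique example: the coefficients of a reduced-form
% relation depend on the policy rule and thus shift when policy changes.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Suppose output responds only to inflation surprises,
\begin{align}
  y_t = \theta \left( \pi_t - \mathbb{E}_{t-1}\pi_t \right),
\end{align}
with $\theta$ a structural (policy-invariant) parameter. If the policy
maker sets inflation according to the rule $\pi_t = \mu + \varepsilon_t$,
rational agents expect $\mathbb{E}_{t-1}\pi_t = \mu$, so the data
generated under this rule satisfy
\begin{align}
  y_t = -\theta \mu + \theta \pi_t .
\end{align}
An econometrician fitting the ad-hoc relation $y_t = a + b\,\pi_t$ finds
$a = -\theta\mu$: the intercept is an artefact of the policy rule and
shifts whenever $\mu$ changes, even though $\theta$ itself is stable.
\end{document}
```

This is exactly the sense in which ad-hoc restrictions cannot be treated as invariant to policy.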

Although this all seems quite convincing at first sight, the whole idea nevertheless rests on a particularly troublesome misconception of what (macro)economic models usually represent. In the standard view, we see them as simplified representations of reality – as approximations to a complex world. “Think of it like a map! If you design a map of the Austrian highway system, you leave out irrelevant aspects like the trees lining the highway.” – Right? OK, so our models are approximations! Approximations of what? Of the real world! Which looks how? Well, of course we cannot know everything in perfect detail – reality is rather complex – but then how do you design proper approximations to it? How do you properly approximate something you do not really know because it is too complex?

In my view, the majority of (macro)economic models are indeed best seen as approximations, but as approximations of what somebody thinks about the real world rather than of the real world itself. They are formal models of the fuzzy “models” we carry in our brains – models of models, approximations of simplifications. To see this, consider the example below, of a kind you may easily find in a standard macro paper.

“For the sake of simplicity, suppose that an infinity of identical firms produce according to Y = f(K, L), with Y denoting output, K the capital stock and L the amount of labour.” How do we proceed when we read that?

a. Translate the equation Y=f(K,L) into words: “Ok, so…production uses capital and labour as inputs.”

b. Guess what the author might want to say about the real world:

  1. “So there is an infinity of firms in the model. Does he/she mean that there is an infinity of firms in the real world? I guess not. So how many firms does he/she mean – 1 000, 1 000 000?”
  2. “Does he/she mean that all firms are the same in the real world? I hope not!”
  3. “Ah… ‘for the sake of simplicity’ – so the assumption was made although he/she actually means something else. If so… what?! Hm…”
  4. “Maybe he/she means that analyzing the market power of firms is not necessary for the purpose of the model at hand?” Sounds better. Or maybe he/she means that market power is generally negligible… whatever. I just stick to the latter interpretation. (The sketch after this list spells out how this reading is usually rationalized.)
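
For completeness, here is a minimal sketch of how interpretation 4 is usually rationalized; this is my own reconstruction, not something the hypothetical paper states:

```latex
% Why "an infinity of identical firms" is commonly read as "market power
% is negligible": with a continuum of identical price takers, the economy
% aggregates to a single representative firm.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let firms $i \in [0,1]$ produce $y_i = f(k_i, \ell_i)$, taking prices as
given. Since all firms are identical, $k_i = K$ and $\ell_i = L$ for all
$i$, and aggregate output is
\begin{align}
  Y = \int_0^1 f(k_i, \ell_i)\, di = f(K, L),
\end{align}
so the economy behaves as if a single competitive firm produced
everything. The assumption is shorthand for ``no individual firm can
influence prices'', which is precisely the kind of informal judgement the
formal statement leaves implicit.
\end{document}
```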

Note that this is a pretty simple example. In macro models you typically have various actors and feedback effects between consumers, producers, the central bank and so on. If you let 10 people carry out the above steps for such models, you will usually get 10 different interpretations. To narrow the gap, you may introduce some form of heterogeneity into the model, try to find a slightly more realistic expression of competition, and so on. You will nevertheless end up with mathematical expressions that do not correspond to what you actually want to say about people's behavior and their interactions. In other fields, the difference between the formal model and the model you have in mind may be small; in macro, the gap is usually rather pronounced.

What does all this imply for the future of macroeconomics? I assume here that one is willing to follow some form of McCloskey's view of economists as “persuaders”, i.e. we are ultimately interested in changing the fuzzy “models” in our own brains or in other people's brains, while the formal ones are only tools for achieving this. It follows:

i) Explicit microfoundations may help to address the Lucas critique, but they cannot make a model immune to it, since other people may simply not interpret the parameters of the formal microfoundations as structural. Conversely, a model that is not explicitly microfounded may reasonably be judged robust once an informal story is added. Both routes end in an informal judgement. Explicit microfoundations are therefore neither necessary nor sufficient for addressing the Lucas critique, and by using them we do not avoid the final step of informal, fuzzy, subjective judgement.

ii) Since the formal model on paper and the fuzzy model in our brain are distinct, the internal consistency of the formal structure is neither necessary nor sufficient for the consistency of the underlying thought.

iii) Mathematical models are not an intrinsically precise way of communicating economic ideas. Ordinary speech may actually promote clarity, since it describes the fuzzy models in our brains directly rather than approximating them with the often rather crude formal elements available.

With all this, I neither want to say that we should completely depart from explicit microfoundations nor that we should abandon mathematical representations. I think both are powerful tools for bringing macroeconomics forward. There is just no reason to apply them dogmatically without asking whether doing so makes sense for the purpose at hand, and it is certainly unjustified to impose this standard on others when judging their contributions, at least if one's arguments in favor of the standard rest on I)-III). Finally, given that the gap between the formal and the fuzzy model is often sizeable, we cannot go on simply throwing models at each other. Models can be great tools for thinking, but in the end somebody has to make the actual argument about, say, the sources of the recent financial crisis. This requires using and describing the relevant parts of his/her fuzzy model, which will ideally have been sharpened by working with the formal ones. And doing so requires fuzzy, ordinary language, not math!

 


“The Rate of Return on Everything”

This is the title of a new paper by Oscar Jorda, Katharina Knoll, Dmitry Kuvshinov, and Moritz Schularick (original paper, VoxEU article). The paper is the result of a research project to calculate the rates of return on four major asset categories – equities, bonds, bills, and real estate – in 16 major developed economies, going back as far in time as the data reasonably allow. (Quibble: is that really everything? What about gold? Currencies? Commodities? Paintings? Vintage cars?)

The paper does nothing but compute long-run averages and standard deviations and draw graphs. No regressions, no econometric witchcraft, no funny stuff. Yet its findings are fascinating.
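
To make that simplicity concrete, here is a minimal sketch, in Python and with made-up numbers rather than the paper's data, of the kind of computation involved: deflate nominal returns, then take long-run means and standard deviations.

```python
import numpy as np

# Hypothetical annual nominal returns and CPI inflation rates
# (made-up numbers for illustration; not the paper's data).
nominal = np.array([0.08, -0.02, 0.12, 0.05, 0.10])
inflation = np.array([0.03, 0.01, 0.04, 0.02, 0.03])

# Exact Fisher adjustment: real return = (1 + nominal) / (1 + inflation) - 1
real = (1 + nominal) / (1 + inflation) - 1

arith_mean = real.mean()                              # arithmetic mean
geom_mean = np.prod(1 + real) ** (1 / real.size) - 1  # geometric mean
std_dev = real.std(ddof=1)                            # sample std deviation

print(f"arithmetic mean real return: {arith_mean:8.3%}")
print(f"geometric mean real return:  {geom_mean:8.3%}")
print(f"standard deviation:          {std_dev:8.3%}")
```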


Some of the results confirm what “everyone already knew, kind of”:

  1. Risky investments like equities and real estate yield about 7% per year in real terms.
  2. The risk premium (equities/housing vis-à-vis short-term bond rates) is usually between 4 and 5 percentage points.
  3. There is no clear long-run trend (either up or down) in rates of return. (Take this, Karl Marx!)

Some of the results are interesting, but not particularly puzzling:

  1. The return on total wealth (average of the rates of return on all assets weighted by their share in the economy’s aggregate portfolio) exceeds the rate of growth of GDP. This confirms Piketty’s claim that r > g. In terms of the Solow model it means we are living in a dynamically efficient regime: we cannot make both present and future generations better off by saving less. Perhaps the most interesting aspect of this finding is its robustness: it holds for every sub-period and for every country. It really seems to be a “deep fact” about modern economies.
  2. The return on risk-free assets is sometimes higher, sometimes lower than the growth rate of GDP. For instance, before the two World Wars, the differential between the risk-free rate and growth was mostly positive, so that governments had to run primary surpluses to keep debt stable (the debt-dynamics sketch after this list makes this precise). Post-1950, the differential was mostly negative.
  3. Negative returns on safe assets are not unusual. Safe rates were negative during the two World Wars as well as during the crisis of the 1970s. In recent times, safe rates went negative again in the aftermath of the global financial crisis. These findings don’t disprove the “secular stagnation” hypothesis of Summers et al., but they do put it in historical perspective. It seems that rates were unusually high during the 1980s, and the recent downward trend could just be a reversion to the mean.
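
As background to point 2, here is the standard debt-dynamics identity (a textbook relation, not something specific to the paper), which shows why the sign of the differential determines whether primary surpluses are needed:

```latex
% Government debt dynamics: debt-to-GDP ratio b_t, real interest rate r,
% real growth rate g, primary surplus s_t (as a share of GDP).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
  b_{t+1} = \frac{1+r}{1+g}\, b_t - s_t .
\end{align}
Holding the debt ratio constant ($b_{t+1} = b_t = b$) requires a primary
surplus of
\begin{align}
  s = \frac{r - g}{1 + g}\, b,
\end{align}
which is positive exactly when $r > g$: with a positive differential the
government must run primary surpluses to stabilize debt, while with
$r < g$ it can even run modest primary deficits.
\end{document}
```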

But some results are really puzzling – even shocking from the point of view of standard finance theory:

  1. The return on risk-free assets is more volatile than the return on risky ones. I haven’t yet digested this fact fully. Does this mean that “risk-free asset” is a total misnomer? No, because “risk-free” refers to the unconditional nature of an asset’s payoff, not to the variability of its return. A bond is “risk-free” because it promises a fixed cash flow irrespective of the state of the economy. Stocks are called risky, not because their returns are volatile, but because the dividends they pay are conditional on the performance of the company. So does this mean that people’s time discount rate varies a lot? Why? It can’t be consumption growth variability, because consumption is quite smooth (see the Euler-equation sketch after this list). What’s going on?
  2. Housing offers the same yield as equities, but is much less volatile. Jorda et al. refer to this as the housing puzzle. I’m not sure how puzzled I should be by this. I certainly found the high average yield of real estate assets surprising. However, from what I know about house price indices and the myriad measurement issues surrounding them, I feel one should be very cautious about the housing returns. I would definitely like someone who knows more about this to look carefully at how they calculated the returns (paging Dr. Waltl!). One potential resolution of the puzzle I can see would be differences in liquidity. Housing is super illiquid, equities are quite liquid. Couldn’t the high return on housing just be an illiquidity premium?
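
As background to point 1, here is the textbook benchmark against which the volatility of the safe rate looks so strange: the standard log-linearized consumption Euler equation under CRRA utility (a generic benchmark, not the paper's own framework):

```latex
% Log-linearized consumption Euler equation under CRRA utility: the safe
% real rate moves one-for-one with expected consumption growth.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
  r_t \approx \rho + \gamma\, \mathbb{E}_t\!\left[ \Delta \ln c_{t+1} \right],
\end{align}
where $\rho$ is the time discount rate and $\gamma$ the coefficient of
relative risk aversion. If consumption growth is smooth, the right-hand
side is smooth, so the safe rate should be smooth too. A highly volatile
risk-free rate therefore points at large swings in $\rho$, or at something
missing from the benchmark model altogether.
\end{document}
```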

There is much, much more in the paper, but those were the points that I found most striking. I’m sure this will be one of the most important papers of the past year and will be a crucial source for researchers in finance, growth, and business cycle theory. Plenty of food for thought.