# Economic Illiteracy in "Die Zeit"

It never ceases to amaze me that economic thinking seems so utterly foreign to many people who deal with "the economy" for a living. This is especially true of business journalists.

A striking example was recently provided by "Die Zeit". In the article in question, Hermann-Josef Tenhagen sets out to enlighten us about "10 things we need to know about the economy". Already the first point is enough to make any good economist wince.

> Used-car dealers have a bad reputation. It used to be even worse. I have always pictured Danny DeVito as a used-car dealer in the old days. With that image in mind, the short, fat man with a cigar in his mouth, it is easy to explain why a market needs rules. For only since used-car dealers have been required to guarantee the quality of the cars they sell for some time after the purchase can I buy a car there without assuming that the thing will break down around the next corner. And only since then have honest used-car dealers stood a chance against competitors who do nothing but che… their customers. A market needs rules in order to function.

It is a good exercise for first-semester economics students to spend a few minutes thinking about what is wrong with this argument. (There is more than one problem.)

Here is the main problem.

Mr. Tenhagen ignores the possibility that in a free market good used-car dealers have an incentive to offer warranties voluntarily. A voluntarily offered warranty helps buyers tell good used cars from bad ones. A mandatory warranty destroys this signal, and with it the market for cheap used cars.

In a market without a mandatory warranty, I as a buyer can choose between a car with a warranty for 12,000 euros and the same car at the dealer next door for 8,000 euros, but without a warranty. As a buyer, I can decide whether I want to pay the extra 4,000 euros for the warranty, or whether I would rather save the money and expose myself to the risk of ending up with a lemon. The family man with a steady income and little appetite for risk will tend to prefer the warranty. The precariously employed econ student who speculates a bit in Bitcoin on the side might take his chances in the lemon lottery. As a used-car dealer, I will offer the warranty only if the expected costs it entails do not exceed 4,000 euros.

In equilibrium, the price difference between the car with a warranty and the one without must exactly offset the quality difference between the cars on offer.

What happens if all used-car dealers are now required to provide a warranty? Those dealers who were not willing to bear the 4,000 euros in warranty costs before will not, as if by magic, be willing to do so now. And those buyers for whom 4,000 euros extra for a car with a warranty was too much will not suddenly be willing to pay more. The somewhat shabby used-car dealer will disappear from the market, and the econ student simply will no longer be able to afford a car.

The mandatory warranty does not cause all shabby used-car dealers to suddenly mend their ways and offer only high-quality cars. It merely reduces the supply of cheap, lower-quality cars, at the expense of the buyers with the lowest willingness to pay. The rule that Mr. Tenhagen deems urgently necessary is not just unnecessary, it is downright harmful: it does nothing for those who would have chosen the warranty anyway, and it pushes out of the market those who would have been happy to do without it.
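The argument can be made concrete with a small numerical sketch. The prices are the ones from the example above; the two buyers' valuations are hypothetical numbers of my own, chosen so that one buyer type prefers the warranty and the other does not:

```python
# Sketch of the used-car example above. The prices (12,000 with warranty,
# 8,000 without) are from the text; the buyer valuations are hypothetical
# numbers chosen to match the two buyer types described.

PRICE_WITH = 12_000    # price of a car with a voluntary warranty
PRICE_WITHOUT = 8_000  # same car next door, no warranty

# Each buyer: (description, value of the car itself, extra value of a warranty)
buyers = [
    ("risk-averse family man", 13_000, 5_000),
    ("risk-loving econ student", 9_000, 1_000),
]

def market(mandatory_warranty):
    """Return who buys what. A buyer takes the warranty car if his total
    willingness to pay covers its price, and otherwise falls back on the
    cheap no-warranty car if that car is still allowed to exist."""
    sales = []
    for who, value, warranty_value in buyers:
        if value + warranty_value >= PRICE_WITH:
            sales.append((who, "with warranty"))
        elif not mandatory_warranty and value >= PRICE_WITHOUT:
            sales.append((who, "without warranty"))
    return sales

print(market(mandatory_warranty=False))  # both buyer types are served
print(market(mandatory_warranty=True))   # the student is priced out
```

Under the voluntary regime both buyer types find a car; once the warranty is mandatory, the low-valuation buyer drops out of the market entirely, which is exactly the point made above.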

I find it rather troubling that a supposed quality newspaper like "Die Zeit" sells its readers economic illiteracy under the headline "Grundwissen Ökonomie" (basic economics). It is the kind of economic illiteracy that can have grave consequences once it becomes the basis of economic policy.

# Why our models are models of models and what that means for the current debate about the future of macroeconomics

In the latest issue of the "Oxford Review of Economic Policy", Simon Wren-Lewis has written an interesting contribution on the shortcomings of contemporary macroeconomic models. In his article, he argues that the "microfoundations hegemony" is among the core problems that hold back progress. I want to add an argument to this debate in support of the incipient collapse of this dogma.

Historically, the idea of basing macroeconomic models on explicit microfoundations originated in the 1970s, leading to the demise of old-style Keynesian models, which relied heavily on ad hoc restrictions such as a constant aggregate savings rate. With the famous Lucas critique declaring that ad hoc restrictions cannot be considered invariant to changes in economic policy, a methodological standard came to dominate the profession that demands explicit microfoundations as a precondition for proper macroeconomic modelling. The following points are central to this doctrine:

I. Explicit microfoundations are needed to make models "robust" to the Lucas critique.

II. Explicit microfoundations provide the basis for “checking” the internal consistency of the underlying thought.

III. As a precondition for being certain about I. and II., the microfoundations have to be expressed in the precise language of mathematics.

Although this all seems quite convincing at first sight, the whole idea nevertheless rests on one particularly troublesome misconception of what (macro)economic models usually represent. In the standard view, we see them as simplified representations of reality, as approximations to a complex world. "Think of it like a map! If you design a map of the Austrian highway system, you leave out irrelevant aspects like the trees lining the highway." Right? OK... so our models are approximations! Approximations of what? Of the real world! Which looks how? Well, of course we cannot know everything in perfect detail, reality is rather complex, but... but you know how to design proper approximations to it? How do you make proper approximations to something you do not really know because it is too complex?

In my view, the majority of (macro)economic models are indeed best seen as approximations, but as approximations of what somebody thinks about the real world rather than of the real world itself. They are formal models of the fuzzy "models" that we have in our brains: models of models, approximations of simplifications. To see this, consider the following example, which you might easily find in a standard macro paper.

"For the sake of simplicity, suppose that an infinity of identical firms produce according to Y=f(K,L), with Y giving output, K denoting the capital stock and L the amount of labour." How do we proceed if we read that?

a. Translate the equation Y=f(K,L) into words: "OK, so... production uses capital and labour as inputs."

b. Guess what the author might want to say about the real world:

1. "So there is an infinity of firms in the model. Does he/she mean that there is an infinity of firms in the real world? I guess not. So how many firms does he/she mean: 1,000? 1,000,000?"
2. “Does he/she mean that all firms are the same in the real world? – I hope not!”
3. "Ah... 'for the sake of simplicity', so the assumption was made even though he/she actually means something else. If so... what?! Hm..."
4. "Maybe he/she means that analyzing the market power of firms is not necessary for the purpose at hand?" Sounds better. Or maybe he/she means that market power is generally negligible... whatever. I'll just stick to the latter interpretation.

Note that this is a pretty simplified example. In macro models you typically have various actors and feedback effects between consumers, producers, the central bank, etc. If you let 10 people carry out the steps above for such models, you will usually get 10 different interpretations. To overcome this, you may introduce some form of heterogeneity into the model, try to find a slightly more realistic expression of competition, and so on. You will nevertheless end up with mathematical expressions that do not correspond to what you actually want to say about people's behavior and their interactions. In other fields, the difference between the formal model and the model you have in mind may be small; in macro, the gap is usually rather pronounced.

What does all this imply for the future of macroeconomics? I assume here that one is willing to follow some form of McCloskey's view of economists as "persuaders", i.e. we are interested in changing the fuzzy "models" in our own brains or in other people's brains, while the formal ones are only tools for achieving this. It follows that:

i) Explicit microfoundations may help to address the Lucas critique, but they cannot make a model immune to it, since other people may simply not interpret the parameters of the formal microfoundations as structural. More importantly, a model that is not explicitly microfounded may reasonably be judged robust by adding an informal story. Both procedures end in an informal judgement. Explicit microfoundations are therefore neither necessary nor sufficient to address the Lucas critique, and by using them we do not escape the final step of informal, fuzzy, subjective judgement.

ii) Since the formal model on paper and the fuzzy model in our brain are distinct, the internal consistency of the formal structure is neither necessary nor sufficient for the consistency of the underlying thought.

iii) Mathematical models are not an intrinsically precise way of communicating economic ideas. Ordinary speech may promote clarity, since it describes the fuzzy models in our brains directly rather than approximating them with the often rather rough formal elements available.

With all this, I want to say neither that we should completely abandon explicit microfoundations nor that we should abandon mathematical representations. I think both are powerful tools for moving macroeconomics forward. There is just no reason to apply them dogmatically without asking whether doing so makes sense for the purpose at hand, and it is certainly unjustified to impose this standard on others when judging their contributions, at least if one's arguments in favor of this standard are based on I.-III. Finally, given that the gap between the formal and the fuzzy model is often quite sizeable, we cannot stick to simply throwing models at each other. Models can be great tools for thinking, but in the end somebody has to make the actual argument about, say, the sources of the recent financial crisis. This requires using and describing the relevant parts of one's fuzzy model, ideally a fuzzy model that has been sharpened with the help of the formal ones. And doing so requires fuzzy, ordinary language, not math!

# "The Rate of Return on Everything"

This is the title of a new paper by Oscar Jorda, Katharina Knoll, Dmitry Kuvshinov, and Moritz Schularick (original paper, voxeu article). The paper is the result of a research project to calculate the rates of return on four major asset categories – equities, bonds, bills, and real estate – in 16 major developed economies going back as far in time as reasonable. (Quibble: Is that really everything? What about gold? currencies? commodities? paintings? vintage cars?)

The paper does nothing but compute long-run averages and standard deviations and draw graphs. No regressions, no econometric witchcraft, no funny stuff. Yet its findings are fascinating.

Some of the results confirm what "everyone already knew, kind of":

1. Risky investments like equities and real estate yield around 7% per year in real terms.
2. The risk premium (equities/housing vis-à-vis short-term bond rates) is usually between 4 and 5%.
3. There is no clear long-run trend (either up or down) in rates of return. (Take this, Karl Marx!)

Some of the results are interesting, but not particularly puzzling:

1. The return on total wealth (the average of the rates of return on all assets, weighted by their share in the economy's aggregate portfolio) exceeds the rate of growth of GDP. This confirms Piketty's claim that r > g. In terms of the Solow model, it means we are living in a dynamically efficient regime: we cannot make both present and future generations better off by saving less. Perhaps the most interesting aspect of this finding is its robustness: it holds for every sub-period and for every country. It really seems to be a "deep fact" about modern economies.
2. The return on risk-free assets is sometimes higher, sometimes lower than the growth rate of GDP. For instance, before the two World Wars, the differential between the risk-free rate and growth was mostly positive, so that governments had to run primary surpluses to keep debt stable. Post-1950, the differential was mostly negative.
3. Negative returns on safe assets are not unusual. Safe rates were negative during the two World Wars as well as during the crisis of the 1970s. In recent times, safe rates went negative again in the aftermath of the global financial crisis. These findings don't disprove the "secular stagnation" hypothesis of Summers et al., but they do put it in historical perspective. It seems that rates were unusually high during the 1980s, and the recent downward trend could just be a reversion to the mean.
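The r > g comparison in point 1 is just a weighted average of asset returns set against a growth rate. The sketch below uses purely hypothetical round numbers for the returns, portfolio shares, and growth rate (they are not the paper's estimates); the point is only the mechanics:

```python
# Illustrative computation of the return on total wealth versus GDP growth.
# All returns, portfolio shares, and the growth rate are hypothetical round
# numbers, not the estimates of Jorda et al.

returns = {"equities": 0.07, "housing": 0.07, "bonds": 0.025, "bills": 0.01}
shares  = {"equities": 0.25, "housing": 0.45, "bonds": 0.20, "bills": 0.10}

# return on total wealth: average return weighted by portfolio share
r = sum(returns[a] * shares[a] for a in returns)
g = 0.03  # assumed real GDP growth

print(f"r = {r:.2%}, g = {g:.2%}, r - g = {r - g:.2%}")
```

With these placeholder numbers, r comes out at 5.5%, comfortably above the assumed 3% growth, which is the qualitative pattern the paper finds in every sub-period and country.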

But some results are really puzzling – even shocking from the point of view of standard finance theory:

1. The return on risk-free assets is more volatile than the return on risky ones. I haven't yet digested this fact fully. Does this mean that "risk-free asset" is a total misnomer? No, because "risk-free" refers to the unconditional nature of an asset's payoff, not the variability of its return. A bond is "risk-free" because it promises a fixed cash flow irrespective of the state of the economy. Stocks are called risky not because their returns are volatile, but because the dividends they pay are conditional on the performance of the company. So does this mean that people's time discount rate varies a lot? Why? It can't be consumption growth variability, because consumption is quite smooth. What's going on?
2. Housing offers the same yield as equities but is much less volatile. Jorda et al. refer to this as the housing puzzle. I'm not sure how puzzled I should be by this. I certainly found the high average yield of real estate assets surprising. However, from what I know about house price indices and the myriad measurement issues surrounding them, I feel one should be very cautious about the housing returns. I would definitely like someone who knows more about this to look carefully at how they calculated the returns (paging Dr. Waltl!). One potential solution to the puzzle would be differences in liquidity. Housing is extremely illiquid, equities are quite liquid. Couldn't the high return on housing just be an illiquidity premium?

There is much, much more in the paper, but those were the points that I found most striking. I’m sure this will be one of the most important papers of the past year and will be a crucial source for researchers in finance, growth, and business cycle theory. Plenty of food for thought.

# Intro to Econ: Third Lecture – Efficiency, Fairness, Trade, and a bit about Free Trade Agreements

In the third lecture, after a review of the second, I talk about (bilateral) trade and more general exchange, efficiency, and fairness. I do this in the context of a kids' birthday party, following to some extent chapter 3 of Ariel Rubinstein's "Economic Fables". I don't know how this is done in other parts of the world, but in Graz there seem to be certain specific norms one should follow when hosting a kid's birthday party. You invite roughly as many children as your child's age in years. Children bring presents, but each child also goes home from the party with a little bag of goodies. As concerned parents, we do not want to give the children too many sweets, so we give them little presents such as Lego or Playmobil figures, a toy car, or something like that. We did this twice this year (we have two kids), and in both cases the first thing that happened after the kids finally found the treasure (there is often a sort of treasure hunt) was this: the kids started to trade. So I ask the students what is going on when kids trade their presents.

# Me, Myself and Economics: Disequilibrium

I considered choosing 'A Non-Equilibrium Approach' as the subtitle of my dissertation. At about the same time, a colleague of mine stated that 'disequilibrium economics' is a 'logical implausibility', since an equilibrium in economics is not much more than a consistency condition, different from the notion in physics, where it mainly refers to a state in which the described system is at rest. I have to disagree with this perhaps unintentional attempt to whitewash a bunch of approaches which are, like probably every other approach, criticized for good reason.

Just think of basic micro or macro and the definition of a market or an economy in equilibrium. There the term is not used to describe consistency in the derivation of the outcome; it mainly refers to the outcome's characteristics, for example that supply and demand are balanced. Go further in the curriculum and think of an equilibrium in game theory. While it is also derived in a way that is consistent with the stated assumptions, its description states more than that, for example that it is a combination of strategies from which no individual has an incentive to unilaterally deviate.

Therefore, in my opinion, equilibrium approaches go beyond detecting an outcome that is logically implied by assumptions and step-by-step analysis. They also tend to presume an outcome of a certain type and thereby risk neglecting other outcomes, strategies, and behaviours, and with them whole issues that may be highly relevant in reality.

In case my concern is not clear, a discussion of Rubinstein's famous e-mail game may help. The e-mail game can be described as follows: a couple wants to meet and prefers being together over being apart. However, if it rains they prefer to meet inside; otherwise they prefer to meet outside. Whether it rains or not is determined by nature, and only one person, let us say the woman, knows the weather for sure. If it is going to rain, she sends an e-mail to the man. Every received e-mail is read and automatically triggers a response, but every e-mail also gets lost with some small probability. This means that the e-mail conversation may go on for a long time, even forever, though the probability of the latter tends to zero.

Because of the small but positive probability of an e-mail getting lost, neither party ever knows for sure how many e-mails have been sent. The woman knows whether she sent an e-mail or not, but she cannot distinguish the state in which one e-mail was sent from the state in which two were sent (captured by her information partition Pw). While it may be that the second e-mail, sent automatically from the man's account, got lost, it may also be that her own e-mail did not get through in the first place. The moment the second e-mail arrives, the third is triggered automatically and she can distinguish that state from the previous ones. However, she again cannot distinguish between three and four e-mails sent, because if she knew about the fourth e-mail she would automatically have sent the fifth and would be in yet another state. The man faces a similar incompleteness of information (captured by his partition Pm). He, in turn, cannot distinguish between zero and one e-mail sent, just as he cannot distinguish between two and three e-mails sent, and so on.

Rubinstein thereby shows that the strictly formal approach does not lead to an equilibrium in which they meet outside, even if the probability that each message gets through is high. In fact, the formal result of the game described above is that neither of the two will risk going outside, as there is no state (described in terms of e-mails sent) that is common knowledge. However, the example not only shows how easily simple games become complicated in formal terms; it also shows how misleading the strictly formal conclusion can be with regard to the underlying issue. The story was about a couple who want to meet, inside on rainy days, outside otherwise. They both know their preferences. They differ only in the information they have: first about the state of nature, and second about how many e-mails were sent. The second issue, however, should not be the one of primary interest. Instead, a social scientist, and therefore an economist, should simply ask: how many e-mails have to be sent before both know that they both know about the weather, so that real human beings would coordinate on the preferred equilibrium?

One e-mail sent just states that it is going to rain and the woman knows it. Two e-mails sent mean that the man has received this important information, but the woman does not yet know that. Three e-mails mean that the woman knows that the man knows. Four e-mails mean that the man now knows that the woman knows that he knows. Five e-mails mean that the woman now knows that the man knows that the woman knows that he knows. At the latest after the sixth or seventh e-mail, both know that they have reached the desired situation in which both know that they both know.
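The ladder of knowledge just described can be generated mechanically. The sketch below is my own illustration of that chain, not Rubinstein's formal apparatus: each e-mail wraps one more "X knows that ..." layer around the statement, alternating between the two parties.

```python
# Generate the ladder of interactive knowledge from the e-mail game.
# E-mail 1 is sent by the woman; replies alternate automatically, and each
# e-mail adds one more "knows that" layer on top of the previous statement.
# My own illustrative sketch of the chain described in the text.

def statement(n_emails):
    s = "it will rain"
    for k in range(1, n_emails + 1):
        who = "the woman" if k % 2 == 1 else "the man"
        s = f"{who} knows that {s}"
    return s

for n in range(1, 6):
    print(f"{n} e-mail(s): {statement(n)}")
```

The printout makes visible how each additional e-mail raises the level of mutual knowledge by exactly one step, while common knowledge (infinitely many layers) is never reached.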

While they can never be sure that their last e-mail got through, they reach a state where real human beings, and thereby the economic agents of interest, no longer care. Agents may differ in the number of e-mails they require in order to believe in successful coordination, but I claim that few require more than the five to seven e-mails above.

So, while the formal equilibrium approach provides some insights in favour of a theoretical statement about mutual and common knowledge, it risks drawing too much attention towards the wrong issue, or at least away from non-equilibrium outcomes that may be highly probable in reality. I think this is a general issue with equilibrium economics, which is worthwhile and helpful in many regards, but which always has to be conducted, and interpreted, with caution.

# Intro to Econ: Second Lecture – Financial Derivative Pricing

The final bit of the second lecture is an introduction to financial engineering. Assuming the absence of arbitrage is all one needs to price financial derivatives. A financial derivative, perhaps a bit narrowly defined, is a financial product, that is, a risky investment possibility, with payoffs that depend exclusively on other "basic" financial products such as bonds and stocks. Students may want to google what bonds and stocks are if they do not yet know. For our purposes, all we need to know is that the stock of a company has a value or price that varies substantially over time. The future price of a stock is uncertain today, and this uncertainty can be quite large.
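To give a taste of how pricing by no-arbitrage works, here is a minimal one-period sketch with entirely hypothetical numbers: a stock at 100 that will be worth either 120 or 80 tomorrow, a riskless rate of zero, and a call option with strike 100.

```python
# One-period no-arbitrage pricing sketch (hypothetical numbers).
# Stock: 100 today, worth 120 or 80 tomorrow. Riskless rate: 0.
# Derivative: a call option with strike 100, i.e. payoff max(S - 100, 0).

S0, S_up, S_down = 100.0, 120.0, 80.0
K = 100.0
payoff_up = max(S_up - K, 0.0)      # option payoff in the up state: 20
payoff_down = max(S_down - K, 0.0)  # option payoff in the down state: 0

# Find a portfolio of 'delta' shares plus 'b' euros in riskless bonds that
# replicates the option payoff in both states:
#   delta * S_up   + b = payoff_up
#   delta * S_down + b = payoff_down
delta = (payoff_up - payoff_down) / (S_up - S_down)
b = payoff_up - delta * S_up

price = delta * S0 + b  # today's cost of the replicating portfolio
print(f"delta = {delta}, bond position = {b}, option price = {price}")
```

The option must trade at the cost of the replicating portfolio (here: half a share bought on 40 euros of credit, costing 10 euros); at any other price, one can buy the cheap side, sell the expensive side, and pocket a riskless profit.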

# Intro to Econ: Second Lecture – Arbitrage with Sports Bets

In this part of the second lecture I turn to another area in which the absence of arbitrage, due to people preferring more money over less, implies severe restrictions: sports betting. I begin by giving the students some possibly fictitious betting odds on three football (soccer) games, shown in the following table.

|   | Game 1 | Game 2 | Game 3 |
|---|--------|--------|--------|
| A | 1.1    | 4.75   | 1.9    |
| x | 11     | 3.6    | 4.2    |
| B | 21     | 1.78   | 5      |
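A natural first exercise with this table: convert each quoted decimal odd into an implied probability, 1/odds, and sum over the three outcomes of each game. If the sum is below 1, staking amounts proportional to the implied probabilities guarantees a profit no matter how the game ends. A small sketch, using the odds from the table above:

```python
# Check the betting odds from the table for arbitrage opportunities.
# Decimal odds: a 1-euro bet on outcome i returns odds[i] euros if i occurs.

games = {
    "Game 1": {"A": 1.1,  "x": 11.0, "B": 21.0},
    "Game 2": {"A": 4.75, "x": 3.6,  "B": 1.78},
    "Game 3": {"A": 1.9,  "x": 4.2,  "B": 5.0},
}

for name, odds in games.items():
    s = sum(1 / o for o in odds.values())  # bookmaker's implied total probability
    if s < 1:
        # stake 1/(odds * s) on each outcome: total stake 1, guaranteed payout 1/s
        print(f"{name}: implied sum = {s:.4f} < 1 -> arbitrage, riskless return {1/s - 1:.2%}")
    else:
        print(f"{name}: implied sum = {s:.4f} >= 1 -> no arbitrage here")
```

Running the check shows that exactly one of the three games admits an arbitrage, which is the punchline the students are supposed to discover themselves.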