The license plate celebrates the 200th birthday of the State of Indiana in 2016. 200 years! This means, in a sense, Indiana is older than many of the member states of the European Union. In 1816, Germany was still a patchwork of small territories, loosely connected through the German Confederation – of which Austria was a part. Italy was merely a geographical description – the process of Italian unification had not even begun. Greece was just a province of the Ottoman Empire. Belgium, formerly known as the "Austrian Netherlands", was part of the United Kingdom of the Netherlands. France did exist as a nation then. However, while the people of Indiana lived for 200 years under the same political system and only once made marginal changes to their constitution (more about this below!), the French during the same time went from the post-Napoleonic Bourbon monarchy to the Second Republic, to the Second Empire, to the Third Republic, to German occupation and the Vichy regime during the Second World War, and finally to the Fourth and now the Fifth Republic. As far as I am aware, there is no European country which has had the same constitution for the past 200 years without interruptions or major changes – the closest thing to an exception I can think of is the United Kingdom, which has always had the same constitution: none.

There is another striking fact about the history of Indiana. Indiana has been in a monetary union with the rest of the United States for as long as it has existed. And during its early history, it had its own debt crisis which bears a striking resemblance to the recent history of the much younger European monetary union.

When Indiana became a state in 1816, it was mostly a wilderness at the margin of civilization. The only major road in the state was the Buffalo Trace – literally a trace created by migrating bison herds. The population was only 65,000 initially, but growing fast. The government of the young state decided to take the state's infrastructure into the 19th century. And 19th-century infrastructure, they figured, was going to be canals. So they launched a giant public investment program, called the Mammoth Internal Improvement Act, spending 10 million dollars (equivalent to 260 million current dollars, roughly 100% of the state's GDP at the time) on canals and toll roads. The heart of the project was the Wabash & Erie Canal, connecting the Great Lakes with the Ohio River. Fittingly, "Crossroads of America" is the official state motto of Indiana.

To finance these projects, the governor of Indiana, a certain Noah Noble, had a plan: some money was to be raised by selling public lands, some by raising taxes, and some by borrowing from the Bank of Indiana, which was partly state-owned. The Bank of Indiana refinanced itself by issuing bonds, backed by the state, at the London exchange.

Initially, the plan looked like a big success. The construction works employed many thousands of people and provided a stimulus for the economy. Borrowing costs were low and spirits were high. But soon, problems started to appear. It turned out that the government had greatly underestimated the costs of building the canals, mostly because they had failed to take into account the damage done by muskrats burrowing through the walls of the dams. Critical voices in the state legislature regarded the canals as a total waste of money. Railroads, they argued, were the future! Nobody seemed to listen.

And then, in 1837, a financial crisis broke out. The crisis was triggered by the Bank of England which, in an attempt to curb the outflow of gold and silver reserves, raised interest rates. This had a direct impact on Indiana whose borrowing costs skyrocketed. It also had an indirect effect: since the United States was on a gold and silver standard, American banks were forced to follow the Bank of England in raising interest rates, which led to a credit crunch and a nation-wide recession. (A classic example of a monetary policy spillover effect!)

The combination of stagnant tax revenues, exploding construction costs and rising interest rates meant that the State of Indiana was effectively bankrupt at the end of 1841. So they sent the head of the Bank of Indiana to London to negotiate a restructuring of the debt. The creditors agreed to a haircut of 50% of the debt. In exchange, Indiana handed over control of most of the canals and roads, many of them still unfinished. The Wabash & Erie Canal was held in trust to pay off the remaining debt. It operated until the 1870s, yielding a modest profit, but was soon made obsolete by the railroads, which turned out to be the key infrastructure of the 19th century.

The conclusion Indiana drew from this was that the long-run costs of government borrowing far exceed the short-run benefits. Which is why, in 1851, they adopted an amendment to their constitution forbidding the state government from going into debt (except in cases of emergency).

I’d say there is a thing or two our modern European states can learn from this story.

In this post I want to ask whether this threat is a credible one. I will have two answers to this question: yes and no.

Haha. Well, it depends on what you call a "credible" threat. The most commonly known notion of a non-credible threat is due to Reinhard Selten. Paraphrasing his work somewhat, a threat is not credible if, once asked to actually go through with it, people do not find it in their own interest to do so. Reinhard Selten then defined a Nash equilibrium to be free of non-credible threats if it is a subgame perfect equilibrium, that is, a Nash equilibrium that is also a Nash equilibrium in every subgame. What does that mean? Well, it means that the strategy profile the two players play is such that, **no matter what has happened in the game so far**, no player would want to deviate from the profile at any point onwards, as long as they believe the other player follows their part of it.

Let us come back to the nappy-changing game as described in my first post in this series. Here is a brief summary of this game. I ask Oscar if his nappy is full (after some initial but uncertain evidence pointing slightly in this direction). Oscar can make his answer depend on the true state of his nappy (full or clean) and this answer can either be “yes” or “no”. I then listen to his answer and make my decision whether to check the state of his nappy or not a function of what answer he gave. Let me reproduce the normal form depiction of this game again here:

The one-shot game only has equilibria in which I do not trust Oscar's statement and always check his nappy. This is bad for both of us. It would be better for both of us if Oscar was truthful and I believed him. This, I then argued in the previous post in this series, can be made an equilibrium outcome if the two players play the grim trigger strategy as suggested by all these proverbs. Note that I keep assuming that I (as a player in this game) can always find out about the true state of the nappy sooner or later, an assumption that has rightly been pointed out not to be completely plausible in all cases. [One could here talk about the more recent literature on repeated games with imperfect monitoring, but I will refrain from doing so at this point – the reader may want to consult the 2006 book by Mailath and Samuelson on Repeated Games and Reputations.]

The grim trigger strategy is as follows. I believe Oscar as long as he was always truthful in the past (and as long as I was always trusting in the past). Once I catch Oscar lying (or once I have not trusted Oscar) I never again believe him and always check his nappy from then on. Oscar is truthful as long as I have always trusted him (and as long as he has always been truthful). Once he catches me not trusting him (or once Oscar himself was untruthful) Oscar will always say no from that point on. The statements in brackets probably seem strange to someone not used to game theory, but they are needed for the statements below to be fully correct.

This grim trigger strategy, then, is a subgame perfect equilibrium, provided Oscar's discount factor is sufficiently high. Why? I have already argued that, in this case, Oscar would not want to deviate from the equilibrium path of being truthful, because lying would lead to my never trusting him again and this is sufficiently bad for him if his discount factor is sufficiently high. I certainly have no incentive to deviate from this equilibrium path, because I get the best payoff I can possibly get in this game. But subgame perfection also requires that the threat, when we are asked to carry it out, is in itself also equilibrium play. So suppose Oscar did lie at some point and I (and Oscar) are now supposed to carry out the threat. What do we do? Well, we now play the equilibrium of the one-shot game forever and ever. But this is of course an equilibrium of the repeated game, so the grim trigger strategy described here does indeed constitute a subgame perfect equilibrium.

So according to Reinhard Selten’s definition the punishment of never ever believing Oscar again is a credible threat.

But others have argued that Reinhard Selten’s notion of a credible threat is only a minimal requirement and further requirements may be needed in some cases. I do not know what Reinhard Selten thought of this, but I guess he would have agreed. So what is the issue?

Oscar and I when we look at this game should realize that we would both like to play the truthful and trusting equilibrium path. To incentivize us to do so, especially Oscar in this case, we need to use this threat of me never again believing him if I catch Oscar lying. But suppose we are in a situation in which I have to carry out this threat. Then we would both agree that we are in a bad equilibrium and that we would want to get away from it again. In other words we both would want to renegotiate again. But if Oscar foresees this, that I would always be willing to renegotiate back to the truthful and trusting outcome of the game, then his incentives to be truthful are greatly diminished.

With something like this in mind Farrell and Maskin (1989) and others have put forward different versions of equilibria of repeated games that are **renegotiation-proof**, that is immune to renegotiation.

They call a strategy profile of a repeated game a **weakly renegotiation proof** equilibrium if all possible prescribed continuation strategy profiles (after any potential history) are Nash equilibria in their subgame and cannot be Pareto-ranked. This means that any two prescribed continuation strategy profiles (or continuation equilibria, as they call them) must be such that one player prefers one of the two while the other player prefers the other. Note that this is not the case in the grim trigger equilibrium of the repeated nappy-changing game. In this subgame perfect equilibrium both Oscar and I prefer the original equilibrium path of the game over the one carried out after Oscar is caught lying.
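To see concretely why grim trigger fails this test, here is a minimal sketch in Python. The per-period payoff pairs are my own reconstruction from the zero/one payoffs of the original nappy-changing post, and p = 0.6 is just an illustrative prior of a full nappy:

```python
def pareto_ranked(u, v):
    """True if one payoff profile weakly dominates the other (strictly somewhere)."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return dominates(u, v) or dominates(v, u)

p = 0.6  # illustrative prior that the nappy is full
cooperative = (1 - p, 1.0)  # per-period (Oscar, me) payoffs: truthful and trusting
punishment = (0.0, p)       # per-period payoffs of the one-shot equilibrium: always check

# The cooperative continuation Pareto-dominates the punishment continuation,
# so grim trigger fails weak renegotiation proofness.
print(pareto_ranked(cooperative, punishment))  # True
```

In a genuinely weakly renegotiation proof equilibrium this function would have to return False for every pair of prescribed continuation payoffs.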

So what is weakly renegotiation proof in the repeated nappy-changing game? Well, I have made some calculations and the best weakly renegotiation proof equilibrium for me (in terms of my payoffs) that I could find is this: On the equilibrium path Oscar is truthful and I randomize, trusting Oscar with a probability of 2/3 and playing "do not check (regardless of what Oscar says)" with a probability of 1/3. For this to work I have to randomize using dice or something like this in such a way that Oscar can verify that I correctly randomize. If Oscar ever deviated I then verifiably (to Oscar) randomize between playing "check nappy regardless of Oscar's answer" and trusting Oscar, shifting most of the weight I previously placed on "do not check" onto checking. Oscar then continues to be truthful. Oscar incentivizes me to behave in this way (after all I am letting Oscar do what he wants sometimes, which is not what I would want) by punishing me, if I ever deviate, with the following strategy. He would continue to be truthful but ask me to play "do not check (regardless of what Oscar says)". This complicated "construction" works only if the punishment is not done forever, but only for a suitably long time, after which we go back to the start of the game with the equilibrium path behavior. Whenever one of us deviates from the prescribed behavior in any stage we simply restart the punishment phase for this player.

This probably all sounds like gobbledygook and I admit it is a bit complicated. Before I provide my final verdict of this post let me just clarify this supposedly best-for-me renegotiation proof equilibrium. Suppose p is essentially one half. Then in the prescribed strategy profile, on the equilibrium path, I am randomizing between trusting Oscar (which I find best given that he is truthful) with a probability of 2/3, that means two thirds of the time. But one third of the time I leave Oscar in peace even when he tells me his nappy is full. As the nappy is full one half of the time, I actually leave him in peace despite a full nappy in one sixth of all cases. I have to do this in order for my punishment of him to be actually preferred by me over the equilibrium path play. In the punishment I mostly switch probability from leaving Oscar in peace to checking him always, which is something that I do not find so bad, but that Oscar really dislikes.
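The one-sixth figure is just arithmetic, but for the record here is the computation as a tiny sketch (the prior p = 1/2 and the 1/3 mixing weight are taken from the discussion above):

```python
p = 0.5            # prior probability that the nappy is full (taken as one half here)
pr_no_check = 1/3  # on the equilibrium path I play "do not check" with probability 1/3

# Probability that the nappy is full AND I leave Oscar in peace anyway
full_and_left_in_peace = pr_no_check * p
print(full_and_left_in_peace)  # one sixth of all cases
```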

Well, I do not know if you find this very convincing, but I do think that the grim trigger strategy is not really feasible when it comes to teaching kids not to lie. What I actually use in real life is a simple trick: I do not punish behavior only within the nappy-changing game itself. I use television watching rights, something outside the game I just described. The great thing about this is that, while Oscar does not like it when his television time is reduced, I am actually quite happy when he watches less television, and so this works as a renegotiation proof punishment. And the fact that I do this can be explained by the failure of the grim trigger strategy to be renegotiation proof.

The nappy-changing game as I have written it down in my post on lying (which you may need to read before you can read this post) can also be seen as the game between the boy and his elders. There are two states of nature. Either there is a wolf or there is not. The boy, who is watching the sheep, knows which state it is and the elders, who are somewhere else, do not. The boy has four (pure) strategies: never say anything, be honest (cry wolf when there is one, be quiet when there is none), use "opposite speak", and always cry wolf. The elders who listen to the boy's cry also have four (pure) strategies: always come running, trust the boy, understand the boy as if he was using opposite speak, and never come running. Supposedly, the elders' preferences are just as mine are in the nappy-changing game. They would like to come running if there is a wolf, and they would like to keep doing whatever it is they are doing when there is no wolf. The boy's preferences seem to be the same as Oscar's in the nappy-changing game. If there is a wolf the boy would like to see his elders come running to help, but the boy would like the elders to come running even when there is no wolf (he gets bored I suppose). The one slight difference between the two games seems to be that the assumed commonly known probability of a wolf appearing is now less than one half (if we assume that the payoffs are still just ones and zeros). Well, what matters is that the ex-ante expected payoff of coming running is lower than the ex-ante expected payoff of staying put. We infer this from the elders' supposed actions of staying where they are when they do not believe that there is a wolf. If the elders had found a wolf attack really disastrous and at the same time sufficiently likely, then, after finding the boy not trustworthy, they would have decided to always come, that is, to watch out for wolves themselves.
The fact that they let the boy do the watching (and then ignore his warnings – because they do not believe him) tells us that, without further information about the likelihood of the presence of a wolf, they prefer to stay where they are (probably doing something important) and risk losing one sheep to a wolf over keeping constant watch for wolves.

In any case the same model as the nappy-changing game, but now with p < 1/2, takes account of the supposed (long-run) behavior in this story. The game still has only two pure equilibria and they involve the boy either crying wolf in both states or staying quiet in both states, but now with the effect that the elders never come.


Both statements are sufficient for a first quick side discussion I want to provide here, as they both contain "even when he speaks the truth." As a child, I was made aware of this idiom on a few occasions. While I recall that I always understood it to mean that I should not lie, I also recall that the statement in itself puzzled me. I thought that if this liar speaks the truth then of course I will believe him. It took me some time to realize that there is a specific information structure assumed in this statement that is not made explicit. It should really say that "a liar is not believed even when he speaks the truth, and the truth is not known by the listener". This addition was probably omitted for two reasons: one, it makes the statement shorter, and two, it should be obvious that this is what is meant. In other words, any statement made by someone generally known to be a liar will not be taken at face value. It will be ignored. This means that after a liar makes a statement we know as much as before, no more and no less. Note that this is true in the nappy-changing game between my child, Oscar, and me that I described in my previous post. Here is a brief summary of this game. I ask Oscar if his nappy is full (after some initial but uncertain evidence pointing slightly in this direction). Oscar can make his answer depend on the true state of his nappy (full or clean) and this answer can either be "yes" or "no". I then listen to his answer and make my decision whether to check the state of his nappy or not a function of what answer he gave. Let me reproduce the normal form depiction of this game again here (with p > 1/2).

We found that the only equilibrium of this game is that Oscar lies (in that he either always says yes or always says no – regardless of the state of his nappy) and that I do not believe him and always check his nappy. Now note that this equilibrium is bad for both Oscar and me. Oscar is faced with the reality that I ignore his answer and check him no matter what he says, which is very annoying to him. I am faced with the reality that I cannot trust Oscar and have to check his nappy even in those cases when it is clean. Thus we have that this little liar (a bit too strong a term really for my little son) is not believed even when he speaks the truth, that is, even when his nappy is not full.

Looking at the matrix we can see that we have here a situation that is somewhat reminiscent of the prisoners' dilemma. There is a potential outcome in this game that is a Pareto improvement, that means it is better for both of us, Oscar and me, than the equilibrium outcome. If Oscar was truthful and I could trust him we would both be better off. I would not have to check his nappy when it is clean and Oscar would now only be bothered when the nappy is full. In the matrix this can be seen as the payoffs in this case are (1 − p, 1) instead of (0, p).

Isn’t there some way of getting these payoffs and making Oscar honest and me trusting? Well, there is hope. The nappy-changing game is one that Oscar and I play many times. It is really what the literature calls a repeated game. True, the probability p is not always the same – sometimes I have stronger suspicions that the nappy is full than at other times – but this is not so important for the discussion. The big question in this repeated game is the question of how forward looking the two players are. Well, as a grown-up I am very forward looking. This means my discount factor, with which I discount the future relative to the present, is very close to one. I value payoffs in the future almost as much as in the present. For Oscar this is unclear. In fact I believe that the older he gets the higher his discount factor becomes. As a very young child he did not seem to care one bit about what happens even in one hour. The now was everything. More recently (he is already six years old now and we have not played the nappy game in a very long time – but we do play similar games) he can be easily incentivized to do something now with a promise or a threat about tomorrow or next week or even Christmas when it is quite far away.

You will see that the discount factor plays an important role in the possibility of achieving higher payoffs in the nappy changing game. Let us see what we can do in the repeated game. Note first that in this game I will always learn the true state of the nappy eventually. So I can always check later at some point whether Oscar was truthful or not. This is very important of course. Lying is much easier when there is no chance of being detected. This would be an interesting topic for another blog post.

Recall that I said that there was more information in the German saying than in the English one. But clearly both statements are to be understood as a threat. If you lie you will be called a liar and liars won’t be believed. This is supposedly a bad thing also for the liar, as it is in my nappy-changing game. The German saying is more explicit about what induces people to call you a liar. In fact, according to the German saying, you only have to lie once to be called a liar. Literally translated it says “He who has lied once will not be believed even when he speaks the truth.” The German saying prescribes a strategy in the repeated game that the literature calls the “grim trigger” strategy. It is essentially as follows. I trust Oscar as long as he was always truthful in the past. If he was not truthful even once (and no matter how long ago this was) I will never believe him anymore and I will always check his nappies from then on. Oscar’s strategy is to be truthful at all times unless I have, at one point, not been trusting.

Under what circumstances is this strategy a Nash equilibrium in the repeated game? If Oscar is always truthful then I am always trusting and Oscar gets an expected payoff of 1 − p every time. If he lies at one point by saying no even though the nappy is full he gets a payoff of one once and then zero ever after. With (the usual) exponential discounting and with δ denoting the discount factor, this means that Oscar prefers to be truthful if δ(1 − p)/(1 − δ) ≥ 1 or, equivalently, if δ ≥ 1/(2 − p). Recall that p > 1/2, so this requires δ > 2/3. So if Oscar is sufficiently forward looking, the grim trigger strategy described in the German saying would indeed incentivize Oscar to be truthful at all times.
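For readers who like to check such conditions numerically, here is a small sketch. It compares Oscar's discounted continuation payoff from telling the truth at a moment when the nappy is full against the one-off gain from lying, under the grim trigger strategy. The threshold formula δ ≥ 1/(2 − p) is my reconstruction of the condition above, and p = 0.6 is an arbitrary illustrative prior:

```python
p = 0.6  # illustrative prior that the nappy is full (the post only needs p > 1/2)

def continuation_values(delta, horizon=2000):
    """Oscar's payoffs at a moment when the nappy is full, under grim trigger.

    Truth: checked today (payoff 0), then the trusting path pays an expected
    1 - p per period forever. Lie: left in peace once (payoff 1), then always
    checked (payoff 0) forever.
    """
    v_truth = sum(delta ** t * (1 - p) for t in range(1, horizon))
    v_lie = 1.0
    return v_truth, v_lie

threshold = 1 / (2 - p)  # truth-telling is optimal exactly when delta >= 1/(2 - p)

for delta in (0.5, 0.9):
    v_truth, v_lie = continuation_values(delta)
    print(delta, v_truth >= v_lie)
```

With p = 0.6 the threshold is about 0.714, so δ = 0.5 fails the condition and δ = 0.9 satisfies it.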

I think that there is one lesson we can take from this discussion. If we want to teach our kids to be truthful we may have to wait until they are old enough to be sufficiently forward looking. But on the issue whether the grim trigger strategy really works, and whether this is really a feasible way to teach honesty, I have more to say in my next blog post.

In this and the next two blog posts, using the language of game theory, I want to discuss the incentives to lie and how one could perhaps teach children not to lie.

I don’t think I need to provide empirical evidence that children lie on occasion. In case you forgot your own childhood you may want to look at Eddie Izzard’s treatment of this subject.

To fix ideas let me tell you about a game I used to play with my kids when they were very little. I call it the nappy-changing game. We used to play it often. The situation is always more or less as follows. I am on nappy duty and one of my little ones, let’s call him Oscar, is busy playing. Walking past him, I get a whiff of an interesting smell. I ask Oscar “Is your nappy full?” and Oscar invariably answers with a loud “No“.

How can we rationalize this “data”? First I need to describe the game between the two of us. The game, crucially, is one of incomplete information. While I believe it is safe to assume that Oscar knows the state of his nappy, I do not. This is the whole point of the game of course. If I already knew everything there would be no point in Oscar lying. And if Oscar does not know the state of the nappy himself, one could also hardly call his behavior lying. It would just be an expression of his ignorance. But I am pretty sure Oscar knows the state of his nappy. So let us assume that Oscar’s nappy can only be in either of two states: full or clean.

A game, to be well defined, needs to have players, strategies, and payoffs (or utilities). The players are obvious, Oscar and I. The strategies, taking into account the information structure, are as follows. I always ask the question, so let this not be part of the game. Then Oscar can say “Yes” or “No” and can make his choice of action a function of the state of the nappy. This means he has four (pure) strategies: always say yes (regardless of the state of the nappy), be truthful (say yes when the nappy is full and say no otherwise), use “opposite speak” (say no when the nappy is full and say yes otherwise), and always say no. I listen to Oscar’s answer and now have four (pure) strategies as well: always check Oscar’s nappy (regardless of Oscar’s answer), trust Oscar (check the nappy if he says yes, leave Oscar in peace if he says no), understand Oscar’s answer as opposite speak (check nappy if he says no, leave Oscar in peace if he says yes), and always leave Oscar in peace.

Let us now turn to the payoffs in this game. My payoffs are as follows. I want to do the appropriate thing given the state of the nappy. So let’s say I receive a payoff of one if I check Oscar’s nappy when it is full and also if I do not check Oscar’s nappy when it is clean (I will find out eventually!). In the other two cases I receive a payoff of zero. I, thus, receive a zero payoff when I check the nappy when it is not full and also when I do not check the nappy when it is full (as I have said, I will find out eventually!). One could play with those payoffs but nothing substantial would change as long as we maintain that I prefer checking the nappy over not checking when it is needed, and I prefer not checking over checking when it is not needed. What about Oscar’s payoffs? I think it is fair to assume that he always prefers not checking, i.e. that I leave him alone. I am sure he would eventually also want me to change him, but much much later than I would want to do it, and I will eventually find out and change him. So I think it is ok to assume that Oscar prefers to be left in peace at the moment of me asking, regardless of the state of the nappy. So let us give him a payoff of one when he is left alone and a payoff of zero when I check his nappy (in either state).

There is one thing I still need to do with this model. I need to close it informationally. The easiest way to do this is to assume that ex-ante there is a commonly known (between Oscar and myself) probability of the state of the nappy being full. Let us call it p and let us assume (recall the whiff I got) that p > 1/2. Now the assumption of a commonly known probability of the nappy being full is a ridiculous one; I am sure it is never true. But it allows me to analyze the game more easily, and I believe that in the present case, it is not crucial. I believe that the eventual equilibrium of this game will be quite robust to slight changes in the informational structure. I leave it to the readers to think about this for themselves.

With all this I can write this game down in so-called normal form, as a 4 by 4 bi-matrix game.

Oscar chooses the row, I choose the column, and the numbers in the matrix are the ex-ante expected payoffs that arise in the various strategy combinations. In each cell of the matrix Oscar’s payoff is the first entry, mine the second. Once all this is in place it is easy to identify equilibria of this game. Note that my strategy to never check (“never c”) is strictly dominated by my strategy to always check (“always c”). My ideal point would be that Oscar is truthful and I can trust him, but Oscar in that case has an optimal deviation to always say no. In that case I had better not trust him and instead always check his nappy. This is indeed the only pure strategy equilibrium of this game. Well, there is also one in which Oscar always says yes and I always check him, but this is really the same. Note that language has no intrinsic meaning in this game. The meaning of language in this game could only potentially arise in equilibrium.
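The game is small enough to check this by brute force. The following sketch builds the ex-ante expected payoffs from the zero/one description above and searches for pure strategy equilibria. The strategy labels are mine, and p = 0.6 is just an illustrative prior above one half:

```python
from itertools import product

p = 0.6  # illustrative prior that the nappy is full (the post only needs p > 1/2)

STATES = [("full", p), ("clean", 1 - p)]

# Oscar's pure strategies: state -> answer
oscar_strategies = {
    "always yes": {"full": "yes", "clean": "yes"},
    "truthful":   {"full": "yes", "clean": "no"},
    "opposite":   {"full": "no",  "clean": "yes"},
    "always no":  {"full": "no",  "clean": "no"},
}

# My pure strategies: answer -> do I check the nappy?
my_strategies = {
    "always check": {"yes": True,  "no": True},
    "trust":        {"yes": True,  "no": False},
    "opposite":     {"yes": False, "no": True},
    "never check":  {"yes": False, "no": False},
}

def payoffs(o, m):
    """Ex-ante expected payoffs (Oscar's first, mine second) of a strategy pair."""
    u_oscar = u_me = 0.0
    for state, prob in STATES:
        check = my_strategies[m][oscar_strategies[o][state]]
        u_oscar += prob * (0.0 if check else 1.0)  # Oscar wants to be left in peace
        u_me += prob * (1.0 if check == (state == "full") else 0.0)  # I want the right action
    return u_oscar, u_me

def pure_nash():
    """All pure strategy pairs from which neither player gains by deviating."""
    return [
        (o, m)
        for o, m in product(oscar_strategies, my_strategies)
        if all(payoffs(o2, m)[0] <= payoffs(o, m)[0] for o2 in oscar_strategies)
        and all(payoffs(o, m2)[1] <= payoffs(o, m)[1] for m2 in my_strategies)
    ]

print(pure_nash())  # [('always yes', 'always check'), ('always no', 'always check')]
```

The search confirms the text: the only pure equilibria are the two payoff-equivalent pooling profiles in which Oscar's answer is uninformative and I always check.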

So what did we learn from this so far? Clearly Oscar’s behavior (of lying) is not irrational (it is a best reply to my behavior). But it has the, from his – and also my – point of view, unfortunate side effect that I do not trust him, so his lying does not fool me. This game is, by the way, an example of a sender-receiver game. See Joel Sobel’s paper on Signaling Games. In fact it is an example of a special class of sender-receiver games: so-called cheap-talk games. See Joel Sobel’s paper on Giving and Receiving Advice for further reading. In the language of these games the lying equilibrium between Oscar and me is called a pooling equilibrium. It is so called because the two kinds of Oscar, the one with a full and the one with a clean nappy, both send the same “message”. The two Oscars play this game in such a way that I cannot differentiate between them. Hence the term pooling.

In the next post I will take this game up again and consider what can happen if we play this game over and over again, as my kids and I did in the nappy days.


In my discussion of this topic I again follow fairly closely chapter 3 of Ariel Rubinstein’s “Economic Fables”. The key new concept is that of a market “value” of objects. In another lecture I stress that the “value” for a thing in our world (the world of human beings) always derives from people wanting to have it. And it is hard to define a single objective value of a thing. People tend to have different subjective values for things. Consider a veal cutlet. To some people, those who can be seen to occasionally eat one, a veal cutlet seems to have a fairly high value. To a vegetarian, however, a veal cutlet has little value (if any). If a vegetarian doesn’t like someone else eating a veal cutlet, then we have yet another problem, one of externalities (which I will address in a later lecture).

It turns out that in our conception of a market one can meaningfully identify a single value for each object. I would not call it an objective value. In the lecture I talk about prices, but market value is actually a better term as we do not have money in the market I describe. I stay with the example of the three kids and their three presents:

The table shows the present initially given to each of the three kids, and below each child in the table is that child’s preference ranking over the three possible presents.

A market outcome is then defined as follows: There is a market value (or price) for each possible figure. Each child sells the figure initially allocated to him or her for the price of this figure and buys his or her favorite figure among those that he or she can afford given all the prices, such that at the end each child has exactly one figure. The final allocation is then called the market allocation at the given market values (or prices).

So in the example, can the outcome of Eva and Franz trading their figures be a market allocation? If so, what are the market values underlying this market outcome? I let the students work this out for themselves first. But this is how it works. For this trade to result in a market allocation (according to the above definition) the market values or prices for the three figures have to satisfy certain conditions. Let us start with Eva. She initially owned the pirate and now has the nurse. So prices must be such that she can afford the nurse by selling the pirate. So we have p_N ≤ p_P, if we let p_N denote the price of the nurse and p_P the price of the pirate. As Eva is not interested in the ghost (given she gets the nurse) her choices reveal nothing about the price of the ghost, which we denote by p_G. Let us now look at Franz’s choices. He initially owned the nurse and now has the pirate. Analogously to what we inferred from Eva’s choices we now get that p_P ≤ p_N, and we also do not learn anything about the price of the ghost from Franz’s choices as he does not value the ghost above the pirate. Let us now look at Maria. She still has the ghost. How can we “explain” this with prices? Well, it must be that she can afford neither the nurse nor the pirate. So, altogether, we must have that p_G < p_N = p_P. But this works! So we have found market values (or prices) as well as a market allocation starting from the initial allocation. In this situation the market value of the nurse and the pirate is equal and higher than that of the ghost (because no one really likes the ghost all that much). The fact that Maria likes the pirate more than the nurse does not enter the market values. So a single set of market values allows us to rationalize the eventual allocation as a market allocation.

Now can there be another market allocation starting from this initial allocation? The answer is no. This is, in fact, a very general finding, though I will not go into the exact boundaries of when the result holds and when it does not. Let us see how it works in the present context. We have found two other Pareto efficient allocations that derived from a series of bilateral trades. Why are they not market outcomes according to the above definition? Consider the case that first Franz and Maria trade and then Eva and Franz. This leads to a final allocation of Eva having the ghost, Franz the pirate, and Maria the nurse. What conditions would the supposed market values or prices of the three figures have to satisfy for this final outcome to be called a market outcome according to the above definition? Let us start with Eva. She must be able to afford the ghost with her pirate and yet not be able to afford the nurse. This means that p_ghost ≤ p_pirate < p_nurse. Let us now look at Maria. She must be able to afford the nurse with her ghost. We must have that p_nurse ≤ p_ghost. Putting this together we get that p_nurse ≤ p_ghost ≤ p_pirate < p_nurse. But this is impossible! So no single set of market values can explain this final allocation starting from the initial allocation. What is the problem here? Well, Eva could interject when she sees Franz and Maria trading. When Franz is about to offer Maria his nurse for her ghost, Eva could wave her pirate in front of Franz's face and state that she would also be willing to accept the nurse in exchange. It seems possible that Franz would then rather trade with Eva, as she has the more valuable figure for him (and he has to give up the same figure in both cases). Maria could do little to prevent that. Whether this necessarily has to happen when three children trade their toys, I do not know, but this in any case is the going definition of a market outcome and market prices. It strikes me as not entirely silly.
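A brute-force check confirms the impossibility. The preference rankings are again my reconstruction from the trades in the story; the point is only that no price vector on any grid can satisfy all the revealed-preference conditions at once:

```python
from itertools import product

# Target allocation produced by the two successive bilateral trades:
# Eva ends with the ghost, Franz with the pirate, Maria with the nurse.
endowment = {"Eva": "pirate", "Franz": "nurse", "Maria": "ghost"}
target    = {"Eva": "ghost",  "Franz": "pirate", "Maria": "nurse"}

# Assumed preference rankings (most preferred first), reconstructed
# from the trades described in the story.
preferences = {
    "Eva":   ["nurse", "ghost", "pirate"],
    "Franz": ["pirate", "ghost", "nurse"],
    "Maria": ["pirate", "nurse", "ghost"],
}

def is_market_allocation(prices):
    """Check whether, at these prices, each child's favorite affordable
    figure is exactly the one in the target allocation."""
    for child, owned in endowment.items():
        budget = prices[owned]
        affordable = [f for f in preferences[child] if prices[f] <= budget]
        if affordable[0] != target[child]:
            return False
    return True

# Search a small grid of prices: no price vector rationalizes the outcome.
solutions = [p for p in product(range(1, 6), repeat=3)
             if is_market_allocation(dict(zip(["nurse", "pirate", "ghost"], p)))]
print(solutions)  # -> []
```

The empty result mirrors the contradiction in the inequalities: Maria's trade requires the ghost to be at least as expensive as the nurse, while Eva's requires it to be strictly cheaper.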

To finish, I want to come back to the free trade zone discussion. Suppose we only have Franz and Maria in the free trade zone. They would trade nurse and ghost, and this would also be the market allocation, with market prices such that the prices of the nurse and the ghost are equal: p_nurse = p_ghost. The two figures are equally valuable in this market. Now if Eva were to enter this free trade zone before Franz and Maria trade, the market allocation would be the one we discussed above, in which Eva and Franz trade nurse and pirate and Maria is stuck with her ghost. In this case p_ghost < p_nurse = p_pirate. In this market, the ghost is no longer as valuable as the nurse and, thus, not as valuable as it was before. The reason is that Franz has now found, in Eva's pirate, a better and (for him) affordable substitute for the ghost.

The video (in German) is here:


Why do I keep picking on this respectable Hamburg quality paper? Well, if this were merely a case of harmless misunderstandings in the course of economic journalism, it would be regrettable but about as remarkable as "dog bites man". In this case, however, a newspaper feels called upon to first show its readers how little they understand about "the economy" and then to lecture them on what one simply must know about "the economy" in the age of globalization, digitization, automation, and other -izations.

And how does our Hanseatic quality paper think it can accomplish this? With an online quiz, of course. So far, so good. The trouble starts with the very first questions: "Take a guess: roughly how many points did the German stock index (DAX) stand at last Friday evening?" Sorry, but nobody needs to know that. Not only because the point level of a stock index says exactly nothing on its own, but also because this is precisely the kind of knowledge that is completely useless in the age of digitization. Factual knowledge I can google. Interpreting numbers and understanding relationships: that is what economic education should be about.

The quiz does try to do that. And this is where the real scandal begins. Here is the first question in the category "Ökonomisches Denken" (economic thinking):

Yes, exactly. The correct answer is graded as wrong.

An economically literate person should know that the quantity sold and the price are jointly determined by supply and demand. When the price rises, this can just as well go along with an increase, a decrease, or no change in the quantity; it all depends on whether the price increase is driven by a shift of the supply curve, the demand curve, or both.

This is the kind of basic knowledge every adult should have, and which the editors of "Die Zeit" evidently lack.

Of course, the "Zeit" editors could make excuses and say they actually meant the quantity demanded, not the quantity sold. But that is precisely the economic illiteracy: failing to understand that there is an important conceptual difference between demand and quantity sold. Anyone who does not understand that should not presume to lecture their readers about "the economy".

A striking example was recently provided by "Die Zeit". In this article, Hermann-Josef Tenhagen wants to teach us about "10 things we need to know about the economy". Already at the first point, any good economist must shudder.

Used car dealers have a bad reputation. It used to be even worse. I always pictured Danny de Vito as a used car dealer in the old days. With that image in mind, of the fat little man with a cigar in his mouth, it is easy to explain why a market needs rules. For only since used car dealers have been required to guarantee the quality of the cars they sell for some period after the purchase can I buy a car there without assuming the thing will break down around the next corner. And only since then have fair used car dealers stood a chance against competition that does nothing but screw… its customers. A market needs rules in order to function.

It is a good exercise for first-year economics students to spend a few minutes thinking about what is wrong with this argument. (There is more than one problem.)

Here is the main problem.

Mr. Tenhagen ignores the possibility that in a free market, good used car dealers have an incentive to offer warranties voluntarily. A voluntarily offered warranty helps buyers distinguish good used cars from bad ones. A mandatory warranty destroys this signal, and with it the market for cheap used cars.

In a market without a mandatory warranty, I as a buyer can choose between a car with a warranty for 12,000 euros or the same car at the dealer next door for 8,000 euros without one. As a buyer, I can decide whether I want to pay the extra 4,000 euros for the warranty, or whether I would rather save the money and accept the risk of ending up with a lemon. The family man with a steady income and low risk tolerance will tend to prefer the warranty. The precariously employed economics student who speculates a bit in Bitcoin on the side might take his chances in the lemon lottery. As a used car dealer, I will offer the warranty only if the expected cost of honoring it does not exceed 4,000 euros.

In equilibrium, the price difference between the car with a warranty and the one without must exactly offset the quality difference between the cars on offer.

What happens if all used car dealers are now required to provide a warranty? Those dealers who were not willing to bear the 4,000 euros in warranty costs before will not magically become willing now. And those buyers for whom an extra 4,000 euros for a car with a warranty was too much will not suddenly be willing to pay more. The somewhat shabby used car dealer will disappear from the market, and the economics student simply will no longer be able to afford a car.
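The argument can be condensed into a toy calculation. All numbers are the hypothetical ones from the example above, and the `offers` function is an illustrative sketch, not a market model:

```python
# Stylized numbers from the example: a car sells for 12,000 EUR with a
# warranty or 8,000 EUR without, so the warranty premium is 4,000 EUR.

def offers(warranty_cost, warranty_mandatory):
    """Which offers does a dealer with a given expected warranty cost make?"""
    # Offering the warranty pays only if its expected cost stays below
    # the 4,000 EUR price premium buyers are willing to pay for it.
    offer_with = 12_000 - warranty_cost >= 8_000
    if warranty_mandatory:
        # Mandate in place: the cheap no-warranty segment is illegal.
        return ["with warranty"] if offer_with else []
    return ["with warranty", "without warranty"] if offer_with else ["without warranty"]

# A dealer whose expected warranty cost (5,000 EUR) exceeds the premium:
print(offers(warranty_cost=5_000, warranty_mandatory=False))  # ['without warranty']
print(offers(warranty_cost=5_000, warranty_mandatory=True))   # [] -> exits the market
```

The mandate does not change the dealer's costs or the buyers' willingness to pay; it simply makes the cheap offer disappear.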

The mandatory warranty does not cause all shabby used car dealers to suddenly see the light and offer only high-quality cars. It merely reduces the supply of cheap, lower-quality cars, at the expense of the buyers with the lowest willingness to pay. The rule that Mr. Tenhagen considers urgently necessary is not just unnecessary, it is actually harmful: it does nothing for those who would have taken the warranty anyway, and it pushes out of the market those who would have gladly done without it.

I do find it rather troubling that a supposed quality newspaper like "Die Zeit" sells its readers economic illiteracy under the headline "Grundwissen Ökonomie" (basic economics). It is the kind of economic illiteracy that can have serious consequences when it becomes the basis of economic policy.

Historically, the idea of basing macroeconomic models on explicit microfoundations originated in the 1970s, leading to the demise of old-style Keynesian models, which relied heavily on ad-hoc restrictions such as a constant aggregate savings rate. With the famous Lucas critique declaring that ad-hoc restrictions cannot be considered invariant to changes in economic policy, a methodological standard came to dominate the profession that demands explicit microfoundations as a precondition for proper macroeconomic modelling. The following points are central to this doctrine:

I. Explicit microfoundations are needed to make models “robust” to the Lucas-critique.

II. Explicit microfoundations provide the basis for “checking” the internal consistency of the underlying thought.

III. As a precondition for being certain about I. and II., the microfoundations have to be expressed in the precise language of mathematics.

Although this all seems quite convincing at first sight, the whole idea rests on a particularly troublesome misconception about what (macro)economic models usually represent. In the standard view, we see them as simplified representations of reality, as approximations to a complex world. "Think of it like a map! If you design a map of the Austrian highway system, you leave out irrelevant details like the trees lining the highway." Right? OK…, so our models are approximations! Approximations of what? Of the real world! Which looks how? Well, of course we cannot know everything in perfect detail; reality is rather complex, but… but you know how to design proper approximations to it? How do you make proper approximations to something you do not really know because it is too complex?

In my view, the majority of (macro)economic models are indeed best seen as approximations, but as approximations of what somebody thinks about the real world rather than of the real world itself. They are formal models of the fuzzy “models” that we have in our brain – models of models, approximations of simplifications. To see this, consider the example below which you may easily find in a standard macro-paper.

“For the sake of simplicity, suppose that an infinity of identical firms produce according to Y=f(K,L), with Y denoting output, K the capital stock, and L the amount of labour.” How do we proceed when we read this?

a. Translate the equation Y=f(K,L) into words: “Ok, so…production uses capital and labour as inputs.”

b. Guess what the author might want to say about the real world:

- “So there is an infinity of firms in the model. Does he/she mean that there is an infinity of firms in the real world? I guess not. So how many firms does he/she mean – 1,000, 1,000,000?”
- “Does he/she mean that all firms are the same in the real world? I hope not!”
- Ah… “for the sake of simplicity” – so the assumption was made even though he/she actually means something else. If so… what?! Hm…
- “Maybe he/she means that analyzing the market power of firms is not necessary for the purpose of this model?” Sounds better. Or maybe he/she means that market power is generally negligible… whatever. I'll just stick to the latter interpretation.

Note that this is a pretty simplified example. In macro models you typically have various actors and feedback effects between consumers, producers, the central bank, etc. If you let 10 people carry out the above steps for such models, you will usually get 10 different interpretations. To overcome this, you may introduce some form of heterogeneity into the model, try to get a slightly more realistic representation of competition, and so on. You will nevertheless end up with mathematical expressions that do not correspond to what you actually want to say about people's behavior and their interactions. In other fields, the difference between the formal model and the model you have in mind may be small; in macro, the gap is usually rather pronounced.

What does all this imply for the future of macroeconomics? I assume here that one is willing to follow some form of McCloskey's view of economists as "persuaders", i.e. we are interested in changing the fuzzy "models" in our brains or in other people's brains, while the formal ones are only tools for achieving this. It follows:

i) Explicit microfoundations may help to address the Lucas critique, but they cannot make a model immune to it, since other people may simply not interpret the parameters of the formal microfoundations as structural. More importantly, a model that is not explicitly microfounded may reasonably be judged robust by adding an informal story. Both routes end in an informal judgement. Explicit microfoundations are therefore neither necessary nor sufficient to address the Lucas critique, and by using them we do not escape the final step of informal, fuzzy, subjective judgement.

ii) Since the formal model on paper and the fuzzy model in our brain are distinct, the internal consistency of the formal structure is neither necessary nor sufficient for the consistency of the underlying thought.

iii) Mathematical models are not an intrinsically precise way of communicating economic ideas. Ordinary speech may promote clarity, since it describes the fuzzy models in our brains directly rather than approximating them with the often rather rough formal elements available.

With all this, I neither want to say that we should completely abandon explicit microfoundations nor that we should abandon mathematical representations. Both are powerful tools for moving macroeconomics forward. There is just no reason to apply them dogmatically without asking whether doing so makes sense for the purpose at hand, and it is certainly unjustified to impose this standard on others when judging their contributions, at least if one's arguments for the standard rest on I.–III. Finally, given that the gap between the formal and the fuzzy model is often sizeable, we cannot simply keep throwing models at each other. They can be great tools for thinking, but in the end somebody has to make the actual argument about, say, the source of the recent financial crisis. That means using and describing the relevant parts of his or her fuzzy model, which will ideally have been sharpened by working with the formal ones. And doing so requires fuzzy, ordinary language, not math!


The paper does nothing but compute long-run averages and standard deviations and draw graphs. No regressions, no econometric witchcraft, no funny stuff. Yet its findings are fascinating.

Some of the results confirm what "everyone already knew, kind of":

- Risky investments like equities and real estate yield 7% per year in real terms.
- The risk premium (equities/housing vis-à-vis short-term bond rates) is usually between 4 and 5%.
- There is no clear long-run trend (either up or down) in rates of return. (Take this, Karl Marx!)

Some of the results are interesting, but not particularly puzzling:

- The return on total wealth (average of the rates of return on all assets weighted by their share in the economy's aggregate portfolio) exceeds the rate of growth of GDP. This confirms Piketty's claim that r > g. In terms of the Solow model it means we are living in a dynamically efficient regime: we cannot make both present and future generations better off by saving less. Perhaps the most interesting aspect of this finding is its robustness: it holds for every sub-period and for every country. It really seems to be a "deep fact" about modern economies.
- The return on risk-free assets is sometimes higher, sometimes lower than the growth rate of GDP. For instance, before the two World Wars, the differential between the risk-free rate and growth was mostly positive, so that governments had to run primary surpluses to keep debt stable. Post-1950, the differential was mostly negative.
- Negative returns on safe assets are not unusual. Safe rates were negative during the two World Wars as well as during the crisis of the 1970s. In recent times safe rates went negative again in the aftermath of the global financial crisis. These findings don't disprove the "secular stagnation" hypothesis of Summers et al., but they do put it in historical perspective. It seems that rates were unusually high during the 1980s, and the recent downward trend could just be reversion to the mean.
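The role of the differential between the safe rate r and the growth rate g for public debt can be illustrated with the standard debt-dynamics recursion d' = d(1+r)/(1+g) - s, where d is the debt-to-GDP ratio and s the primary surplus. The parameter values below are made up for illustration only:

```python
def debt_ratio_path(d0, r, g, primary_surplus, years):
    """Iterate the textbook debt-to-GDP recursion d' = d*(1+r)/(1+g) - s."""
    d = d0
    path = [d]
    for _ in range(years):
        d = d * (1 + r) / (1 + g) - primary_surplus
        path.append(d)
    return path

# r > g (the pre-war pattern): without a primary surplus the ratio drifts up.
print(debt_ratio_path(0.60, r=0.04, g=0.02, primary_surplus=0.0, years=10)[-1])

# r < g (the post-1950 pattern): the ratio shrinks even with a zero surplus.
print(debt_ratio_path(0.60, r=0.01, g=0.03, primary_surplus=0.0, years=10)[-1])
```

This is why a positive r minus g differential forces governments to run primary surpluses just to keep the debt ratio stable, while a negative differential lets debt melt away on its own.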

But some results are really puzzling – even shocking from the point of view of standard finance theory:

- The return on risk-free assets is more volatile than the return on risky ones. I haven't yet digested this fact fully. Does this mean that "risk-free asset" is a total misnomer? No, because "risk-free" refers to the unconditional nature of the payoff of an asset, not the variability of its return. A bond is "risk-free" because it promises a fixed cash flow irrespective of the state of the economy. Stocks are called risky, not because their returns are volatile, but because the dividends they pay are conditional on the performance of the company. So does this mean that people's time discount rate varies a lot? Why? It can't be consumption growth variability, because consumption growth is quite smooth. What's going on?
- Housing offers the same yield as equities, but is much less volatile. Jorda et al. refer to this as the housing puzzle. I'm not sure how puzzled I should be by this. I certainly found the high average yield of real estate assets surprising. However, from what I know about house price indices and the myriad measurement issues surrounding them, I feel one should be very cautious about the housing returns. I would definitely like someone who knows more about this to look carefully at how they calculated the returns (*paging Dr. Waltl*!). One potential solution to the puzzle I can see would be differences in liquidity. Housing is super illiquid; equities are quite liquid. Couldn't the high return on housing just be an illiquidity premium?

There is much, much more in the paper, but those were the points that I found most striking. I’m sure this will be one of the most important papers of the past year and will be a crucial source for researchers in finance, growth, and business cycle theory. Plenty of food for thought.
