To fix ideas I give the students a particular example with three kids. I call them Eva, Franz, and Maria. They each receive one figure: Eva a pirate, Franz a nurse, and Maria a ghost. Assume that we are able to see inside their heads and observe the kids’ preferences over these three figures. These are given in the following table.

I then ask the students which of these kids would be interested in trading their presents. The answer is that actually every pair of them would potentially be happy to trade presents. I then run through all (I think) possible sequences of trades.

Suppose first that Eva and Franz trade. This leads to Eva receiving her favorite figure, the nurse, and Franz receiving his favorite figure, the pirate. Maria is still left with her ghost. So what has happened? This is what we call a Pareto improvement. A Pareto improvement is a change in allocation that keeps everyone at least as happy as before and makes at least one person strictly happier.

A quick aside: The term Pareto improvement is named after a certain Vilfredo Pareto. He is no longer with us. You can google his name if you like. My policy with names in this course is this: I try to avoid them. I generally try to avoid talking about famous dead people, because I want the students to judge concepts, ideas, and results for themselves. I don’t want students to “believe” in concepts, especially not just because the concept came from some famous dead person. This does not mean that students wouldn’t benefit from a course on the history of economic thought. One of the key things students learn in such a course, in my opinion, is to see how the times influence and constrain economic thinking. This allows them to appreciate how certain older “models” need to be adapted to fit modern times, but also how certain, perhaps forgotten, ideas from older “models” might still have relevance today.

But back to Pareto improvements. Is there another possibility of trade after Eva and Franz trade? My answer would be no. Why? Because Eva and Franz already have the figures they like best, so why would they trade? So I would say that there are no more trades after the first one. When I asked the students, however, they thought that Franz and Maria could possibly trade. This would give Maria her most preferred figure, the pirate, and would give Franz his second most preferred figure, the ghost. When I asked them why they think Franz would be willing to give up his pirate for the ghost, the students said that maybe he likes Maria. This is an interesting point. It means that my depiction of preferences in the above table is not complete (or I do not take into account that, as they are friends, Franz and Maria will probably meet again and can follow an unwritten contract between them: Franz gives Maria her favorite toy and in return Maria gives Franz some smarties in school the next day). If I wanted to capture a situation in which Franz cares about which figure Maria gets, I would have to change Franz’s preferences. His preferences would then be defined not only over which figure he gets himself but over the allocation of figures for him and Maria. This is not impossible to do and of course quite relevant in some cases. We would then say that allocating figures has externalities: one child getting a better figure according to her own preferences directly affects some other child’s preferences. This is a situation we come back to later in the course, and it will be the beginning of a discussion of “market failures”.

Assuming for now that the three children only care about what figure they themselves get, we have that after Eva and Franz trade there are no more trades.

I then go through other trade options. For instance, Eva and Maria could trade, giving Eva the ghost and Maria the pirate, and then Eva could trade her ghost for Franz’s nurse. There are quite a few possible trade sequences. Any trade leads to a Pareto improvement, and trading stops when there are no more possible Pareto improvements. Such a final situation is called Pareto efficient. We will encounter Pareto efficiency throughout the course. Whenever we say that a market under some assumptions is efficient we mean Pareto efficient. It is one of the most basic and most helpful concepts in economics, but also, I feel, one of the most misunderstood. It is my goal here to explain to the students exactly what Pareto efficiency means and what it does not mean. One thing we already see in this example is that Pareto efficiency has little, if anything, to do with what one would call fairness. If, for instance, Eva and Franz trade, we get a Pareto efficient allocation but Maria is not very happy: she still has her worst figure, while the other two got their best figures. This doesn’t seem particularly fair. I will keep repeating this point, that Pareto efficiency has nothing to do with fairness, throughout the course in various different contexts.

By the way, the different sequences of (bilateral) trade in the example led to three different possible Pareto efficient allocations. In terms of the kids’ preference ranking these allocations can be expressed in this table:

Can we compare these further? Some students might think that the second allocation, the 1-2-1 allocation (meaning Eva gets her favorite figure, the nurse, Franz his second favorite, the ghost, and Maria her favorite, the pirate), is better than the other two. But this is not clear. It could be that Franz would like the pirate much, much more than the ghost, while both Eva and Maria don’t care that much about which figure they get. Of course it could also be that Franz cares little about all this, and Eva and Maria care a lot. So we cannot so easily compare two Pareto efficient allocations. This will become more interesting, but also more radical, when we introduce money into trade, which I do in the fourth lecture.

I then move on in the third lecture to use this example to talk a little bit about free trade agreements. On this topic see also my blog post on the Wachau, which I also mention to the students. Suppose that two of the three children, Franz and Maria, currently have a free trade agreement, but that Eva is not part of that agreement. Would both members of the “Franz-Maria free trade zone” like to add Eva into this zone? The answer here is at least “not necessarily” and actually probably “no”. Why not? In the Franz-Maria zone trade takes place between the two of them and Franz gets the ghost and Maria the nurse – both get their second favorite figures. Suppose that before they trade they wonder whether they should let Eva participate in their trading. Maria might well be against this, as she might be justifiably worried that Franz will then trade with Eva and not with her. After all Eva has Franz’s favorite figure, the pirate, and Franz has Eva’s favorite figure, the nurse. But if Franz and Eva trade, Maria will be stuck with her least favorite, the ghost, whereas if she only had Franz to trade with she would at least get the preferred nurse. This strikes me as not so different from the current situation in the US in which the US steel producers (Maria) are not so happy about cheaper foreign steel imports (Eva). Of course steel users in the US (Franz) would prefer the cheaper foreign steel. We will have more to say about free trade agreements later on in the course.

I then move on to discuss a slight modification of the example to make another point. Suppose that instead of Franz we have Fritz at the party, and suppose that Eva’s present is the ghost, Fritz’s is the pirate, and Maria’s is the nurse. Eva and Maria have the same preferences as before, but Fritz’s are different from Franz’s. The situation is given in this table.

Now, who would like to trade? Actually no bilateral trade is feasible, but they could all trade together. You could imagine that the three children sit around a round table, Fritz to the right of Eva, Maria to the right of Fritz, and Eva to the right of Maria. Then they could all trade by handing their respective present to the child sitting to their right. Everyone should agree to this and all are happier than before. So what am I saying with this example? Bilateral trade, no matter how often it is repeated, is not sufficient to lead to a Pareto efficient allocation. Sometimes more people need to get together at the same time. Later in the course, when I talk about money, we will see that this latter case is probably much more common. At the end of the third lecture I start thinking about a central market and what it might lead to. But most of this is done in the fourth lecture, so I refrain from discussing it now.

I have one more issue I discuss in the third lecture. This is very much taken from chapter 3 of Ariel Rubinstein’s “Economic Fables” and is a nice way to demonstrate that Pareto efficiency cannot be everything we want from a good system of allocating things to people who want things. I go back to the first example of Eva, Franz, and Maria and ask the students what the allocation of presents would look like under some specific alternative allocation protocols. Suppose, for instance, that I – the father of the birthday child and party organizer – put the kids in some arbitrary (or perhaps not even arbitrary – perhaps showing favoritism) order and tell the kids that they are allowed to choose one and only one figure in the order that I put them in. Suppose the order is Eva, Franz, and then Maria. Then Eva takes the nurse, Franz is lucky, his favorite, the pirate, is still available, and so he takes it, while Maria sees both of her more preferred figures disappear before it is her turn to take the only remaining figure, her least favorite, the ghost. Note that this also leads to a Pareto efficient allocation. Actually it is very easy to see that this procedure always leads to a Pareto efficient allocation. Now change the sequence and you will see that in every case you end up with some Pareto efficient allocation. Different sequences will lead to different allocations. But this was also true in the trade example: different initial allocations (the presents) lead to different final Pareto efficient allocations. Instead of me, the party organizer, determining this order, we could have used any other way to come up with some order. We could have let the kids fight, or do a race, or play some rock-paper-scissors. There are many possibilities.
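For readers who like to verify such claims, the picking procedure and the efficiency check can be sketched in a few lines of Python. The preference rankings are the ones implied by the example above (Eva: nurse over ghost over pirate; Franz: pirate over ghost over nurse; Maria: pirate over nurse over ghost); the function names are mine.

```python
from itertools import permutations

# Preference rankings implied by the example, best figure first
prefs = {
    "Eva":   ["nurse", "ghost", "pirate"],
    "Franz": ["pirate", "ghost", "nurse"],
    "Maria": ["pirate", "nurse", "ghost"],
}
figures = ["pirate", "nurse", "ghost"]

def pick_in_order(order):
    """Each child, in the given order, takes their favorite remaining figure."""
    remaining = set(figures)
    allocation = {}
    for kid in order:
        choice = next(f for f in prefs[kid] if f in remaining)
        allocation[kid] = choice
        remaining.remove(choice)
    return allocation

def is_pareto_efficient(alloc):
    """Brute force: no reallocation keeps everyone at least as happy and
    makes someone strictly happier."""
    kids = list(alloc)
    rank = {k: {f: i for i, f in enumerate(prefs[k])} for k in kids}
    for perm in permutations([alloc[k] for k in kids]):
        other = dict(zip(kids, perm))
        if all(rank[k][other[k]] <= rank[k][alloc[k]] for k in kids) and \
           any(rank[k][other[k]] < rank[k][alloc[k]] for k in kids):
            return False
    return True

alloc = pick_in_order(["Eva", "Franz", "Maria"])
print(alloc)                       # Eva takes the nurse, Franz the pirate, Maria the ghost
print(is_pareto_efficient(alloc))  # True
```

Looping `pick_in_order` over all six orders confirms that every picking order ends in some Pareto efficient allocation, while the initial allocation (Eva the pirate, Franz the nurse, Maria the ghost) is not efficient, which is why trade gets going in the first place.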

One potential problem with, say, kids fighting over things is that such a system does not provide particularly strong incentives for especially weak kids to acquire something in the first place, if they know that it will just be taken away from them again. How incentives matter in the design of economic systems will be discussed in another lecture, though.

Here is the video (in German):


Just think of basic micro or macro and the definition of a market or an economy in equilibrium. There the term is not used to describe consistency in the derivation of the outcome, but mainly refers to the outcome’s characteristics – for example that supply and demand are balanced. Go further in the curriculum and think of an equilibrium in game theory. While it is also derived in a way that is consistent with the stated assumptions, its description states more than that – for example that it is a combination of strategies from which no individual has an incentive to unilaterally deviate.

Therefore, equilibrium approaches in my opinion go beyond detecting an outcome that is logically implied by assumptions and step-by-step analysis. They also tend to presume an outcome of a certain type and thereby risk neglecting other outcomes, strategies, and behaviour, and with them even whole issues that may be highly relevant in reality.

In case my concern is not clear, a discussion of Rubinstein’s famous e-mail game may help. The e-mail game may be described as follows: A couple wants to meet and prefers being together over being separated. However, if it rains they prefer to meet inside; otherwise they prefer to meet outside. Whether it rains or not is determined by nature, and only one person, let’s say the woman, knows the weather for sure. If it will rain, she sends an e-mail to the man. Every received e-mail is read and automatically triggers a response, but every e-mail also gets lost with a certain small probability. That means that the e-mail conversation may last for a long time, even forever, but the probability of the latter is zero.

Because of the small but nevertheless positive probability of an e-mail getting lost, both parties can never know for sure how many e-mails have been sent. The woman knows whether she sent an e-mail or not, but she cannot distinguish the state in which one e-mail was sent from the one in which two were sent (captured by the partition Pw). While it may be that the second e-mail – sent by automatic response from the man’s account – got lost, it may also be the case that her own e-mail did not get through in the first place. The moment the second e-mail gets through, the third e-mail is triggered automatically and she can distinguish that state from the ones before. However, she again cannot distinguish between the states of three and four e-mails sent, because if she knew about the fourth e-mail, the fifth would already have been sent automatically and she would be in another state. The man faces a similar incompleteness of information (captured by the partition Pm). He in turn cannot tell whether none or one e-mail was sent, just as he cannot tell whether two or three e-mails were sent, and so on.
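The information structure just described can be made concrete in code. Below is my own sketch (not from Rubinstein’s paper): states are the total number of e-mails sent, truncated at some N, and the two partitions Pw and Pm are built exactly as in the text. Computing their meet, the finest partition both parties can agree on, shows that all states end up in a single block.

```python
# States: the total number of e-mails sent, truncated at N for this sketch
N = 10
states = list(range(N + 1))

# The woman cannot distinguish 1 from 2, 3 from 4, ...; the man cannot
# distinguish 0 from 1, 2 from 3, ... (the partitions Pw and Pm of the text;
# the singletons handle the ends of the truncated state space)
Pw = [{0}] + [{k, k + 1} for k in range(1, N, 2)]
Pm = [{k, k + 1} for k in range(0, N, 2)] + [{N}]

def meet(p1, p2):
    """Finest common coarsening of two partitions, via connected components."""
    parent = {s: s for s in states}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    for block in list(p1) + list(p2):
        first, *rest = sorted(block)
        for s in rest:
            parent[find(s)] = find(first)
    components = {}
    for s in states:
        components.setdefault(find(s), set()).add(s)
    return list(components.values())

print(meet(Pw, Pm))  # a single block containing every state
```

Since the meet has only one block, the only events that are common knowledge are the empty event and the whole state space. In particular, “at least one e-mail was sent”, i.e. “it rains”, never becomes common knowledge, however many e-mails get through.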

Rubinstein thereby shows that the strictly formal approach does not lead to an equilibrium in which the couple meets outside, even if there is a high probability that each e-mail gets through. In fact, the formal result of the game described above is that neither of the two will risk going outside, as there is no state (described in terms of e-mails sent) that is common knowledge between them. However, the example not only shows how easily simple games get complicated in formal terms, but also how misleading the strictly formal conclusion can be with regard to the underlying issue. It was about a couple who want to meet, inside on rainy days, outside otherwise. They both know their preferences. They differ only in the information they have: first about the state of nature and second about how many e-mails have been sent. The second issue, however, should not be the one of primary interest. Instead a social scientist, and therefore an economist, should simply ask: how many e-mails have to be sent before they both know that they both know about the weather, so that real human beings would coordinate on the preferred equilibrium?

One e-mail sent just states that it is rainy, and the woman knows about it. Two e-mails sent means that the man received this important information, but the woman does not know that yet. Three e-mails sent means that the woman knows that the man knows. Four e-mails sent means that the man now knows that the woman knows that he knows. Five e-mails mean that the woman now knows that the man knows that the woman knows that the man knows. At the latest after the sixth or seventh e-mail, both know that they have reached the desired situation in which both know that they both know.

While they can never be sure that their last e-mail got through, they reach a state where human beings of these days, and thereby the economic agents of interest, will not care about it. Agents may differ with regard to the number of e-mails they require in order to believe in a successful coordination, but I claim that not many of them require more than the five to seven e-mails above.

So, while the formal equilibrium approach provides some insights in favour of a theoretical statement about mutual and common knowledge, it risks drawing too much attention towards the wrong issue, or at least away from non-equilibrium outcomes that may be highly probable in reality. I think this is a general issue of equilibrium economics, which is worthwhile and helpful in many regards, but always has to be done, as well as interpreted, with caution.


A call option on a stock of some company is an example of a financial derivative on this stock. It is the right (but not an obligation) to buy the stock at a pre-specified future date and at a pre-specified price, the so-called strike price.

Before I discuss pricing such a call option in a hugely simplified model of stock prices, I provide an example of a financial derivative on a sports bet. The nice thing about this example is that I do not have to hugely simplify the world, because a sports bet is already much simpler than a stock. While the price of a stock at some future date can take any of many possible values, a sports event can usually result in only a few possible outcomes.

I then return to the football (soccer) games from the previous part of the second lecture and introduce a financial derivative. In fact, let us take the game between Benfica Lisbon and Manchester United. Suppose that, for some reason, I would like to buy an asset that pays me 3 euros if Benfica wins, 2 euros if there is a draw, and 1 euro if Man U wins. What should be the price of this asset? You might think you have to think deeply about the likelihood of Benfica winning, of a draw, and of Man U winning. This is not necessary. Through the betting odds we already know how the market assesses these probabilities. But we do not even need to think about it like this. We can simply construct the same payoff schedule as this new asset has by placing appropriate bets on the three possible outcomes of this game. Recall that the odds were 4,75 on Benfica winning, 3,6 on a draw, and 1,78 on Man U winning. So if we place 3/4,75 = 0,63 euros on Benfica winning, 2/3,6 = 0,56 euros on a draw, and 1/1,78 = 0,56 euros on Man U winning, we get the exact same payoff schedule as the asset I am interested in. To do so I have to place a total of 0,63 + 0,56 + 0,56 = 1,75 euros. This is, therefore, the price of this new asset. If the price were any different people could do arbitrage. If the price of the new asset were below 1,75 euros then one could do arbitrage by buying the new asset and selling (“shorting”) the three bets in the proportions I gave. If the price of the new asset were above 1,75 euros one could do arbitrage by selling (“shorting”) the asset and buying the three bets (by placing the above stated euro amounts on these three bets). This is perhaps a little more easily said than done, but if there were an arbitrage opportunity I am sure people would find a way to exploit it.
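The replication argument can be checked in a few lines of Python; the numbers are exactly the ones from the text.

```python
# Odds and desired payoffs (in euros) from the Benfica - Man U example
odds   = {"Benfica": 4.75, "draw": 3.6, "ManU": 1.78}
payoff = {"Benfica": 3.0,  "draw": 2.0, "ManU": 1.0}

# Staking payoff/odds on each outcome replicates the asset exactly:
# if outcome o occurs, the bet on o returns stake * odds = payoff[o]
stakes = {o: payoff[o] / odds[o] for o in odds}
price = sum(stakes.values())
print(round(price, 2))  # 1.75, the arbitrage-free price of the asset
```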

I then finally turn to a simplified world in which we derive the price of a call option on a stock. The simplification is this. I assume that the stock can only have one of two prices at the time at which the call option can be exercised. These values are 98 or 100 euros. Let’s call a price of 98 euros the bad state and a price of 100 euros the good state. In addition let us assume that one can borrow and save money at an interest rate of zero. In other words the savings technology is to keep money under your mattress. Finally, let us suppose we are interested in a call option with a strike price of 99 euros. This means that the holder of the call option has the right to buy the stock at 99 euros at the exercise date if she so wishes. Let us also assume that the stock price today is 98,50 euros. We can summarize all this in a table.

Why does the call option have monetary payoffs of 1 and 0 in the respective two states? Suppose that the stock turns out to have a price of 100 when you can exercise your option. We are in the good state. Then you will exercise your option, because you can buy something for 99 euros that you can immediately sell again for 100 euros. So you gain 1 euro in this case. Now suppose that the stock turns out to have a price of 98 euros. We are in the bad state. Then you will not exercise your option as paying 99 for something that is only worth 98 is foolish. So you get nothing from your option in the bad state.

So how can we determine the price of the call option, denoted by C in the table? Again, you do not need to think about the likelihood of the good and the bad state. All you need to see is that there are no arbitrage opportunities. How does this pin down the price? Well, you realize that if you buy half a stock and borrow 49 euros, then you have the following payoffs at the exercise date: If the state is good you can sell your half of a stock, now worth 50 euros, and return the 49 euros to whoever you borrowed them from. So you receive a total payment of 1 euro. If the state is bad you do the same thing, but now, as half of one stock is worth only 49 euros, you come out with zero. If you follow this plan you get the exact same payoff schedule as the call option provides: 1 euro in the good state, zero in the bad state. How much does it cost you to follow this plan? You borrow 49 euros and pay 49,25 euros (half of 98,50 euros). So you have to pay 25 euro-cents today to get the exact same payoff schedule as the call option delivers at the exercise date. And so only a price of 25 euro-cents for the call option makes this world arbitrage-free.
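The same replication logic works for any one-period, two-state example, not just the 98/100 one. Here is a small sketch; the function name and parameter names are mine.

```python
def replicate_call(s0, s_up, s_down, strike, r=0.0):
    """Price a one-period call by replicating its payoff with the stock
    and riskless borrowing. Returns (shares, bond, price); a negative
    bond position means borrowing."""
    c_up   = max(s_up - strike, 0.0)    # option payoff in the good state
    c_down = max(s_down - strike, 0.0)  # option payoff in the bad state
    shares = (c_up - c_down) / (s_up - s_down)       # the "delta"
    bond   = (c_down - shares * s_down) / (1.0 + r)  # cash position today
    return shares, bond, shares * s0 + bond

shares, bond, price = replicate_call(s0=98.50, s_up=100.0, s_down=98.0, strike=99.0)
print(shares, bond, price)  # 0.5 -49.0 0.25: half a stock, borrow 49 euros, price 25 cents
```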

“Financial engineering”, the determination of arbitrage-free prices, which gave rise to the “Black-Scholes” formula for option prices and its generalizations, is based on this simple idea. Financial engineering is mathematically more difficult because stocks can have more than two values at the exercise date, many options can be exercised at any time up to a final date, and the creation of a payoff equivalent portfolio of other assets often requires many minute portfolio adjustments over time. But conceptually this is what is going on.

The video (in German) is here:


How should we read this table? Each column represents one football game between some teams A and B. For each football game you can bet on the event that team A (the home team) wins the game, that the game ends in a draw – coded as x – and that team B (the away team) wins. The number in a cell represents the betting odds for the respective event. For instance the number 1,1 represents the betting odds on the home team (team A) winning in the first of the three games. What does this number mean? If you place 1 euro on team A winning and team A actually ends up winning you receive 1 euro and 10 cents back. So in the event of team A winning you gain 10 euro-cents. If the game ends in a draw or in team B winning you get nothing back, so you simply lose your euro.

I ask the students to study this table of betting odds and to tell me if they find any of these odds implausible. I tell them that they may want to use a calculator to help them with this question. Their answers are all over the place – after all this is a pretty difficult question. I then reveal to them that two of these games are real games and one is a fake game, and that in fact the betting odds for the fake game are impossible in a world without arbitrage opportunities.

The first two games took place in the evening after the lecture, on Wednesday the 18th of October 2017. The first was Bayern Munich against Celtic Glasgow with odds of 1,1 on Bayern, 11 on a draw, and 21 on Celtic (I believe Bayern did win that night). The second was Benfica Lisbon against Manchester United with odds of 4,75 on Benfica, 3,6 on a draw, and 1,78 on Man U (I believe Man U did win that night). These are fine odds as we shall see below.

But let me first turn to the last game, which was not a real game. What is wrong with these odds of 1,9, 4,2, and 5? Well, with these odds one could do arbitrage. What you could do is to place euro amounts on all three bets proportionally to the reciprocals of the odds. The reciprocal of the betting odds is what I would call the event’s “implicit probability”. The implicit probabilities are then 1/1,9 = 0,5263 for team A winning, 1/4,2 = 0,2381 for a draw, and 1/5 = 0,2 for team B winning. Suppose you take a target of 100 euros and place bets on the three outcomes proportionally to the implicit probabilities. So you would place 52,63 euros on team A winning, 23,81 on a draw, and 20 on team B winning. In total you would have placed 52,63+23,81+20 = 96,44 euros on the three bets. Note that some of the 100 euros remain in your wallet. Now what can you win with this betting scheme? Well, only three things can happen. Either team A wins, or there is a draw, or team B wins. How much money will you receive in these three events? If team A wins you get 1,9 euros for every euro placed on team A. As you have placed 52,63 euros on team A you get 52,63*1,9 = 100 euros back if team A wins. If there is a draw you get 4,2 euros for every euro placed on a draw. As you have placed 23,81 euros on a draw you get 23,81*4,2 = 100 euros back if there is a draw. If, finally, team B wins, you get 5 euros for every euro placed on team B. As you have placed 20 euros on team B you get 20*5 = 100 euros back if team B wins. So no matter what happens in this game you always get 100 euros back. But you have only placed 96,44 euros. So you win 3,56 euros in every possible case! That is arbitrage! If you think 3,56 euros is a miserly sum for all the trouble then multiply all your bets by 1000 and you win a sure 3560 euros without risk.
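A small script can scan any triple of odds for this kind of arbitrage; the logic is exactly the proportional-stakes scheme just described, and the function name is mine.

```python
def arbitrage_stakes(odds, target=100.0):
    """Return (stakes, sure profit) if the implicit probabilities sum to
    less than one, else None (no arbitrage in these odds)."""
    total = sum(1.0 / o for o in odds)   # sum of "implicit probabilities"
    if total >= 1.0:
        return None
    stakes = [target / o for o in odds]  # stake * odds == target in every outcome
    return stakes, target - sum(stakes)

print(arbitrage_stakes([1.9, 4.2, 5.0]))    # the fake game: a sure profit of about 3.56 euros
print(arbitrage_stakes([1.1, 11.0, 21.0]))  # Bayern - Celtic: None
print(arbitrage_stakes([4.75, 3.6, 1.78]))  # Benfica - Man U: None
```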

Now note that you cannot do this with the two real games. In both cases the sum of the implicit probabilities exceeds one and, because of that, there are no arbitrage opportunities. You can try to find arbitrage opportunities in sports (or other) betting odds, but I doubt that you will find any.

There is a nice joke about economics that has to do with the topic of arbitrage. It is as follows. Two people walk along the street, one of them an economist. While they are walking they spot a 100 dollar bill on the pavement. The non-economist stoops down to pick up the 100 dollar bill. The economist says: “Don’t bother! This can’t be a real 100 dollar bill. If it were a real 100 dollar bill, someone would already have picked it up.” My takeaway from this is twofold. If you do see a 100 dollar bill on the street, of course, do pick it up. But I wouldn’t start walking miles and miles of streets in the hope of finding many 100 dollar bills waiting to be picked up.

Here is the Video (in German):


In the second lecture I then take up another goal most people share: people, “ceteris paribus”, tend to prefer more money over less. The expression “ceteris paribus” means “all else equal”. I might be reluctant to accept extra money if this means someone is allowed to hit me on the head. But I generally will be happy to receive extra money if this does not come with any extra obligations.

This idea that, ceteris paribus, people prefer more money over less money implies that the world should not contain easy “arbitrage” opportunities. “Arbitrage” is making money without risk. I discuss the consequences of the absence of arbitrage in three contexts. I first consider exchange rates. I then give the students examples of supposed exchange rates of three fictional currencies, A, B, and C. These exchange rates are given in this table:

The table is to be read as follows. The entry in row A and column B is the amount of money in B-currency that 1 unit of money in the A currency can buy. In other words, 1 A buys you 0,75 B’s. I ask the students how they feel about this system of exchange rates. Does it seem plausible or implausible?

After their responses, which are not so important at this point, I give them another system of exchange rates between fictional currencies D, E, and F, given in this table:

Again, I ask the students how plausible this system of exchange rates seems to them. Again, their answers are not so important at this point (but there are some who seem to see where this is going). I then reveal that the first three currencies are the US dollar, the British pound, and the euro, with exchange rates taken at 8am CET on the 17th of October 2017. These are perfectly fine exchange rates. We will understand why this is so in a moment. The second batch of three currencies, D, E, and F, are currencies taken from the Harry Potter books. In fact I have taken them from a 2017 paper by Daniel Levy and Avichai Snir, entitled “Potterian Economics”, who in turn have taken them from the Harry Potter books.

One D is one gold galleon from the Harry Potter world, one E is one US dollar, and one F is 1 gram of gold. Apparently one gold galleon contains 5,50 grams of gold and can be bought for USD 7,50. There is also a US-dollar price of gold, which is USD 12,40 per gram. Here is the full table again:

So what is wrong with these exchange rates? Well, you (or at least the supposedly rather poor Weasleys) could do arbitrage. Take one gold galleon, which contains 5,50 grams of gold, sell this gold for US dollars and get 5,50*12,40 = 68,2 US dollars. Then buy yourself 68,2*0,13 = 9 gold galleons. This means that, without any risk, you can turn 1 gold galleon into 9 gold galleons with two simple transactions. The Weasleys and everyone else in the Harry Potter world should be able to lay their hands on as many gold galleons as they desire.
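Here is the Weasley round trip in code, using the numbers from the Levy and Snir paper. (The 9 galleons in the text come from rounding the dollars-to-galleons rate to 0,13; without rounding, one round trip yields about 9,09 galleons.)

```python
GRAMS_PER_GALLEON   = 5.50   # a gold galleon contains 5.50 grams of gold
DOLLARS_PER_GALLEON = 7.50   # a galleon can be bought for 7.50 US dollars
DOLLARS_PER_GRAM    = 12.40  # the US-dollar price of one gram of gold

galleons = 1.0
dollars  = galleons * GRAMS_PER_GALLEON * DOLLARS_PER_GRAM  # melt and sell: 68.20 dollars
galleons = dollars / DOLLARS_PER_GALLEON                    # buy galleons back
print(round(galleons, 2))  # 9.09 galleons from a single round trip
```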

Of course it is possible that such trading is not allowed for some reason in the Harry Potter world or perhaps magically simply impossible. But the fact is that in our world we cannot really observe such a system of exchange rates as it would allow arbitrage. Arbitrage would be exploited pretty quickly and exchange rates would quickly change in such a way that there are no more arbitrage opportunities.

You can check for yourself that the exchange rate system between the three real currencies of our world does not allow arbitrage in any way.

In other words, the idea that people, ceteris paribus, prefer more money over less implies that any visible arbitrage opportunities would be quickly exploited, which in turn implies that for any number n of currencies only n − 1 exchange rates are “free”. Of course real economic activities determine these “free” exchange rates, but the point is that once these are given, the remaining exchange rates are completely determined by the absence of arbitrage opportunities.
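This can be illustrated with a toy triangular example. The 0,75 rate is the A-to-B rate from the earlier table; the B-to-C rate of 2 and the mispriced C-to-A rate of 0,8 are hypothetical numbers of mine.

```python
def round_trip(a_to_b, b_to_c, c_to_a, start=1.0):
    """Convert A -> B -> C -> A. Any result other than `start` means
    arbitrage in one direction or the other."""
    return start * a_to_b * b_to_c * c_to_a

# 1 A buys 0.75 B (as in the first table); the B-to-C rate of 2 is hypothetical
a_to_b, b_to_c = 0.75, 2.0

# With no arbitrage, the C-to-A rate is completely pinned down:
c_to_a = 1.0 / (a_to_b * b_to_c)
print(round(round_trip(a_to_b, b_to_c, c_to_a), 2))  # 1.0: no money made or lost

# Any other C-to-A rate allows a sure gain on every loop (one way or the other)
print(round(round_trip(a_to_b, b_to_c, 0.8), 2))     # 1.2: 20% profit per loop
```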

I then point out that this implies, among other things, that the exchange rate between any two currencies cannot depend only on the economic interactions between the two countries behind these currencies. Why not? Think of three currencies (and respective countries) in a world full of currencies (and respective countries). The economic interaction (such as imports and exports) between any two small countries can be fairly arbitrary. Thus, if it were true that the bilateral economic interaction were the sole determinant of the exchange rate between the two respective currencies, then any system of exchange rates between the three currencies would be possible. But we know that the absence of arbitrage imposes a severe restriction on these exchange rates. Proof by contradiction.

And here is the Video (in German):


I do not, at this point, tell the students that this is my goal. So what do they have to do? I ask them to take out their smartphones or laptops and to go to google maps. I ask them to put in the address of the lecture room that they are in and then put in the address that they came from before they went to class. I ask them to click on “directions” between the two addresses and to choose the mode of transport that they used (walking, biking, or driving – public transport currently does not work for Graz in google maps – if they chose public transport I ask them to imagine what route they would have taken if they had walked, biked, or driven). I then ask them about their choice of route. The results are as follows: Of 244 responses, 45.5% (111) state that they chose exactly the route google recommends, 13.5% (33) state that they chose one of the two or so other routes google recommends, 27% (66) state that they chose a route very similar to the ones google recommends, and the remaining 13.9% (34) state that they chose quite a different route.

I then make them do some more work and ask them another question. I ask them to manually enter the actual route they took and then to compute the percentage difference in the time this route takes (according to Google Maps) relative to the best route according to Google Maps. The results are as follows: of 216 responses, 22.2% (48) state that their own route is faster than the best Google Maps route (according to Google Maps), 48.6% (105) that they require between 0 and 10% more time, 13.9% (30) between 10 and 20% more time, 6% (13) between 20 and 30% more time, and the remaining few require more than 30% more time (2.3% state they need more than 100% more time!).

Before we discuss these results in class, I ask one last question: how many routes are generally possible for them? Of the 223 responses, 6.3% (14) say “one” route, 36.3% (81) say “a few”, 28.3% (63) say “some”, 19.3% (43) say “many”, and 9.9% (22) say “very many”.

I then begin the discussion of these results by first telling them that they have, in principle, infinitely many routes. They could go from one area in Graz to another via Linz, or around the world, or do a few loops. So, given the infinite possibilities, it is pretty remarkable that Google Maps is able to “predict” 45% of students’ actual routes exactly and is able to predict the time it takes the students to get from A to B mostly within at most a 20% error band. What is behind Google Maps’ “predictions”? It is the simple idea that people would like to get from A to B in as little time as possible. In fact, 22% of students even claimed that they can beat Google Maps’ best time by taking an even better route.
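The idea of picking the route that minimizes travel time is, at its core, a shortest-path computation over a road network. A minimal sketch of that idea (my own, with an invented toy network – not Google’s actual algorithm):

```python
import heapq

def shortest_time(graph, start, goal):
    """Dijkstra's algorithm: minimal total travel time from start to goal.
    graph maps node -> list of (neighbor, minutes)."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for nbr, minutes in graph.get(node, []):
            nd = d + minutes
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return float("inf")

# Toy network: home -> university, travel times in minutes (invented)
roads = {
    "home":   [("bridge", 5), ("park", 7)],
    "bridge": [("university", 10)],
    "park":   [("university", 6)],
}
print(shortest_time(roads, "home", "university"))  # 13, via the park
```

The “prediction” is then simply: whatever route attains this minimum is the route people will take.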

So what have students learned from this? First, people have goals and pursue these goals, sometimes perhaps not even consciously. In the present context, the students’ primary goal seems to have been to waste as little time as possible on their way to the university. Second, if an analyst understands the goals people have, this analyst is able to predict human behavior reasonably well. Third, this prediction is not perfect. There are at least two reasons for this: people have other goals as well (such as being safe on their way to work, or perhaps they care about the beauty or the pollution level of their route), and people are not machines – they can make “mistakes” or have “whims” of the moment. All this also gives students a first idea of the accuracy of economic “laws”. Most of these are not precise “laws” like some of the laws of the natural sciences, but there are clear “patterns” of behavior or “regularities” that we can understand to some extent.

This is split into two videos (in German):


Because of this I tell the students about the Journal of Economic Literature classification codes (JEL codes). I do this in order to show the students the breadth of topics that are all covered by what we call “economics”. I browse through these codes and somewhat randomly discuss some of them in some more detail (from studying unemployment to creating a market for kidneys). But my point with this is that economics covers probably a lot more than most students would have expected.

I then move on to tell the students how the profession of economics as a science works nowadays. I tell them that researchers write “papers” and that these are published in journals. I tell them that there are more than 1,000 journals with probably more than 50 papers per year each, so at least 50,000 papers a year – far more than any single person can read. I tell them about quality differences between journals and about the peer review process. I also explain the (implicit) criteria for evaluating papers: correctness, originality (you cannot publish something that is not new), importance, plausibility, readability, and whether the paper changes our view of something.

After this I state my opinions about what the students should do if they want to become economics researchers. They should study economics (in order to know which questions have and have not already been answered). They should study mathematics, because this is the language of economic theory. They should study statistics (which requires mathematics), because this is the toolset for doing empirical research. They should learn how to write (academically) and how to give talks and presentations, because this is how science is communicated to other scientists and the rest of the world. All of this they learn to some extent in their economics courses, but in order to become a researcher this is not sufficient. They should take courses from the math and stats departments as well.

Here is the corresponding video (in German):


I have the following goals with this course. One, I would like students to see the current relevance of what I teach. Two, I do not want to use more math than some simple algebra. Three, I would like students to be able to assess the plausibility of what I say by themselves. Four, I would like the course to have a natural progression of topics. Five, I want to convey (some of) the key insights of economics. Six, I want students to understand the different approaches we can and do take to tackle economic problems and to be able to decide which approach is more appropriate when. Anyway, it is a bit of a challenge, and not everyone will completely agree with my choices as to what goes in the class and what does not, or with how I teach it. But I would like to put on record here what my choices are.

I do not follow a particular textbook. I recommend a few sources to the students: “The economic way of thinking” by Paul Heyne, “The cartoon introduction to economics” by Yoram Bauman, and some popular science books such as Dani Rodrik’s “Economics rules”. I also recommend that students google terms that pop up in class and read Wikipedia entries, but also warn them that these are not always perfect.

In posts to follow I will describe what I teach and also offer links to the videos (these are in German). The first video offers general information about the course:


In 1927, a Russian economist by the name of Eugen Slutsky wrote a paper entitled “The Summation of Random Causes as the Source of Cyclic Processes”. At the time, Slutsky was working for the Institute of Conjuncture in Moscow. That institute was headed by a man called Nikolai Kondratiev.

This was in the early days of the Soviet Union, before Stalin managed to turn it into a totalitarian hellhole, a time when the Communist leadership was relatively tolerant towards scientists and even occasionally listened to their advice. The institute’s job was basically to collect and analyze statistics on the Russian economy in order to help the Party with their central planning. But Kondratiev seemed to take the view that it would be best to allow the market to work, at least in the agricultural sector, and use the proceeds from agricultural exports to pay for industrialization. Lenin apparently took the advice and in 1922 launched the so-called New Economic Policy which allowed private property and markets for land and agricultural goods and re-privatized some industries which had been nationalized after the October Revolution. This policy turned out to be rather successful – at least it ended the mass starvation which War Communism had caused during the years of the Russian civil war.

But then Lenin died, Stalin took over and decided that the time had come to get serious about socialism again and finally abolish private property and markets for good. Dissenting voices like Kondratiev’s clearly couldn’t be tolerated in this great enterprise, so in 1928 Kondratiev was sacked and the institute was closed down. Some time later, Kondratiev was arrested, found guilty of being a “kulak professor” and sent off to a labor camp. Even there he continued to do research, until Stalin had him killed by firing squad during the Great Purge of 1938.

But I’m digressing, so back to Slutsky. His 1927 paper was written in the wake of Kondratiev’s 1925 book “The Major Economic Cycles”. That book claimed that capitalist economies exhibit regular boom-bust waves of about 50 years’ duration, known today as Kondratiev Waves. Other “conjuncture” researchers had claimed the existence of shorter waves.

Slutsky’s first observation was that when you really look at time series of aggregate economic output, you don’t see regular waves, but a lot of irregular fluctuations. So trying to find deterministic, sinusoidal waves in economic time series is probably not a very fruitful exercise.

Slutsky’s second observation was that when you draw a long series of independently and identically distributed random variables (modern terminology, not his) and then take some moving average of them… you get a time series that looks an awful lot like real-world business cycles!

He showed that in two ways. First, he performed simulations. Remember this is 1927 – so how did he simulate his random numbers? Well, the People’s Commissariat of Finance ran a lottery. So Slutsky took the last digits of the numbers drawn in the lottery (this is the basic series shown in figure 1). He then computed a bunch of different moving average schemes one of which is shown in figure 2. See the boom-bust cycles in that picture? Pretty cool, huh?
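Slutsky’s experiment is easy to replicate today. Here is a quick Python sketch (my own, obviously not Slutsky’s exact procedure): draw iid “lottery digits”, smooth them with a moving average, and count how often each series crosses its own mean. The smoothed series crosses far less often – those long stretches on one side of the mean are the boom-bust-like swings.

```python
import random

random.seed(1927)
# "Lottery digits": iid draws from 0..9, standing in for the last
# digits of the lottery numbers Slutsky used
digits = [random.randint(0, 9) for _ in range(1000)]

def moving_average(series, window):
    """Simple trailing moving average over `window` observations."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

smooth = moving_average(digits, 10)

def sign_changes(series):
    """How often the series crosses its own mean (turning points)."""
    mean = sum(series) / len(series)
    signs = [x > mean for x in series]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# The raw digits cross their mean far more often than the smoothed
# series does -- the averaging creates long, wave-like swings.
print(sign_changes(digits), sign_changes(smooth))
```

Plot `smooth` and you get a picture much like Slutsky’s figure 2: irregular but unmistakably wave-like fluctuations, generated from nothing but pure noise.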

But Slutsky didn’t just show cool graphs. He also had a beautiful argument for why these moving averages looked like recurrent waves:

We shall first observe a series of independent values of a random variable. If, for the sake of simplicity, we assume that the distribution of probabilities does not change, then, for the entire series, there will exist a certain horizontal level such that the probabilities of obtaining a value either above or below it would be equal. The probability that a value which has just passed from the positive deviation region to the negative, will remain below at the subsequent trial is 1/2; the probability that it will remain below two times in succession is 1/4; three times 1/8; and so on. Thus the probability that the values will remain for a long time above the level or below the level is quite negligible. It is, therefore, practically certain that, for a somewhat long series, the values will pass many times from the positive deviations to the negative and vice versa.

(For the mathematically minded, there’s also a formal proof just in case you’re wondering.)
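For those who prefer simulation to proof, Slutsky’s 1/2, 1/4, 1/8 argument is also easy to check numerically. A quick Monte Carlo sketch (my own, with invented sample sizes):

```python
import random

random.seed(42)

def prob_stays_one_side(k, trials=200_000):
    """Estimate the probability that k consecutive iid symmetric draws
    all fall below the median level -- Slutsky's 1/2, 1/4, 1/8 argument."""
    hits = sum(
        all(random.random() < 0.5 for _ in range(k))
        for _ in range(trials)
    )
    return hits / trials

for k in (1, 2, 3):
    print(k, round(prob_stays_one_side(k), 3), 0.5 ** k)
```

The estimated frequencies land very close to the theoretical values 0.5, 0.25, and 0.125: long excursions on one side of the level are exponentially unlikely, so the series must keep crossing back and forth.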

Since it was written in Russian, the paper went unnoticed by economists in the West until it came to the attention of Henry Schultz, professor at the University of Chicago and one of the founders of the Econometric Society. He had the paper translated and published in Econometrica in 1937.

And so Slutsky’s “random causes” provided the first stepping stone for the modern business cycle theories which explain how random shocks produce, via the intertemporal choices of households, firms and government agencies, the cyclical patterns we see in aggregate time series.

P.S.: All this time you have probably asked yourself: Slutsky, Slutsky,… that name rings a bell. Oh right, the Slutsky Equation! Yep. Same guy.


Paul Krugman and others have rightly pointed out that Mankiw’s toy example, its cuteness notwithstanding, provides little to no insight into the real policy debate now going on in the US, because (i) the US is not a small open economy and (ii) there is evidence that much of corporate profits are monopoly rents rather than returns to capital, which casts doubt on the relevance of perfect competition models.

Indeed, there’s a new paper documenting that mark-ups (the difference between price and marginal cost) have increased in practically every industry in recent decades. The paper has not yet gone through peer review, so it’s probably wise not to jump to conclusions from it. Nevertheless, it’s useful to think about potential implications.

One of the basic results in public finance is that taxes on rents produce no deadweight loss. So if corporate profits are just monopoly rents, we can tax them away at zero social cost. Right?

Wrong.

Consider the textbook model of monopolistic competition in that Nobel-prize-winning 1980 paper by Krugman. Each individual producer has a monopoly over her variety of a differentiated good (think VW having a monopoly over VW Beetle cars or Apple over iPhones). She faces the demand curve

    q = Q (p/P)^(-s)
where q is her individual output, Q is the aggregate output of the industry as a whole (i.e. the „size of the market“), p is the price of the individual firm’s output and P the aggregate industry price index. Finally, s is the elasticity of substitution across varieties.

The total cost curve of the individual producer is

    C(q) = f + q/a
where f is the fixed cost and 1/a is the marginal cost (a being marginal productivity), which we assume to be constant. Let t be the corporate tax rate and pi gross profits; then the net profits of the individual monopolist are given by

    (1-t) pi = (1-t)(pq - f - q/a)
Our monopolist, being the ruthless corporate egomaniac that she is, strives to choose q so as to maximize net profits. It should be clear from looking at the expression above that her maximization problem is unaffected by t. That is, the corporate tax doesn’t matter for her decision at all. She would produce just the same q and charge just the same p whether t is 0 or 0.5 or 0.9, *everything else equal*.

The last phrase is of the essence here. Everything else will *not* remain equal. But I’m getting ahead of the story here.
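For the record, here is the one-line reason (my own rendering, consistent with the definitions above) why t drops out of the monopolist’s problem:

```latex
\max_q \; (1-t)\left( p(q)\, q - f - \frac{q}{a} \right)
\;\Longrightarrow\;
(1-t)\left( \mathit{MR}(q) - \frac{1}{a} \right) = 0
\;\Longrightarrow\;
\mathit{MR}(q) = \frac{1}{a}
```

Since 1 − t > 0 for any tax rate below 100%, the first-order condition is the same as without the tax: marginal revenue equals marginal cost, with no t in sight.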

So what will be the profit-maximizing decision? Setting marginal revenue equal to marginal cost and solving for p yields

    p = s/((s-1) a)
The revenue is

    r = pq = Q P^s p^(1-s)
and net profit is

    (1-t) pi = (1-t)(r/s - f)
Notice that the price is increasing and profits are decreasing in marginal costs.
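A quick numerical sanity check of that last claim (my own sketch, assuming the CES demand q = Q·(p/P)^(−s) and markup pricing p = s/((s−1)·a) as above; all parameter values are invented):

```python
# Sanity check (invented parameters): with CES demand q = Q*(p/P)**(-s),
# the optimal price is p = s/((s-1)*a) and net profit is (1-t)*(r/s - f),
# where r = p*q is revenue. A lower productivity a (i.e. a higher
# marginal cost 1/a) raises the price and lowers profit.
s, Q, P, f, t = 4.0, 100.0, 1.0, 1.0, 0.25

def price(a):
    return s / ((s - 1) * a)              # markup over marginal cost 1/a

def net_profit(a):
    r = Q * P**s * price(a) ** (1 - s)    # revenue at the optimal price
    return (1 - t) * (r / s - f)          # taxed gross profit r/s - f

print(price(0.5), price(1.0))             # lower a -> higher price
print(net_profit(0.5), net_profit(1.0))   # lower a -> lower net profit
```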

What will the industry equilibrium look like? As is well known, in an industry with free entry and identical costs, every firm will produce the same quantity, charge the same price and make zero profits in equilibrium. But to analyze the effect of taxing firm profits, we had better have a model where firms make non-zero profits.

Let’s do it the way Melitz did it in his famous 2003 paper. Let’s assume first that firms do not know their productivity before entering the market, but only the distribution of productivities. Let G(a) be that distribution. Second, firms need to pay a fixed entry cost z to set up shop in the industry. Third, once a firm has established itself, it runs a constant risk of going bankrupt and exiting the market, given by the probability d.

As Melitz has shown, the industry equilibrium can be represented by two equations in two unknowns: the average profit of active firms, pibar, and the critical level of productivity, a*, at which a firm just breaks even. That critical level a* is implicitly defined by

    (1-t)(r(a*)/s - f) = 0,   i.e.   r(a*) = s f
All firms with higher productivity stay in the market, while all others exit immediately. We can define abar as the (weighted) average productivity among active firms. This average turns out to be uniquely determined by a* for any given distribution G(a). What’s more, it can be shown that the average profit of active firms is equal to the profit of a firm operating with the average productivity abar (sounds like a trivial point, but requires some algebra). Under some harmless conditions on G(a), the average net profit of active firms is a decreasing function of the cut-off productivity:

    pibar(a*) = (1-t) f [ (abar(a*)/a*)^(s-1) - 1 ]
Notice what an increase in the tax rate does to the average net profit function: it shifts the whole thing downward in pibar–a* space. This is going to be the key to all later results, so let’s repeat it in plain words. Raising the corporate tax rate reduces average net profits for any given cut-off productivity level.

The second condition to pin down the industry equilibrium comes from the entry/exit dynamics. A potential monopolist contemplating entering the industry compares the fixed entry cost z to the present value of expected net profits which she would earn now and in the future. The latter is given by the product of the probability of drawing an above-critical productivity level, 1-G(a*), and the present value of net profits, pibar/d. In equilibrium, no potential monopolist wants to enter, so net profits of surviving firms must be

    pibar = d z / (1 - G(a*))
This defines an increasing relationship between average net profits and the productivity cut-off. Notice that this relationship is unaffected by the tax rate.

The last two equations pin down pibar and a*. The first is called the “zero cut-off profit condition”, the second the “free-entry condition”. All endogenous variables can be calculated from those. In particular, we can calculate the price index and the number of firms in equilibrium:

    P = N^(1/(1-s)) p(abar),   N = L / rbar   with   rbar = s (pibar/(1-t) + f)
where L is total labor supply (the size of the economy). Notice that P decreases in N and N decreases in pibar.

So what happens if t increases?

The zero cut-off profit curve shifts downwards while the free-entry curve remains the same. Therefore, in the new equilibrium, we get both a lower net profit and a lower cut-off productivity level a*. How is that possible? Well, suppose a* initially remained the same. Those firms with productivity above a* would earn lower profits than before the tax increase. But then no would-be monopolist would want to enter the market anymore, while active firms still exit at rate d, which implies that the number of competitors decreases. Due to the decrease in competition, the aggregate price level, and hence the revenue of active firms, increases. This means that some firms whose productivity is slightly below a* are now able to make positive net profits. The result is a drop in the cut-off productivity just large enough to make potential firms willing to enter the industry again.

Notice that the fall in the cut-off productivity counteracts the negative effect of the tax increase on average net profits such that the latter falls less than one-for-one with the tax hike.
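To make these comparative statics concrete, here is a numerical sketch under one convenient special case that is not in the text above: a Pareto productivity distribution G(a) = 1 − (a_min/a)^k, for which the zero cut-off profit and free-entry conditions solve in closed form. All functional forms and parameter values here are my own illustrative assumptions.

```python
# Illustrative Melitz-style comparative statics (all functional forms
# and numbers are my own assumptions): productivity is Pareto
# distributed, G(a) = 1 - (a_min/a)**k.
s, k = 4.0, 6.0                      # elasticity of substitution, Pareto shape (k > s-1)
f, z, d, a_min = 1.0, 2.0, 0.1, 1.0  # fixed cost, entry cost, exit rate, lower bound

def equilibrium(t):
    # Under Pareto, (abar/a*)**(s-1) = k/(k-s+1) is a constant, so the
    # zero cut-off profit condition gives
    #     pibar = (1-t) * f * (s-1) / (k-s+1),
    # and the free-entry condition pibar = d*z*(a_star/a_min)**k then
    # pins down the cut-off productivity a_star.
    pibar = (1 - t) * f * (s - 1) / (k - s + 1)
    a_star = a_min * (pibar / (d * z)) ** (1 / k)
    return pibar, a_star

for t in (0.0, 0.3):
    print(t, equilibrium(t))  # both pibar and a_star fall as t rises
```

Running this with the tax rate raised from 0 to 0.3 shows both the average net profit and the cut-off productivity falling, just as the diagrammatic argument predicts – and the cut-off falls less than proportionally, reflecting the counteracting entry margin.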

Summing up, as a result of the corporate tax we have

- fewer firms
- lower average productivity
- a higher price index, which implies lower real wages
- and therefore lower social welfare.

I have not found a cute formula to calculate the precise welfare effects, but I would venture a guess that the effect depends mostly on the elasticity of substitution s and the shape of the productivity distribution G(a).

Anyway, I think I have shown that a corporate tax is socially costly even if all corporate profits are due to monopoly rents. The negative effects on market entry are the key here. Fairly obvious extensions to an open economy are left for the reader.

Update: Here’s the story in a graph:
