Our starting point is Goffman’s Relations in Public Chapter 1.II on “Vehicular Units”. Goffman is here interested in the norms that regulate traffic, especially but not only pedestrian traffic. He first quotes Edward Alsworth Ross, Social Control, New York: The Macmillan Company (1908), page 1: “A condition of order at the junction of crowded city thoroughfares implies primarily an absence of collisions between men or vehicles that interfere one with another.”
Goffman on page 6 then states the following: “Take, for example, techniques that pedestrians employ in order to avoid bumping into one another. These seem of little significance. However, there are an appreciable number of such devices; they are constantly in use and they cast a pattern on street behavior. Street traffic would be a shambles without them.”
In this post I want to take up this claim and provide a model that allows us to discuss how people avoid bumping into each other. I will use Goffman’s work to help me to identify the appropriate model for this issue.
Let me first identify the players. It seems that, while there are many people involved in street traffic, typically we encounter these people one by one. So I think for a first attempt it might be sufficient to study the situation of two people who are currently on course to bump into each other and who are trying to get past each other in order to avoid a collision. So we have two players in often fairly symmetric positions. Now here is one statement by Goffman (on page 8) about actions: “Pedestrians can twist, duck, bend, and turn sharply, and therefore, unlike motorists, can safely count on being able to extricate themselves in the last few milliseconds before impending impact.” Despite the fact that Goffman mentions so many possible actions I will for a first attempt consider only two: try to pass on the left or try to pass on the right. But if we feel it may be useful we can go back and think more about the possible moves pedestrians can make. Now what about payoffs? Talking about cars or road traffic, Goffman, on page 8, states that “On the road, the overriding purpose is to get from one point to another.” For pedestrian or street traffic he states “On walks and in semi-public places such as stadiums and stores, getting from one point to another is not the only purpose and often not the main one”. He has more to say about payoffs on page 8: “Should pedestrians actually collide, damage is not likely to be significant, whereas between motorists collision is unlikely (given current costs of repair) to be insignificant.” All this strikes me as important to understand pedestrian traffic. Let me see why. Suppose we ignore these last few statements, especially the one about pedestrians often having more than one purpose. We might then be tempted to say, and perhaps this is a good model of car traffic, that the game is simple.
We have two players (the drivers facing each other), each has two possible choices (pass on the left L or pass on the right R) and if they pass each other that’s great (they both get a payoff of say one) and if they bump into each other that’s awful (they both get a payoff of say zero). In other words the game can be written in matrix form as follows:
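The matrix itself is easy to reconstruct from this description. With player 1 choosing a row, player 2 choosing a column, L and R the two actions, and player 1’s payoff listed first, it reads:

              L         R
    L       1, 1      0, 0
    R       0, 0      1, 1

Both passing on the left, or both passing on the right, coordinates; mismatched choices mean a collision.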
What are the evolutionary stable norms of behavior in this game? They must be a Nash equilibrium, which means no player should have an incentive to deviate from the norm. Could the norm be that everyone passes on the right? Yes! If everyone passes on the right, you would be foolish to pass on the left, because that would mean you bump into everyone and get a payoff of zero! If you instead also pass everyone on the right, you indeed do get past everyone and you enjoy your payoff of one. Completely analogously the norm could be that everyone passes everyone on the left. And indeed both of these norms exist for car traffic. In Japan people drive on the left, in Chile they drive on the right (most of the time). Recall that Goffman was well aware of the possibility of different norms being possible (in different societies or places) – see the previous post.
A quick aside: game theory experts will have noted that the game has a third Nash equilibrium, an equilibrium in so-called mixed strategies. Under a Harsanyi purification (Harsanyi, 1973) interpretation of this mixed equilibrium we could describe it like this. Half of all people pass on the left and the other half of all people pass on the right. This is an equilibrium, because if that’s indeed what the others are doing you are equally well off passing on the left and passing on the right: half of the time you avoid an accident and the other half of the time you are dead (have an accident) either way. This is an equilibrium, but not an evolutionary stable one. Why not? Suppose slightly more than half of all people pass on the left. After a while you might notice this and then you find it slightly better to also start passing people on the left. But then the more people pass on the left the better this strategy becomes and gradually we move towards the norm of everyone passing on the left. This is probably more or less how the whole thing evolved in the early days of cart traffic. You may want to read Peyton Young’s “Individual Strategy and Social Structure” Princeton University Press (2001).
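For readers who like to see the drift mechanics spelled out, here is a minimal simulation sketch (entirely my own construction, not from Young’s book): a population share p passes on the left, and in each period a small fraction of people switches to whichever action currently does better.

```python
# Sketch of the instability of the half-half mixed equilibrium.
# Share p of the population passes on the left; going left yourself
# succeeds exactly when you meet a left-passer.

def revise(p, rate=0.1):
    """Expected payoff of Left is p, of Right is 1 - p.  A fraction
    `rate` of the population switches to the currently better action."""
    target = 1.0 if p > 0.5 else 0.0
    return p + rate * (target - p)

p = 0.51  # perturb the mixed equilibrium: slightly more than half go left
for _ in range(100):
    p = revise(p)
print(p)  # drifts close to 1: the "everyone passes on the left" norm
```

The same perturbation in the other direction (p = 0.49) drifts to zero, which is exactly why the half-half equilibrium is not evolutionary stable.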
This all seems fine for cars, but what about pedestrians who supposedly, according to Goffman, use many “devices” and “techniques […] in order to avoid bumping into one another”? There seems to be absolutely no need for this here. So I think something is missing from this game. We should recall that pedestrians, according to Goffman, have side interests in addition to getting from A to B as fast as possible. It does not seem to be the pedestrian’s only goal to get past the oncoming person; the pedestrian might have a slight preference for which side would be better for her. Think of a person you face in a corridor who, after passing you, would like to turn left into the bathroom, for instance. This person probably has a slight preference, when possible, to pass you on her left. The problem is that you don’t necessarily know that she wants to go to the bathroom, and thus, you don’t know that she prefers passing you on her left. That’s why she might want to use a “device” – a signal of some sort – that tells you that she wants to pass on the left. Ok, so hold on a moment. We need to proceed slowly. I first need to discuss this game without “devices” so that we can see why “devices” might be useful. So how do I take into account these possible side interests that pedestrians might have? Well, I need to modify the payoffs people get, let these payoffs differ across people, and make the information about these preferences private. What I mean is that I will assume that everyone knows their own preferences (or payoffs) – they know whether or not they want to go to the bathroom on the left after they pass you – but you, their “opponent”, do not know. So how does this work? I will simply change the game as follows:
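The payoffs in the modified game can be written in the same matrix form. One convenient normalization (my choice, but it is the one consistent with the cut-off calculation further down) gives a player of type u a payoff of 1 - u from a successful pass on the left, a payoff of u from a successful pass on the right, and zero from a collision. With the row player of type u, the column player of type v, and the row player’s payoff listed first:

              L                   R
    L     1 - u, 1 - v          0, 0
    R         0, 0              u, v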
What are these u’s and v’s? You should think of each u and v as representing a possible person with a particular preference for passing left or right. A person with a u (or v) close to a half is a person who cares only about getting past their “opponent” and does not care at all on which side this happens. A person with a u (or v) less than but close to one is a person who would much prefer to pass their “opponent” on the right. Say this person really urgently needs the bathroom just behind you on her right. A person with a u (or v) greater than but close to zero is similar but has a strong preference to pass on the left.
How do we do this? Well, this is one of Harsanyi’s great contributions to the body of game theory. We assume that both u and v are drawn from some distribution F on a subset of the real line that includes the interval from zero to one. Then every person learns their own u (or v) but learns nothing (as yet) about their opponent’s v (or u). Every person only knows that her opponent’s v (or u) is random and that the randomness is described by the cumulative distribution function F. In fact we here make a radical assumption and one that we should probably challenge later. We assume that not only does every person believe her opponent’s v (or u) is distributed according to F, but we also assume that everyone knows this fact, and that everyone knows that everyone knows this, and so on ad infinitum (as game theorists like to say). In short we assume that this distribution F that governs the likelihood of the various preference types you might encounter is common knowledge among the two players. Modern game theory also has ways of dealing with deviations from this assumption. But for the moment we shall assume it. Under an evolutionary interpretation this assumption is less worrisome than one might initially think, but we should probably come back to it.
So how do we “solve” this game? There are two ways one can look at a game with incomplete information. One can either consider each possible person (with a specific u or v) separately – the so-called interim view – or one can consider the problem from the ex-ante point of view, where each person has a strategy for all possible u’s that this person could end up with. These two approaches are equivalent, but sometimes one is easier than the other for the analyst. Here the second, the ex-ante, approach is easier. So consider a person who many times throughout her life has to navigate pedestrian traffic. In each situation she might have a different u. Sometimes she just wants to get past her opponent, her u is a half; sometimes she wants to turn left immediately after passing her opponent, her u is close to zero; sometimes she wants to turn right immediately after passing her opponent, her u is close to one. She develops a strategy as a function of her u. Now what would be a good strategy? Suppose there is some norm of behavior that people follow, a function from their u’s to passing left or right. For some such norm, what would be the best individual response to this norm? As you with your u do not know your opponent’s type, their v, knowing the norm that is in place only tells you with what probability (or frequency) your opponents will choose left or right. Suppose you know this probability of opponents going left (from your knowledge of the norm and the distribution function F) and call this probability p. What, then, is your implicit tradeoff between going left and going right? Recall that we are at the moment studying a situation where people do not communicate with each other (they do not use any “devices”). Well, if you go left you avoid bumping into each other with probability p and you do bump into each other with the remaining probability 1 - p. A successful pass on the left is worth 1 - u to you, a successful pass on the right is worth u, and a collision is worth zero. Your average (or expected) payoff from going left yourself is, thus, p(1 - u). Similarly, your average (or expected) payoff from going right is (1 - p)u.
When is left (strictly) better than right for you? Well, if and only if p(1 - u) > (1 - p)u. Multiplying out we get p - pu > u - pu, that is, u < p.
This means that, whatever the norm is, your best response to this norm is to use a simple cut-off strategy. Basically what you do is this. You observe the frequency of people going left and right (induced, as we said, by the combination of the prevailing norm and the distribution of preference types F) and you choose left yourself if your u for this interaction is less than the observed frequency of left and choose right otherwise.
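In code the cut-off rule is a one-liner (my own rendering, with hypothetical action labels):

```python
def best_response(u, p_left):
    """Cut-off rule: pass on the left exactly when your type u is
    below the frequency p_left of people currently passing left."""
    return "left" if u < p_left else "right"

print(best_response(0.3, 0.5))   # mild left preference, half go left -> left
print(best_response(0.7, 0.5))   # mild right preference -> right
print(best_response(0.49, 0.2))  # almost indifferent, few go left -> right
```

Note from the last line that the same type can act differently under different norms: the decision depends on u only relative to the prevailing frequency of left-passers.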
But if this is your best response to this norm, then it is everybody’s best response to this norm and it will become the norm itself! So everyone will be using the same cut-off strategy! But what will the cut-off be? Well, if everyone uses a cut-off of say x, some real number between zero and one, then the probability that people use action left is the probability that their u is less than x, which is given by F(x). So if the cut-off people use is x, the probability of people going left is F(x) and F(x) is then the best response cut-off they will use. So we must have that x=F(x).
So any stable norm must at least satisfy that we are in equilibrium, meaning that x=F(x). But is this enough for a stable norm of behavior? Not quite. To discuss this it is best to consider two examples of possible distributions F that could be present in different places of human pedestrian traffic.
Suppose the preference u is, like so many things in life, normally distributed. Let’s say it is normally distributed with a mean of a half and a relatively low variance so that not too many people have a u less than zero or more than one. Please excuse the low tech (but I think sufficient) rendering of this example:
What are the Nash equilibria of this game with such a distribution F? There are three. First we have F(x)=x for a value of x that is positive but pretty close to zero. What does this mean? It means that the norm is such that almost everyone attempts to pass others on the right except for very few people who have a very strong interest to pass on the left. This is an equilibrium that is pretty close to the equilibrium of the car-driving game of always driving on the right. There is a similar equilibrium with x=F(x) where x is just less than but very close to one. Here almost everyone attempts to pass others on the left except for very few people who have a strong interest to pass on the right. There is another equilibrium, however, at x equal to one half, where we also have x=F(x). Here we have that everyone who has the slightest inclination for passing on the left attempts to pass on the left and everyone who has the slightest inclination for passing on the right attempts to pass on the right. This is a mayhem equilibrium. But is it stable? No. Why not? Suppose that people use a slightly larger cut-off than one half, call it y. Then we find that, as F is quite steep at one half, F(y) > y. This means that now people’s best response cut-off F(y) is higher than the prevailing cut-off of y. So we expect people to adjust their cut-off upwards. This will go on until we reach the other equilibrium with a cut-off close to one. Similarly, a cut-off of just less than one half will lead to lower and lower cut-offs and eventually to the equilibrium cut-off close to zero.
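To make this concrete, here is a small sketch (my own construction; the standard deviation of 0.1 is an arbitrary stand-in for “relatively low variance”) that iterates the best-response map taking a prevailing cut-off x to the new cut-off F(x), and shows the drift toward the two extreme equilibria:

```python
from statistics import NormalDist

# u normally distributed with mean one half; sigma = 0.1 is my own
# choice of a "relatively low variance".
F = NormalDist(mu=0.5, sigma=0.1).cdf

def iterate(x, n=200):
    """Repeatedly replace the prevailing cut-off x by the
    best-response cut-off F(x)."""
    for _ in range(n):
        x = F(x)
    return x

print(iterate(0.51))  # close to 1: almost everyone passes on the left
print(iterate(0.49))  # close to 0: almost everyone passes on the right
print(F(0.5))         # 0.5: the mayhem equilibrium, which the drift avoids
```

Starting even a hair above one half, the steepness of F at one half (F(y) > y there) pushes the cut-off all the way up; starting a hair below pushes it all the way down, exactly as argued in the text.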
So what have we achieved? Not so much. The whole situation is very similar to the much simpler game without the u’s and v’s and all that. So, again, it seems that we would not need any “devices” and “techniques” of “scanning” and “intention display” (Relations in Public, pages 11 and 12) in this situation. Even without this we obtain a stable norm of behavior in which there are (almost) no collisions. I will come back to this after another example.
Suppose now that the place of pedestrian traffic that we are interested in has a very different F. Suppose that most people have a relatively strong preference for either left or right. For instance you can imagine a doorway that people need to get through before they then want to turn left or right pretty quickly after that. For these people, encountering each other in the doorway, the density f behind the cumulative distribution F is probably best described as being relatively high around low and high values of u and relatively low for medium values of u close to one half. Let us assume that F is symmetric around one half. Let us also assume that still there is almost no weight (in f) on values of u less than zero and larger than one. A picture of this situation:
Now what equilibria do we get here? Actually we get only one equilibrium and it is a mayhem equilibrium. It is a cut-off equilibrium with cut-off x equal to one half, much like the mayhem equilibrium in the normal distribution case. But now the mayhem equilibrium is stable. Why? Because F is rather flat around the value of one half: if we consider a cut-off y that is slightly larger than one half, we have that F(y) < y, so the best response cut-off is smaller than y, and we expect the cut-off to evolve back to a value of one half.
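As a concrete stand-in for such an F (my own choice of distribution, not anything from the post), take the arcsine distribution on the interval from zero to one: its density is high near zero and one and low around one half. Iterating the best-response map now pulls interior cut-offs back to one half from either side:

```python
import math

# Arcsine distribution on [0, 1]: many people with strong left or
# strong right preferences, few nearly indifferent ones.
def F(x):
    return (2 / math.pi) * math.asin(math.sqrt(x))

def iterate(x, n=200):
    """Repeatedly replace the prevailing cut-off x by the
    best-response cut-off F(x)."""
    for _ in range(n):
        x = F(x)
    return x

print(iterate(0.1))  # converges to 0.5 from below
print(iterate(0.9))  # converges to 0.5 from above
```

Stability can also be read off the slope: the arcsine density at one half is 2 divided by pi, which is less than one, so near the mayhem equilibrium the map from x to F(x) is a contraction.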
By the way, what I have described here is essentially the paper “Evolution in Bayesian Games II: Stability of Purified Equilibrium” by Bill Sandholm, Journal of Economic Theory, 136 (2007), 641-667.
Now you might say that we do not often observe such a stable mayhem equilibrium and you are probably right. In fact this is where we should finally introduce Goffman’s “devices” and “techniques” of “scanning” and “intention display” (Relations in Public, pages 11 and 12). The way I would model this (and this is now finally the ongoing research I am currently undertaking with Yuval Heller at Bar Ilan University) is as follows. I would allow the players, after they know their own u, to send one message from a set of possible messages to their opponent, to be understood as their “intention display”. I would assume both players are “scanning” for messages of their opponent and that the players can then condition the side on which they try to pass their opponent on the two observed messages. You may want to think about this as players making a slight movement towards the left or right (this can be done a long time before the two actually meet) with the idea of signaling their intention as to where they would prefer to pass their opponent. What Yuval and I find so far is that for many distributions F (including the two I mentioned before) there is a universal and simple strategy (or norm) that is evolutionary stable. If you have a u less than one half you send the message to be read as “I intend to pass on the left” and if you have a u greater than one half you send the message to be read as “I intend to pass on the right”. If both send the same message they follow through with their displayed intentions. If they send different messages – that is, a slight conflict of interest is revealed – they fall back on a background norm of always passing on the left (or always passing on the right). We are not quite done yet with this project, but I hope you will be able to read about it very soon.
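Since the paper itself is not out, the following is only my own toy rendering of the strategy just described, with “always left” as the assumed background norm:

```python
def message(u):
    """Intention display: signal the side your type prefers."""
    return "left" if u < 0.5 else "right"

def action(own_msg, opp_msg, background="left"):
    """Matching displays are followed through; conflicting displays
    trigger the background norm for both players."""
    return own_msg if own_msg == opp_msg else background

def meet(u, v):
    mu, mv = message(u), message(v)
    return action(mu, mv), action(mv, mu)

print(meet(0.2, 0.3))  # both signal left -> both pass on the left
print(meet(0.2, 0.8))  # conflicting signals -> background norm, both left
print(meet(0.7, 0.9))  # both signal right -> both pass on the right
```

Note that in every case the two chosen actions match, so collisions are avoided entirely, and whenever the displayed interests happen to coincide both players get their preferred side.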
So how did game theory add to Goffman’s study? In many ways I think. First, we had to be very explicit about the various strategies (potential norms) people could be following in our model. Second, we can then explain why a specific norm among all the potential norms is expected (a stable equilibrium). Third, the formal analysis allows us to identify conditions under which different potential norms are stable or unstable. Fourth, we can now ask new questions. For instance, is a stable norm of behavior in pedestrian traffic efficient (maximizes the sum of utilities)? The answer, by the way, is typically no. And finally, the theory is so explicit in its predictions that it can be tested.