Penalty taking: some game theory and hypothesis testing

One of my colleagues sent me an article in the Financial Times from March 17 entitled “How to save a penalty: the truth about football’s toughest shot. On star goalie Diego Alves, game theory and the science of the spot kick.” I found the article interesting for two reasons.

  1. It has a fun discussion of the psychology and game theory of taking penalty kicks. It points to the paper by Ignacio Palacios-Huerta in which he shows that professional soccer players take penalties in a way that is consistent with Nash equilibrium (or minimax) behavior. The FT article also includes an interesting interview with Ignacio Palacios-Huerta and his “analysis of ideal penalty-taking strategies for the then Chelsea manager Avram Grant before the Champions League final against Manchester United in 2008.”
  2. The FT article highlights Diego Alves, Valencia’s goalkeeper, and argues that he is particularly good at stopping penalties. It argues that Alves’ stopping record (22 of 46 penalties saved, a rate of nearly 48%, compared with an average stopping rate of about 25% across all goalkeepers) cannot be explained by chance alone.

In this blog post I want to comment on the second point. It is actually wrong. And it is wrong for an interesting reason. Moreover, the mistake is very easy to make and a very common one.
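To see what the FT’s implicit calculation looks like, here is a minimal sketch (my own, not the article’s or the post’s): under the null hypothesis that Alves stops penalties at the average rate of about 25%, we compute how likely a record of 22 or more saves out of 46 would be.

```python
from math import comb

# Null hypothesis: Alves saves penalties at the average rate of 25%.
n, saved, p_stop = 46, 22, 0.25

# One-sided binomial tail: P(X >= 22) when X ~ Binomial(46, 0.25).
p_value = sum(comb(n, k) * p_stop**k * (1 - p_stop)**(n - k)
              for k in range(saved, n + 1))
print(f"P(at least {saved} saves out of {n}) = {p_value:.6f}")
```

This tail probability comes out very small (well below 1%), which is presumably what drives the FT article’s conclusion that chance alone cannot explain the record; the point of this post is that this line of reasoning is nevertheless flawed.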


A hypothesis test in P. G. Wodehouse’s 1934 “Right Ho, Jeeves”

Reading on (you may want to read my previous post before this one), I found another beautiful example of hypothesis testing in literature, this time with a pretty clear p-value calculation. I am now reading P. G. Wodehouse’s “Right Ho, Jeeves”, first published, I believe, in 1934.


A hypothesis test in A. A. Milne’s 1922 “The Red House Mystery”

I am doing some summer reading and just came across a nice literary example of one of the key methodological approaches in science: hypothesis testing.

What do we do when we perform a hypothesis test? We form a theory and call it our null hypothesis. We then look at data and ask ourselves how probable it is that we would see this data (or something at least as extreme) if the null hypothesis were true. This probability is called the p-value. If this probability is very low, we abandon our null hypothesis in favor of its opposite.
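As a toy illustration of these steps (my own example, not one from the novel), suppose our null hypothesis is that a coin is fair and we then observe 9 heads in 10 flips; the p-value is the probability of seeing 9 or more heads under that hypothesis.

```python
from math import comb

# Null hypothesis: the coin is fair (probability of heads = 0.5).
n_flips, heads_seen = 10, 9

# p-value: probability of 9 or more heads out of 10 if the null is true.
p_value = sum(comb(n_flips, k) for k in range(heads_seen, n_flips + 1)) / 2**n_flips
print(p_value)  # 11/1024, about 0.0107
```

At roughly 1%, most of us would take this as grounds to abandon the fair-coin hypothesis.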

Detective stories are generally a good source for examples of this approach, as detectives constantly entertain theories or hypotheses that have to be revised or rejected as new evidence is found. The present example is special in that the author really gives us all the steps of such a test in a specific setting, including the calculation of the p-value, that is, the probability of seeing data like what was observed under the assumption that the null hypothesis is true.