A mistake in probability theory in David Hume’s “Of Miracles”

When should a rational individual believe in a miracle?

David Hume, the great skeptical philosopher, answered: practically never. His argument ran as follows: Miracles are extremely rare events and thus have a very low prior probability. On the other hand, people can be misled rather easily, either by their own senses or by other people. Therefore, the rational reaction to hearing a miracle story is to reject it, unless the evidence supporting it is overwhelming. “Extraordinary claims require extraordinary evidence” became a popular summary of Hume’s point of view.

Here is a famous passage from Hume’s “Of Miracles” explaining the point:

When anyone tells me, that he saw a dead man restored to life, I immediately consider with myself, whether it be more probable, that this person should either deceive or be deceived, or that the fact, which he relates, should really have happened. I weigh the one miracle against the other; and according to the superiority, which I discover, I pronounce my decision, and always reject the greater miracle.

This argument sounds intuitively plausible and compelling, but it is mistaken. In fact, Hume is committing an elementary error in probability theory, which shouldn’t be held against him, since “Of Miracles” predates the writings of Bayes and Laplace.

In the language of modern probability theory, Hume asks us to compare the prior probability that miracle X occurred, $\displaystyle Pr(X)$, to the probability of seeing the evidence Y supporting miracle X even though X did not in fact occur, i.e. the conditional probability of Y given the negation of X, $\displaystyle Pr(Y | \neg X).$ Econometricians would call the latter the likelihood of Y under the hypothesis not-X. If $\displaystyle Pr(X) < Pr(Y | \neg X),$ Hume says we should reject X in favor of not-X.

But this inference is unwarranted. What a rational observer ought to ask is: Given the evidence Y, is it more likely that X occurred or that it didn’t occur? We are looking for the posterior odds of X conditional on Y: $\displaystyle \frac{ Pr(X | Y)} { Pr(\neg X | Y) }.$

Bayes’ theorem immediately gives us what we are looking for: $\displaystyle \frac{ Pr(X | Y)} { Pr(\neg X | Y) } = \frac{ Pr(Y | X) }{Pr(Y | \neg X) } \frac{ Pr(X) }{Pr(\neg X)}$

This equation makes it clear that even if Hume’s inequality $\displaystyle Pr(X) < Pr(Y | \neg X)$ holds, it is possible that the posterior odds of X are greater than 1. All we need for such a result is that the likelihood of having evidence Y under the hypothesis that X occurred is sufficiently higher than the likelihood of Y under the alternative hypothesis that X did not occur. In econometric terms, the likelihood ratio must exceed a critical value which depends on the prior odds against the miracle: $\displaystyle \frac{ Pr(Y | X) }{ Pr(Y | \neg X) } > \frac{ Pr(\neg X) }{ Pr(X) }.$
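A small numerical sketch makes the point concrete. All probabilities below are made up purely for illustration (and the prior is deliberately not miracle-small, since the point is logical rather than empirical): Hume’s comparison and the posterior odds can point in opposite directions.

```python
# Illustrative (made-up) probabilities.
# X = "the miracle occurred", Y = "we receive testimony supporting X".
p_x = 0.30              # prior probability of X
p_y_given_x = 0.95      # likelihood of the testimony if X occurred
p_y_given_not_x = 0.35  # likelihood of false testimony if X did not occur

# Hume's comparison: reject X whenever Pr(X) < Pr(Y | not-X).
hume_rejects = p_x < p_y_given_not_x

# Bayes' theorem: posterior odds = likelihood ratio * prior odds.
likelihood_ratio = p_y_given_x / p_y_given_not_x
prior_odds = p_x / (1 - p_x)
posterior_odds = likelihood_ratio * prior_odds

print(hume_rejects)              # True: Hume's rule says reject X
print(round(posterior_odds, 3))  # 1.163 > 1: the evidence actually favors X
```

The sign of Hume’s comparison does not determine the sign of the posterior odds: here $Pr(X) < Pr(Y | \neg X)$, so Hume would reject the miracle, yet the likelihood ratio exceeds the prior odds against X and the posterior odds come out above 1.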

To conclude: A rational observer is justified in believing a miracle if the evidence for it is sufficiently more likely under the hypothesis that the miracle really did occur than under the hypothesis that it didn’t so as to offset the low prior odds for the miracle. Just comparing the low prior probability of a miracle to the probability of receiving false evidence in favor of it is not enough and can be misleading.

Penalty taking: some game theory and hypothesis testing

One of my colleagues sent me an article in the Financial Times from March 17 entitled “How to save a penalty: the truth about football’s toughest shot. On star goalie Diego Alves, game theory and the science of the spot kick.” I found the article interesting for two reasons.

1. It has a fun discussion of the psychology and game theory of taking penalty kicks. It points to the paper by Ignacio Palacios-Huerta in which he shows that professional soccer players take penalties in a way that is consistent with Nash equilibrium (or minimax) behavior. The FT article also includes an interesting interview with Ignacio Palacios-Huerta and his “analysis of ideal penalty-taking strategies for the then Chelsea manager Avram Grant before the Champions League final against Manchester United in 2008.”
2. The FT article highlights Diego Alves, Valencia’s goalkeeper, and argues that he is particularly good at stopping penalties. It claims that Alves’ stopping record (22 saves out of 46 penalties, a very high rate compared to the 25% average stopping rate across all goalkeepers) cannot be explained by chance alone.

In this blog post I want to comment on the second point. It is actually wrong, and it is wrong for an interesting reason. Moreover, the mistake is very easy to make and very common.
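For reference, here is a sketch of the naive chance-alone calculation that the FT’s claim implicitly rests on: treat Alves’ 46 penalties as independent trials, each stopped with the average 25% probability, and compute the probability of 22 or more saves. (The numbers come from the article; the modeling assumptions are the naive ones.)

```python
from math import comb

n, k = 46, 22  # Alves faced 46 penalties and stopped 22
p = 0.25       # average stopping rate across all goalkeepers

# Naive p-value: probability of stopping 22 or more out of 46
# if Alves were just an average (25%) penalty stopper.
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_value)  # a very small number, which is what drives the FT's claim
```

Taken at face value, this tiny p-value looks decisive; the point of this post is that the face-value reading is mistaken.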

A hypothesis test in P.G. Wodehouse’s 1934 “Right Ho, Jeeves”

Reading on (you may want to read my previous post before this one), I found another beautiful example of hypothesis testing in literature with a pretty clear p-value calculation. I am now reading P. G. Wodehouse’s “Right Ho, Jeeves”, first published in 1934, I believe.

A hypothesis test in A. A. Milne’s 1922 “The Red House Mystery”

I am doing some summer reading and just came across a nice literary example of one of the key methodological approaches in science: hypothesis testing.

What do we do when we perform a hypothesis test? We form a theory and call it our null hypothesis.  We then look at data and ask ourselves how probable it is that we would see this data (or something like it) if the null hypothesis were true. This probability is called the p-value. If this probability is very low, we then abandon our null hypothesis in favor of its opposite.
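The steps above can be sketched with a toy example (the coin and the counts are made up for illustration): suppose we suspect a coin is biased towards heads and observe 9 heads in 10 tosses.

```python
from math import comb

# Null hypothesis: the coin is fair (probability of heads = 0.5).
# Observed data: 9 heads in 10 tosses.
n, observed = 10, 9

# p-value: probability of seeing data at least this extreme
# (9 or 10 heads) if the null hypothesis were true.
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2**n

print(p_value)  # 0.0107421875
```

Since the p-value is about 1.1%, well below the conventional 5% threshold, we would abandon the null hypothesis that the coin is fair.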

Detective stories are generally a good source of examples of this approach, as detectives constantly entertain theories or hypotheses that have to be revised or rejected as new evidence is found. The present example is special in that the author really gives us all the steps of such a test in a specific setting, including the calculation of the p-value, that is, the probability of seeing such data as was observed under the assumption that the null hypothesis is true.