Sandbox

From ChanceWiki
Revision as of 14:58, 11 February 2014 by Bill Peterson (talk | contribs)

Warren Buffett's billion dollar gamble

In a widely publicized announcement on January 21, 2014, Quicken Loans offered a billion (sic) dollar prize to any contestant who can fill out the bracket perfectly in the March NCAA basketball tournament. Because they don't have a spare billion in the bank, they have insured against the possibility of a winner with Berkshire Hathaway (BH), paying an undisclosed premium believed to be around 10 million dollars. Is this a good deal for BH? Some relevant data: to win, you must predict all 63 game winners correctly. The number of entries is limited to 10 million. The prize is actually 500 million cash (or 1 billion over 40 years). Presumably Warren Buffett asked his actuary "are you very confident that the chance of someone winning is considerably less than 1/50?" How would you have answered?
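The 1/50 figure is just the break-even point implied by the (rough, reported) dollar amounts above: the premium divided by the cash payout. A minimal check, assuming those figures:

```python
# Back-of-envelope break-even point for BH, using the article's rough
# figures: ~$10 million premium against a $500 million cash payout.
premium = 10_000_000
payout = 500_000_000

# BH profits in expectation as long as Pr(someone wins) stays below this.
break_even_chance = premium / payout
print(break_even_chance)  # 0.02, i.e. 1/50
```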

I put this forward as an interesting topic for open-ended classroom discussion. First emphasize that the naive model (each entry has chance 1 in 2^63 of winning) is ridiculous. Then elicit the notion that a better model might involve some combination of

* modeling typical probabilities for individual games
* modeling the strategies used by contestants
* empirical data from similar past forecasting tournaments.

Here are two of many possible lines of thought.

(1) The arithmetic

(5 million) × (3/4)^63 ≈ 1/14

suggests that if half the contestants are able to consistently predict game winners with chance 3/4, then it's a bad deal for BH. Fortunately for BH, this scenario seems inconsistent with past data, because the same calculation, applied to entries in a similar (but only 1 million dollar prize) ESPN contest last year, says that about

(4 million) × (3/4)^32 ≈ 400

entries should have predicted all 32 first-round games correctly. But none did (5 people got 30 of the 32 games correct).
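Both back-of-envelope figures are easy to reproduce. A sketch, assuming (as in the text) forecasters who call each game correctly with probability 3/4:

```python
# Reproduce the article's two estimates for forecasters who pick each
# game correctly with probability p = 3/4.
p = 3 / 4

# (1) 5 million such entries over the full 63-game bracket:
expected_perfect_63 = 5_000_000 * p**63
print(expected_perfect_63)  # ≈ 0.067, i.e. roughly 1 chance in 14 of a winner

# (2) 4 million entries over the 32 first-round games (the ESPN contest):
expected_perfect_32 = 4_000_000 * p**32
print(expected_perfect_32)  # ≈ 400 expected perfect first rounds
```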

(2) The optimal strategy, as intuition suggests, is to predict the winner of each game to be the team you think (from personal opinion or external authority) more likely to win. For various reasons, not every contestant does this. For instance, as an aspect of a general phenomenon psychologists call probability matching, a contestant might reason that because some proportion of games are won by the underdog, they should bet on the underdog that proportion of the time. And there are other reasons (supporting a particular team; personal opinions about the abilities of a subset of teams) why a contestant might predict the higher-ranked team in most, but not all, games.

So let us imagine, as a purely hypothetical scenario, that each contestant predicts the higher-ranked team to win in all except k randomly-picked games. Then the chance that someone wins the prize is about

Pr(in fact exactly k games won by underdog) × (10 million)/C(63, k),

provided the second factor is ≪ 1. The second factor is ≈ 0.15 for k = 6 and ≈ 0.02 for k = 7. The first factor cannot be guessed -- as a student project one could get data from past tournaments to estimate it -- but is surely quite small for k = 6 or 7.

This suggests a worst-case hypothetical scenario from BH's viewpoint: that an unusually small number of games are won by the underdog, and that a large proportion of contestants forecast the higher-ranked team to win in most games. But even in this worst case it seems difficult to imagine the chance of a winner coming anywhere close to 1/50.
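The two values of the second factor quoted above come from a simple binomial-coefficient count. A sketch of the hypothetical scenario, in which the 10 million entries are spread over the C(63, k) possible patterns of k upsets:

```python
from math import comb  # exact binomial coefficients (Python 3.8+)

# Hypothetical scenario from the text: each entry picks the higher-ranked
# team except in k randomly chosen games, so the 10 million entries are
# spread over the comb(63, k) possible upset patterns.
entries = 10_000_000
for k in (6, 7):
    coverage = entries / comb(63, k)  # the "second factor" in the text
    print(f"k = {k}: {coverage:.3f}")  # 0.147 for k = 6, 0.018 for k = 7
```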

Other estimates

A brief search for other estimates of the chance that an individual skilled forecaster could win the prize finds Jeff Bergen of DePaul University asserting, in this YouTube video, a 1 in 128 billion chance, and Ezra Miller of Duke University quoted as giving a 1 in 1 billion chance. Neither source explains how these chances were calculated.

Submitted by David Aldous