Chance News 11
"Then there was the man who drowned crossing a stream with an average depth of six inches." - W.I.E. Gates
Here is a Forsooth from the December 2005 issue of RSS News.
The current rate of shrinkage they calculate at 8% per decade; at this rate there may be no ice at all during the summer of 2060
Investing in a poker player
Texas Hold'em poker is sweeping the globe as a favorite pastime of gamblers, young and old, novices and experts.
The following web site discusses a proposition from an amateur poker player to gain financial backing for entry into the 2006 World Series of Poker.
The 2006 WSOP will likely have at least 8,000 participants, each ponying up $10,000 for the buy-in. The winner could take home $10 million. I will take vacation time and travel to Las Vegas to participate in this event, using $1,000 of my own money to complete the buy-in with nine other sponsors.
Any cash winnings garnered from the tournament will be split twelve ways. Each of the nine outside sponsors will receive a 1/12th cut; I will receive a 1/6th cut, since I am taking vacation time and paying for travel and lodging. A final 1/12th of the winnings will be donated to a non-profit charity voted on by the nine outside sponsors, to be assessed equally from the nine sponsors as a tax write-off.
I am an experienced and successful online multi-table tournament player (my cashes are currently 155% of my buy-ins for the year), and I also have in-person tournament experience in Atlantic City. I recently came in first place in an online tournament against 450 players: Link to Pokah! page. This alone should demonstrate my potential for finishing in the money at the WSOP. (The top 10% are paid.)
Once I have nine sponsors, I will have an attorney draw up a binding financial contract. If for any reason I cannot attend the WSOP, all contributions will be returned to investors.
The notion of players receiving outside backing in a big-stakes tournament is not new. Most recently, though, the 2nd-place finisher in the 2005 WSOP had a 50-50 deal with one financial angel investor. Steve Dannenmann has a bachelor’s degree from the University of Baltimore and was a CPA and mortgage banker before winning $4,250,000 in the WSOP, his first ever.
A friend of Dannenmann's, Jerry Ditzell, split the $10,000 entry fee with him -- each put up half of the money. After Dannenmann won, they went to the cashier's cage at the Rio casino and split the prize. According to Dannenmann, they had no written agreement to do so; "it was a gentleman's agreement."
Can playing tournament poker be legitimately described as an "investment" (one with admittedly significant risk, but potentially high return)?
What factors would you look for to determine the attractiveness of this investment opportunity?
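One way to start on that question is a toy expected-value calculation for a single sponsor's stake. The numbers below (the probability of cashing and the average cash) are illustrative assumptions, not figures from the proposal; only the $1,000 stake and the 1/12 share come from the offer itself.

```python
# Toy expected-value sketch for one outside sponsor's $1,000 stake.
# Assumed inputs: p_cash = probability the player finishes in the money,
# avg_cash = average prize conditional on cashing. Neither is known.

def sponsor_ev(p_cash, avg_cash, stake=1000, share=1/12):
    """Expected profit for a sponsor holding a 1/12 share of winnings."""
    return p_cash * avg_cash * share - stake

# If an average entrant cashes 10% of the time (the stated payout rate)
# and the average cash were, say, $25,000:
ev = sponsor_ev(p_cash=0.10, avg_cash=25_000)
print(f"expected profit per sponsor: ${ev:,.2f}")
```

Under these made-up numbers the stake loses money; a sponsor breaks even only when the player's expected gross winnings (p_cash times avg_cash) reach $12,000, i.e. 1.2 times the buy-in, because of the charity and player shares.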
A game show for probabilists
A game show for the probability theorist in us all
New York Times, Dec. 14, A19
This article describes the new NBC game show called "Deal or No Deal." The rules are described on the NBC website as:
The rules are simple. Choose a briefcase. Then as each round progresses, you must either stay with your original briefcase choice or make a "deal" with the bank to accept its cash offer in exchange for whatever dollar amount is in your chosen case. Once you decide to accept or decline the bank's offer, the decision is final.
To fully understand the game you should play it here. Choose "game" from the options and go to the bottom of the page that comes up and choose "Start game".
The Times article observes that it is not known how the bank determines its offers. Kourlas says that, at a meeting at his house to discuss the game, some thought the decisions may be based on probability concepts such as expected values and others thought that it had "psychological--but not logical--coherence."
Of course, in the game as played on the Internet, the bank clearly has a strategy for determining the offers, and if this were known we would have an optimal stopping problem if we were interested only in the expected amount we win.
The amounts that are in the briefcases at the beginning of the game are:
(1) What is the expected amount in your initial briefcase?
(2) Assume that the banker always offers the expected value of the amounts in the remaining briefcases. Would any strategy give you a higher expected winnings than just accepting the banker's first offer?
(3) If the bank does not offer the expected amount in the remaining briefcases, what is your optimal strategy to maximize your expected winnings?
(4) Why might you not want to use expected value in deciding on your strategy for playing this game?
(5) Here is a remark from the Freakonomics Blog.
Guessing the banker's offer is fun to do. Interestingly, in the Australian and Dutch version, this task is relatively simple: the offer as a percentage of the average remaining prize increases with every round, starting from about 5% to finally 100%. This rule can explain about 95% of the variation in the offers. I wonder if the US bank uses the same rule?
Does this seem to fit what is done on the internet version of the game?
Suggested by Norton Starr and submitted by Laurie Snell.
New Form of Literary Criticism
Note: This article was discussed also by Paul Alper but there is no reason we can't have two discussions of the same article. Laurie
Da Vinci novel breaks code for success
The Guardian, Dec. 28, 2005
The discipline of statistics suffers when its practitioners venture into fields without the aid of a content expert. There is a temptation to deal with something which has popular appeal; use of multiple comparisons of easily acquired computer data can lead to inane predictions. In the past, ludicrous forecasts related the winners of presidential elections to whether the American League or the National League won the World Series, or the sexual orientation of an individual to whether a forefinger is longer than the ring finger. According to the British newspaper, The Guardian, of December 28, 2005, statisticians are now into literary criticism, or at least what makes a book a bestseller.
The team of statisticians headed by Dr. Alvai Winkler, formerly of Middlesex University, "assumes that much of success lies in the title" of the work. "Comparing these with a control group of less successful novels by the same authors, they found that the winning books had three common features: they had metaphorical, or figurative titles instead of literal ones; the first word was a pronoun, a verb, an adjective or a greeting; and their grammar patterns took the form either of a possessive case with a noun, or of an adjective and noun or of the words The ... of ..."
Dr. Winkler states: "When we tested our model on 700 titles published over 50 years, it correctly predicted whether a book was a bestseller or not for nearly 70% of cases. This is 40% better than random guesswork [(70%-50%)/50% = 40%]. It is far from perfect but given the nature of the data and the way tastes change 70% accuracy is surprisingly good." However, despite the data dredging, the article points out that Harry Potter came in at 51% and The Da Vinci Code scored only 36%. The Winkler team, in an effort to avoid having its analysis look foolish, backpedals and predicts Dan Brown "will have a real bestseller next year with The Solomon Key. Though its title structure is identical to The Da Vinci Code, they count it as figurative 'due to its reference to the Greater and Lesser Keys of Solomon, medieval books about black magic.'" In other words, for "yes," read "no."
Whether or not Dan Brown's new book approaches the financial and literary success of The Da Vinci Code will, of course, depend on The Da Vinci Code itself, and one hopes at least to some extent on what is inside the covers, and presumably on chance as well (state of the economy, natural disasters, the phases of the moon, etc.). Titles come and titles go, as in Hemingway's identical Fiesta and The Sun Also Rises, not to mention Agatha Christie's penchant for multiple naming of the same book. The reader is encouraged to look up the original title of her brilliant novel, And Then There Were None, to see how unconsciously prejudiced we used to be. Her last book, Sleeping Murder, scored 83% and was deemed "the most perfect title." Nevertheless, this Agatha Christie fan claims it can't compare with some of her earlier novels when she was in her prime, regardless of the subsequent rechristenings of the titles.
Unquestionably, under any conceivable criterion, a candidate for the dullest book title in creation would be Statistics in Britain, 1865-1930: The Social Construction of Scientific Knowledge by Donald MacKenzie. Never judge a book by its title; MacKenzie gives a fascinating presentation of the political and social mindset of the pioneers of statistics who invented, among other concepts, regression, correlation, and the t-test in order to advance a particular agenda which, embarrassingly enough, had eugenics in the forefront. Every practitioner of statistics needs to read this book.
Do you think Winkler's claim that 70% correct is 40% better than random guesswork makes sense? If not, how might you compare 70% correct to guesswork?
Submitted by Paul Alper.
The use of words like "The" sounds like a very weak predictor to me. I wish I knew enough to change from "sounds" to more sound statistical reasoning. Have Winkler et al. performed their tests of significance?
I forwarded the Da Vinci news item to the Self-Publishing YahooGroup, and a publisher replied to the following excerpt:
A trio of statisticians have developed a model to help users of the self-publishing website Lulu.com (http://www.lulu.com/uk) produce more successful books.
"Self-publishing website Lulu.com" is of course an oxymoron. Lulu is a subsidy publisher or a very high priced POD printer, depending on how you use them.
Marketing is still the road to success. Marketing in turn often depends on a low unit cost.
The whole exercise described in The Guardian can be likened to teaching pigs how to fly better during snowstorms. Separated from the Lulu connection, the advice as to titles may have some merit. But since "The Da Vinci Code" is referenced somehow, and since that title does not fit any of the recommended patterns, the whole concept is a bit suspect. -- J.C.
Another poster said:
I'm still bothered by this title "formula" (see excerpt below).
It's a recursive tautology: they're saying that they built this formula by studying bestseller titles, and so, for instance, it could pick "accurately" that the title Sleeping Murder had an 83 percent probability of producing a top bestseller. But MOST fans of Agatha Christie (or John le Carré) don't care in the least what the book is called (and the author's name is usually in bigger type on the cover -- because all Christie fans will buy all her books -- no matter the title!!). And you can't count "new" readers of Agatha Christie as being 'caught' by her title, because they are of COURSE affected by her name. (Is it even possible that anyone who reads mysteries has not heard of her?!)
And which came first - the title or the best-seller status? If they could figure out some way to, maybe, remove the effect of the author's name recognition (maybe use only titles of first books?) it MIGHT produce a better formula.
Mothers Know Best. Or Do They?
Pestering a busy statistician
Wall Street Journal, Dec. 27, 2005, A1
Anna Wilde Mathews in London and Peter Wonacott in Moradabad, India.
In a sense, we--statisticians and lay people--were spoiled by the famous study some 50 years ago relating lung cancer to cigarette smoking. Spoiled because the connection was so blatantly obvious in that smokers had a five to ten-fold increase in lung cancer over nonsmokers. Since then, most studies which tried to find culprits or saviors have produced far less striking results. A, let us say, mere 20%, as opposed to a 500 or 1000%, difference between a treatment and a placebo would be regarded nowadays as an achievement. Too many things which ought to promote health just don't seem to pan out when a careful experiment is done.
One exception would seem to be the one our mothers drilled into us: eat your vegetables! According to the Wall Street Journal article, a 1992 article in the British Medical Journal by Ram Singh claims, "Heart attack victims who ate more fiber, fruits and vegetables for a year cut their risk of death during that period by almost half." The WSJ reports that "Singh's study has been cited more than 200 times in other scientific articles and guidelines for doctors." The newspaper further states that in other journals Singh "offered eye-popping evidence about the cardiac-health benefits of a good diet" which should include "fish oil, mustard oil, zinc, magnesium," and, of course those old standbys, "fruits and vegetables." Patients at his hospital are handed a card which advises in addition to the usual fruits and vegetables, "eating papaya, walnuts, lentils" and "a glass of whiskey every other day and about 15 minutes of yoga daily."
Except perhaps for the whiskey, most of us would automatically nod our heads in agreement because it makes such good intuitive sense. Unfortunately, intuitively sensible though the advice may be, the WSJ article points out there is considerable reason to believe that Singh's results are bogus. By 1993 critics doubted that he could have conducted five distinct trials involving so many patients in such a short period. Furthermore, there were allegations that he "had tried multiple treatments simultaneously on patients and then written articles as if only one treatment was being used at a time." In addition, "He used some of the same patients in more than one study." Moreover, another paper of his to the BMJ had improper randomization, with only the younger patients getting--you guessed it--the fruits and vegetables.
When the BMJ editor asked for the raw data, Singh, who works in India, apologized because "termites had eaten crucial data." A British statistician was asked to review Singh's work and eventually concluded it was "either fabricated or falsified," and "was full of basic statistical errors and contradictions." The strangest aspect of the whole affair is the length of time the BMJ editor took to unravel the assertions made by Singh's original paper. When the editor was alerted to the alleged problems in 1993, a comedy of errors ensued. The statistician tapped to do the checking changed jobs, and thus several years went by. All the while, the editor was trying to hide his detective work from Singh. In 2002--ten years after publication--Singh "sent a copy of one of his articles with a serrated edge that he said had been gnawed by termites." Three further years went by--we were then halfway through 2005--before "the BMJ carried a headline: 'Suspicions of fraud in medical research--Who should investigate?' along with a photo of a list of Dr. Singh's publications."
Whether or not mothers know best, the mother of my child continues to believe that fruits and vegetables must be good for you despite the lack of statistical evidence. If she is typical, the public consequently feels it is a shortcoming of statistics if it can't verify the obvious. Fortunately for family harmony, unlike former President George H.W. Bush, our daughter likes broccoli. As far as we know, she isn't into whiskey or fish oil.
Submitted by Paul Alper.
Of Dice and Men
There are not many laughs in an elementary statistics class. Or for that matter, in an advanced one. Nor is it a wise strategy to admit when asked that one is a statistician. The best tactic is to be vague as in, "I work at a university," followed by "I am a teacher" and only then when pressed, reluctantly reveal that the discipline is "statistics." The common response from the interrogator is either stunned silence, repugnance or outright hostility. Stephen Senn's Dicing with Death [Cambridge University Press, 2003] opens with a similar incident from his youth. A young woman asks him "What are you studying?" and all he can answer is statistics instead of anything remotely romantic such as Russian or drama or even physics.
Although the attitude towards statistics on the part of the uninformed may not have progressed, Senn has moved up enough in the corporate and academic world to write an amusing, iconoclastic book about the subject, managing to combine humor and deep insight. His specialty is pharmaceutical clinical trials, "experiments on human beings to establish the effects of drugs," a subtopic of statistics not known for chuckles. However, he delights in puns and the history of the subject. The puns are, by and large, juvenile--the title of this wiki is stolen from page 159 of his book-- but my impression is that this will make the book even more appealing to the general public and to his fellow practitioners who tend to share his sense of humor. For example, the title of the second chapter is, "The Diceman Cometh", a pun on O'Neill's play, The Iceman Cometh; a history of the mathematician Siméon-Denis Poisson and his contribution to probability and statistics is coupled with a section called, "Other fish to fry;" the astronomer Edmond Halley is famous for his comet but he also did pioneering work regarding life tables so Senn has a section entitled, "Orbits to obits;" "Praying for reign" is the title of the section regarding Francis Galton's study of the longevity of the royal family when prayed for; the next to last page has "The lore of large numbers" which is a pun on the mathematical notion, the law of large numbers.
Notwithstanding the jokes, Senn has accomplished the unattainable. He has composed an enjoyable, indeed grippingly funny, book that contains many nuggets of insight both for the statistician and the public in general. He is unabashedly enthusiastic about statistics and its place in the universe: "If you think that statistics has nothing to say about what you do or how you could do it better, then you are either wrong or in need of a more interesting job." The book's strength is not only his passionate love affair with the subject but also his refusal to be swayed by the argument, attributed to Stephen Hawking, that every equation cuts a book's sales in half.
Judging by how difficult it was to get a copy from my local library system, I can only assume that Hawking got it right. The public is poorer for it not being readily available. Only someone born and raised in Switzerland would attempt to decipher all the feuding Bernoullis with their confusingly repetitive first names; or correct Orson Welles' famous Harry Lime speech in The Third Man--the one about the cuckoo clocks which, according to Senn, originated in the Black Forest of Germany and not in Switzerland. The public should also be conversant with his description of two statistical issues, Simpson's paradox and regression to the mean, because of their importance to clinical trials, a subject of ever-increasing importance.
The Simpson's paradox example given by Senn has to do with diabetes; the (aggregated) percentage of non-insulin dependent patients who die is larger than the (aggregated) percentage of insulin dependent patients who die, yet when a third variable, age (above or below 40 years of age) is introduced, the inequality, now that the data is disaggregated, turns the other way regardless of age.
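The reversal Senn describes can be shown with a small table of counts. The numbers below are made up for illustration (they are not the actual data from the book), but they reproduce the structure: insulin-dependent patients fare worse within each age group, yet look better in the aggregate, because insulin-dependent diabetes strikes mostly the young, who die less often overall.

```python
# Hypothetical (deaths, patients) counts illustrating Simpson's paradox.
groups = {
    ("under 40", "insulin"):     (10, 500),
    ("under 40", "non-insulin"): (1, 100),
    ("40 and over", "insulin"):     (30, 100),
    ("40 and over", "non-insulin"): (130, 500),
}

# Disaggregated: insulin-dependent patients do WORSE in both age groups.
for age in ("under 40", "40 and over"):
    d_i, n_i = groups[(age, "insulin")]
    d_n, n_n = groups[(age, "non-insulin")]
    print(f"{age}: insulin {d_i/n_i:.1%} vs non-insulin {d_n/n_n:.1%}")
    assert d_i / n_i > d_n / n_n

# Aggregated: insulin-dependent patients appear to do BETTER.
ins_d = sum(groups[(a, "insulin")][0] for a in ("under 40", "40 and over"))
ins_n = sum(groups[(a, "insulin")][1] for a in ("under 40", "40 and over"))
non_d = sum(groups[(a, "non-insulin")][0] for a in ("under 40", "40 and over"))
non_n = sum(groups[(a, "non-insulin")][1] for a in ("under 40", "40 and over"))
print(f"aggregated: insulin {ins_d/ins_n:.1%} vs non-insulin {non_d/non_n:.1%}")
assert ins_d / ins_n < non_d / non_n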
This reversal comes as a shock to most people who can't conceive that such a phenomenon could possibly exist. Regression to the mean, on the other hand, is a much talked about topic even if it is mangled in the press. Regression to the mean, as Senn points out, implies that it is not wise to do a before and after study of a medical procedure without a placebo arm; the patients most (least) ill will do better (worse) on average merely because of the regression to the mean and not necessarily because of the medication.
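Senn's warning about before/after studies is easy to demonstrate by simulation. In the sketch below (with arbitrary, made-up parameters), each patient's score is true severity plus measurement noise; we select the worst-scoring third at baseline and remeasure them with no treatment at all, and their average still improves.

```python
import random

# Regression to the mean: select the sickest-looking patients at
# baseline and they improve on remeasurement without any treatment.
rng = random.Random(0)
n = 10_000
true_severity = [rng.gauss(100, 10) for _ in range(n)]
before = [t + rng.gauss(0, 10) for t in true_severity]   # noisy baseline
after = [t + rng.gauss(0, 10) for t in true_severity]    # noisy follow-up

# "Treat" the worst-scoring third at baseline (highest scores).
cutoff = sorted(before)[int(0.67 * n)]
worst = [i for i in range(n) if before[i] >= cutoff]

mean_before = sum(before[i] for i in worst) / len(worst)
mean_after = sum(after[i] for i in worst) / len(worst)
print(f"worst third, before: {mean_before:.1f}  after: {mean_after:.1f}")
# The follow-up mean is several points lower: apparent improvement
# produced by selection and noise alone.
```

A placebo arm, selected the same way, would show the same "improvement," which is exactly why it is needed as a comparison.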
There is much more to the book including a trashing of the opposition to the MMR vaccine in Britain and a defense of the company that made breast implants. His analysis indicates that the public fails to appreciate--due in good part to trial lawyers who stand to make a fortune on litigation-- that problems attributed to a treatment such as a vaccine or a breast implant require comparisons with those who are not treated. Senn manages to supply substance and interesting trivia and is possibly the one statistics book a lay reader ought to enjoy on many levels.
Submitted by Paul Alper.
Da Vinci novel breaks code for success
Da Vinci novel breaks code for success, John Ezard, December 28, 2005, The Guardian.
A trio of statisticians have developed a model to help users of the self-publishing website Lulu.com produce more successful books. The article explains that the group compared 54 years of fiction number ones from the New York Times and the 100 favourite novels in the BBC's Big Read poll with a control group of less successful novels by the same authors. They found that the best books had three common features:
- They had metaphorical, or figurative titles instead of literal ones.
- The first word was a pronoun, a verb, an adjective or a greeting.
- Their grammar patterns took the form either of a possessive case with a noun, or of an adjective and noun or of the words 'The ... of ...'.
Successful books based on this formula include Agatha Christie's last thriller Sleeping Murder (1976) and Philip Pullman's His Dark Materials.
The article quotes the model's lead author, Dr Alvai Winkler:
When we tested our model on 700 titles published over 50 years, it correctly predicted whether a book was a bestseller or not for nearly 70% of cases. This is 40% better than random guesswork. It is far from perfect but given the nature of the data and the way tastes change 70% accuracy is surprisingly good.
However, the formula fares rather badly with some well-known books. The article highlights that the Harry Potter books have a low score because their titles count as literal, though with correct grammar patterns. The Da Vinci Code is written off for being literal, as is Catch-22 and Dickens' Bleak House and a number of others.
The article finishes by advising Dan Brown to take heart. The Lulu team predicts he will have a real bestseller next year with The Solomon Key. Though its title structure is identical to The Da Vinci Code, they count it as figurative "due to its reference to the Greater and Lesser Keys of Solomon, medieval books about black magic".
- What justification is there in relying solely on the book's title to predict sales?
- Assuming the model works well for bestsellers, does that imply that it will work equally well for self-published books?
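Winkler's "40% better than random guesswork" appears to come from (70-50)/50. A more standard way to discount chance agreement is Cohen's kappa, and the comparison also depends on the class balance of the 700 test titles, which the article does not report. The sketch below illustrates both points with an assumed 50/50 split and an assumed 80/20 split.

```python
# Cohen's kappa: agreement beyond chance, rescaled so 1 is perfect.
def kappa(accuracy, chance):
    return (accuracy - chance) / (1 - chance)

# If the test set were split 50/50 between bestsellers and others,
# guesswork is right half the time:
print(f"kappa (50/50 split) = {kappa(0.70, 0.50):.2f}")

# But if, say, 80% of the titles were non-bestsellers (an assumed
# figure), always guessing "not a bestseller" already scores 80%,
# and 70% accuracy is WORSE than the naive baseline:
print(f"kappa (80/20 split) = {kappa(0.70, 0.80):.2f}")
```

With a 50/50 split, kappa happens to equal 0.40 as well, since (0.70-0.50)/(1-0.50) and (0.70-0.50)/0.50 coincide when chance is one half; with a skewed split the picture changes entirely, which is why the base rate matters.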
Submitted by John Gavin.
Laughter in the Supreme Court
The Green Bag, Autumn 2005
Jay D. Wexler
The Green Bag, a publication that describes itself as "an entertaining journal of law," published an analysis of the relative humorousness of the nine United States Supreme Court Justices. This would seem to be an impossible task, but Jay Wexler found some solid data to support his investigation.
Fortunately, however, scholars of the Court now possess some hard data that can help us determine, in a more or less scientific fashion, the relative funniness of the Justices. Prior to the most recent term, transcripts of oral arguments held at the Court did not refer to the questioning Justice by name, instead merely identifying each Justice’s inquiry or remarks by the word “Question.” In the 2004–2005 term, however, for the first time, the Court Reporter started revealing the names of the speaking Justices. Because the Court Reporter also indicates, with the notation "(Laughter)," when the courtroom has reached a certain level of mirth, it is now possible to determine how many times during the term any particular Justice’s comments induced a substantial amount of laughter.
The author acknowledges that the use of the notation "(Laughter)" is indeed subjective and ponders whether there is bias in the data.
It will be suggested here, by skeptical readers, that the Court Reporter may be biased in favor of or against one Justice or another, thus rendering any reliance on his notations unreliable. This may or may not be true, but I will not pursue the point. I considered calling the Court Reporter and asking whether he is in fact biased in favor of or against one Justice or another, but I mean, come on, what do you think he’s going to say?
and also mentions another limitation
Of course, this methodology is far from perfect. For one thing, the Court Reporter does not distinguish between types of laughter, either in terms of duration or intensity; a quip that has resulted in a series of small chuckles, in other words, may count just as heavily in this methodology as a joke that brought down the house. Nor does the Court Reporter distinguish between the genuine laughter brought about by truly funny or clever humor and the anxious kind of laughter that arises when one feels nervous or uncomfortable or just plain scared for the nation’s future.
The results are quite striking. The number of laughter episodes, by Justice, is:
- Scalia 77,
- Breyer 45,
- Kennedy 21,
- Souter 19,
- Rehnquist 12,
- Stevens 8,
- O'Connor 7,
- Ginsburg 4,
- Thomas 0
Apparently Justice Scalia has a reputation for a sharp wit, and this data seems to support that opinion.
The author provides an additional analysis that corrects for the fact that not all Justices attended all oral arguments by computing a LEIPAA statistic (Laughter Episodes Instigated Per Argument Average). Justice Rehnquist, in particular, missed a large number of oral arguments during his unsuccessful battle against cancer.
The author does not mention that some Justices talk more during oral arguments than others. In particular, Justice Thomas almost never says anything during oral arguments. See "Jurist Mum Come Oral Arguments," The Washington Post, October 11, 2004.
1. The author does not try to assess whether the differences in counts could be explained by sampling error. What sort of statistical model would you use to test the hypothesis that all nine Justices provoke laughter in equal amounts?
2. How would you adjust this model to account for the difference in speaking times by the nine Justices?
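One simple answer to question 1 is a chi-square goodness-of-fit test of the hypothesis that all nine Justices provoke laughter at the same rate. The sketch below ignores differing attendance and speaking time (the adjustments questions 1 and 2 ask about), so it is only a first rough check.

```python
# Chi-square goodness-of-fit test on the laughter counts above,
# against the hypothesis of equal rates for all nine Justices.
counts = {"Scalia": 77, "Breyer": 45, "Kennedy": 21, "Souter": 19,
          "Rehnquist": 12, "Stevens": 8, "O'Connor": 7, "Ginsburg": 4,
          "Thomas": 0}

total = sum(counts.values())           # 193 laughter episodes in all
expected = total / len(counts)         # about 21.4 each if rates equal
chi2 = sum((obs - expected) ** 2 / expected for obs in counts.values())
print(f"chi-square statistic: {chi2:.1f} on {len(counts) - 1} df")

# The 5% critical value for 8 degrees of freedom is 15.51, so the
# equal-rates hypothesis is overwhelmingly rejected.
assert chi2 > 15.51
```

The statistic comes out near 228, far beyond the critical value, so sampling error alone cannot plausibly explain the spread; the interesting modeling question is how much of the spread survives once speaking time is accounted for.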
Submitted by Steve Simon.
Drawing interface for Flickr
Drawing interface for Flickr, David Pescovitz, boingboing blog, January 4, 2006.
Retrievr is a web-based tool that searches a database of Flickr images based on a rough sketch. The search tool relies on wavelet transformations of images and a statistical regression model, called the logit model.
Wavelets are used to analyze functions at different levels of detail. They are similar to the Fourier transform, but they encode both frequency and spatial information. By saving the 20 largest wavelet coefficients for an image (and throwing away all of the smaller coefficients), it is possible to create a very small signature for each image. When a sample image is matched against a database of images, their signatures are compared to find the 'most similar' database images to the sample image, using the logit model. This webpage gives a high-level overview.
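The signature idea can be sketched in one dimension with the Haar wavelet, the simplest wavelet basis. This is a toy version of the core trick only: Retrievr works on 2-D images and keeps more coefficients, and the example signals below are made up.

```python
# 1-D Haar decomposition plus a truncated signature: keep only the
# positions and signs of the k largest-magnitude detail coefficients.

def haar(signal):
    """Standard 1-D Haar transform; length must be a power of two."""
    coeffs = list(signal)
    n = len(coeffs)
    while n > 1:
        half = n // 2
        avgs = [(coeffs[2 * i] + coeffs[2 * i + 1]) / 2 for i in range(half)]
        diffs = [(coeffs[2 * i] - coeffs[2 * i + 1]) / 2 for i in range(half)]
        coeffs[:n] = avgs + diffs
        n = half
    return coeffs

def signature(signal, k):
    """Indices and signs of the k largest detail coefficients
    (index 0, the overall average, is excluded)."""
    coeffs = haar(signal)
    ranked = sorted(range(1, len(coeffs)), key=lambda i: -abs(coeffs[i]))
    return {i: (1 if coeffs[i] > 0 else -1) for i in ranked[:k]}

# Two made-up 'images' (rows of pixel intensities): similar signals
# share most signature entries, so counting overlaps scores similarity.
a = [9, 9, 8, 1, 1, 2, 1, 1]
b = [9, 8, 8, 2, 1, 1, 2, 1]
sig_a, sig_b = signature(a, 3), signature(b, 3)
overlap = sum(1 for i, s in sig_a.items() if sig_b.get(i) == s)
print(f"shared signature entries: {overlap} of 3")
```

Because only a handful of (index, sign) pairs are stored per image, millions of signatures fit in memory and can be compared quickly, which is the point of the truncation.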
In their technical paper, the authors explain that the idea behind the logit model is that there exists some underlying continuous variable, such as the “perceptual closeness” of two images, that is difficult to measure directly. This continuous variable is used to choose a binary (positive/negative) outcome, such as “image T is the intended target of the query Q”, which can be easily measured. The logit model provides weights, which can be used to compute the probability that a given input variable produces a positive outcome. These probabilities are used to rank targets in order of decreasing probability of a match.
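The ranking step described above can be sketched as follows. The weights, intercept, and per-candidate feature scores here are made up for illustration; in the paper they are fitted from recorded query/target pairs, and the feature names are my own invention.

```python
import math

def logistic(x):
    """The logistic function maps a real-valued score to (0, 1)."""
    return 1 / (1 + math.exp(-x))

def match_probability(features, weights, intercept):
    """P(candidate is the intended target) under an assumed logit model."""
    score = intercept + sum(w * f for w, f in zip(weights, features))
    return logistic(score)

# Hypothetical features per candidate, e.g. counts of matching wavelet
# signature entries in two frequency bands (names and values assumed).
candidates = {"imgA": [12, 3], "imgB": [4, 1], "imgC": [9, 5]}
weights, intercept = [0.4, 0.8], -6.0   # assumed, not the paper's values

ranked = sorted(candidates,
                key=lambda c: -match_probability(candidates[c], weights, intercept))
for c in ranked:
    p = match_probability(candidates[c], weights, intercept)
    print(f"{c}: P(match) = {p:.3f}")
```

The probabilities themselves need not be calibrated for the tool to work; only their ordering matters, since the user is shown the top-ranked database images.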
When Pescovitz was asked if the model worked, he commented
Yes! That is, it depends. (Mainly on your expectations!) In my experience, the results are usually fairly good, sometimes even stunning - considering the artistic sophistication most of us are able to come up with (gallery forthcoming); and in the cases they're not so stellar, they are at least entertaining ;-) But clearly, the approach has its limits.
He goes on to say
One thing to keep in mind is that retrievr doesn't do object/face/text recognition of any kind, so if you're drawing an outline sketch of a chair, it almost certainly won't get you one back (except your index only contains images of chairs). The same holds for corporate logos, icons &c. It helps to think of it as matching the most pronounced shapes and slabs of colors.
The authors computed the probabilities for 85 positive-match images and 8,500 randomly chosen mismatches, using SAS, in about 30 seconds on an IBM RS/6000 machine. They comment
[the weights] appear to give very good results, and they can be computed much more quickly than performing a multidimensional continuous optimization directly.
You can try sketching an image and seeing the database matches for yourself.
- Do the algorithm's suggestions match your expectations of what similar images should look like?
- Is your lack of drawing skill the reason for the lack of fit, or is it just a matter of interpretation?
- Fast Multiresolution Image Querying (non-technical) - webpage giving a high-level overview of the search algorithm.
- Fast Multiresolution Image Querying (technical), Charles E. Jacobs, Adam Finkelstein, David H. Salesin, Department of Computer Science and Engineering, University of Washington. Appendix A (page 10) explains how the weights in the model were chosen. In particular, the authors explain why they chose a logit model to approximate a far too slow multi-dimensional continuous optimization.
Submitted by John Gavin.