Chance News 54
==Quotations==
<blockquote>Do not put your faith in what statistics say until<br> you have carefully considered what they do not say.</blockquote><div align=right>William W. Watt</div>


==Forsooths==
<blockquote>Duke Energy customers voiced their concerns on Thursday night about a planned 13.5 percent rate increase. Under the plan, people with an average monthly bill of 100 dollars a month would go up about 18 dollars.</blockquote>
<div align=right><i>WRAL TV 5</i>[http://www.wral.com/news/news_briefs/story/5978240/], Raleigh, NC, September 11, 2009</div>
-----
 
<blockquote>The AMA is for Obama’s health plan, but only 29% of doctors belong to the AMA, so 71% are opposed to Obama’s health plan.</blockquote>
 
This was a comment from an audience member at a town hall meeting with Senator Mark Warner on September 3, 2009.  The meeting was telecast by C-SPAN3.<br>
 
-----
The next two Forsooths are from the September 2009 RSS News
 
<blockquote>I must write again about the misleading adverts by  GMPTE in the papers re the Congestion Charge. In their latest round of propaganda they state there will be a 10 per cent increase in bus services. With 10 councils in Greater Manchester this works out at a one percent increase per council. If Stockport’s bus companies run 200 buses in the morning peak, a one per cent increase will give two extra buses; is that what you want?</blockquote>
 
<div align=right>Letter to Stockport Express<br>
26 November 2008</div>
 
<blockquote>People who have personalised number plates on their cars are most likely to live in Scotland, a survey has found.</blockquote>
 
<div align=right>BBC News Scotland<br>
11 January 2009</div>
 
==Keynes' Game for professional investments==
 
Jeff Norman told us about another interesting game. 
The game was described in terms of professional investment by the famous British economist John Maynard Keynes in his book <i>The General Theory of Employment, Interest and Money</i> (1936). Here he writes:
 
<blockquote>Professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preference of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.</blockquote>
 
Keynes used this game in his argument against the efficient-market hypothesis (EMH), which is defined at [http://www.answers.com/topic/efficient-market-hypothesis Answers.com] as:
 
<blockquote>An investment theory that states that it is impossible to "beat the market" because stock market efficiency causes existing share prices to always incorporate and reflect all relevant information. According to the EMH, this means that stocks always trade at their fair value on stock exchanges, and thus it is impossible for investors to either purchase undervalued stocks or sell stocks for inflated prices. Thus, the crux of the EMH is that it should be impossible to outperform the overall market through expert stock selection or market timing, and that the only way an investor can possibly obtain higher returns is by purchasing riskier investments.</blockquote>

The efficient-market hypothesis is a controversial subject and is discussed on many websites. We can see this in an [http://www.investorsinsight.com/blogs/thoughts_from_the_frontline/archive/2009/08/07/six-impossible-things-before-breakfast.aspx article] by John Mauldin, president of Millennium Wave Advisors, LLC, a registered investment advisor. There you will also find more about Keynes' game and its relation to the EMH.
 
We read [http://www.investorsinsight.com/blogs/thoughts_from_the_frontline/archive/2009/08/07/six-impossible-things-before-breakfast.aspx here] that Keynes' game can easily be replicated by asking people to pick a number between 0 and 100 and telling them that the winner will be the person who picks the number closest to two-thirds of the average number picked. Mauldin continues: "The chart below shows the results from the largest incidence of the game that I have played - in fact the third largest game ever played, and the only one played purely among professional investors."
 
<center> http://www.investorsinsight.com/cfs-file.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/thoughts_5F00_from_5F00_the_5F00_frontline/jm080709image010_5F00_2F080074.jpg </center>
 
The highest possible correct answer is 67. To go for 67 you have to believe that every other muppet in the known universe has just gone for 100. The fact we got a whole raft of responses above 67 is more than slightly alarming.
 
You can see spikes which represent various levels of thinking. The spike at fifty reflects what we (somewhat rudely) call level zero thinkers. They are the investment equivalent of Homer Simpson, 0, 100, duh 50! Not a vast amount of cognitive effort expended here!
 
There is a spike at 33 - of those who expect everyone else in the world to be Homer. There's a spike at 22, again those who obviously think everyone else is at 33. As you can see there is also a spike at zero. Here we find all the economists, game theorists and mathematicians of the world. They are the only people trained to solve these problems backwards. And indeed the only stable Nash equilibrium is zero (two-thirds of zero is still zero). However, it is only the 'correct' answer when everyone chooses zero.
 
The final noticeable spike is at one. These are economists who have (mistakenly...) been invited to one dinner party (economists only ever get invited to one dinner party). They have gone out into the world and realised the rest of the world doesn't think like them. So they try to estimate the scale of irrationality. However, they end up suffering the curse of knowledge (once you know the true answer, you tend to anchor to it). In this game, which is fairly typical, the average number picked was 26, giving a two-thirds average of 17. Just three people out of more than 1000 picked the number 17.
 
I play this game to try to illustrate just how hard it is to be just one step ahead of everyone else - to get in before everyone else, and get out before everyone else. Yet despite this fact, it seems to be that this is exactly what a large number of investors spend their time doing.
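
The arithmetic behind these spikes is easy to reproduce. Below is a minimal sketch in Python (the guesses are invented for illustration, not Mauldin's data) that scores a round of the game and walks the level-k ladder down from 50 toward the Nash equilibrium at zero:

<pre>
# The "guess two-thirds of the average" game described above.
# These guesses are illustrative only, not Mauldin's data.
guesses = [100]*5 + [67]*5 + [50]*30 + [33]*20 + [22]*12 + [17]*3 + [1]*10 + [0]*15

average = sum(guesses) / len(guesses)
target = (2 / 3) * average
winner = min(guesses, key=lambda g: abs(g - target))
print(f"average = {average:.1f}, target = {target:.1f}, winning guess = {winner}")

# Level-k reasoning: a level-0 player guesses 50; each higher level
# best-responds with two-thirds of the level below, which converges
# to the game's unique Nash equilibrium at 0.
guess = 50.0
for level in range(6):
    print(f"level {level}: guess {guess:.1f}")
    guess *= 2 / 3
</pre>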
 
<b>Additional Reading</b>
 
(1) [http://www.westga.edu/~bquest/2002/market.htm The Efficient Market Hypothesis on Trial: A Survey], by Philip S. Russel and Violet M. Torbey<br>
 
(2) <i>A Mathematician Plays the Stock Market</i>, by John Paulos
 
John Paulos wrote us: I discussed Keynes' game and the 80% (or 66.66%) game in my book, <i>A Mathematician Plays the Stock Market</i>. I also wrote about the efficient market paradox, a kind of market analogue of the liar paradox: The Efficient Market Hypothesis is true if and only if a sufficient number of investors believes it to be false.<br>
 
Submitted by Laurie Snell
 
==Measuring Emotion on the Web==
 
[http://www.nytimes.com/2009/08/24/technology/internet/24emotion.html "Mining the Web for Feelings, Not Facts"]<br>
by Alex Wright, <i>The New York Times</i>, August 23, 2009<br>
 
There's a lot of data on the web, but it isn't data in the numeric sense.
 
<blockquote>The rise of blogs and social networks has fueled a bull market in personal opinion: reviews, ratings, recommendations and other forms of online expression.</blockquote>
 
There are serious reasons to sift through this data.
 
<blockquote>For many businesses, online opinion has turned into a kind of virtual currency that can make or break a product in the marketplace. Yet many companies struggle to make sense of the caterwaul of complaints and compliments that now swirl around their products online.</blockquote>
 
A new methodology, sentiment analysis, attempts to summarize the positive and negative emotions associated with these reviews and ratings.
 
<blockquote>Jodange, based in Yonkers, offers a service geared toward online publishers that lets them incorporate opinion data drawn from over 450,000 sources, including mainstream news sources, blogs and Twitter. Based on research by Claire Cardie, a Cornell computer science professor, and her students, the service uses a sophisticated algorithm that not only evaluates sentiments about particular topics, but also identifies the most influential opinion holders.</blockquote>
 
<blockquote>In a similar vein, <i>The Financial Times</i> recently introduced Newssift, an experimental program that tracks sentiments about business topics in the news, coupled with a specialized search engine that allows users to organize their queries by topic, organization, place, person and theme. Using Newssift, a search for Wal-Mart reveals that recent sentiment about the company is running positive by a ratio of slightly better than two to one. When that search is refined with the suggested term “Labor Force and Unions,” however, the ratio of positive to negative sentiments drops closer to one to one.</blockquote>
 
This work isn't easy.
 
<blockquote>Translating the slippery stuff of human language into binary values will always be an imperfect science, however. "Sentiments are very different from conventional facts," said Seth Grimes, the founder of the suburban Maryland consulting firm Alta Plana, who points to the many cultural factors and linguistic nuances that make it difficult to turn a string of written text into a simple pro or con sentiment. "'Sinful' is a good thing when applied to chocolate cake," he said.</blockquote>
 
<blockquote>The simplest algorithms work by scanning keywords to categorize a statement as positive or negative, based on a simple binary analysis ("love" is good, "hate" is bad). But that approach fails to capture the subtleties that bring human language to life: irony, sarcasm, slang and other idiomatic expressions. Reliable sentiment analysis requires parsing many linguistic shades of gray.</blockquote>
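
To see what the simplest approach looks like in practice, here is a minimal keyword scorer in Python; the word lists are tiny illustrative assumptions, not any vendor's actual lexicon:

<pre>
# Minimal binary keyword sentiment scorer of the kind described above.
# The two lexicons are illustrative samples, not a real product's lists.
POSITIVE = {"love", "great", "good", "excellent", "recommend"}
NEGATIVE = {"hate", "bad", "terrible", "awful", "broken"}

def sentiment(text):
    words = text.lower().replace(",", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this phone, great battery"))  # positive
print(sentiment("Sinful chocolate cake"))             # neutral
</pre>

Note how the second example illustrates Grimes's caveat: "sinful" carries positive sentiment for chocolate cake, but a keyword list has no way to know that.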
 
Submitted by Steve Simon
 
<b>Questions</b>
 
1. No algorithm is going to be perfect, but some may provide sufficient accuracy to be useful. How would you measure the accuracy of a sentiment algorithm? How would you decide whether the accuracy was sufficient for your needs?<br>
 
==Assigning points to books==
 
[http://www.nytimes.com/2009/08/30/books/review/Straight-t.html "Reading by the Numbers"]<br>
by Susan Straight, <i>The New York Times</i>, August 27, 2009<br>
 
There's a program in many schools to encourage reading. But some people don't like it.
 
<blockquote>At back-to-school night last fall, I was prepared to ask my daughter’s eighth-grade language arts teacher about something that had been bothering me immensely: the rise of Accelerated Reader, a 'reading management' software system that helps teachers track student reading through computerized comprehension tests and awards students points for books they read based on length and difficulty, as measured by a scientifically researched readability rating. When the teacher announced during the class presentation that she refused to use the program, I almost ran up and hugged her.</blockquote>
 
The problem, according to Ms. Straight, is that the system does not give enough credit for the classics.
 
<blockquote>Many classic novels that have helped readers fall in love with story, language and character are awarded very few points by Accelerated Reader. <i>My Antonia</i> is worth 14 points, and <i>Go Tell It on the Mountain</i> 13. The previous school year, my daughter had complained that some of her reading choices that I thought were pretty audacious — long, well-written historical novels like Libba Bray’s <i>Great and Terrible Beauty</i> and Lisa Klein’s <i>Ophelia</i>, recommended by her college-age sister — were worth only 14 points each. <i>Sense and Sensibility</i> is worth 22.</blockquote>
 
Indeed the article has a clever graphic showing a handwritten equation "<i>Sense + Sensibility</i> = 22". Instead of giving points to the classics, the system assigns heavy point totals to the Harry Potter books.
 
<blockquote><i>Harry Potter and the Order of the Phoenix</i> topped out at 44 points, while <i>Harry Potter and the Deathly Hallows</i> and <i>Harry Potter and the Goblet of Fire</i> were worth 34 and 32.</blockquote>
 
The points are assigned using a formula system (ATOS).
 
<blockquote>ATOS employs the three statistics that researchers have found to be most predictive of reading difficulty: the number of words per sentence, the number of characters per word, and the average grade level of the words in the book. [http://doc.renlearn.com/KMNet/R003520002GE7114.pdf]</blockquote>
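
The first two of those statistics are straightforward to compute. Here is a small sketch in Python; it is only a rough illustration (the sentence splitter is naive, and the actual ATOS weighting of these inputs is not given in the cited document), and the third input, average word grade level, would require a graded word list that we do not have here:

<pre>
import re

# Two of the three ATOS inputs named above: words per sentence and
# characters per word. The naive regex splitting is an illustration,
# not the method ATOS itself uses.
def text_stats(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / len(sentences), sum(len(w) for w in words) / len(words)

sample = "The boat sailed at dawn. It was a long, cold morning on the water."
wps, cpw = text_stats(sample)
print(f"words per sentence = {wps:.1f}, characters per word = {cpw:.2f}")
</pre>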
 
This formula does have its problems. A Wikipedia article notes that
 
<blockquote>The Accelerated Reader's method for determining grade level is critically flawed. <i>Lord of the Flies</i> is considered 5th grade level and James Joyce's <i>Ulysses</i> is considered 7th grade level. There is more to a grade level of a book than the word count and length. <i>Lord of the Flies</i> and <i>Ulysses</i> score at very younger audience levels in the simplistic AR rubric, yet most 5th and 7th graders would not and probably could not read these books because of the story and narrative structure which is much more mature than the AR mathematical reduction of word count and length.
[http://en.wikipedia.org/wiki/Accelerated_Reader]</blockquote>
 
This issue is acknowledged in an article written by the company that produces the Advanced Learning System, republished at a school web site.
 
<blockquote>Advances in technology and statistical analysis have led to improvements in the science of readability, but there are still some things that readability formulas cannot do—and will never be able to do. All readability formulas produce an estimate of a book’s difficulty based on selected variables in the text, but none analyzes the suitability of the content or the literary merit for individual readers. This decision is up to educators and parents, who know best what content is appropriate for each student. [http://www.fresno.k12.ca.us/technology/ar/documents/Readability.pdf]
</blockquote>
 
Submitted by Steve Simon
 
<b>Questions</b>
 
1. Is it wrong to assign more points to a Harry Potter book than a Jane Austen book?<br>
 
2. Could the ATOS formula be adapted to give greater weight to the classics? How?<br>
 
==Earthquake probability maps==
[http://geology.com/news/2007/01/earthquake-probability-maps.html "Geology News - Earth Science Current Events"] refers interested readers to a website created by the U.S. Geological Survey, which enables people to create earthquake probability maps for specific regions.  ASCII files of raw data used in creating the maps are also available.<br>
 
==Conditional probability in search and rescue==
 
[http://www.chron.com/disp/story.mpl/metropolitan/6596008.html “Coast Guard looks for lessons in Matagorda miracle”]<br>
by Jennifer Latson, Houston Chronicle, September 1, 2009<br>
 
Three missing fishermen were found by a recreational yachtsman after 45 flights, 250 hours, 6 days, and 86,000 square miles of searching by the U.S. Coast Guard.  The Coast Guard began the search on Saturday, August 22 and suspended it on Friday, August 28; the yachtsman found the fishermen on Saturday, August 29.
 
<blockquote>The Coast Guard's search and rescue system, built on probability statistical models, lists the average probability of being detected from a plane as 78 percent. ….  [T]he crew made the difficult decision to suspend the search on Friday. A strict reading of the probability models might have called off the search as early as Wednesday. But searchers were hopeful, swayed by the pleas of relatives who said the men were skilled survivalists: outdoorsmen, fishers and hunters.</blockquote>
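
A small calculation suggests why the phrase needs unpacking. If we read the 78% figure as the probability that a single flight detects a target in its search area, and we assume flights are independent (both readings are our assumptions, not the Coast Guard's stated model), detection over repeated flights compounds quickly:

<pre>
# P(at least one detection in n flights) = 1 - (1 - p)**n, under the
# illustrative assumptions that p is a per-flight probability and
# that flights are independent.
p = 0.78
for n in (1, 2, 3, 5):
    print(f"{n} flights: P(detected) = {1 - (1 - p) ** n:.3f}")
</pre>

That these numbers approach certainty so quickly, while real searches can fail after 45 flights, hints that the 78% must be conditioned on something else (for instance, on the target actually being in the area searched), which is the point of the discussion questions below.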
 
<b>Discussion</b><br>
 
1.  What do you think that the "<i>average probability</i> of being detected from a plane" refers to – one flight, one time period, one geographic region, one rescue mission, one person, one boat, …? 
 
2.  Suppose that the Coast Guard only measures the relative frequency of finding people or boats after it receives reports of alleged missing boaters.  Can you see any benefit to including, in their calculations, simulations with people or boaters randomly placed in bodies of water?
 
3.  Based on the Matagorda experience, would you recommend that the Coast Guard break its “average probability of being detected from a plane” into two conditional probabilities?  What would those be?<br> 
 
4.  Do you think that the Coast Guard <i>could</i>, <i>would</i>, or <i>should</i> have extended its search even more, if the probability of detecting the fishermen had been higher, based on the fact that the men were “skilled survivalists”?<br>
 
==Confidence in hurricane predictions==
 
[http://www.cpc.ncep.noaa.gov/products/outlooks/hurricane2009/August/hurricane.shtml “NOAA: 2009 Atlantic Hurricane Season Outlook Update”]<br>
Issued August 6, 2009<br>
 
Predictions for the 2009 Atlantic hurricane season were produced by the National Oceanic and Atmospheric Administration for the Atlantic hurricane region.<br>
 
<blockquote>….  This combination of climate factors indicates a 50% chance of a near-normal hurricane season for 2009, and a 40% chance of a below normal season. An above-normal season is not likely ....<br>
 
The outlook indicates a 70% probability for each of the following seasonal ranges: 7-11 named storms, 3-6 hurricanes, 1-2 major hurricanes ….<br>
 
These predicted ranges have been observed in about 70% of past seasons having similar climate conditions to those expected this year. They do not represent the total range of activity seen in those past seasons.</blockquote>
 
<b>Discussion</b><br>
 
1.  How likely is an above-normal season?<br>
 
2.  Statistically speaking, would you call the 70% figure a <i>probability</i>?  What would you have called it?<br>
 
3.  Based on the given seasonal range of named storms, can you estimate the (arithmetic) mean number of named storms and the standard error of the measurement?<br>
 
==Short-term probability in lawsuit==
[http://www.reuters.com/article/pressRelease/idUS201498+31-Aug-2009+BW20090831 “... Filing of Class Action Lawsuit Against … ProShares Fund”]<br>
Reuters, August 31, 2009<br>
 
A law firm filed a class action lawsuit in Maryland on behalf of shareholders in the UltraShort Financials ProShares Trust Fund (SKF).  DJFI refers to the Dow Jones Financial Index.  Here are excerpts from the claim:
<blockquote>… Defendants failed to disclose the following risks: (a) the mathematical probability that SKF's performance will fail to track the performance of the DJFI over any period longer than a single trading day; … (c) that SKF is not a directional play on the performance of U.S. financial stocks, but dependent on the volatility and path the DJFI takes over any time period greater than a single day; … (f) that based upon the mathematics of compounding, the volatility of the DJFI and probability theory [it can be inferred that] SKF was highly unlikely to achieve its stated investment objectives over time periods longer than a single trading day.</blockquote>
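
The compounding claim in part (f) can be illustrated with a two-day example. The sketch below assumes, for illustration, a fund that delivers exactly minus two times the index's daily return (the stated daily objective of ProShares UltraShort funds); over a volatile stretch in which the index ends flat, the fund still loses money:

<pre>
# Path dependence of a daily -2x leveraged fund (illustrative numbers).
# The index rises 10% and then falls 1/11 (about 9.09%), ending exactly flat.
index_daily = [0.10, -1 / 11]
fund_daily = [-2 * r for r in index_daily]

def grow(returns):
    value = 1.0
    for r in returns:
        value *= 1 + r
    return value

print(f"index ends at {grow(index_daily):.4f}")    # 1.0000 - flat
print(f"-2x fund ends at {grow(fund_daily):.4f}")  # 0.9455 - about a 5.5% loss
</pre>

The more volatile the index's path, the larger this drag, which is why the complaint ties the fund's long-horizon behavior to the volatility and path of the DJFI rather than to its direction alone.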
 
<b>Discussion</b><br>
 
Consider part (f) of the complaint.  Suppose that the probability of SKF’s performance tracking the DJFI over a single trading day was as high as, say, 75%.<br>
 
1.  How do you think that either the “mathematics of compounding” or “probability theory,” by themselves, would make it highly unlikely that SKF could track the DJFI over more than a single trading day, say 5 trading days?  What assumption(s) are you making?<br>
 
2.  How do you think that including the “volatility of the DJFI” as a condition could affect your answers to the preceding question?<br>
 
==Conditional entropy in text analysis==
 
[http://www.time.com/time/world/article/0,8599,1919795,00.html “Decoding the Ancient Script of the Indus Valley”]<br>
by Ishaan Tharoor, <i>TIME</i>, September 1, 2009<br>
 
The Indus Valley refers to a 300,000-square-mile region in modern-day Pakistan and northwestern India.  Its urban culture is estimated to be at least 4500 years old.
 
<blockquote>The group examined hundreds of Harappan texts and tested their structure against other known languages using a computer program. Every language, they suggest, possesses what is known as "conditional entropy": the degree of randomness in a given sequence. In English, for example, the letter "t" can be found preceding a whole variety of other letters, but instances of "tx" or "tz" are far more infrequent than "th" or "ta." … Quantifying this principle through computer probability tests, they determined the Harappan script had a similar measure of conditional entropy to other writing systems, including English, Sanskrit and Sumerian. If it mathematically looked and acted like writing, they concluded, then surely it is writing.</blockquote>
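
For readers who want to see the statistic itself, here is a minimal sketch that computes the conditional entropy of the next symbol given the current one from bigram counts. The toy English text and the use of letters in place of Indus signs are illustrative assumptions; the study worked with whole corpora in several scripts:

<pre>
import math
from collections import Counter

# Conditional entropy H(next | current) from symbol bigram counts.
def conditional_entropy(text):
    symbols = [c for c in text.lower() if c.isalpha()]
    pairs = Counter(zip(symbols, symbols[1:]))
    firsts = Counter(symbols[:-1])
    total = sum(pairs.values())
    h = 0.0
    for (a, b), n in pairs.items():
        h -= (n / total) * math.log2(n / firsts[a])
    return h

sample = "the quick brown fox jumps over the lazy dog " * 20
print(f"H(next|current) = {conditional_entropy(sample):.2f} bits")
</pre>

A sequence in which any symbol can follow any other would score near the maximum, while a rigidly ordered sequence would score near zero; the researchers' point was that the Harappan inscriptions fall in the intermediate band occupied by natural languages.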
 
==Cable news polls a la The Daily Show==
[http://www.thedailyshow.com/watch/mon-august-17-2009/poll-bearers “Poll Bearers”]<br>
Jon Stewart, <i>The Daily Show</i>, August 17, 2009<br>
 
Jon Stewart opened his August 17 show by commenting:
<blockquote>I believe that we are now more united than ever, and I have got mathematical proof to back me up.  The more you watch cable news the more you see how unified Americans are.</blockquote>
 
Stewart showed three video clips from FOX News:<br>
(a)  “93% of you who know how to text say no, that we should not be talking to moderate factions of the Taliban.”<br>
(b)  “93% said yes,” the Republican Party is better off without Arlen Specter.<br>
(c)  “100% of you say yes, the town halls are making a difference.”<br>
 
and four video clips from Lou Dobbs on CNN:<br>
(d)  “96% are outraged that big business and socio-ethnocentric special interest groups are trying to kill the most effective program in the fight against illegal immigration.”<br>
(e)  “97% of you say that it’s more important for the federal government to enforce our immigration laws than to count illegal aliens.”<br>
(f)  “94% of you say you are outraged that you are expected to tighten your belt.”<br>
(g)  “98% of you say it’s time illegal aliens said, ‘thank you’ for all the help and support they get in this country.”<br>
 
He also showed two video clips from Ed Schultz’ polls on MSNBC.<br>
 
Stewart reflected:
<blockquote>These numbers seem a tad on the high side.  Are they trustworthy?  Is it possible these polls don’t reflect public opinion so much as the ability of those shows’ viewers to repeat the opinions they have just heard the host that they’re watching express?</blockquote>
 
At the end of the show, Stewart showed two somewhat contradictory poll results about paying for the increased cost of a reformed health care system:<br>
(h)  “92% said no, we don’t need a tax hike," on FOX.<br>
(i)  “94% said yes, it’s fair to tax the top 1.2% of the richest Americans," on MSNBC.<br>
 
<b>Discussion</b><br>
 
The FOX, CNN, and MSNBC program hosts asked viewers to text their responses to on-air questions, each of which had two possible answer choices.<br>
 
1.  Why is it important to see the exact questions and answer choices in each poll?<br>
2.  Result (c) was practically instantaneous after the question was posed.  Can you suggest some reason(s) for the 100% response in this result?<br> 
3.  Identify some problems with the method these programs used, in terms of providing reliable information about the attitudes of the American people in general, or even of a program’s viewers in particular.<br>
4.  Would it make sense to compute an estimate of the standard errors in these poll percentages?  Why or why not?<br>
 
==Oscar Nominees Bring New Voting Rules==
[http://mediadecoder.blogs.nytimes.com/2009/09/01/with-rise-in-oscar-nominees-comes-new-voting-rules/ “With Rise in Oscar Nominees Comes New Voting Rules”]<br>
by Michael Cieply, <i>The New York Times</i>, September 3, 2009<br>
<center> http://graphics8.nytimes.com/images/blogs/carpetbagger/24oscars_cb.jpg</center>
 
Here we read:
 
<blockquote>The Oscar process is clearly still in process. On Monday, the [http://www.oscars.org  Academy of Motion Picture Arts and Sciences] said its decision last June to double the number of best picture nominees to 10 brings with it a change in voting method.<br><br>
 
The best picture will now be chosen by a preferential voting system, rather than the single-choice voting used in other categories. In the single-choice system, voters pick their film and the one with the most votes wins. Oscar voters will now be expected to rank their best picture choices, one through 10. Without such ranking, the wider field of nominees raised the possibility that a film would win top honors though it was preferred by only a small plurality of voters. </blockquote>
 
The preferential voting system is also called the single transferable vote. It is described in more detail by [http://blogs.wsj.com/numbersguy/ The Numbers Guy], who writes for <i>The Wall Street Journal</i>. He writes:
 
<blockquote>Voters will rank the 10 nominees from 1 to 10. PricewaterhouseCoopers staffers who oversee the voting for the Academy will place the ballots into 10 piles, each one for ballots with one of the films ranked at the top. If one pile has 50% of the ballots, it wins. If not, the ballots in the smallest pile are added to the pile of the second-place film listed, and the procedure continues until one film has 50% of the votes.</blockquote>
 
He goes on to say:
<blockquote>I [http://blogs.wsj.com/numbersguy/voting-math-doesnt-always-add-up-564/ wrote] in February that voting theorists aren’t so keen on the Oscars’ system for choosing winners from the slate of nominees. The problem is that some don’t like the new system much, either. Steven Brams, a professor of politics at New York University, points out several problems. “Some voters, by raising an alternative from last to first place, [could] cause it to lose — just the opposite effect of what one would want a voting system to induce,” Brams said. Also possible is the scenario in which “some voters, by actually voting as opposed to not showing up at the polls, hurt the alternatives they rank highest.” And when there is a Condorcet winner — one who beats every other contender in head-to-head matchups — it can often lose in the Oscars’ new system. Brams advocated for a system he helped develop, called approval voting, that he said was superior on these counts.</blockquote>
 
Approval voting is a system in which you can vote for as many candidates as you like, as long as there are more than two candidates on the ballot. In an earlier article in <i>The New York Times</i>, Brams gave examples where he thought the best movie did not win the voting. He says: "Just look at the 1976 best picture race. The five nominees were 'All the President's Men,' 'Bound for Glory,' 'Network,' 'Taxi Driver' and 'Rocky,' the eventual winner. I cannot believe that 'Rocky' would have won a head-to-head contest with 'Taxi Driver.'"
 
Bob Norman (professor emeritus at Dartmouth College), whose current research includes methods of voting, provided the following comments on the new Oscar voting method.
 
<blockquote>The Academy of Motion Picture Arts and Sciences announced it was doubling the number of best-picture nominees, to 10. The Academy also announced that it was changing its voting system, to what is called preferential voting by some and single transferable vote (STV) by others. This voting procedure is described by The Numbers Guy (see above).<br><br>
 
Let’s examine whether their chosen system of voting is a good one.<br><br>
 
In [http://www.thewrap.com/ind-column/academy-makes-big-changes-best-picture-voting_5700 TheWrap, August 31], Steve Pond wrote about the change: “There are certain mathematical dangers with more nominees,” says the Academy’s executive director, Bruce Davis, who revealed the new rule exclusively to TheWrap. “You could really get a fragmentation to the point where a picture with 18 or 20 percent of the vote could win, and the board didn’t want that to happen.”<br><br>
 
Eventually, one film will wind up with more than 50 percent.<br><br>
 
Pond goes on to say: The process is designed to discern a true consensus and uncover, in Davis’ words, “the picture that has the most support from the entire membership.”<br><br>
 
Using preferential voting, a.k.a. STV, the winning picture can still initially have only 18-20 percent of the first-place vote. After the elimination process has run its course in a close vote, some film will have more than 50% of the voters over its remaining rival. That winning film could be a divisive one that was disliked intensely by nearly half the voters. It would hardly be the preference of the voters.<br><br>
 
The Numbers Guy tells us: Davis, of the Academy, told me that any voting system is open to criticism, but it’s hard to find a consensus on the best alternative, as I wrote in the February column. “Though no voting system is perfect, for the Academy’s purposes, it is difficult to point to a better system than the preferential system.”<br><br>
 
It is true that with more candidates there are potentially more mathematical complications. It is also true that no voting system is perfect. Preferential voting has more than its share of problems, including several that are more serious than the one just noted. One of the most serious is that while their system proudly makes good use of the second choices of all voters who favored one of the eliminated films, it ignores the second choices of voters for the runner-up, and in a close decision that can make a lot of difference. <br><br>
 
This problem is apt to occur when the runner-up is a divisive candidate, one liked well by many but less than half the voters. To illustrate, suppose that when all but three films have been eliminated, the voters are divided between 40 who like Film 1 and 60 who don’t like it at all. It could easily happen that 25 of the 60 like Film 2 best, while 35 like Film 3 best. By their preferential voting system, Film 2 is eliminated and nearly all of those 25 voters’ second choices get transferred to Film 3, which is declared the winner. This ignores the second choices of the voters who like Film 1 best even if they have a three-to-one preference for Film 2 over Film 3. If they do, then a majority of the voters, 55 of 100, like Film 2 better than the winner, Film 3. <br><br>
 
If you want a system that, as Davis says, finds “the picture that has the most support from the entire membership,” simply ask the membership which films they want to support. That is, let them vote for as many as they like. The one with the most votes, the most supported film, would win. It’s that simple.</blockquote>
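
Norman's three-film example can be checked directly. Here is a short sketch (ballot counts taken from his scenario; the runoff logic is a minimal reading of the elimination procedure The Numbers Guy describes) showing that the preferential system elects Film 3 even though a 55-vote majority prefers Film 2 head-to-head:

<pre>
from collections import Counter

# Ballots as (first choice, second choice) with counts, from Norman's
# example: the 40 Film 1 voters split 3:1 in favor of Film 2 over Film 3.
ballots = {
    ("F1", "F2"): 30,
    ("F1", "F3"): 10,
    ("F2", "F3"): 25,  # "nearly all" Film 2 voters rank Film 3 second
    ("F3", "F2"): 35,
}

# Preferential voting: eliminate the smallest first-place pile and
# transfer those ballots to their second choices.
firsts = Counter()
for (first, _), n in ballots.items():
    firsts[first] += n
eliminated = min(firsts, key=firsts.get)  # Film 2, with 25 first-place votes
runoff = Counter()
for (first, second), n in ballots.items():
    runoff[second if first == eliminated else first] += n
print("preferential winner:", runoff.most_common(1))  # [('F3', 60)]

# Head-to-head count between Film 2 and Film 3 (an unranked film is last).
f2 = sum(n for (a, b), n in ballots.items() if a == "F2" or (a != "F3" and b == "F2"))
f3 = sum(n for (a, b), n in ballots.items() if a == "F3" or (a != "F2" and b == "F3"))
print(f"head to head: Film 2 {f2}, Film 3 {f3}")  # Film 2 wins, 55 to 45
</pre>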
 
Submitted by <br>
Laurie Snell
 
For another voting problem, this with respect to selecting the site of the 2016 Summer Olympics Games, see [http://www.forbes.com/forbes/2009/0622/sports-international-olympic-committee-on-my-mind.html “The Olympics of Voting”], by John Mark Hansen and Allen R. Sanderson, <i>Forbes Magazine</i>, June 22, 2009.
 
==Consistency of a tennis star==
[http://online.wsj.com/article/SB10001424052970204731804574384972861404690.html “Roger Federer May be More Machine Than Man”]<br>
by Matthew Futterman, <i>The Wall Street Journal</i>, August 31, 2009<br>
 
This brief article describes the “remarkable level of consistency that is the hallmark of great tennis champions” such as Roger Federer and Pete Sampras, as well as Andre Agassi during his “rejuvenation” period of 1998-2004.<br>
 
For the period 2005-2009, some Federer stats are given:<br>
(a)  Serve games won:  89-90%<br>
(b)  1st serve percentage: 62-64%<br>
(c)  1st serve points won:  76-79%<br>
(d)  2nd serve points won:  57-59%<br>
(e)  1st return points won:  31-35%<br>
(f)  Break points converted:  41-44%.<br>
 
<b>Discussion</b><br>
 
1.  What statistical term would you use to refer to Federer’s “consistency”?<br>
2.  What else would you like to know, in order to evaluate whether Federer’s “consistency” was unusual or not?<br>
3.  Do you think that “consistency” is a sufficient condition for a tennis champion?  A necessary condition?<br>
 
==Fat tails in investing==
[http://online.wsj.com/article/SB125236248576990713.html “Some Funds Stop Grading on the Curve”]<br>
by Eleanor Laise, <i>The Wall Street Journal</i>, September 8, 2009<br>
 
This article discusses how the classic portfolio-construction models have failed to include extreme events in their risk assessments by basing their analyses on normal distributions of stock-market returns.  Apparently Benoit Mandelbrot recognized this flaw in the models in the 1960s, but analysts were reluctant to move to fat-tailed models because “the math was so unwieldy.”<br>
 
Morningstar has now “built fat-tailed assumptions into its Monte Carlo simulations, which estimate the odds of reaching retirement financial goals.”  While the classic model assumed that a 60% stocks-40% bonds portfolio’s losing a fifth of its value was a once-in-111-year event,  Morningstar’s model predicts that it would be a once-in-40-year event.  The author notes that 1931 was the last year that portfolio losses as great as the recent 2008 loss occurred.<br>
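
The size of the gap between the two models is easy to reproduce in a toy calculation. The sketch below compares the chance of a 20% annual loss under a normal model and under a fat-tailed Student-t model with the same mean and standard deviation; the 7% mean, 12% standard deviation, and 5 degrees of freedom are illustrative assumptions, not Morningstar's calibration:

<pre>
from scipy import stats

# Probability of losing 20% or more in a year under two return models
# that share the same mean and standard deviation (illustrative values).
mu, sigma, df = 0.07, 0.12, 5

p_normal = stats.norm.cdf(-0.20, loc=mu, scale=sigma)
# Rescale the t so its standard deviation is also sigma:
# for a t with df > 2, sd = scale * sqrt(df / (df - 2)).
scale_t = sigma / (df / (df - 2)) ** 0.5
p_t = stats.t.cdf(-0.20, df, loc=mu, scale=scale_t)

print(f"normal model:     about once every {1 / p_normal:.0f} years")
print(f"fat-tailed model: about once every {1 / p_t:.0f} years")
</pre>

Even with these made-up parameters the fat-tailed model makes a 20% loss noticeably more frequent; calibrated to real return data, the gap can be as dramatic as the once-in-111-years versus once-in-40-years figures quoted above.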
 
One problem in performing these analyses is identified:
<blockquote>Number-crunchers have a smaller supply of historical observations to construct models focused on rare events.<br>
“Data are intrinsically sparse,” says [one analyst].</blockquote>
 
There is also a problem with investments based on assumptions that predict more frequent “market shocks”: reduced returns on more conservative portfolios. One fund lost only 0.5% of its value over the last year, compared to the S&P’s 16% decline during that period; however, it rose only 4% over the last three months, compared to the S&P’s 8% rise.<br>
 
The article discusses other measures of risk that have been adopted recently and transmits advice from an analyst:
<blockquote>Pimco's Mr. Bhansali is unimpressed. Since it is so difficult to forecast extreme events, investors should focus on their potential consequences rather than the probability they will occur, Mr. Bhansali says.<br>
As for comprehensive measures of risk, he says, "they fail you in many cases when you need them the most."</blockquote>
 
A blogger [http://online.wsj.com/article/SB125236248576990713.html?mg=com-wsj#articleTabs%3Dcomments] refers interested readers to Mandelbrot’s <i>The (MIS)Behavior of Markets</i>[http://www.amazon.com/MIS-Behaviour-Markets-Fractal-Reward/dp/1846682622/ref=sr_1_5?ie=UTF8&s=books&qid=1252507856&sr=1-5] and several Taleb books[http://www.amazon.com/s/ref=nb_ss?url=search-alias%3Dstripbooks&field-keywords=taleb&x=0&y=0] and states, “The market has been proven to be NON-random. Normal distribution statistics assumes randomness in price movements. They are not.”<br>
 
A second blogger, a financial analyst,[http://online.wsj.com/article/SB125236248576990713.html?mg=com-wsj#articleTabs%3Dcomments] feels that we should “train the users of probability distributions to better understand the significance of tail information,” instead of trying to “’fix’ probability distributions by ‘distorting’ them.”<br>
 
A third blogger, a physicist,[http://online.wsj.com/article/SB125236248576990713.html?mg=com-wsj#articleTabs%3Dcomments] objects to applying physics formulas to economics:
<blockquote>The fundamental problem, in mathematical jargon, is that economic time series are probably not stationary, meaning that it is not possible to extract their statistical behavior from historic data; the forms and values that fit the past are not valid predictors, even statistically, of the future. Trying to do so is somewhere between tilting at windmills and defrauding the customers. It is amazing that the "quants", who are generally very smart people, do not or refuse to recognize this. Perhaps there is no money to be made saying the problem is intrinsically insoluble, while if you have a tool that you pretend is useful you can get rich (often) or even famous … peddling it.</blockquote>
 
==Some interesting suggestions==
 
[http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2652013 Perceptual mislocalization of bouncing balls by professional tennis referees]
 
Suggested by Jeff Norman
 
[http://boss.blogs.nytimes.com/2009/09/01/Should-entrepreneurs-minimize-credit-card-debt/ Should Entrepreneurs Minimize Credit Card Debt?]
 
Suggested by Jimmy K Duong
 
[http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html How Did Economists Get It So Wrong?]
 
Suggested by Laurie Snell

Latest revision as of 20:22, 20 June 2013

Quotations

Do not put your faith in what statistics say until
you have carefully considered what they do not say.

William W. Watt

Forsooths

Duke Energy customers voiced their concerns on Thursday night about a planned 13.5 percent rate increase. Under the plan, people with an average monthly bill of 100 dollars a month would go up about 18 dollars.

WRAL TV 5[1], Raleigh, NC, September 11, 2009

The AMA is for Obama’s health plan, but only 29% of doctors belong to the AMA, so 71% are opposed to Obama’s health plan.

This was a comment from an audience member at a town hall meeting with Senator Mark Warner on September 3, 2009. The meeting was telecast by C-SPAN3.


The next two Forsooths are from the September 2009 RSS News

I must write again about the misleading adverts by GMPTE in the papers re the Congestion Charge. In their latest round of propaganda they state there will be a 10 per cent increase in bus services. With 10 councils in Greater Manchester this works out at a one percent increase per council. If Stockport’s bus companies run 200 buses in the morning peak, a one per cent increase will give two extra buses; is that what you want?

Letter to Stockport Express
26 November 2008

People who have personalised number plates on their cars are most likely to live in Scotland, a survey has found.

BBC News Scotland
11 January 2009

Keynes' Game for professional investments

Jeff Norman told us about another interesting game. The game was descried in terms of professional investment by the famous British Economist John Maynard Keynes in his book The General Theory of Employment, Interest and Money, 1936. Here he writes:

Professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the price being awarded to the competitor whose choice most nearly corresponds to the average preference of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees

Keynes used this game in his argument against the Efficient-market hypothesis theory, which is defined by Answers.com as:

Professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the price being awarded to the competitor whose choice most nearly corresponds to the average preference of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees

Keynes used this game in his argument against the Efficient-market hypothesis (EMH) theory witch is defined at Answers.com as:

An investment theory that states that it is impossible to "beat the market" because stock market efficiency causes existing share prices to always incorporate and reflect all relevant information. According to the EMH, this means that stocks always trade at their fair value on stock exchanges, and thus it is impossible for investors to either purchase undervalued stocks or sell stocks for inflated prices. Thus, the crux of the EMH is that it should be impossible to outperform the overall market through expert stock selection or market timing, and that the only way an investor can possibly obtain higher returns is by purchasing riskier investments.

That efficient market hypothesis is a controversial subject and discussed on many websites. We can see this in an article by John Mauldin who is president of Millennium Wave Advisors, LLC, a registered investment advisor. Here you will also see more about Keynes' game and its relation to the EMF.

We read here Keynes game can be easily replicated by asking people to pick a number between 0 and 100, and telling them the winner will be the person who picks the number closest to two-thirds the average number picked. The chart below shows the results from the largest incidence of the game that I have played - in fact the third largest game ever played, and the only one played purely among professional investors.

http://www.investorsinsight.com/cfs
http://www.investorsinsight.com/cfs-file.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/thoughts_5F00_from_5F00_the_5F00_frontline/jm080709image010_5F00_2F080074.jpg

The highest possible correct answer is 67. To go for 67 you have to believe that every other muppet in the known universe has just gone for 100. The fact we got a whole raft of responses above 67 is more than slightly alarming.

You can see spikes which represent various levels of thinking. The spike at fifty reflects what we (somewhat rudely) call level zero thinkers. They are the investment equivalent of Homer Simpson, 0, 100, duh 50! Not a vast amount of cognitive effort expended here!

There is a spike at 33 - of those who expect everyone else in the world to be Homer. There's a spike at 22, again those who obviously think everyone else is at 33. As you can see there is also a spike at zero. Here we find all the economists, game theorists and mathematicians of the world. They are the only people trained to solve these problems backwards. And indeed the only stable Nash equilibrium is zero (two-thirds of zero is still zero). However, it is only the 'correct' answer when everyone chooses zero.

The final noticeable spike is at one. These are economists who have (mistakenly...) been invited to one dinner party (economists only ever get invited to one dinner party). They have gone out into the world and realised the rest of the world doesn't think like them. So they try to estimate the scale of irrationality. However, they end up suffering the curse of knowledge (once you know the true answer, you tend to anchor to it). In this game, which is fairly typical, the average number picked was 26, giving a two-thirds average of 17. Just three people out of more than 1000 picked the number 17.

I play this game to try to illustrate just how hard it is to be just one step ahead of everyone else - to get in before everyone else, and get out before everyone else. Yet despite this fact, it seems to be that this is exactly what a large number of investors spend their time doing.

Additional Reading

(1) Efficient Market Hypothesis on Trial:A Survey by Philip S. Russel and Violet M. Torbey

(2) A mathematician plays the stock market by John Paulos

John Paulos wrote us: I discussed Keynes' game and the 80% (or 66.66%) game in my book, A Mathematician Plays the Stock Market. I also wrote about the efficient market paradox, a kind of market analogue of the liar paradox: The Efficient Market Hypothesis is true if and only if a sufficient number of investors believes it to be false.

Submitted by Laurie Snell

Measuring Emotion on the Web

"Mining the Web for Feelings, Not Facts"
by Alex Wright, The New York Times, August 23, 2009

There's a lot of data on the web, but it isn't data in the numeric sense.

The rise of blogs and social networks has fueled a bull market in personal opinion: reviews, ratings, recommendations and other forms of online expression.

There are serious reasons to sift through this data.

For many businesses, online opinion has turned into a kind of virtual currency that can make or break a product in the marketplace. Yet many companies struggle to make sense of the caterwaul of complaints and compliments that now swirl around their products online.

A new methodology, sentiment analysis, attempts to summarize the positive and negative emotions associated with these reviews and ratings.

Jodange, based in Yonkers, offers a service geared toward online publishers that lets them incorporate opinion data drawn from over 450,000 sources, including mainstream news sources, blogs and Twitter. Based on research by Claire Cardie, a Cornell computer science professor, and her students, the service uses a sophisticated algorithm that not only evaluates sentiments about particular topics, but also identifies the most influential opinion holders.

In a similar vein, The Financial Times recently introduced Newssift, an experimental program that tracks sentiments about business topics in the news, coupled with a specialized search engine that allows users to organize their queries by topic, organization, place, person and theme. Using Newssift, a search for Wal-Mart reveals that recent sentiment about the company is running positive by a ratio of slightly better than two to one. When that search is refined with the suggested term “Labor Force and Unions,” however, the ratio of positive to negative sentiments drops closer to one to one.

This work isn't easy.

Translating the slippery stuff of human language into binary values will always be an imperfect science, however. "Sentiments are very different from conventional facts," said Seth Grimes, the founder of the suburban Maryland consulting firm Alta Plana, who points to the many cultural factors and linguistic nuances that make it difficult to turn a string of written text into a simple pro or con sentiment. "'Sinful' is a good thing when applied to chocolate cake," he said.

The simplest algorithms work by scanning keywords to categorize a statement as positive or negative, based on a simple binary analysis ("love" is good, "hate" is bad). But that approach fails to capture the subtleties that bring human language to life: irony, sarcasm, slang and other idiomatic expressions. Reliable sentiment analysis requires parsing many linguistic shades of gray.

Submitted by Steve Simon

Questions

1. No algorithm is going to be perfect, but some may provide sufficient accuracy to be useful. How would you measure the accuracy of a sentiment algorithm? How would you decide whether the accuracy was sufficient for your needs?

Assigning points to books

"Reading by the Numbers"
by Susan Straight, The New York Times, August 27, 2009

There's a program in many schools to encourage reading. But some people don't like it.

At back-to-school night last fall, I was prepared to ask my daughter’s eighth-grade language arts teacher about something that had been bothering me immensely: the rise of Accelerated Reader, a 'reading management' software system that helps teachers track student reading through computerized comprehension tests and awards students points for books they read based on length and difficulty, as measured by a scientifically researched readability rating. When the teacher announced during the class presentation that she refused to use the program, I almost ran up and hugged her.

The problem, according to Ms. Straight, is that the system does not give enough credit for the classics.

Many classic novels that have helped readers fall in love with story, language and character are awarded very few points by Accelerated Reader. My Antonia is worth 14 points, and Go Tell It on the Mountain 13. The previous school year, my daughter had complained that some of her reading choices that I thought were pretty audacious — long, well-written historical novels like Libba Bray’s Great and Terrible Beauty and Lisa Klein’s Ophelia, recommended by her college-age sister — were worth only 14 points each. Sense and Sensibility is worth 22.

Indeed the article has a clever graphic showing a handwritten equation "Sense + Sensibility = 22". Instead of giving points to the classics, the system assigns heavy point totals to the Harry Potter books.

Harry Potter and the Order of the Phoenix topped out at 44 points, while Harry Potter and the Deathly Hallows and Harry Potter and the Goblet of Fire were worth 34 and 32.

The points are assigned using a formula system (ATOS).

ATOS employs the three statistics that researchers have

found to be most predictive of reading difficulty: the number of words per sentence, the number of characters per word, and the average grade level of the words in the book.

[2]

This formula does have its problems. A Wikipedia article notes that

The Accelerated Reader's method for determining grade level is critically flawed. Lord of the Flies is considered 5th grade level and James Joyce's Ulysses is considered 7th grade level. There is more to a grade level of a book than the word count and length. Lord of the Flies and Ulysses score at very younger audience levels in the simplistic AR rubric, yet most 5th and 7th graders would not and probably could not read these books because of the story and narrative structure which is much more mature than the AR mathematical reduction of word count and length. [3]

This issue is acknowledged in an article written by the company that produces the Advanced Learning System, republished at a school web site.

Advances in technology and statistical analysis have led to improvements in the science of readability, but there are still some things that readability formulas cannot do—and will never be able to do. All readability formulas produce an estimate of a book’s difficulty based on selected variables in the text, but none analyzes the suitability of the content or the literary merit for individual readers. This decision is up to educators and parents, who know best what content is appropriate for each student. [4]

Submitted by Steve Simon

Questions

1. Is it wrong to assign more points to a Harry Potter book than a Jane Austen book?

2. Could the ATOS formula be adapted to give greater weight to the classics? How?

Earthquake probability maps

"Geology News - Earth Science Current Events" refers interested readers to a website created by the U.S. Geological Survey, which enables people to create earthquake probability maps for specific regions. ASCII files of raw data used in creating the maps are also available.

Conditional probability in search and rescue

“Coast Guard looks for lessons in Matagorda miracle”
by Jennifer Latson, Houston Chronicle, September 1, 2009

Three missing fishermen were found by a recreational yachtsman after 45 flights, 250 hours, 6 days, and 86,000 square miles of searching by the U.S. Coast Guard. The Coast Guard began the search on Saturday, August 22 and suspended it on Friday, August 28; the yachtsman found the fishermen on Saturday, August 29.

The Coast Guard's search and rescue system, built on probability statistical models, lists the average probability of being detected from a plane as 78 percent. …. [T] the crew made the difficult decision to suspend the search on Friday. A strict reading of the probability models might have called off the search as early as Wednesday. But searchers were hopeful, swayed by the pleas of relatives who said the men were skilled survivalists: outdoorsmen, fishers and hunters.

Discussion

1. What do you think that the "average probability of being detected from a plane" refers to – one flight, one time period, one geographic region, one rescue mission, one person, one boat, …?

2. Suppose that the Coast Guard only measures the relative frequency of finding people or boats after it receives reports of alleged missing boaters. Can you see any benefit to including, in their calculations, simulations with people or boaters randomly placed in bodies of water?

3. Based on the Matagorda experience, would you recommend that the Coast Guard break its “average probability of being detected from a plane” into two conditional probabilities? What would those be?

4. Do you think that the Coast Guard could, would, or should have extended its search even more, if the probability of detecting the fishermen had been higher, based on the fact that the men were “skilled survivalists”?

Confidence in hurricane predictions

“NOAA: 2009 Atlantic Hurricane Season Outlook Update”
Issued August 6, 2009

Predictions for the 2009 Atlantic hurricane season were produced by the National Oceanic and Atmospheric Administration for the Atlantic hurricane region.

…. This combination of climate factors indicates a 50% chance of a near-normal hurricane season for 2009, and a 40% chance of a below normal season. An above-normal season is not likely ....

The outlook indicates a 70% probability for each of the following seasonal ranges: 7-11 named storms, 3-6 hurricanes, 1-2 major hurricanes ….

These predicted ranges have been observed in about 70% of past seasons having similar climate conditions to those expected this year. They do not represent the total range of activity seen in those past seasons.

Discussion

1. How likely is an above-normal season?

2. Statistically speaking, would you call the 70% figure a probability? What would you have called it?

3. Based on the given seasonal range of named storms, can you estimate the (arithmetic) mean number of named storms and the standard error of the measurement?

Short-term probability in lawsuit

“... Filing of Class Action Lawsuit Against … ProShares Fund”
Reuters, August 31, 2009

A law firm filed a class action lawsuit in Maryland on behalf of shareholders in the UltraShort Financials ProShares Trust Fund (SKF). DJFI refers to the Dow Jones Financial Index. Here are excerpts from the claim:

… Defendants failed to disclose the following risks: (a) the mathematical probability that SKF's performance will fail to track the performance of the DJFI over any period longer than a single trading day; … (c) that SKF is not a directional play on the performance of U.S. financial stocks, but dependent on the volatility and path the DJFI takes over any time period greater than a single day; … (f) that based upon the mathematics of compounding, the volatility of the DJFI and probability theory [it can be inferred that] SKF was highly unlikely to achieve its stated investment objectives over time periods longer than a single trading day.

Discussion

Consider part (f) of the complaint. Suppose that the probability of SKF’s performance tracking the DJFI over a single trading day was as high as, say, 75%.

1. How do you think that either the “mathematics of compounding” or “probability theory,” by themselves, would make it highly unlikely that SKF could track the DJFI over more than a single trading day, say 5 trading days? What assumption(s) are you making? (See the sketch after these questions.)

2. How do you think that including the “volatility of the DJFI” as a condition could affect your answers to the preceding question?
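A minimal sketch for question 1: if each trading day were an independent trial, an assumption we are adding for illustration, a 75% daily tracking probability would compound away quickly.

<pre>
# Sketch for question 1: compounding a hypothetical 75% daily tracking
# probability over 5 trading days, assuming independence across days.
p_daily = 0.75
p_week = p_daily ** 5
print(f"P(tracks the index on all 5 days) = {p_week:.3f}")   # about 0.237
</pre>

Under that independence assumption, the chance of tracking over a full five-day week falls below 24%, even though each individual day looks favorable.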

Conditional entropy in text analysis

“Decoding the Ancient Script of the Indus Valley”
by Ishaan Tharoor, TIME, September 1, 2009

The Indus Valley refers to a 300,000-square-mile region in modern-day Pakistan and northwestern India. Its urban culture is estimated to be at least 4500 years old.

The group examined hundreds of Harappan texts and tested their structure against other known languages using a computer program. Every language, they suggest, possesses what is known as "conditional entropy": the degree of randomness in a given sequence. In English, for example, the letter "t" can be found preceding a whole variety of other letters, but instances of "tx" or "tz" are far more infrequent than "th" or "ta." … Quantifying this principle through computer probability tests, they determined the Harappan script had a similar measure of conditional entropy to other writing systems, including English, Sanskrit and Sumerian. If it mathematically looked and acted like writing, they concluded, then surely it is writing.
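To make the idea concrete, here is a minimal Python sketch that estimates the conditional entropy of the next letter given the current one from bigram counts. The toy text is invented; the researchers worked from hundreds of Harappan texts and compared several writing systems.

<pre>
# A toy estimate of conditional entropy H(next symbol | current symbol)
# from bigram counts; the sample text is invented for illustration.
import math
from collections import Counter

text = "the theory is that the script behaves like a language"
pairs = list(zip(text, text[1:]))

pair_counts = Counter(pairs)                   # counts of (current, next)
first_counts = Counter(a for a, b in pairs)    # counts of current symbol
n = len(pairs)

# H(next | current) = -sum over (a,b) of p(a,b) * log2 p(b|a)
h = -sum((c / n) * math.log2(c / first_counts[a])
         for (a, b), c in pair_counts.items())
print(f"conditional entropy: {h:.2f} bits per symbol")
</pre>

Low values indicate rigid symbol ordering (in English, "q" is almost always followed by "u"); natural languages fall in a middle band between rigid codes and random sequences, and that is the band in which the Harappan script was found.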

Cable news polls a la The Daily Show

“Poll Bearers”
Jon Stewart, The Daily Show, August 17, 2009

Jon Stewart opened his August 17 show by commenting:

I believe that we are now more united than ever, and I have got mathematical proof to back me up. The more you watch cable news the more you see how unified Americans are.

Stewart showed three video clips from FOX News:
(a) “93% of you who know how to text say no, that we should not be talking to moderate factions of the Taliban.”
(b) “93% said yes,” the Republican Party is better off without Arlen Specter.
(c) “100% of you say yes, the town halls are making a difference.”

and four video clips from Lou Dobbs on CNN:
(d) “96% are outraged that big business and socio-ethnocentric special interest groups are trying to kill the most effective program in the fight against illegal immigration.”
(e) “97% of you say that it’s more important for the federal government to enforce our immigration laws than to count illegal aliens.”
(f) “94% of you say you are outraged that you are expected to tighten your belt.”
(g) “98% of you say it’s time illegal aliens said, ‘thank you’ for all the help and support they get in this country.”

He also showed two video clips from Ed Schultz’s polls on MSNBC.

Stewart reflected:

These numbers seem a tad on the high side. Are they trustworthy? Is it possible these polls don’t reflect public opinion so much as the ability of those shows’ viewers to repeat the opinions they have just heard the host that they’re watching express?

At the end of the show, Stewart showed two somewhat contradictory poll results about paying for the increased cost of a reformed health care system:
(h) “92% said no, we don’t need a tax hike,” on FOX.
(i) “94% said yes, it’s fair to tax the top 1.2% of the richest Americans,” on MSNBC.

Discussion

The FOX, CNN, and MSNBC program hosts asked viewers to text their responses to on-air questions, each of which had two possible answer choices.

1. Why is it important to see the exact questions and answer choices in each poll?
2. Result (c) was practically instantaneous after the question was posed. Can you suggest some reason(s) for the 100% response in this result?
3. What problems do you see with the method these programs used, in terms of providing reliable information about the attitudes of the American people in general, or even of a program’s viewers in particular?
4. Would it make sense to compute an estimate of the standard errors in these poll percentages? Why or why not?
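For question 4, the mechanical calculation is easy; whether it means anything is the point. A sketch, with an invented sample size, since the programs reported none:

<pre>
# Sketch for question 4: the textbook standard error of a poll proportion.
# The sample size n is invented; the shows did not report one. The formula
# presumes a random sample, which a self-selected text-in poll is not.
import math

p, n = 0.93, 1000
se = math.sqrt(p * (1 - p) / n)
print(f"SE = {se:.4f} ({100 * se:.1f} percentage points)")
</pre>

The tiny standard error such a computation produces would quantify only sampling noise among the texters themselves; it says nothing about the selection bias that dominates here.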

Oscar nominees bring new voting rules

by Michael Cieply, The New York Times, September 3, 2009


Here we read:

The Oscar process is clearly still in process. On Monday, the Academy of Motion Picture Arts and Sciences said its decision last June to double the number of best picture nominees to 10 brings with it a change in voting method.

The best picture will now be chosen by a preferential voting system, rather than the single-choice voting used in other categories. In the single-choice system, voters pick their film and the one with the most votes wins. Oscar voters will now be expected to rank their best picture choices, one through 10. Without such ranking, the wider field of nominees raised the possibility that a film would win top honors though it was preferred by only a small plurality of voters.

The preferential voting system is also called the single transferable vote. It is described in more detail by The Numbers Guy, who writes for The Wall Street Journal:

Voters will rank the 10 nominees from 1 to 10. PricewaterhouseCoopers staffers who oversee the voting for the Academy will place the ballots into 10 piles, each one for ballots with one of the films ranked at the top. If one pile has 50% of the ballots, it wins. If not, the ballots in the smallest pile are added to the pile of the second-place film listed, and the procedure continues until one film has 50% of the votes.
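Here is a minimal Python sketch of that pile-and-transfer procedure, assuming each ballot ranks the nominees from most to least preferred. The film names and ballot counts are invented; this is not PricewaterhouseCoopers' actual tabulation code.

<pre>
# A sketch of the elimination procedure described above; data are invented.
def preferential_winner(ballots):
    remaining = {film for ballot in ballots for film in ballot}
    while True:
        piles = {film: 0 for film in remaining}
        for ballot in ballots:
            # each ballot sits in the pile of its highest surviving choice
            top = next(film for film in ballot if film in remaining)
            piles[top] += 1
        leader = max(piles, key=piles.get)
        if 2 * piles[leader] >= len(ballots):        # a pile reaches 50%
            return leader
        remaining.remove(min(piles, key=piles.get))  # drop the smallest pile

ballots = ([("Film A", "Film B", "Film C")] * 45 +
           [("Film B", "Film A", "Film C")] * 20 +
           [("Film C", "Film B", "Film A")] * 35)
print(preferential_winner(ballots))   # Film B is eliminated; Film A wins 65-35
</pre>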

The Numbers Guy goes on to say:

I wrote in February that voting theorists aren’t so keen on the Oscars’ system for choosing winners from the slate of nominees. The problem is that some don’t like the new system much, either. Steven Brams, a professor of politics at New York University, points out several problems. “Some voters, by raising an alternative from last to first place, [could] cause it to lose — just the opposite effect of what one would want a voting system to induce,” Brams said. Also possible is the scenario in which “some voters, by actually voting as opposed to not showing up at the polls, hurt the alternatives they rank highest.” And when there is a Condorcet winner — one who beats every other contender in head-to-head matchups — it can often lose in the Oscars’ new system. Brams advocated for a system he helped develop, called approval voting, that he said was superior on these counts.

Approval voting is a system in which you can vote for as many candidates as you like, as long as there are more than two candidates on the ballot. In an earlier article in The New York Times, Brams gave examples where he thought the best movie did not win the vote. He says, “Just look at the 1976 best picture race. The five nominees were ‘All the President's Men,’ ‘Bound for Glory,’ ‘Network,’ ‘Taxi Driver’ and ‘Rocky,’ the eventual winner. I cannot believe that ‘Rocky’ would have won a head-to-head contest with ‘Taxi Driver.’”

Bob Norman (professor emeritus at Dartmouth College), whose current research concerns methods of voting, provided the following comments on the new Oscar voting method.

The Academy of Motion Picture Arts and Sciences announced it was doubling the number of best-picture nominees, to 10. The Academy also announced that it was changing its voting system, called preferential voting by some and single transferable vote (STV) by others. This voting procedure is described by The Numbers Guy (see above).

Let’s examine whether their chosen system of voting is a good one.

In TheWrap on August 31, Steven Bond wrote about the change: “There are certain mathematical dangers with more nominees,” says the Academy’s executive director, Bruce Davis, who revealed the new rule exclusively to TheWrap. “You could really get a fragmentation to the point where a picture with 18 or 20 percent of the vote could win, and the board didn’t want that to happen.”

Eventually, one film will wind up with more than 50 percent.

Bond goes on to say: The process is designed to discern a true consensus and uncover, in Davis’ words, “the picture that has the most support from the entire membership.”

Using preferential voting, a.k.a. STV, the winning picture can still start with only 18-20 percent of the first-place vote. After the elimination process has run its course in a close vote, some film will end up with more than 50% of the votes against its remaining rival. That winning film could be a divisive one that was disliked intensely by nearly half the voters. It would hardly be the preference of the voters.

The Numbers Guy tells us: Davis, of the Academy, told me that any voting system is open to criticism, but it’s hard to find a consensus on the best alternative, as I wrote in the February column. “Though no voting system is perfect, for the Academy’s purposes, it is difficult to point to a better system than the preferential system.”

It is true that with more candidates there are potentially more mathematical complications. It is also true that no voting system is perfect. Preferential voting has more than its share of problems, including several that are more serious than the one just noted. One of the most serious is that while the system proudly makes good use of the second choices of all voters who favored one of the eliminated films, it ignores the second choices for the runner-up, and in a close decision that can make a lot of difference.

This problem is apt to occur when the runner-up is a divisive candidate, one liked well by many but fewer than half the voters. To illustrate, suppose that when all but three films have been eliminated, the voters are divided between 40 who like Film 1 and 60 who don’t like it at all. It could easily happen that 25 of the 60 like Film 2 best, while 35 like Film 3 best. Under preferential voting, Film 2 is eliminated and nearly all of those 25 voters’ second choices get transferred to Film 3, which is declared the winner. This ignores the second choices of the voters who like Film 1 best, even if they have a three-to-one preference for Film 2 over Film 3. If they do, then a majority of the voters, 55 of 100, like Film 2 better than the winner, Film 3.
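A quick computational check of this scenario, with the Film 1 voters' rankings filled in under the stated three-to-one preference (the only detail we have added):

<pre>
# Checking the 100-voter scenario: preferential voting elects Film 3, yet a
# head-to-head count shows a 55-45 majority for Film 2 over Film 3.
ballots = ([("Film 1", "Film 2", "Film 3")] * 30 +   # 3/4 of Film 1's 40 voters
           [("Film 1", "Film 3", "Film 2")] * 10 +   # the remaining 1/4
           [("Film 2", "Film 3", "Film 1")] * 25 +
           [("Film 3", "Film 2", "Film 1")] * 35)

prefer_2 = sum(b.index("Film 2") < b.index("Film 3") for b in ballots)
print(f"Film 2 over Film 3 head-to-head: {prefer_2} of {len(ballots)} voters")
</pre>

Feeding these same ballots to the preferential_winner sketch above returns Film 3, while the head-to-head count shows that 55 of the 100 voters prefer Film 2.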

If you want a system that, as Davis says, finds “the picture that has the most support from the entire membership,” simply ask the membership which films they want to support. That is, let them vote for as many as they like. The one with the most votes, the most supported film, would win. It’s that simple.
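An approval tally is correspondingly simple; in this sketch each ballot is just the set of films a voter is willing to support (ballots invented for illustration):

<pre>
# Approval voting: each voter approves any number of films; most approvals win.
from collections import Counter

ballots = [{"Film 1"}, {"Film 2", "Film 3"}, {"Film 2"}, {"Film 1", "Film 2"}]
tally = Counter(film for ballot in ballots for film in ballot)
print(tally.most_common(1))   # [('Film 2', 3)]
</pre>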

Submitted by
Laurie Snell

For another voting problem, this one with respect to selecting the site of the 2016 Summer Olympic Games, see “The Olympics of Voting”, by John Mark Hansen and Allen R. Sanderson, Forbes Magazine, June 22, 2009.

Consistency of a tennis star

“Roger Federer May be More Machine Than Man”
by Matthew Futterman, The Wall Street Journal, August 31, 2009

This brief article describes the “remarkable level of consistency that is the hallmark of great tennis champions” such as Roger Federer, Pete Sampras, and Andre Agassi during his “rejuvenation” period of 1998-2004.

For the period 2005-2009, some Federer stats are given:
(a) Serve games won: 89-90%
(b) 1st serve percentage: 62-64%
(c) 1st serve points won: 76-79%
(d) 2nd serve points won: 57-59%
(e) 1st return points won: 31-35%
(f) Break points converted: 41-44%.

Discussion

1. What statistical term would you use to refer to Federer’s “consistency”?
2. What else would you like to know, in order to evaluate whether Federer’s “consistency” was unusual or not?
3. Do you think that “consistency” is a sufficient condition for a tennis champion? A necessary condition?

Fat tails in investing

“Some Funds Stop Grading on the Curve”
by Eleanor Laise, The Wall Street Journal, September 8, 2009

This article discusses how the classic portfolio-construction models have failed to include extreme events in their risk assessments by basing their analyses on normal distributions of stock-market returns. Apparently Benoit Mandelbrot recognized this flaw in the models in the 1960s, but analysts were reluctant to move to fat-tailed models because “the math was so unwieldy.”

Morningstar has now “built fat-tailed assumptions into its Monte Carlo simulations, which estimate the odds of reaching retirement financial goals.” While the classic model assumed that a 60% stocks-40% bonds portfolio’s losing a fifth of its value was a once-in-111-year event, Morningstar’s model predicts that it would be a once-in-40-year event. The author notes that 1931 was the last year that portfolio losses as great as the recent 2008 loss occurred.
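To illustrate the kind of difference fat tails make, here is a Monte Carlo sketch comparing a normal model of annual portfolio returns with a fat-tailed Student-t model scaled to the same mean and volatility. All parameters (8% mean, 12% volatility, 3 degrees of freedom) are invented; this is not Morningstar's model.

<pre>
# Monte Carlo sketch: frequency of a 20% annual loss under a normal model
# versus a fat-tailed Student-t model with the same mean and volatility.
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma, df = 1_000_000, 0.08, 0.12, 3

normal_returns = rng.normal(mu, sigma, n)
# standard_t has variance df/(df-2); rescale so both models share sigma
t_returns = mu + sigma * rng.standard_t(df, n) / np.sqrt(df / (df - 2))

for name, r in (("normal", normal_returns), ("fat-tailed t", t_returns)):
    p = np.mean(r <= -0.20)
    print(f"{name}: P(annual loss of 20%+) = {p:.5f}, about 1 in {1/p:.0f} years")
</pre>

With these made-up numbers the normal model calls a 20% loss roughly a once-in-a-century event, while the fat-tailed model makes it noticeably more frequent, the same direction of revision the article describes.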

One problem in performing these analyses is identified:

Number-crunchers have a smaller supply of historical observations to construct models focused on rare events.
“Data are intrinsically sparse,” says [one analyst].

There is also a problem with investments based on assumptions that predict more frequent “market shocks”: the more conservative portfolios they favor earn reduced returns. One fund lost only 0.5% of its value over the last year, compared to the S&P’s 16% decline during that period; however, it rose only 4% over the last three months, compared to the S&P’s 8% rise.

The article discusses other measures of risk that have been adopted recently and transmits advice from an analyst:

Pimco's Mr. Bhansali is unimpressed. Since it is so difficult to forecast extreme events, investors should focus on their potential consequences rather than the probability they will occur, Mr. Bhansali says.
As for comprehensive measures of risk, he says, "they fail you in many cases when you need them the most."

A blogger [5] refers interested readers to Mandelbrot’s The (MIS)Behavior of Markets[6] and several Taleb books[7] and states, “The market has been proven to be NON-random. Normal distribution statistics assumes randomness in price movements. They are not.”

A second blogger, a financial analyst,[8] feels that we should “train the users of probability distributions to better understand the significance of tail information,” instead of trying to “’fix’ probability distributions by ‘distorting’ them.”

A third blogger, a physicist,[9] objects to applying physics formulas to economics:

The fundamental problem, in mathematical jargon, is that economic time series are probably not stationary, meaning that it is not possible to extract their statistical behavior from historic data; the forms and values that fit the past are not valid predictors, even statistically, of the future. Trying to do so is somewhere between tilting at windmills and defrauding the customers. It is amazing that the "quants", who are generally very smart people, do not or refuse to recognize this. Perhaps there is no money to be made saying the problem is intrinsically insoluble, while if you have a tool that you pretend is useful you can get rich (often) or even famous … peddling it.

Some interesting suggestions

Perceptual mislocalization of bouncing balls by professional tennis referees

Suggested by Jeff Norman

Should Entrepreneurs Minimize Credit Card Debt?

Suggested by Jimmy K Duong

How Did Economists Get It So Wrong?

Suggested by Laurie Snell