Chance News 79

Revision as of 16:53, 23 December 2011

Quotations

"...risk, essentially, is measurable whereas uncertainty is not measurable."

"In Mr. Cain’s case, I think we are dealing with an instance where there is considerable uncertainty."

--Nate Silver, writing in Herman Cain, outlier

FiveThirtyEight blog, New York Times, 27 October 2011

Submitted by Paul Alper


"Experts have a poor understanding of uncertainty. Usually, this manifests itself in the form of overconfidence: experts underestimate the likelihood that their predictions might be wrong. …. [E]xperts who use terms like “never” and “certain” too often are playing Russian roulette with their reputations."

"I used to be annoyed when the margin of error was high in a forecasting model that I might put together. Now I view it as perhaps the single most important piece of information that a forecaster provides. When we publish a forecast on FiveThirtyEight, I go to great lengths to document the uncertainty attached to it, even if the uncertainty is sufficiently large that the forecast won’t make for punchy headlines."

"Another fundamental error: when you have such little data, you should almost never throw any of it out, and you should be especially wary of doing so when it happens to contradict your hypothesis."

--Nate Silver, writing in Herman Cain and the Hubris of Experts
FiveThirtyEight blog, The New York Times, 27 October 2011

Submitted by Margaret Cibes

Forsooth

“The most important statistics in football are wins and losses and whether or not a team can outscore his opponent.”

Mike Leach, in Sports for Dorks: College Football (p. 14)

The book is excerpted in the NYT College Sports Blog on 1 December and 2 December

Submitted by Bill Peterson


“I think we’re in trouble. …. Look at the difference between the top 1 percent and the bottom 95.”

Republican presidential primary candidate Buddy Roemer

on the Occupy Wall Street “99%” issue

in an interview with Rachel Maddow, November 28, 2011 [1]

Submitted by Margaret Cibes


In Reframing the debate over using phones behind the wheel (New York Times, 17 December 2011), we read, "Part of the lure of smartphones...is that they randomly dispense valuable information. People do not know when an urgent or interesting e-mail or text will come in, so they feel compelled to check all the time." The following sidebar appears in the online version of the article:

http://community.middlebury.edu/~wpeterso/Chance_News/images/CN79_twitter.png

Submitted by Bill Peterson

Fraud may just be the tip of the iceberg

Fraud Case Seen as a Red Flag for Psychology Research by Benedict Carey, The New York Times, November 2, 2011.

A recently revealed case about fraud may point to a much larger problem.

A well-known psychologist in the Netherlands whose work has been published widely in professional journals falsified data and made up entire experiments, an investigating committee has found. Experts say the case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability.

The psychologist accused of fraud took advantage of some common practices in the field.

Dr. Stapel was able to operate for so long, the committee said, in large measure because he was “lord of the data,” the only person who saw the experimental evidence that had been gathered (or fabricated). This is a widespread problem in psychology, said Jelte M. Wicherts, a psychologist at the University of Amsterdam. In a recent survey, two-thirds of Dutch research psychologists said they did not make their raw data available for other researchers to see. “This is in violation of ethical rules established in the field,” Dr. Wicherts said.

The field also appears to be rather careless about its statistical analyses.

In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.

This is not a surprise to psychologists.

Researchers in psychology are certainly aware of the issue. In recent years, some have mocked studies showing correlations between activity on brain images and personality measures as “voodoo” science, and a controversy over statistics erupted in January after The Journal of Personality and Social Psychology accepted a paper purporting to show evidence of extrasensory perception. In cases like these, the authors being challenged are often reluctant to share their raw data. But an analysis of 49 studies appearing Wednesday in the journal PLoS One, by Dr. Wicherts, Dr. Bakker and Dylan Molenaar [available here], found that the more reluctant scientists were to share their data, the more likely it was that the evidence contradicted their reported findings.


Submitted by Steve Simon

Remark

Andrew Gelman's blog has often considered questions of cheating in science. The following quote from E. J. Wagenmakers, a Dutch professor at the University of Amsterdam, appeared in a post from September 9 of this year:

Diederik Stapel was not just a productive researcher, but he also made appearances on Dutch TV shows. The scandal is all over the Dutch news. Oh, one of the courses he taught was on something like 'Ethical behavior in research', and one of his papers is about how power corrupts. It doesn’t get much more ironic than this. I should stress that the extent of the fraud is still unclear.

This is perhaps doubly ironic, in that the psychologists have been caught making psychological errors.

Submitted by Paul Alper

Another Remark

"Much of Prof. Stapel's work made it into newspapers in no small part because he delivered scientific evidence for contentions journalists wanted to believe …..”

Eric Felten reporting[2] in The Wall Street Journal, November 4, 2011

Other stories include “Diederik Stapel; The Lying Dutchman”, in The Washington Post and “Massive Fraud Uncovered in Work by Social Psychologist”, the latter an article reprinted in the Scientific American, with permission from Nature. Both articles are dated November 1, 2011.

Submitted by Margaret Cibes

Marilyn tackles a dice problem

Ask Marilyn, by Marilyn vos Savant, Parade, 23 October 2011

It has been a while since we've reported on an "Ask Marilyn" story. In the Sunday column referenced above, a reader asks:

I’m a math instructor and I think you’re wrong about this question [originally from Marilyn's July 23 column]: “Say you plan to roll a die 20 times. Which result is more likely: (a) 11111111111111111111; or (b) 66234441536125563152?” You said they’re equally likely because both specify the number for each of the 20 tosses. I agree so far. However, you added, “But let’s say you rolled a die out of my view and then said the results were one of those series. Which is more likely? It’s (b) because the roll has already occurred. It was far more likely to have been that mix than a series of ones.” I disagree. Each of the results is equally likely—or unlikely. This is true even if you are not looking at the result.

Marilyn responds: "My answer was correct. To convince doubting readers, I have, in fact, rolled a die 20 times and noted the result, digit by digit. It was either: (a) 11111111111111111111; or (b) 63335643331622221214. Do you still believe that the two series are equally likely to be what I rolled?"

Many people will remember that the infamous Monty Hall problem first gained national attention after appearing in an "Ask Marilyn" column in 1990. (More nostalgia: In the inaugural issue of Chance News, Laurie Snell described a New Yorker cartoon inspired by Monty's game show, Let's Make a Deal). One important lesson from that discussion was that the host's behavior mattered, and that the problem was not well-defined without a model for how he chooses a door to open.

In that spirit, it might be relevant to consider how the strings of "rolls" are produced. Marilyn's answer suggests that she was already planning to write down a string of twenty 1s along with her string of twenty actual rolls. If you know that is going to happen, then even before the roll has occurred, you might be prepared to guess in advance that the real string will not be the one consisting of twenty 1s.
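Marilyn's point can also be checked numerically. Before the roll, any particular 20-roll sequence has probability (1/6)^20, including the all-ones string; but among the 6^20 equally likely outcomes there is exactly one all-ones sequence, while "mixed-looking" sequences vastly outnumber it. A minimal Python sketch of the arithmetic:

```python
from fractions import Fraction

# Probability of any one specific 20-roll sequence of a fair die.
# Both (a) 11111111111111111111 and (b) 66234441536125563152
# have exactly this probability before the die is rolled.
p_specific = Fraction(1, 6) ** 20
print(float(p_specific))  # about 2.7e-16

# Total number of equally likely 20-roll sequences. Only one of them
# is all ones, while almost all of them "look mixed" -- which is why,
# told the result is one of the two listed strings, the mixed-looking
# one is the safer guess.
total = 6 ** 20
print(total)  # 3656158440062976
```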

Discussion

  1. The other lesson from the Monty Hall discussion was not to jump to the conclusion that Marilyn is wrong. So what do you think she had in mind when she wrote "because the roll has already occurred..."?
  2. I've just rolled a die twenty times. Which of the following do you think it is: (i) 14152653532346264333; or (ii) 61655214235336553132? Does your answer change if someone points out that (i) consists of the digits of pi after the decimal point, skipping the 0s, 7s, 8s, and 9s?

Submitted by Bill Peterson

Comment

Paul Alper wrote to point out an analogy with a famous classroom experiment, in which the instructor leaves the room while students compile lists of 200 "tosses" of a fair coin. Half the students toss a real coin, while the other half produce a string of imagined tosses. Upon return, the teacher classifies the strings as real or fake, depending on the length of the longest run. The imagined strings typically will not include long runs, but with probability 0.965 a real string of 200 tosses will contain a run of at least six consecutive heads or six consecutive tails (see discussion in the archives of the Chance Newsletter; the activity is also described in the chapter "Streaky Behavior" in Scheaffer et al., Activity-Based Statistics).
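The 0.965 figure is easy to check by simulation. A quick Python sketch (the trial counts and seed are arbitrary choices, not from the original activity):

```python
import random

def longest_run(tosses):
    """Length of the longest run of identical outcomes."""
    best = cur = 1
    for prev, nxt in zip(tosses, tosses[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

def estimate(n_trials=20000, n_tosses=200, run_len=6, seed=1):
    """Estimate P(longest run >= run_len) in n_tosses fair-coin flips."""
    rng = random.Random(seed)
    hits = sum(
        longest_run([rng.randint(0, 1) for _ in range(n_tosses)]) >= run_len
        for _ in range(n_trials)
    )
    return hits / n_trials

print(estimate())  # close to 0.965
```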

Grading health news reports

Gary Schwitzer's invaluable website HealthNewsReviews.org provides weekly reviews of news stories from the health field. (The project is sponsored by the Foundation for Informed Medical Decision Making.)

A recent story on the site gives the following overall summary of performance of the news media in providing accurate coverage. Schwitzer writes "After 5 years and 7 months, and after reviewing 1,648 stories and publishing nearly 1,300 blog posts, we've revised the site (for the second time)." Below is how these 1,648 stories fared on his rating system:

http://ih.constantcontact.com/fs055/1102072836906/img/117.png

The stories rated above come from 20 news organizations, including newspapers, magazines and web sources. When it comes to TV presentations of medical results, however, HealthNewsReviews.org has thrown in the towel and won't be reviewing them because, "After 3.5 years and 228 network TV health segments reviewed, we can make the data-driven statement that many of the stories are bad and they’re not getting much better."

Submitted by Paul Alper

The goal of reproducibility

Scientists' elusive goal: Reproducing study results
by Gautam Naik, Wall Street Journal, 2 December 2011

The WSJ says "This is one of medicine's dirty secrets: Most results, including those that appear in top-flight peer-reviewed journals, can't be reproduced." The article includes the following graphic summarizing the (largely unsuccessful) attempts by Bayer to reproduce published findings.

http://si.wsj.net/public/resources/images/P1-BD631_REPROD_NS_20111201165702.jpg

The article goes on to discuss various reasons for this state of affairs: pressure on researchers to publish, the increasing complexity of medical experiments, and the well-known bias of journals for publishing only positive results. Some of these issues were discussed in CN 5, which discussed John Ioannidis's 2005 article in PLoS Medicine, Why most published research findings are false.

The more reliable popular media have finally been convinced to carry pretty accurate statements about interpreting confidence intervals in polling results. Maybe the media - and even science journals themselves - need to be encouraged to carry a cigarette-like warning about study results: "Caution: Since science is an inductive process whose conclusions depend upon strong evidence that is reproducible, readers should take into account that any conclusions are preliminary, and should not be acted upon until further experiments have reinforced them."

Submitted by Margaret Cibes

QL in the Media Contest finalists

In CN 77, Margaret Cibes noted that the MAA SIGMAA on Quantitative Literacy was running a contest for best and worst examples of QL in the media. They have posted the entries here, where viewers are invited to cast their votes.

Suggested by Priscilla Bremser

Dilbert: Wally as "lurking variable"

http://dilbert.com/dyn/str_strip/000000000/00000000/0000000/100000/40000/4000/200/144270/144270.strip.gif
Source: http://dilbert.com/strips/comic/2011-11-28/

Teaching stats with sports

At Moneyball U, what are the odds?
by Alan Schwarz, New York Times, 4 November 2011

The title is a reference to the movie "Moneyball," which features the role of probability and statistics in the world of baseball. Nevertheless, the story leads with the comment that "Watching a baseball telecast may not be the best way to learn basic probability." Indeed, when a good hitter has gone hitless in his last 10 at-bats, announcers can't resist saying that he is now "due for a hit." The appeal is to a mythical Law of Averages that will even things out in the short run, but of course the real Law of Large Numbers promises no such thing. Similarly, fans will offer a variety of explanations for the much-discussed sophomore jinx, a phenomenon which can generally be accounted for by regression to the mean.

Given that many students can be readily engaged by such conversations, college courses have appeared that teach statistical concepts in the context of sports. The article describes such offerings at Stanford, Ohio State, Bowling Green, Louisiana Tech, and James Madison University, among others. It is a mistake, however, to assume that all students are sports fans. The article relates an anecdote from the James Madison course. When the professor asked if we should be surprised that the last 14 opening coin tosses at the Super Bowl have all been won by the N.F.C., one student asked "What's the N.F.C.?" Fortunately, this student still seemed to appreciate the lighter atmosphere of the class. Or, as Prof. Jim Cochran of Louisiana Tech observes, "You want them to demand data, to look for evidence, to test hypotheses. You have students who would otherwise not come close to this discipline. It’s very valuable."

The article catalogs a variety of probability and statistics techniques that are readily illustrated with sports data. We were intrigued by a reference to Simpson's paradox, which "helps explain why the Cincinnati Reds, despite having the best record in the National League during the strike-split 1981 season, didn’t make the playoffs." Details can be found in Prof. Cochran's article Bowie Kuhn's Worst Nightmare (INFORMS Transactions on Education, Vol 5, No 1, Sept 2004). This is a sophisticated discussion which applies integer programming to demonstrate the possibility of aggregation paradoxes. As Cochran states in the abstract, "although the case deals with a baseball-related problem, it is relatively self-contained and requires no understanding of how baseball is played, so students who are unfamiliar with the sport will not be seriously disadvantaged."
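Cochran's Reds analysis is not reproduced here, but the paradox itself is easy to illustrate with invented win-loss records (the numbers below are hypothetical, not the 1981 data): a team can have the better winning percentage in each half of a split season yet the worse percentage overall.

```python
from fractions import Fraction

# Hypothetical (wins, games) records in each half of a split season.
team_a = [(8, 10), (30, 90)]
team_b = [(70, 90), (3, 10)]

def pct(wins, games):
    return Fraction(wins, games)

# Team A wins a higher fraction of its games in BOTH halves...
for (aw, ag), (bw, bg) in zip(team_a, team_b):
    assert pct(aw, ag) > pct(bw, bg)

# ...yet Team B has the better overall record, because A played most
# of its games in the half where it did worse.
overall_a = pct(sum(w for w, _ in team_a), sum(g for _, g in team_a))
overall_b = pct(sum(w for w, _ in team_b), sum(g for _, g in team_b))
print(overall_a, overall_b)  # 19/50 vs 73/100
assert overall_b > overall_a
```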

Note: On the topic of Simpson's paradox, Tom Moore's 2006 article Paradoxes in Film Ratings (Journal of Statistics Education Vol 14, No 1) presents another interesting illustration, and also includes a table with references to favorite examples.

Submitted by Bill Peterson

Probabilist for president?

“A Pony for Every American? New Hampshire Primary Has It All”
The Wall Street Journal, December 6, 2011

A New Hampshire mathematician and Republican presidential primary candidate stated:

”I will accept any top-tier candidate's neutrally administered aptitude challenge that assesses the mental, physical and ethical qualities of leadership …. No other candidate comes close to my structured problem-solving abilities and demonstrated proficiency in probabilistic risk assessment." ….
“[I will] balance the federal budget through a mathematically superior tax platform that combines personal income, flat taxes, progressive taxes and capital gains into one elegant solution that no other candidate has formulated or is capable of generating.”

Submitted by Margaret Cibes

Queues

“Find the Best Checkout Line”
by Ray A. Smith, The Wall Street Journal, December 8, 2011

This article discusses queuing issues and several recent studies about them. It includes a 6-minute video about wait-time perceptions and their effects on customers, and a graphic giving a formula for expected wait time: “Average wait time = average number of people in line divided by their arrival rate.” (Operations researchers will recognize this as an illustration of Little's Law.)
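Little's Law says L = λW (average number in the system equals arrival rate times average wait), so the quoted formula is just the rearrangement W = L/λ. A toy Python check, with made-up numbers:

```python
def expected_wait(avg_in_line, arrival_rate):
    """Little's Law rearranged: W = L / lambda."""
    return avg_in_line / arrival_rate

# Hypothetical example: on average 6 people in line, customers
# arriving at 2 per minute, so the expected wait is 3 minutes.
print(expected_wait(6, 2.0))  # 3.0
```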

Submitted by Margaret Cibes

Population pyramids

“International Data Base”

Chance readers may be interested in population pyramids. They are nice examples of distributions, as well as opportunities for comparisons of age and gender characteristics within one country, or in the same country over different years, or across different countries. For the U.S., they illustrate clearly what Atul Gawande calls “the ‘rectangularization’ of survival”:

Throughout most of human history, a society's population formed a sort of pyramid: young children represented the largest portion – the base – and each successively older cohort represented a smaller and smaller group. In 1950, children under the age of five were eleven per cent of the U.S. population, adults aged forty-five to forty-nine were six per cent, and those over eighty were one percent. Today [2007], we have as many fifty-year olds as five-year-olds. In thirty years, there will be as many people over eighty as there are under five.

See Gawande’s article, “The Way We Age Now”, from The New Yorker, April 30, 2007.

To view population pyramids, go to the U.S. Census Bureau's "International Data Base" website[3], select a country and year (1950-2050), and choose the tab “Population Pyramids.”

Submitted by Margaret Cibes

Note: Instructions for creating your own pyramid plots in SAS or R are provided in a post by Nick Horton at SAS and R Blogspot.
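A population pyramid is simply a pair of back-to-back horizontal bar charts, one per sex, stacked by age group. For readers without SAS or R, here is a minimal text-based Python sketch; the age groups and counts are invented for illustration, not Census figures:

```python
# Illustrative data only: (age group, male count, female count), in millions.
data = [
    ("0-19",  42, 40),
    ("20-39", 41, 40),
    ("40-59", 40, 41),
    ("60-79", 25, 28),
    ("80+",    4,  7),
]

def pyramid(rows, width=30):
    """Render a text population pyramid: males on the left, females on the right."""
    peak = max(max(m, f) for _, m, f in rows)
    lines = []
    for age, m, f in reversed(rows):  # oldest group on top
        left = "#" * round(m / peak * width)
        right = "#" * round(f / peak * width)
        lines.append(f"{left:>{width}} {age:^5} {right}")
    return "\n".join(lines)

print(pyramid(data))
```

A younger population yields the classic triangle; "rectangularization" shows up as bars of nearly equal length from bottom to top.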

Modeling the financial world: some caveats

“Physics Envy”

Book review of Models Behaving Badly, in The Wall Street Journal, December 14, 2011

Emanuel Derman, author of Models Behaving Badly, is a Columbia professor who was trained as a physicist and later worked at Goldman Sachs. He writes that when people try to create financial models that involve human behavior, they “are trying to force the ugly stepsister's foot into Cinderella's pretty glass slipper”:

Although financial models employ the mathematics and style of physics, they are fundamentally different from the models that science produces. Physical models can provide an accurate description of reality. Financial models, despite their mathematical sophistication, can at best provide a vast oversimplification of reality.

Derman posted “The Financial Modelers’ Manifesto” online in 2009:
“I will remember that I didn't make the world, and it doesn't satisfy my equations.”
“Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.”
“I will never sacrifice reality for elegance without explaining why I have done so.”
“Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.”
“I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.”

Derman also states in his book, “[I]n physics you're playing against God, and He doesn't change His laws very often. In finance, you're playing against God's creatures."

Submitted by Margaret Cibes

Graphic on campaign contributions

Deep pockets, deeply political
by Charles Blow, New York Times, 19 December 2011

Blow describes a recent report The political one percent of the one percent, by the Sunlight Foundation, an organization dedicated to transparency in government. Blow has developed the following graphic summarizing the trends in campaign contributions by those in the top one-percent-of-one-percent of the income distribution.

http://graphics8.nytimes.com/images/2011/12/19/opinion/cs-blow-donors/cs-blow-donors-blog533.jpg

Submitted by Paul Alper
