Chance News 80

December 23, 2011 to January 17, 2012
==Quotations==
“The role of context. ....  The focus on variability naturally gives statistics a particular content that sets it apart from mathematics itself and from other mathematical sciences, but there is more than just content that distinguishes statistical thinking from mathematics.  Statistics requires a different kind of thinking, because <i>data are not just numbers, they are numbers with a context</i>.  ....  <i>In mathematics, context obscures structure.  ....  In data analysis, context provides meaning.</i>  ....  [A]lthough statistics cannot prosper without mathematics, the converse fails.”<br>
"What About Probability? .... In the ideal Platonic world of mathematics, we can start with a probabilistic chicken and use deductive logic to lay a statistical egg, but in the messier world of empirical science, we must start with the egg as observed data and construct a prior probabilistic chicken as an inference."
<div align=right>George Cobb and David Moore (authors' emphasis), in “Mathematics, Statistics, and Teaching”, <i>The American Mathematical Monthly</i>, 1997</div>
Submitted by Margaret Cibes
----
"[W]hile disciplines like physics or psychology or statistics discard projects and methodologies no longer regarded as cutting edge, if you like the way literary studies were done in 1950 or even 1930, there will be a department or a journal that allows you to proceed as if nothing had happened in the last 50 or 75 years."
<div align=right>Stanley Fish, writing in “The old order changeth”, <i>New York Times</i>, 26 December 2011</div>


Submitted by Paul Alper
----
"Apparently unsatisfied with the evidence that each study provides on its own, Bem et al. (2011) resort to the last refuge of the refuted: consider all experiments simultaneously."
<div align=right>Eric-Jan Wagenmakers, writing in [http://dl.dropbox.com/u/1018886/ClarificationsForBemUttsJohnson.pdf  Yes, psychologists must change the way they analyze their data: <br>Clarifications for Bem, Utts, and Johnson (2011)]</div>
Submitted by Paul Alper
----
“Statistical significance should be a tiny part of an inquiry concerned with the size and importance of relationships.”  [p. 2]<br>
“[Statistical significance] is a philosophical, qualitative test.  It does not ask how much.  It asks 'whether.'  ….  [S]ome of the putatively quantitative sciences have slipped into asking qualitatively whether there <i>exists</i> an effect ….  Yes or no, they say, and then they stop.  They have ceased asking the scientific question ‘How much is the effect?’  And they have therefore ceased being interested in the pragmatic questions that follow:  ‘What Difference Does the Effect Make?’ and ‘Who Cares?’  They have become, as we put it, ‘sizeless.’”  [pp.  4-5]<br>
“[S]tatistical significance is a philosophy of mere existence.  ….  It concerns itself only with one kind of probability of a (allegedly) randomly sampled event – the so-called exact <i>p</i>-value or Student’s <i>t</i> – and not with other kinds of sampling probability, such as the power of the test … [or] <i>nonsampling</i> sources of error, such as ... measurement error … [or] experimental error and sample selection bias.”  [pp. 7-8]<br>
<div align=right>Ziliak and McCloskey in <i>The Cult of Statistical Significance</i>, 2008</div>
Submitted by Margaret Cibes


==Forsooth==
“[A researcher] has been funded in part by the U.S. government’s Monty Python-esquely named Office of Research Integrity’s Research on Research Integrity Program.”
<div align=right>David H. Freedman, in <i>Wrong</i>, 2010, p. 106</div>
Submitted by Margaret Cibes
----
“[A] political science professor at Southern Connecticut State University … has developed a mathematical formula to assess presidential success. … Her model, she said, ‘explains 50 percent in the variance in the quality of the president, which is awfully good ….’"
<div align=right>in “Southern Professor Links Presidential Success To Prior Experience”, <i>The Hartford Courant</i>, January 2, 2012</div>
Submitted by Margaret Cibes
----
"When I was in NYC I went to this party by group of Japanese bio-scientists. There, one guy told me about how the biggest pharmaceutical company in Japan did their statistics. They ran 100 different tests and reported the most significant one. (This was in 2006 and he said they stopped doing this few years back so they were doing this until pretty recently…) I’m not sure if this was 100 multiple comparison or 100 different kinds of test but I’m sure they wouldn’t want to disclose their data…"
<div align=right>posted by a colleague on Andrew Gelman's blog, 24 December 2011</div>
Submitted by Paul Alper

==Weirdness==
<i>Why People Believe Weird Things: Pseudo-Science, Superstition, and Other Confusions of Our Time</i><br>
by Michael Shermer, MIF Books, 1997, p. 54<br>


Shermer is founding publisher of [http://www.skeptic.com/ ''Skeptic'' magazine] and a <i>Scientific American</i> [http://www.scientificamerican.com/author.cfm?id=597 columnist].  This book contains his list of “Twenty-five Fallacies That Lead Us to Believe Weird Things.”  The fallacies are not new, but are well illustrated by many interesting historic and contemporary stories.<br>
See Shermer's 13-minute [http://www.ted.com/talks/michael_shermer_on_believing_strange_things.html TED Talk], including a demonstration of a 900-dollar “dowser” designed to find marijuana in kids’ lockers.  Shermer states: <blockquote>Science is not a thing, it’s a verb.  It’s a way of thinking about things.  It’s a way of looking for natural explanations for all phenomena.</blockquote>


===Question===
Shermer states:
<blockquote>[M]ost people have a very poor understanding of the laws of probability.  ….  The probability that two people in a room of thirty people will have the same birthday is .71.</blockquote>
Ignoring issues such as leap years or twins, and assuming a uniform distribution of real-life birthdays, do you agree with the probability as stated – or could you modify the statement to make it more accurate?
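
A minimal check of Shermer's figure, assuming 365 equally likely birthdays (no leap years, no twins):
<pre>
# Exact probability that at least two of n people share a birthday,
# under a uniform-365 model.
def p_shared_birthday(n):
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

print(round(p_shared_birthday(30), 3))  # 0.706, i.e. about .71
</pre>
The .71 is right for the event that ''at least'' two of the thirty share a birthday, which suggests one way the statement might be made more precise.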
Submitted by Margaret Cibes

==Improving IQs==
“Ways to Inflate Your IQ”<br>
by Sue Shellenbarger, <i>The Wall Street Journal</i>, November 29, 2011<br>

This is a report about a potpourri of research projects that claim to show that an IQ can change over time.  Some sample IQ test questions, with suggestions about how to increase an IQ, are also provided.  There is no discussion about what an IQ test measures.
<blockquote>In the latest study, 33 British students were given IQ tests and brain scans at ages 12 to 16 and again about four years later …; 9% of the students showed a significant change of 15 points or more in IQ scores.<br>
On a scale where 90 to 110 is considered average, one student's IQ rose 21 points to 128 from 107, lifting the student from the 68th percentile to the 97th compared with others the same age, [according to a co-author] of the study, published last month in Nature.</blockquote>
===Questions===
# We are told that 33 British students took an IQ test twice.  It is conceivable that there were additional students who participated in the first test administration but not the second.  Would it be helpful, for inference purposes, to have information about any such students, such as reasons for their non-participation in the second administration?
# When people are IQ tested over time, do you think that they are given the same test (or a parallel version), or might a subsequent test include different skills/concepts appropriate for an older group?
# On one commonly used IQ test, scores are standardized to mean 100 and standard deviation 15.  This is consistent with the claim that a 107 score rising to 128 corresponds to a 68th percentile score rising to the 97th (a quick check appears after this list).  How would you equate the test scores from two administrations if the tests were different, in order to account for the two tests' possibly different difficulty levels?  (See [http://tigger.uic.edu/~georgek/HomePage/EdMeasurement/livingston.pdf “Equating Test Scores”] by Samuel Livingston, Educational Testing Service, 2004.)
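
The percentile claim in question 3 can be verified with a short sketch, assuming IQ scores follow a Normal(100, 15) distribution (the standardization quoted above):
<pre>
# Percentile of an IQ score under a Normal(100, 15) model;
# the standard normal CDF is computed from math.erf.
from math import erf, sqrt

def percentile(iq, mean=100, sd=15):
    z = (iq - mean) / sd
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

print(round(percentile(107)))  # 68 -> 68th percentile
print(round(percentile(128)))  # 97 -> 97th percentile
</pre>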


Submitted by Margaret Cibes
==Winning the fight against crime by putting your head in the sand==


[http://www.nytimes.com/2011/12/31/nyregion/nypd-leaves-offenses-unrecorded-to-keep-crime-rates-down.html Police Tactic: Keeping Crime Reports Off the Books], by Al Baker and Joseph Goldstein, ''New York Times'', December 30, 2011.


Police officers are joining just about every other profession in trying to skew the statistics to make themselves look good.
<blockquote>Crime victims in New York sometimes struggle to persuade the police to write down what happened on an official report. The reasons are varied. Police officers are often busy, and few relish paperwork. But in interviews, more than half a dozen police officers, detectives and commanders also cited departmental pressure to keep crime statistics low.</blockquote>
The message about reducing police reports comes in many subtle ways.
<blockquote>Officers sometimes bend to pressure by supervisors to eschew report-taking. “Cops don’t want a bad reputation, and stigma,” one commander said. “They know they have to please the sergeants.”</blockquote>
This pressure comes from even higher up.
<blockquote>The sergeants, in turn, are acting on the wishes of higher-ups to keep crime statistics down, a desire that is usually communicated stealthily, the commander said. As an era of low crime continues, and as 2011 draws to a close with felony numbers running virtually even with last year’s figures, any new felony is a significant event in a precinct and a source of consternation to commanders.</blockquote>
Part of the problem is the broad discretion that police officers apply.
<blockquote>In one case, Sandra Ung, 37, went to the Fifth Precinct in Chinatown after her wallet disappeared at a Starbucks. "I had it and then it was gone," she said of the Feb. 23 episode. She said she believed her wallet had been stolen, but could not prove it. She assumed the police had recorded it as pickpocketing, but when she retrieved a copy of the report days later, she saw it was recorded not as a crime, but as lost property that had gone "missing in an unknown manner."</blockquote>
The report noted that the victim had not felt anything that would indicate the actions of a pickpocket. But interestingly, the standards for categorizing the event as a crime were not this strict.
<blockquote>The guidelines focused on the very words that the police used to discount her suspicions: "The victim does not need to have witnessed, felt or otherwise been aware of being bumped or jostled in order to properly record the occurrence as grand larceny."</blockquote>


===Questions===
# The report discusses ways in which the underreporting of crimes could be measured. Discuss those approaches and suggest any additional approaches that could be used to detect the extent of this problem.
# Why is the desire to keep crime statistics low a short-sighted policy?


Submitted by Steve Simon
==Marilyn slips up on a drug testing question==
Jerry Grossman wrote to point out an error in a recent "Ask Marilyn" column.

:Ask Marilyn: What's the Probability of Being Chosen for a Drug Test?
:by Marilyn vos Savant, ''Parade'', 25 December 2011

A reader asks:
<blockquote>I manage a drug-testing program for an organization with 400 employees. Every three months, a random-number generator selects 100 names for testing. Afterward, these names go back into the selection pool. Obviously, the probability of an employee being chosen in one quarter is 25 percent. But what’s the likelihood of being chosen over the course of a year?</blockquote>

Marilyn responds, "The probability remains 25 percent, despite the repeated testing. One might think that as the number of tests grows, the likelihood of being chosen increases, but as long as the size of the pool remains the same, so does the probability."  Jerry observes that she seems to be answering the wrong question (What is the chance of being chosen in any particular quarter during the year?), rather than the one the reader intended (What is the chance of being chosen at least once?), for which the answer would be <math>1-(3/4)^4</math>, or about 68 percent.  He adds that Marilyn's readers quickly began commenting.

In another message, Dom Rosa noted the blunder and sent us a link to the following week's column:

:Ask Marilyn: Did Marilyn Make a Mistake on the Drug-Testing Question?
:by Marilyn vos Savant, ''Parade'', 2 January 2012


Here Marilyn suggests that the original problem may have been ambiguous.  Nevertheless, she does print corrections from Jerry and another reader. She also swears off eggnog, and promises a followup on January 22.  Check back here for the [http://test.causeweb.org/wiki/chance/index.php/Chance_News_81#Marilyn.27s_correction_on_the_drug-testing_problem update].
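
As a quick check on Jerry's calculation, here is a minimal sketch (assuming each quarter independently draws 100 of the 400 names, with all names returned afterward):
<pre>
# Chance of being selected at least once in four quarterly draws,
# each drawing 100 names from a pool of 400.
import random

print(1 - (3/4)**4)  # exact: 0.68359375

trials = 10_000
hits = sum(
    any(0 in random.sample(range(400), 100) for _ in range(4))  # track employee 0
    for _ in range(trials)
)
print(hits / trials)  # simulation: about 0.68
</pre>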


==Impact of large philanthropists on research==
“Got Dough? How Billionaires Rule Our Schools”<br>
by Joanne Barkan, <i>Dissent</i>, Winter 2011<br>

The author discusses what she sees as potential problems with the 4 billion dollars in annual funding of educational innovation and research by the Gates, Broad, and Walton families through their foundations.  Here is an excerpt about Gates’ funding of malaria research:
<blockquote>[In 2008], the <i>New York Times</i> reported on a memo that it had obtained, written by Dr. Arata Kochi, head of the World Health Organization’s malaria programs, to WHO’s director general. Because the Gates Foundation was funding almost everyone studying malaria, Dr. Arata complained, the cornerstone of scientific research—independent review—was falling apart.<br>
Many of the world’s leading malaria scientists are now “locked up in a ‘cartel’ with their own research funding being linked to those of others within the group,” Dr. Kochi wrote. Because “each has a vested interest to safeguard the work of the others,” he wrote, getting independent reviews of research proposals “is becoming increasingly difficult.”<br>
The director of global health at Gates responded predictably: “We encourage a lot of external review.” But a lot of external review does not solve the problem, which is structural. It warps the work of most philanthropies to some degree but is exponentially dangerous in the case of the Gates Foundation. Again, Frederick Hess in <i>With the Best of Intentions</i>: “…Researchers themselves compete fiercely for the right to evaluate high-profile reform initiatives. Almost without exception, the evaluators are hired by funders or grantees….Most evaluators are selected, at least in part, because they are perceived as being sympathetic to the reform in question.”</blockquote>

Submitted by Margaret Cibes

==Scientific misconduct==
Disgrace: On Marc Hauser; A case of scientific misconduct at Harvard.<br>
by Charles Gross, <i>The Nation</i>, 21 December 2011 (in the print edition January 9-16, 2012)

This is a lengthy article about Marc Hauser's research fabrication at Harvard.  The ''Boston Globe'' broke the story in August of 2010; Hauser resigned from Harvard last summer.  Hauser's research involved cognition in chimpanzees.  As described in the ''Globe'', Hauser's 2002 paper in the journal <i>Cognition</i> was retracted because of unspecified data irregularities, for which Hauser reportedly took responsibility.

Gross is critical of the secrecy surrounding Harvard's internal investigation.  He writes:
<blockquote>The procedures and conclusions of the investigation raise many questions. Its methods and results remain secret. Its procedures bore no relation to the due process that is the goal of our judicial system. We have no clear idea of the exact nature of the evidence, of how many studies were examined and if anyone besides the three whistleblowers and Hauser was asked to testify. I was told by one of the whistleblowers that, to this person’s surprise and relief, the committee, which included scientists, did look carefully at evidence, even going so far as to recalculate statistics.</blockquote>

Earlier in the article, we find this ironic note:
<blockquote>
In an interview titled “On How We Judge Bad Behavior,” made a few months before the ''Globe'' broke the story of Harvard’s investigation and [http://www.youtube.com/watch?v=963yf616npc available on YouTube], Hauser discusses psychopaths and suggests that they “know right from wrong but just don’t care.”
</blockquote>
Andrew Gelman notes [http://andrewgelman.com/2011/09/some-thoughts-on-academic-cheating-inspired-by-frey-wegman-fischer-hauser-stapel/  on his blog] that he received this comment on the Hauser case from E. J. Wagenmakers:
<blockquote>
One of the problems is that the field of social psychology has become very competitive, and high-impact publications are only possible for results that are really surprising. Unfortunately, most surprising hypotheses are wrong. That is, unless you test them against data you’ve created yourself. There is a slippery slope here though; although very few researchers will go as far as to make up their own data, many will “torture the data until they confess”, and forget to mention that the results were obtained by torture….
</blockquote>


Submitted by Paul Alper
==More scientific misconduct==
[http://www.courant.com/health/connecticut/hc-uconn-researcher-fabrications-0112-20120111,0,6684325.story “Investigation Finds UConn Professor Fabricated Research”]<br>
by William Weir and Kathleen Megan, <i>The Hartford Courant</i>, January 12, 2012<br>
After a three-year investigation, the University of Connecticut has stopped all externally funded research in a Health Center researcher’s lab and declined 890K dollars in research grants to the researcher, due to 145 instances of alleged “fabricated research.”  The U.S. Office of Research is investigating, after receiving a 60,000-page report from UConn.<br>
<blockquote>One focus of the investigation was a set of images in these research papers representing "Western blots" — an analytical technique used to detect certain proteins in tissue samples. Generally, results of these experiments are represented with a series of bands, one for each experiment conducted.<br>
The investigation found several instances of these images being manipulated: Some were spliced together, some duplicated, some erased. Many bands that had nothing to do with the particular experiments were cut and pasted into the studies. The report states that these kinds of manipulations can be done simply with such software programs as Adobe Photoshop.</blockquote>
The report suggests that “one of the curious aspects” of this case relates to the division of responsibilities of the research team:
<blockquote> Some lab members — even when they were the first authors on the papers — had no role in biochemical analyses or preparing figures. Compartmentalizing the work in such a way, according to the report, would make it harder to trace any fabrication to its origin.</blockquote>
The researcher has denied any knowledge of the alleged manipulation. <br>
Several other recent instances of alleged scientific fraud are briefly described in a [http://www.courant.com/health/connecticut/hc-research-fraud-0112-20120111,0,1361230.story separate article].<br>
Submitted by Margaret Cibes
==Surprising dreidel outcome==
[http://www.timesunion.com/local/article/A-one-in-trillions-dreidel-game-2427950.php A one-in-trillions dreidel game]<br>
by Paul Grondahl, ''Times Union'' (Albany, NY), 28 December 2011

Here is a local news story concerning a remarkable streak in a family game of dreidel, whose rules are described in the article as follows:
<blockquote>
The four-sided spinning top features letters of the Hebrew alphabet on each side: nun, gimel, hei and shin. The players put a penny in a pot at the center of the table. Each player took a turn and spun the dreidel. If the nun faced up, the player did nothing. A gimel meant the player got all the pennies in the pot. A hei roll won half the pot. Shin required the player to add a penny to the pot.
</blockquote>
To the amazement of everyone present, Alfred Lorini compiled a streak of 68 spins that included 56 gimels and zero shins.  According to the article, his great-nephew "used a binomial distribution and came up with 1-in-2.25 times 10 to the 22nd power for the order of magnitude."

In fact, this is the binomial probability for obtaining exactly 56 successes (gimels) in 68 trials (spins) with success probability 1/4;  this assumes independent spins of a perfectly balanced dreidel.  But there are two problems here.  First, we should ask for the chance of 56 ''or more'' successes.  This adjustment does not change the order of magnitude of the probability, which becomes 1 in <math>2.09 \times 10^{22}</math>.  However, the binomial description is not correct because it allows the non-gimel rolls to be nun, hei or shin.  Mr. Lorini's feat was more unusual in that the non-gimels did not include any shins.  Thus we really need to consider a multinomial situation, with categories (gimel, shin, neither), for which the probabilities are (1/4, 1/4, 1/2).  The chance of 56 or more gimels in 68 rolls, with zero shins, is then calculated as 1 in <math>2.62 \times 10^{24}</math>.
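
These figures can be reproduced with a short calculation (a minimal sketch, assuming a fair four-sided dreidel and independent spins):
<pre>
# Exact binomial and multinomial tail probabilities for the streak.
from math import comb

n = 68  # total spins

# Binomial: P(exactly 56 gimels), the figure quoted in the article.
p_exact = comb(n, 56) * (1/4)**56 * (3/4)**(n - 56)

# Binomial: P(56 or more gimels).
p_tail = sum(comb(n, k) * (1/4)**k * (3/4)**(n - k) for k in range(56, n + 1))

# Multinomial: P(56 or more gimels and zero shins).  Each spin is
# gimel (1/4), shin (1/4), or neither (1/2), so k gimels with the
# other n-k spins all "neither" has probability C(n,k)(1/4)^k(1/2)^(n-k).
p_multi = sum(comb(n, k) * (1/4)**k * (1/2)**(n - k) for k in range(56, n + 1))

print(f"1 in {1/p_exact:.3g}")  # about 1 in 2.25e+22
print(f"1 in {1/p_tail:.3g}")   # about 1 in 2.09e+22
print(f"1 in {1/p_multi:.3g}")  # about 1 in 2.62e+24
</pre>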
'''Discussion'''<br>
In an effort to describe the order of magnitude of the (originally reported) answer, the article reports that the figure was 22.5 billion times 1 trillion.  Do you think this helps the lay person to understand it?  Can you suggest an alternative?

Submitted by Bill Peterson, based on a suggestion from Adam Peterson


==A million monkeys==
“A million monkeys and Shakespeare”<br>
by Jesse Anderson, <i>Significance</i>, December 2011
The author is a software engineer who has “created a computer program using the Hadoop framework to simulate a million monkeys randomly typing,” and he claims that the program has reproduced all of the works of Shakespeare.  He gives an overview and technical details about his project [http://www.jesse-anderson.com/2011/09/a-few-million-monkeys-randomly-recreate-shakespeare/ at his website], both in text and in two videos.  He says that he ran out of cloud-computing space and had to revert to working on his home computer.<br>
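
Anderson's website has the real implementation details; as a toy illustration of the waiting-time problem the monkeys face (not his method), consider:
<pre>
# How many random keystrokes until a short target phrase appears?
import random
import string

def keystrokes_until(target, alphabet=string.ascii_lowercase + " ", seed=1):
    rng = random.Random(seed)
    window, count = "", 0
    while window != target:
        # keep a sliding window of the most recent len(target) keystrokes
        window = (window + rng.choice(alphabet))[-len(target):]
        count += 1
    return count

for phrase in ["to", "be"]:
    print(phrase, keystrokes_until(phrase))

# The expected wait grows roughly like 27**len(phrase):
print(27 ** len("to be or"))  # already hundreds of billions of keystrokes
</pre>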


The <i>Significance</i> article is apparently available online only by subscription.  (I paid 52 dollars for the four 2011 paper issues; online access seems to require a separate fee.)<br>


Submitted by Margaret Cibes<br>


===Note===
This story calls to mind the following tongue-in-cheek quotation, which also appeared in the first installment of the Chance News Wiki:


:"We've heard that a million monkeys at a million keyboards could produce the Complete Works of Shakespeare; now, thanks to the Internet, we know this is not true."
<blockquote>"We've heard that a million monkeys at a million keyboards could produce the Complete Works of Shakespeare; now, thanks to the Internet, we know this is not true."</blockquote>
<div align=right>[http://www.eecs.berkeley.edu/Faculty/Homepages/wilensky.html Robert Wilensky], Professor Emeritus of Electrical Engineering and Computer Science, UC Berkeley</div>


==Ethics in economic reports==
[http://online.wsj.com/article/SB10001424052970203436904577148940410667970.html?KEYWORDS=ben+casselman “Economists Set Rules on Ethics”]<br>
by Ben Casselman, <i>The Wall Street Journal</i>, January 9, 2012<br>


<blockquote>Under new rules adopted by the American Economic Association at its annual meeting here last week, economists will have to disclose financial ties and other potential conflicts of interest in papers published in academic journals. .... Under the policy, which will be enacted over the course of the next year, authors submitting papers to academic journals must disclose to the journal's editors all sources of financing for the research and all "significant" financial relationships with groups or individuals with a "financial, ideological or political stake" in the research. The policy defines "significant" as financial support to an author and immediate family members totaling at least $10,000 in the past three years. The journals will then make public what their editors deem "relevant potential conflicts of interest."</blockquote>


<blockquote>[C]riticisms [related to the lack of transparency in financial reporting/analysis] were made most prominently in the 2010 film "Inside Job," which won an Academy Award for best documentary in 2011. The movie highlighted prominent economists' ties to companies and governments that later collapsed in the financial crisis.</blockquote>


The film “Inside Job” is available on Netflix for online or at-home-TV viewing.  See a brief excerpt from the film “Inside Job” on YouTube, [http://www.youtube.com/watch?v=8lHvTKzfu8Q “Inside Job Clip”].  It shows the interview with a Columbia economist who is an advocate for de-regulation of the financial markets, and includes a question to him about why the title of one of his 2006 books (#17 on his [http://www0.gsb.columbia.edu/faculty/fmishkin/VITA.pdf CV]) has had its title changed from <i>Financial Stability in Iceland</i> to, currently, <i>Financial Instability in Iceland</i>; the professor suggests that this was a typo.<br>
===Discussion===
The <i>WSJ</i> article states “[A Harvard economist] drew a distinction between fields like medicine, where researchers can suppress data that don't support their or their sponsors' desired outcomes, and economics, where most research is based on publicly available information.”  What do you think about such a distinction?<br>


Submitted by Margaret Cibes
==Polling content problems==
[http://blogs.wsj.com/numbersguy/when-polls-turn-up-the-wrong-number-1109/?KEYWORDS=bialik “When Polls Turn Up the Wrong Number”]<br>
Blog by Carl Bialik, <i>The Wall Street Journal</i>, January 6, 2012<br>
Carl Bialik (the WSJ Numbers Guy) writes about political scientists’ explanations for poll results that are less than helpful.  Here are a few explanations from academics; while not all are new to <i>Chance</i> readers, they do remind us and our students of some important polling issues:<br>
*“Most [people] are not critical consumers of knowledge and rarely seek out multiple sources of information to verify statistical findings.”<br>
*“People have enough trouble estimating numbers they control, let alone ones that may never personally affect them.”<br>
*“Pollsters shouldn’t be so quick to stop surveys with people who say they don’t plan to vote: 55% of that group ended up voting in the 2008 general election, compared to 87% of those who were almost certain they’d cast a ballot.”[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1971846]<br>
*“Everyone, no matter how well-educated and well-versed in quantitative thinking, is vulnerable to bad math: One classic study found that Wharton graduate students two decades ago — a decade before the Sept. 11 attacks — were willing to pay more for a travel-insurance policy that covered just terrorism than for one that covered all travel interruptions, including terrorism.”[http://opim.wharton.upenn.edu/risk/downloads/archive/arch41.pdf]<br>
*“Most survey respondents aren’t thinking too hard about these questions and there is no consequence to them giving a wrong answer. Would they do better if they thought harder or if they had more at stake?”<br>
*“Disproportionate media coverage of a certain issue might also spark inflated estimates of its scope.”<br>
A Princeton neuroscientist suggests one way to improve numeracy:  “Every policy story should contain an example that is typical, again reflecting true probabilities. ‘MAN LOSES LOTTERY’ — how’s that for a headline?”<br>
And the saddest commentary of all, which has been backed up by some recent behavioral studies:  “Even if people improved their statistical literacy and absorbed the correct number, then adjusted their views, any one issue is unlikely to shift their votes ….  Even if they knew the right answers, there is little evidence that anyone would change their [sic] votes.”<br>
See Bialik's related article, [http://online.wsj.com/article/SB10001424052970203471004577144632919979666.html?KEYWORDS=bialik "Americans Stumble on Math of Big Issues"], in <i>The Wall Street Journal</i>, January 7, 2012.<br>
Submitted by Margaret Cibes
===Comment===
Paul Alper wrote to say he had taken Margaret's advice to see the related article. There we read:
<blockquote>
"We found that people resisted any attempts to give them accurate information," says James Kuklinski, a political scientist at the University of Illinois.  He and colleagues asked Illinois residents for their opinions and factual beliefs on welfare. More than 60% supplied an estimate of the percentage of U.S. families on welfare that was more than double the correct proportion, among other misfires. Those most misinformed were most confident in their estimates, according to the 2000 paper. And a subgroup supplied with the right numbers didn't change their views in a meaningful way.
</blockquote>
This is not, Paul notes, a ringing endorsement for education.
