Chance News 79

From ChanceWiki
Revision as of 18:04, 4 November 2011

Quotations

"...risk, essentially, is measurable whereas uncertainty is not measurable.

"In Mr. Cain’s case, I think we are dealing with an instance where there is considerable uncertainty."

--Nate Silver, writing in Herman Cain, outlier, FiveThirtyEight blog, The New York Times, 27 October 2011

Submitted by Paul Alper


"Experts have a poor understanding of uncertainty. Usually, this manifests itself in the form of overconfidence: experts underestimate the likelihood that their predictions might be wrong. …. [E]xperts who use terms like “never” and “certain” too often are playing Russian roulette with their reputations."

"I used to be annoyed when the margin of error was high in a forecasting model that I might put together. Now I view it as perhaps the single most important piece of information that a forecaster provides. When we publish a forecast on FiveThirtyEight, I go to great lengths to document the uncertainty attached to it, even if the uncertainty is sufficiently large that the forecast won’t make for punchy headlines."

"Another fundamental error: when you have such little data, you should almost never throw any of it out, and you should be especially wary of doing so when it happens to contradict your hypothesis."

--Nate Silver, writing in Herman Cain and the Hubris of Experts, FiveThirtyEight blog, The New York Times, 27 October 2011

Submitted by Margaret Cibes

Forsooth

Fraud may just be the tip of the iceberg

Fraud Case Seen as a Red Flag for Psychology Research by Benedict Carey, The New York Times, November 2, 2011.

A recently revealed case about fraud may point to a much larger problem.

A well-known psychologist in the Netherlands whose work has been published widely in professional journals falsified data and made up entire experiments, an investigating committee has found. Experts say the case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability.

The psychologist accused of fraud took advantage of some common practices in the field.

Dr. Stapel was able to operate for so long, the committee said, in large measure because he was “lord of the data,” the only person who saw the experimental evidence that had been gathered (or fabricated). This is a widespread problem in psychology, said Jelte M. Wicherts, a psychologist at the University of Amsterdam. In a recent survey, two-thirds of Dutch research psychologists said they did not make their raw data available for other researchers to see. “This is in violation of ethical rules established in the field,” Dr. Wicherts said.

The field also appears to be rather careless about its statistical analyses.

In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.
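The kind of consistency check behind such an audit can be approximated mechanically: recompute the p-value implied by a reported test statistic and flag any mismatch with the p-value the paper states. A minimal sketch of the idea, using a two-sided z test for simplicity (the function names and numbers below are illustrative, not drawn from the Wicherts–Bakker analysis):

```python
import math

def two_sided_p_from_z(z: float) -> float:
    """Two-sided p-value implied by a z statistic under the standard normal."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def reported_p_is_consistent(z: float, reported_p: float, tol: float = 0.01) -> bool:
    """Return True if the reported p-value matches the one the statistic implies."""
    return abs(two_sided_p_from_z(z) - reported_p) <= tol

# Hypothetical reported result: z = 1.96 with p = .05 -- internally consistent.
print(reported_p_is_consistent(1.96, 0.05))  # True
# Hypothetical mismatch: z = 1.50 implies p ~ .13, not the reported .03.
print(reported_p_is_consistent(1.50, 0.03))  # False
```

An error would be counted as changing a reported finding when, as in the second case, the recomputed p-value crosses the significance threshold the authors used.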

This is not a surprise to psychologists.

Researchers in psychology are certainly aware of the issue. In recent years, some have mocked studies showing correlations between activity on brain images and personality measures as “voodoo” science, and a controversy over statistics erupted in January after The Journal of Personality and Social Psychology accepted a paper purporting to show evidence of extrasensory perception. In cases like these, the authors being challenged are often reluctant to share their raw data. But an analysis of 49 studies appearing Wednesday in the journal PLoS One, by Dr. Wicherts, Dr. Bakker and Dylan Molenaar, found that the more reluctant that scientists were to share their data, the more likely that evidence contradicted their reported findings.

Item 2