Chance News 69

Revision as of 12:20, 5 January 2011


Quotations

"The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true."

John P.A. Ioannidis, in “Why Most Published Research Findings Are False”

Submitted by Paul Alper


"We’re not going to stop using algorithms. They’re too useful. But we need to be more aware of the algorithmic perversity that’s creeping into our lives. The short-term fit of a dating match or a Web page doesn’t measure the long-term value it may hold. Statistically likely does not mean correct, or just, or fair. .... It’s when people deviate from what we predict they’ll do that they prove they are individuals, set apart from all others of the human type."

Alexis Madrigal, in "Take the Data Out of Dating"
The Atlantic, December 2010

“Several said the government is making a crucial mistake by rating performance [of dialysis clinics] by lab tests, not outcomes or measures that reflect patients’ quality of life. ‘Mortality, morbidity, and infection—that’s the bottom line,’ said [a former dialysis-clinic owner]. ‘It’s easy to adjust the labs. What good is it if you have good numbers, but everyone’s dying or in the hospital?’”

Robin Fields, in “God Help You. You’re on Dialysis.”
The Atlantic, December 2010

“I had realized from experience that university people sometimes don't react well to common sense, and in any case, most of them listen to it harder if you first intimidate them with equations.”

W. D. Hamilton
quoted in a conversation with Dutch journalist Frans Roes[1]

Submitted by Margaret Cibes


"If you think you already know which theory is right, you are either a major scientist who has been concealing a vast mountain of unpublished research from the rest of the world, or else you are confusing wishful thinking with knowledge."

Daniel C. Dennett, in

Breaking the Spell: Religion as a Natural Phenomenon

Submitted by Paul Alper


"Instead of rhetoric, the politician favors figures of another kind: Today's infatuation with statistics is a bid for scientific exactness but tends to crowd out finesse."

Henry Hitchings, reviewing Farnsworth's Classical English Rhetoric

in The syntax of style, Wall Street Journal, 15 December 2010

Submitted by Paul Alper

Forsooth

“The mechanics of the ghostwriter’s job are fairly simple, [an anonymous ghostwriter] says. Early on, a medical-communications agency and its pharmaceutical-company sponsors will agree on a title for an article and a potential author, usually an academic physician with a reputation as a ‘thought leader.’ The agency will ask the thought leader to ‘author’ the article, sometimes in exchange for a fee. The ghostwriter will write the article, or perhaps an extended outline containing the message the company wants to transmit, and send it along to the physician, who may make some changes or simply sign it as written and submit it to a journal, usually scrubbed of any mention of the ghostwriter, the agency, or the pharmaceutical company. [The ghostwriter] says he rarely even sees the published articles he writes.”

Carl Elliott, in “Playing Doctor”
The Atlantic, December 2010

See also “Drug Maker Wrote Book Under 2 Doctors’ Names, Documents Say”, The New York Times, November 29, 2010.

Submitted by Margaret Cibes


"[A] former Colorado Springs state senator ... once claimed, 'I don’t know whether we need a bill on teen pregnancy because statistics show teen pregnancy drops off significantly after age 25.'”

Cited in "For Colorado lawmakers: People who live in glass houses shouldn't bowl"
The Colorado Independent, January 7, 2009

Submitted by Margaret Cibes at the suggestion of James Greenwood


Relationships cited in "Department of Awful Statistics", The Atlantic, April 2, 2009

http://i284.photobucket.com/albums/ll33/Jaguar_Fan_1/Lemongraph.jpg "Photobucket"
http://www.venganza.org/images/PiratesVsTemp.png "Church of the Flying Spaghetti Monster"

Submitted by Margaret Cibes


Spurious precision?

"Everybody trips on stairs at some time or other. It has been calculated that you are likely to miss a step once every 2,222 occasions you use stairs, suffer a minor accident once in every 63,000 uses, suffer a painful accident once in every 734,000, and need hospital attention once every 3,616,667 uses."
Bill Bryson, in At Home: A Short History of Private Life, p. 309
(data from The Staircase: Studies of Hazards, Falls and Safer Designs by John A. Templer)
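One way such over-precise figures can arise is by dividing a large, rounded exposure total by a very small event count. The sketch below is purely illustrative: the total of 10,850,000 uses and the count of 3 injuries are guesses chosen only because they happen to reproduce the quoted figure, not Templer's actual data.

```python
# Illustrative only: a hypothetical small event count divided into a
# large, rounded exposure total yields a falsely precise-looking rate.
uses = 10_850_000          # hypothetical rounded total of stair uses
hospital_injuries = 3      # hypothetical count of hospital-level injuries

rate = uses / hospital_injuries   # uses per hospital-attention injury
print(round(rate))                # 3616667, matching the quoted figure
```

The seven significant figures in "once every 3,616,667 uses" thus carry far more apparent precision than a three-event sample could support.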

Submitted by Paul Alper

Forsooths from the RSS News

The following Forsooths are from the December 2010 issue of the RSS News.

SLIPPER OVER: You always knew they were a rubbish present; now here's some evidence that you really shouldn't buy slippers for grandma. In a US study, 52 per cent of elderly people who fell at home were barefoot or wearing slippers at the time. 'Therefore, older people should wear shoes at home whenever possible to minimize their risk of falling,' said study author Dr Marian Hannan.

Metro

24 June 2010


Temperatures will be near average, at best.

BBC Radio weather forecast

28 August 2010

Submitted by Laurie Snell

Placebo contents

According to Wikipedia,

A placebo (Latin: I shall please) is a sham or simulated medical intervention that can produce a (perceived or actual) improvement, called a placebo effect. [The origin for the term placebo] dates back to a Latin translation of the Bible by Jerome. It was first used in a medicinal context in the 18th century. In 1785 it was defined as a "commonplace method or medicine" and in 1811 it was defined as "any medicine adapted more to please than to benefit the patient," sometimes with a derogatory implication.

Nowadays, so entrenched is the necessity of comparison to a placebo that any medical treatment trial without a placebo control arm would be viewed skeptically, both statistically and medicinally. But consider the provocative title of Golomb's article: “What's in Placebos: Who Knows? Analysis of Randomized, Controlled Trials.” Surprisingly,

No regulations govern placebo composition. The composition of placebos can influence trial outcomes and merits reporting.

The study looked at four prestigious journals: New England Journal of Medicine, JAMA, The Lancet and Annals of Internal Medicine. Included were 176 journal articles:

Most studies did not disclose the composition of the study placebo. Disclosure was less common for pills than for injections and other treatments (8.2% vs. 26.7%; P = 0.002).
Conclusion: Placebos were seldom described in randomized, controlled trials of pills or capsules. Because the nature of the placebo can influence trial outcomes, placebo formulation should be disclosed in reports of placebo-controlled trials.
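The 8.2% vs. 26.7% comparison (P = 0.002) is a standard two-proportion test. As a minimal sketch, the Python below uses hypothetical counts (7 of 86 pill trials vs. 24 of 90 injection/other trials) invented only to roughly match the reported percentages; Golomb's paper has the real numbers.

```python
import math

def two_prop_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical counts chosen to approximate the reported disclosure rates:
# 7/86 ≈ 8.1% (pills) vs. 24/90 ≈ 26.7% (injections and other treatments)
z, p = two_prop_z_test(7, 86, 24, 90)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these invented counts the test gives z ≈ 3.2 and p ≈ 0.001, in the neighborhood of the published P = 0.002.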

Discussion

1. Golomb cites the following example: “For instance, olive oil and corn oil have been used as the placebo in trials of cholesterol-lowering drugs.” Under the assumption that these oils might be beneficial, rather than inert, why does this understate the positive benefit of the treatment?

2. Golomb cites another example where a lactose placebo was used in a gastrointestinal trial. Under the assumption that the lactose was harmful, why does this overstate the positive benefit of the treatment?

3. Why is modern communication, e.g., the internet, Facebook, etc., a cause for concern when conducting a randomized controlled trial (with or without a placebo arm)?

4. Golomb further alleges, “failure to describe placebo ingredients breaches basic scientific standards of rigor.” Why would describing the placebo ingredients “disadvantage” the “publication prospects” of the researchers and “disadvantage” the publisher of the particular journal?

5. Medicine is not the only area of endeavor which should require a placebo arm. Name some others.

6. For the record, the term nocebo (I will harm) was coined in 1961 and refers to the negative effects of a sham or simulated medical intervention. An example sometimes given is a patient dying of fright after being bitten by a nonvenomous snake. Give examples of some other nocebos.

7. Why would prayer be considered a placebo? Why would prayer be considered a nocebo? A treatment?

8. Exorcism has been in the news lately. Is exorcism a treatment, a placebo or a nocebo?

9. Faith healing is always in the news. What distinguishes faith healing from exorcism from prayer? That is, why is prayer more commonly acceptable than either of the other two?

Submitted by Paul Alper

Research is slow

“An AIDS Advance, Hiding in the Open”, by Donald G. McNeil Jr., The New York Times, November 27, 2010.

The wheels of research machinery turn slowly. A recent article in the New York Times highlights some of the delays in obtaining data for a promising new drug for prevention of AIDS.

Last week, a clinical trial showed that taking Truvada, a pill combining two drugs, once a day would greatly reduce a gay man’s chances of getting infected with the dangerous virus. Although confirmatory studies are still needed, the practice — called “pre-exposure prophylaxis,” or “prep” — will, in theory, also protect sex workers, needle sharers, wives of infected men, prison inmates and anyone else at risk.

Truvada is not a new drug, and indications of its promise and potential were known years ago. So why has it taken so long to obtain this data?

The delay turns out to be a combination of scientific caution and the fiery politics of AIDS. While a medical advance can be made by a momentary flash of inspiration or luck — as legendarily happened with penicillin — proving that it works can take forever. And that is particularly true with AIDS, a disease surrounded by visceral fears, longstanding prejudices and the potential for huge profits.

Part of the problem has to do with the nature of prevention research.

Giving powerful drugs to healthy people is different from giving them to the desperately ill. No doctor would give cancer drugs to a healthy person. Prophylaxis is common with, for example, malaria drugs for travelers making brief sojourns in the tropics. But a drug to be taken all one’s life — or at least for all of one’s sex life — must be very safe.

There is a legal barrier to prevention research as well.

Another factor is that not every drug company wants to see its best treatment drugs, on which it earns billions of dollars, tested for prevention. Dying patients accept unpleasant side effects; healthy ones might sue.

And there were political pressures. AIDS activist groups stopped some earlier trials of Truvada.

At the 2004 International AIDS Conference in Bangkok, the Paris chapter of the AIDS activist group Act-Up unexpectedly attacked Gilead Sciences’ booth, spraying it with fake blood and accusing the company of experimenting on poor people. As Dr. Jaffe tells it, French activists “played the anti-U.S. card in Francophone countries” and stirred up sex workers’ unions in Cambodia, eventually leading the Cameroonian and Cambodian governments to stop their trials. Nigeria’s stopped for other reasons, though many Nigerians were hostile to drug companies because of rumors that polio vaccine was an anti-Muslim plot and because Pfizer had tested a new antibiotic on children with meningitis. “If not for this misplaced activism, we might have had an answer five years earlier,” Dr. Jaffe said.

Another scientist sees the activism differently.

Dr. Grant saw the same struggle differently. The activists were disruptive, he said, but also “raised significant questions” about whether participants would be protected from side effects and about who, if anyone, would pay for lifelong treatment if participants did eventually get AIDS. One result, he said, was that protocols were improved and more countries added: South Africa, Brazil, Peru, Ecuador, Thailand. But more important, he said, was the emergence of the two agencies that now pay for treatment in poor countries, the Global Fund to Fight AIDS, Tuberculosis and Malaria, and the President’s Emergency Plan for AIDS Relief. It took until about 2005 for most poor countries to take advantage of that aid.

Questions

1. Does our legal system make it too easy for research participants to sue? What would be the consequences, both positive and negative, if such suits were barred or at least made more difficult to process?

2. Do you view the involvement of the AIDS activist group, Act-Up, as a positive or negative influence on research?

3. What safeguards are needed for clinical trials that are conducted in third world countries?

Submitted by Steve Simon

Statistics from the Victorian Era

Analyzing Literature by Words and Numbers, Patricia Cohen, The New York Times, December 3, 2010.

A recent effort by Google allows a statistical analysis of the Victorian era.

The titles of every British book published in English in and around the 19th century — 1,681,161, to be exact — are being electronically scoured for key words and phrases that might offer fresh insight into the minds of the Victorians.

Here are some trends in the usage of words in these books:

http://graphics8.nytimes.com/images/2010/12/04/books/04victorian-graphic/04victorian-graphic-popup.gif

The research was made possible by Google, a fact that makes some scholars uneasy.

Some scholars are wary of the control an enterprise like Google can exert over digital information. Google’s plan to create a voluminous online library and store has raised alarms about a potential monopoly over digital books and the hefty pricing that might follow.

Google is trying to avoid any controversy.

But Jon Orwant, the engineering manager for Google Books, Magazines and Patents, said the plan was to make collections and searching tools available to libraries and scholars free. “That’s something we absolutely will do, and no, it’s not going to cost anything,” he said.

Questions

1. What are some of the limitations to the use of word counts?

2. What other applications besides historical studies could benefit from this database?

Submitted by Steve Simon

The Truth Wears Out

"The Truth Wears Off: Is there something wrong with the scientific method?"
by Jonah Lehrer, The New Yorker, December 13, 2010, p. 52

The theme is failure of replication after initial "successful" trials.

Submitted by Dick Williamson

Lehrer describes concerns about the decline in treatment effects found by researchers attempting to replicate earlier studies. He states that this decline has been occurring in a wide range of research fields, despite well planned experiments, and is not well known because journals prefer to publish positive effects that are dramatic and unexpected over reduced or absent effects.

In one research area, 8 out of 14 studies (57%) found a correlation between “symmetry and sexual selection” in 1995, but only 4 out of an additional 12 studies (33 1/3%) had confirmed the relationship by 1998. In fact, according to Lehrer, the average effect size shrank by 80% between 1992 and 1997.
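The replication rates above are simple proportions; a quick sketch confirms the quoted percentages from the counts alone (8 of 14 early studies vs. 4 of a further 12):

```python
from fractions import Fraction

early = Fraction(8, 14)   # studies through 1995 finding the symmetry effect
later = Fraction(4, 12)   # additional studies through 1998

print(f"early replication rate: {float(early):.1%}")   # ~57.1%
print(f"later replication rate: {float(later):.1%}")   # ~33.3%
print(f"relative drop in rate:  {float(1 - later / early):.1%}")
```

Note that the 80% figure in the text refers to shrinkage in average effect size, a separate measure from the replication rate computed here.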

Lehrer also reports:

In 2005, [Stanford epidemiologist John Ioannidis] published an article in [JAMA] that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. …. Because most of these studies were randomized controlled trials …they tended to have a significant impact on clinical practice, and led to the spread of treatments ….. Nevertheless, … of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.

See Ioannidis’s 2005 article, “Why Most Published Research Findings Are False”, in PLoS Medicine.

Besides “publication bias,” the issues of “significance chasing” and “selective reporting” of results need to be addressed if research results are to be relied upon for decision-making.

Lehrer concludes:

Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. …. And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. …. The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.

Submitted by Margaret Cibes

Survey says ...

From a review of Carol Graham’s Happiness Around the World, Oxford University Press, 2009,
by Prashanth Ak in “Toward an Economy of Well-Being,” Science, August 6, 2010:

To take one reason for caution, a good deal hinges on the order in which survey questions are posed. When undergraduates were asked how many dates they had the previous month followed by how happy they were with their lives in general, the correlation between the questions was high (r = 0.66). However, when the order of the questions was reversed, the correlation nearly vanished (r = –0.12).
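The r values quoted are Pearson correlations. As a minimal sketch of what is being computed, here is a plain-Python Pearson r applied to made-up responses (the survey's actual data are not given in the review; the numbers below are illustrative only):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical responses: dates last month, then life-satisfaction rating
dates = [0, 2, 5, 1, 8, 3]
happy = [3, 5, 8, 4, 9, 6]
print(f"r = {pearson_r(dates, happy):.2f}")
```

The code only shows how r is computed; the order effect itself is a priming phenomenon in how respondents answer, not a property of the correlation formula.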

Submitted by Margaret Cibes

Just joking

From Patrick Vennebush’s blog[2],

A statistician’s wife gives birth to twins. Excitedly, he calls everyone to share the good news. When he calls the minister, the minister says, ‘That’s terrific! Bring them down to church this Sunday, and we’ll baptize them!’
‘Uh, let’s just baptize one of them,’ says the statistician. ‘We can keep the other one as a control.’

Vennebush is Online Project Manager for NCTM, father of twins, and author of Math Jokes 4 Mathy Folks.

Submitted by Margaret Cibes

The Joy of Statistics

Hans Rosling in a 2010 BBC documentary: click here for fun

Submitted by Laurie Snell

See also "Chance News 27 - So You Think You Know How To Give a Statistics Lecture" for discussion and references with respect to Rosling's earlier videos (2006[3] or 2007[4]).

Submitted by Margaret Cibes
