Chance News 46


Quotations

"I find all novels lacking in probability."

"Probability is the bane of the age. Every Tom, Dick and Harry thinks he knows what is probable. The fact is most people have not the smallest idea what is going on round them. Their conclusions about life are based on utterly irrelevant--and usually inaccurate--premises."

Spoken by a music critic and a musician, respectively, on page 212 of Casanova's Chinese Restaurant by Anthony Powell, the fifth novel in his twelve-volume series, A Dance to the Music of Time.

Submitted by Paul Alper


Life is a gamble, at terrible odds;
If it were a bet, you wouldn't take it.


Tom Stoppard
Rosencrantz and Guildenstern Are Dead


Submitted by Laurie Snell


Forsooths

"In the last five months, according to the Federal Reserve Board, the money supply in the United States has increased by 271 percent. It has almost tripled."

by Dick Morris in "The Hill"

3 March 2009


The following Forsooth is from the April 2009 issue of the RSS News:

Europe's particle physics lab, CERN, is losing ground rapidly in the race to discover the elusive Higgs boson, or 'God particle', its US rival claims ... the US Fermilab says the odds of its Tevatron accelerator detecting the famed particle first are now 50-50 at worst and up to 96% at best.

BBC News, Science & Environment

12 February 2009


Does a screening test do more harm than good?

Screen or Not? What Those Prostate Studies Mean. Tara Parker-Pope. The New York Times, March 23, 2009.

The Impossible Calculus of PSA Testing. Dana Jennings. The New York Times, March 23, 2009.

A screening test gives an indication of whether you have a certain disease. Every screening test (other than autopsy, perhaps) is imperfect, leading to some false positive results and some false negative results. Still, you're better off knowing the results, even if they are imperfect, aren't you? Well, maybe not. If the value of the imperfect information is less than the price of the screening test, then obviously you shouldn't get screened.

Even if the test is free, though, you may still be better off not knowing. This is a classic example of where ignorance truly is bliss.

The problem with screening tests is that a positive finding leads to some sort of intervention, with its own costs and risks. If that intervention follows a false positive screen, then you endure those costs and risks without any compensating benefit.
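
To see why false positives can loom so large even for a reasonably accurate test, here is a minimal sketch of the arithmetic behind positive predictive value. The sensitivity, specificity and prevalence below are purely illustrative numbers, not figures from either PSA study.

 # Illustrative only: these rates are made up, not taken from the PSA studies.
 sensitivity = 0.90   # P(positive test | disease)
 specificity = 0.90   # P(negative test | no disease)
 prevalence  = 0.03   # P(disease) among those screened
 
 n = 100_000                                   # screened men
 with_disease    = prevalence * n              # 3,000
 without_disease = n - with_disease            # 97,000
 
 true_positives  = sensitivity * with_disease            # 2,700
 false_positives = (1 - specificity) * without_disease   # 9,700
 
 ppv = true_positives / (true_positives + false_positives)
 print(f"P(disease | positive test) = {ppv:.2f}")        # about 0.22

With these made-up numbers, fewer than one positive test in four reflects actual disease; the remaining positives lead to the costs and risks of intervention with no compensating benefit.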

Two screening tests, mammography for breast cancer in women and PSA (prostate specific antigen) testing for prostate cancer in men, have been at the center of a storm of controversy in the past few years because some critics argue that they cause more harm than good. The controversy over PSA testing has been rekindled by a pair of studies recently published in the New England Journal of Medicine: an American study and a European study.

Tara Parker-Pope offers a valuable summary of these articles.

"The news was unsettling and confusing to many middle-age men, particularly those who already have diagnoses of prostate cancer as a result of P.S.A. testing. Doctors say some men are reconsidering surgery or radiation treatment they have planned. Others, convinced that their lives were saved by P.S.A. screening, wonder how anyone could question the value of early detection of prostate cancer."

These were very large studies: over 77,000 patients in the American study and 182,000 in the European study. The follow-up time was 7 to 10 years in the American study and a median of 9 years in the European study.

"The bottom line of both studies is that P.S.A. screening does find more prostate cancers — but finding those cancers early doesn’t do much to reduce the risk of dying from the disease. The American study showed no statistical difference in prostate cancer death rates between a group of men who had the screening and a control group who did not. The European researchers found that P.S.A. screening does reduce the risk of dying from prostate cancer by about 20 percent."

But even that 20% improvement comes at a serious cost.

"The European study found that for every man who was helped by P.S.A. screening, at least 48 received unnecessary treatment that increased risk for impotency and incontinence. Dr. Otis Brawley, chief medical officer of the American Cancer Society, summed up the European data this way: 'The test is about 50 times more likely to ruin your life than it is to save your life.'"

There were important limitations to the studies.

"The American study found no benefit in P.S.A. screening over a period of 7 to 10 years. But so far, only about 170 men out of 77,000 studied have died of prostate cancer. Prostate cancer is slow-growing, so it's possible that in the next few years, meaningful differences in mortality rates between the two groups will emerge."

Also, the control group in the American study was not a pure control group.

"A larger concern is what statisticians call “contamination” in the unscreened control group. Because it would have been unethical to tell men in the control group that they could not be screened, many either sought the test or were offered it by their doctors. Investigators initially estimated that 20 percent of the control group would fit in this category, but the numbers ended up being far higher —38 to 52 percent. As a result, the study doesn’t really compare the risks and benefits of screening and no screening. It compares aggressive screening and some screening."

There are some limitations to the European study as well.

"The European research has its own set of problems. Although the finding that P.S.A. screening reduces cancer deaths by 20 percent is statistically significant, experts say it's on the borderline, and a few more years of data could weaken the result. Finally, parts of the study were not 'blinded,' meaning that biases could have crept into the interpretation of the data."

Amazingly, Tara Parker-Pope says that these two large studies have failed to resolve the issue of whether it is better to test.

"Before the studies were released, most major medical groups said P.S.A. testing was a personal decision that a man should discuss with his doctor. The two new studies are unlikely to change that advice, experts say; instead, they give men and their doctors more information with which to make the decision."

Dana Jennings has a personal stake in the PSA testing controversy.

"I'm confused because I'm the statistical exception. I'm the one man in 49 whose life may have been saved because I had the PSA blood test. Most prostate cancers are slow and lazy. But my doctors and I learned after I had my prostate surgically removed last July that my cancer was shockingly aggressive. There's a good chance that it would've killed me if I hadn't been screened. And, to be blunt, it might yet."

Mr. Jennings offers a perspective that these studies tend to dehumanize the people involved.

"My biggest problem with the studies – and, of course, this is the nature of such studies – is that they reduce me and all my brothers-in-disease to abstractions, to cancer-bearing ciphers. Among those dry words, we are not living, breathing and terrified men, but merely our prostate cancers, whether slow or bold."

He also notes that

"The researchers counted 'deaths,' not men who had died. As Charlie Brown once said to Lucy as she detailed his baseball team’s shortcomings: 'Tell your statistics to shut up.'"

The two studies stirred strong emotions in Mr. Jennings.

"So, I sit here in limbo. And I wonder whether I'll be that rare man who ducks death from a cancer that would've killed him – because I got screened. But all I can confess to you, in all honesty, is this: I'm still angry and confused."

Questions

1. If two studies with a quarter of a million patients between them are not enough to resolve the controversy over PSA testing, what would it take?

2. Why is it impossible to get a pure control group when randomizing patients to a screening group and a control group? What ethical principle would be violated if you forced the control group to forgo screening?

3. What is the expected impact of contamination on the results of the American study?

4. Do large statistical studies tend to dehumanize the participants by reducing their personal tragedies to a set of statistics? What can be done to avoid this?

Obama's Inflation?

Coming next year: Obama's inflation
The Hill, 3 March 2009
Dick Morris

Morris, a former advisor to President Bill Clinton and Senator Trent Lott, is now actively blogging as a political commentator. The present column, however, is an object lesson in either sloppy reading or how to lie with statistics. Morris asserts that "In the last five months, according to the Federal Reserve Board, the money supply in the United States has increased by 271 percent. It has almost tripled." Calling this a tripling is a common slip-up in interpreting percent change, though one we might expect someone writing on finance to avoid.

That, however, is the least of the distortions--and ironically the only one to err on the small side. Of course, it will be obvious to informed readers that the US money supply could not have nearly tripled (or quadrupled) in the last few months. But what then to make of the reference to Federal Reserve Board data?

Here is a link to the March 5 Federal Reserve Statistical Release that was current at the time of Morris's posting. At the bottom of the first table there, we read that for the three months from October 2008 to January 2009 the M1 money stock grew at a seasonally adjusted annualized rate of 27.1 percent.
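
A quick sketch of the arithmetic, assuming (as is conventional for such releases) that the 27.1 percent figure is a compound annual rate: growth at 27.1 percent per year sustained for three months amounts to roughly 6 percent over those months. Morris's 271 percent, taken literally, would instead be a factor of 3.71, nearly a quadrupling rather than "almost tripled."

 # Sketch of the annualization arithmetic; assumes the 27.1% figure is a
 # compound annual rate, as annualized growth rates conventionally are.
 annual_rate = 0.271
 months = 3
 
 implied_growth = (1 + annual_rate) ** (months / 12) - 1
 print(f"Growth over {months} months at a 27.1% annual rate: {implied_growth:.1%}")  # about 6.2%
 
 claimed_factor = 1 + 2.71   # a "271 percent increase," read literally
 print(f"Factor implied by a 271% increase: {claimed_factor:.2f}")                   # 3.71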

DISCUSSION QUESTIONS:

(1) In view of the actual Federal Reserve report, find 3 errors in Morris's analysis. For each, do you think it more likely represents a simple reading error or a deliberate intent to mislead?

(2) On the About The Hill page, we read "In an environment filled with political agendas, The Hill stands alone in delivering solid, non-partisan and objective reporting on the business of Washington...". Comment.

Submitted by Bill Peterson

Judging Statistics

From the New York Times comes this triumph of statistics. The graphic below summarizes why foul play was suspected in Luzerne County, PA, on the part of two greedy judges who lacked a moral compass.

http://www.dartmouth.edu/~chance/forwiki/CB461.gif

Discussion:

1. Why is the graph so incriminating?

2. However, many statistics textbooks caution, "The data never speaks for itself." What possible mitigating facts regarding variability are missing?

3. As interesting as the statistical data is, read the article itself and listen to the audio clips of the victims to see the non-statistical evidence unearthed by the prosecution. Which do you find more compelling?

Submitted by Paul Alper