Chance News 61

Revision as of 15:06, 16 February 2010

Quotations

Forsooth

Census errors

Can you trust Census data?
Freakonomics blog, New York Times, 2 February 2010
Justin Wolfers

Bureau obscured personal data—Too well, some say
Numbers Guy blog, Wall Street Journal, 6 February 2010
Carl Bialik

To be continued...

Submitted by Bill Peterson

Height bias or data dredging?

Soccer referees hate the tall guys
Wall Street Journal, 8 February 2010

According to the article, "Niels van Quaquebeke and Steffen R. Giessner, researchers at Erasmus University in Rotterdam, compiled refereeing data from seven seasons of the German Bundesliga and the UEFA Champions League, as well as three World Cups (123,844 fouls in total)" and found:

Height Difference    Probability of Foul Against Taller Player
1-5 cm               52.0%
6-10 cm              55.4%
> 10 cm              58.8%

Avg. Height of Perpetrator    Avg. Height of Victim
182.4 cm                      181.5 cm

Note that the height difference on average is only 0.9 cm!
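With 123,844 fouls in total, even a rate as close to even as 52.0% is hard to attribute to chance. A minimal one-sample z-test sketch, assuming (the article does not say) that roughly a third of the fouls fell in the 1-5 cm bin:

```python
from math import sqrt

# Assumed bin size: about a third of the 123,844 recorded fouls.
n = 123_844 // 3
p_hat = 0.52  # observed share of fouls called against the taller player

# One-sample z-statistic for a proportion against the fair value 0.5
z = (p_hat - 0.5) / sqrt(0.25 / n)
```

A z-statistic above 8 would be overwhelming evidence against a 50/50 split, though the real question (bias versus genuine differences in play) remains open.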

To be continued... Submitted by Paul Alper

Poll question wording affects results

“New Poll Shows Support for Repeal of ‘Don’t Ask, Don’t Tell’”
by Dalia Sussman, The New York Times, February 11, 2010
"Support for Gays in the Military Depends on the Question"
by Kevin Hechtkopf, CBS News, February 11, 2010

These articles describe how the wording of a February 5-10, 2010, NYT/CBS News poll affected the results.

When half of the 1,084 respondents were asked their opinions about permitting “gay men and lesbians” to serve in the military, 70% said that they strongly/somewhat favored it. Of the other half of respondents who were asked about permitting “homosexuals” to serve, only 59% said that they strongly/somewhat favored it. The gap was much wider (79% to 43%) for respondents identifying themselves as Democrats.

For more detailed poll results, see the CBS News website [1].

Discussion
1. The margin of error for each half sample was said to be +/- 4 percentage points. Would you consider the difference between 70% and 59% statistically significant? If not, why? If so, at what level? What about the difference for Democrats?
2. Can you suggest any reason for the difference between 70% and 59% for the half samples? For the difference between 79% and 43% for the Democratic half samples?
3. What implication(s) do these results have, if any, for ballot-question writers?
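Question 1 above can be checked with a standard two-proportion z-test. A minimal sketch, assuming each half sample contained roughly 1,084 / 2 = 542 respondents:

```python
from math import sqrt, erf

def two_prop_z(p1, p2, n1, n2):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)           # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# 70% favored "gay men and lesbians" wording, 59% favored "homosexuals"
z, p = two_prop_z(0.70, 0.59, 542, 542)
```

Under these assumptions the difference is significant well beyond the 1% level, consistent with the two +/- 4 point margins of error not overlapping.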

Submitted by Margaret Cibes based on a suggestion of Jim Greenwood and an ISOSTAT posting by Jeff Witmer

Disability to present accurate statistics

The Odds of a Disability Are Themselves Odd
by Ron Lieber, The New York Times, February 5, 2010

What are your chances of needing disability insurance?

You have an 80 percent chance of becoming disabled during your working years. Or maybe it’s 52 percent. Or possibly 30 percent.

Why all the variation? Part of it depends on your definition of disability. One quoted statistic was too high because

the statistic comes from the National Safety Council, which describes “disabling” pretty loosely. “It interferes with normal daily activity one day beyond the day of injury,” said Amy Williams, a spokeswoman for the National Safety Council. “It doesn’t mean they weren’t able to go to work. It may mean that they twisted their ankle and couldn’t go to Pilates that night.”

A good estimate of disability, one that defines disability (an injury that keeps you out of work for 90 days or more) and provides a time frame for the individual (the probability of a disability event between the ages of 25 and 65), appears to be around 30%. But even that number needs to be qualified.

Numbers for white-collar workers are usually lower than for assembly line workers. If you have no chronic conditions, eat decent food and avoid cigarettes, your odds may drop to 10 percent, according to the “Personal Disability Quotient” quiz on the Web site of the Council for Disability Awareness.

And there are even more qualifiers.

Here are a few other things to keep in mind if you’re running your own numbers. Some people lie about being disabled, and their fake claims skew the actuarial data, though no one knows by how much. Lower your odds a bit to account for the cheaters. Lower them some more in recognition of the fact that people who buy their own policies also tend to actually use them.

This is not to imply that disability insurance is a bad deal. The article failed to make this point, but insurance makes sense for covering catastrophic events, of whatever probability, that would otherwise bankrupt an individual. It is the magnitude of the potential loss, rather than its probability, that largely determines whether you should buy insurance.
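The magnitude-versus-probability point can be made concrete with a toy expected-utility calculation (all numbers below are hypothetical): a risk-averse person can rationally pay a premium above the expected loss when the uninsured loss would be catastrophic.

```python
from math import log

# Hypothetical figures: 30% chance of a disability wiping out $500,000
# of lifetime earnings, versus a certain insurance premium of $160,000.
wealth = 1_000_000
p_loss, loss = 0.30, 500_000
premium = 160_000

expected_loss = p_loss * loss  # $150,000, less than the premium

# Expected log-utility (a standard risk-averse utility) with and without insurance
eu_no_ins = p_loss * log(wealth - loss) + (1 - p_loss) * log(wealth)
eu_ins = log(wealth - premium)
```

Even though the premium exceeds the expected loss, the insured outcome has higher expected utility, because the utility cost of the large loss outweighs its moderate probability.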

Submitted by Steve Simon

Improving your odds at online dating

Looking for a Date? A Site Suggests You Check the Data
by Jenna Wortham, The New York Times, February 12, 2010

A dating site, OkCupid, decided to help its users by sharing statistics on its blog--statistics on what worked and what didn't.

If you’re a man, don’t smile in your profile picture, and don’t look into the camera. If you’re a woman, skip photos that focus on your physical assets and pick one that shows you vacationing in Brazil or strumming a guitar.

Here are some additional insights.

“If you want worthwhile messages in your in-box, the value of being conversation-worthy, as opposed to merely sexy, cannot be overstated,” wrote Christian Rudder, another OkCupid founder, in the post. Last fall Mr. Rudder looked at the first messages sent by users to would-be mates on the site, and which ones were most likely to get a response. His analysis found that messages with words like “fascinating” and “cool” had a better success rate than those with “beautiful” or “cutie.”
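The kind of analysis Rudder describes can be sketched as a per-word response-rate tally. This is a toy illustration with invented messages; OkCupid's actual pipeline is not described in the article.

```python
def response_rate_by_word(messages, words):
    """For each word, compute the fraction of messages containing it that got a reply."""
    counts = {w: [0, 0] for w in words}  # word -> [replies, messages containing word]
    for text, replied in messages:
        tokens = set(text.lower().split())
        for w in words:
            if w in tokens:
                counts[w][1] += 1
                counts[w][0] += int(replied)
    return {w: (r / n if n else 0.0) for w, (r, n) in counts.items()}

# Hypothetical toy data: (first message sent, whether it got a reply)
messages = [
    ("you seem fascinating", True),
    ("hey cutie", False),
    ("that trip sounds cool", True),
    ("you're beautiful", False),
    ("fascinating profile", False),
]

rates = response_rate_by_word(messages, ["fascinating", "cool", "beautiful", "cutie"])
```

At OkCupid's scale the same tally over millions of first messages would give stable per-word success rates like those Rudder reported.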

The insights posted on the blog led to a lot of press and a surge of new customers.

Since OkCupid started its blog, the number of active site members has grown by roughly 10 percent, to 1.1 million, according to the company.

Submitted by Steve Simon

Questions

1. Why would publishing these statistics lead to a surge of new members, rather than simply changing how people behaved on the dating sites they already used?

2. How could you quantify information about profile pictures that involve inherently subjective judgments?

Children: Just Say No To Sex

“Some Success Seen in Abstinence Program”
by Lindsey Tanner (AP), TIME, February 1, 2010
“Abstinence-Only Classes Reduced Sexual Activity, Study Found”
by Jennifer Thomas, Business Week, February 1, 2010

The National Institute of Mental Health funded a University of Pennsylvania randomized controlled study of 662 African-American children in 4 Philadelphia public schools. Children were 6th or 7th graders aged 10-15 years. Study results were released in the February 2010 edition of Archives of Pediatrics & Adolescent Medicine; see “Abstract”.

An 8-hour abstinence-only intervention targeted reduced sexual intercourse; an 8-hour safer sex–only intervention targeted increased condom use; 8-hour and 12-hour comprehensive interventions targeted sexual intercourse and condom use; and an 8-hour health-promotion control intervention targeted health issues unrelated to sexual behavior. Participants also were randomized to receive or not receive an intervention maintenance program to extend intervention efficacy.[2]

Assignments involved helping the students to “see the drawbacks to sexual activity at their age, including having them list the pros and cons themselves.” Classes met at schools on weekends.

After 2 years, with about 84% of the students still enrolled in the program, almost 50% of the control group reported that they had "ever" had sexual intercourse[3], while only 34% of the abstinence-only group reported the same behavior. Also, 29% of the control group reported having had sexual intercourse in the "previous 3 months during the follow-up period," as opposed to 21% of the abstinence-only students. The study did not collect data on pregnancy or sexually transmitted diseases.

Authors of the study do not recommend abandoning “more comprehensive sex education.” An Archives’ editorial contains the comment:

No public policy should be based on the results of one study, nor should policy makers selectively use scientific literature to formulate a policy that meets preconceived ideologies.

Discussion

1. How many of the students had dropped out of the program at the end of 2 years? Do you think that the study’s results would have been significantly different if these students had not dropped out?

2. Do you think the fact that more than half (54%) of the participants were girls [4] affected the results?

3. Researchers had to have received informed consent from each participant in this study. How might the fact that this consent probably had to come, legally, from parents/guardians have influenced the results of the study?

4. What factors might have influenced individual students in their responses about their sexual activities?

5. What clarification might you want about the time frames of sexual intercourse among the students, with respect to "ever" having had it or having had it "during the previous 3 months during the follow-up period"? Why might you want to know how many of these students had had sexual intercourse prior to the classes?

6. What additional information about each of the four subgroups and/or the "intervention maintenance program" might help a public-policy maker form his/her decisions with respect to sex education for school children? Assume, contrary to what we know, that adverse public attitudes toward sex education are not a factor in these decisions.

7. Based on their sample results, the researchers estimated, with 95% confidence, that the probability of abstinence-only education reducing “sexual initiation” in the general population (population undefined) lies in the interval (0.48, 0.96)[5]. If the probability actually does lie in that interval, what does that mean about the “treatment,” in non-statistical terms? Can you tell whether abstinence-only education would be more likely to be effective or not?
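The interval in question 7 looks like a 95% confidence interval for a risk ratio, and it can be roughly reconstructed from the reported rates of "ever" having had intercourse (49% control vs. 34% abstinence-only). This is a sketch: the per-arm size of about 132 is an assumption, since only the 662 total across five arms is given.

```python
from math import log, sqrt, exp

# Reported 2-year rates of "ever" having had sexual intercourse
p_control, p_abstinence = 0.49, 0.34

# Assumed arm size: 662 participants split roughly evenly over five arms
n = 662 // 5  # ~132 per arm

rr = p_abstinence / p_control  # risk ratio, about 0.69

# Standard error of the log risk ratio (large-sample approximation)
se = sqrt((1 - p_abstinence) / (p_abstinence * n)
          + (1 - p_control) / (p_control * n))

# 95% confidence interval on the risk-ratio scale
lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
```

Under these assumptions the interval comes out near (0.52, 0.93), in the same ballpark as the published (0.48, 0.96); an interval entirely below 1 suggests the intervention reduced initiation, though by an amount that could be anywhere from slight to roughly half.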

Submitted by Margaret Cibes

Fractions of a penny may count

“For Some Firms, a Case of ‘Quadrophobia’”
by Scott Thurm, The Wall Street Journal, February 14, 2010

Two Stanford researchers have examined nearly half a million earnings reports over a 27-year period and found evidence that “many companies tweak quarterly earnings to meet investor expectations, and the companies that adjust most often are more likely to restate earnings or be charged with accounting violations.” Their working paper, “Quadrophobia: Strategic Rounding of EPS Data,” can be downloaded from SSRN [6].

They analyzed standard earnings-per-share ratios down to a tenth of a cent and noticed that some companies had “nudged” these figures upward by a tenth of a cent or two, which had the effect of producing final figures that were a full cent higher than they would otherwise have been. Apparently even a one-cent increase in expected earnings is significant to investors. This rounding is said to be legal.

When they ran the earnings-per-share numbers down to a 10th of a cent, they found that the number "4" appeared less often in the 10ths place than any other digit, and significantly less often than would be expected by chance. They dub the effect "quadrophobia."

The article provides a chart[7] showing the frequency of digits in the tenth-of-a-cent place for the companies for the period 1980-2006. For the digits 0 through 9, the frequencies range from approximately 8.5 to 10.8 percent (eyeballing).

But the overall effect is striking. In theory, each digit should appear in the 10ths place 10% of the time. After reviewing nearly 489,000 quarterly results for 22,000 companies from 1980 to 2006, however, the authors found that "4" appeared in the 10ths place only 8.5% of the time. Both "2" and "3" also are underrepresented in the 10ths place; all other digits show up more frequently than expected by chance.
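At these sample sizes, even small departures from 10% per digit are decisive. A chi-square goodness-of-fit sketch, with illustrative counts (not the paper's actual data) chosen to roughly match the pattern described above:

```python
# Illustrative counts of each digit (0-9) in the tenth-of-a-cent place,
# out of ~489,000 quarterly reports; digit "4" at ~8.5%, "2" and "3" low.
counts = [52000, 50500, 47500, 47000, 41600, 50000, 50500, 50000, 50000, 49900]

total = sum(counts)
expected = total / 10  # uniform null: each digit 10% of the time

# Chi-square goodness-of-fit statistic against the uniform distribution
chi2 = sum((obs - expected) ** 2 / expected for obs in counts)
```

With 9 degrees of freedom, the 0.001 critical value is about 27.9; a statistic in the hundreds or thousands, as here, leaves essentially no doubt that the digits are not uniform.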

Companies that are less likely to report 4s in the tenths place are those whose earnings-per-share ratios are close to expectations or unusually high. But they are also the companies that “later restate earnings or are charged with accounting violations.”

One blogger[8] commented:

I was nailed once by an instructor in high school in an electronics lab in which you are supposed to do an experiment and measure the results, and then be taught the theory and math. I already knew the theory and math, so I just filled out the table. He immediately handed it back to me and said "go do the measurements, even though I know you know this stuff already". I asked, "how did you know?", and he said, our (old analog) meters don't read to two decimal places, unless your eyes are a lot better than mine!

Two other bloggers[9] interacted:
(a) It's not true that 4 should appear as often as every other number. The researchers should refer to Benford's law to determine the expected frequency of the number 4 before making such a claim. This law from number theory has been accepted by courts and successfully used in fraud convictions. And it DOES NOT state that the frequency of all digits should be the same.
(b) Uh -- Benford's law applies to the _first_ digit in a table or list of numbers. These guys were looking at the _last_ digit in a lot of numbers. Please read more carefully next time.
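Blogger (b) has it right: Benford's law governs leading significant digits, while trailing digits of multi-digit numbers are very nearly uniform. A quick simulation with synthetic log-uniform data (which satisfies Benford's law by construction) illustrates both facts:

```python
import random
random.seed(0)

# Numbers uniform on a log scale over several decades follow Benford's law
xs = [10 ** random.uniform(2, 6) for _ in range(100_000)]

first = [int(str(int(x))[0]) for x in xs]  # leading digit
last = [int(x) % 10 for x in xs]           # trailing digit

benford_1 = sum(d == 1 for d in first) / len(first)  # should be near log10(2) ~ 0.301
uniform_4 = sum(d == 4 for d in last) / len(last)    # should be near 0.10
```

So even if earnings numbers were Benford-distributed, the tenth-of-a-cent digit should still be close to uniform, which is exactly the baseline the researchers test against.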
Submitted by Margaret Cibes