Sandbox: Difference between revisions

From ChanceWiki
==The signal and the noise==


The big winner in the 2012 election was not Barack Obama.  It was Nate Silver, the statistics wunderkind of the fivethirtyeight.com blog.  Do not be surprised if he is Time Magazine’s 2012 Man (Person? Geek? Nerd?) of the Year.  Just before the 2012 election took place, this is what Stephen Colbert, in his role as a right-wing megalomaniac, mockingly said about Silver’s ability to predict election outcomes:


<blockquote>
Yes.  This race is razor tight.  That means no margin for error, or correct use of metaphor.  I mean, it's banana up for grabs.  But folks, every prediction out there needs a pooper.  In this case, New York Times polling Jedi Nate Silver, who in 2008 correctly predicted 49 out of 50 states. But, you know what they say.  '''Even a stopped clock is right 98% of the time.'''
<br><br>
See, Silver's got a computer model that uses mumbo jumbo like "weighted polling average", "trendline adjustment", and "linear regression analysis", but ignores proven methodologies like flag-pin size, handshake strength, and intensity of debate glare.
</blockquote>


While the gut feel of the “punditocracy” was certain the race would be very tight, or that Romney would win in a landslide, Silver’s model, based on his weighted averaging of the extensive polling, predicted the outcome (popular vote and electoral college vote) almost exactly. [http://www.washingtonpost.com/blogs/wonkblog/wp/2012/11/05/pundit-accountability-the-official-2012-election-prediction-thread/ Here] is a listing of what Silver and others predicted.
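The “weighted polling average” that Colbert mocks can be illustrated with a toy sketch. The poll numbers and the weighting scheme below (sample size times an exponential recency discount) are invented for illustration; Silver’s actual model is far more elaborate:

```python
# Hypothetical polls for one state: (margin_pct, sample_size, days_old).
# Both the numbers and the weighting scheme are invented for illustration.
polls = [(+1.5, 800, 2), (-0.5, 1200, 5), (+2.0, 600, 1), (+0.8, 1000, 9)]

def weighted_polling_average(polls, half_life_days=7.0):
    """Average poll margins, weighting by sample size and recency."""
    num = den = 0.0
    for margin, size, age in polls:
        weight = size * 0.5 ** (age / half_life_days)  # older polls count less
        num += weight * margin
        den += weight
    return num / den
```

A single new outlier poll moves this average only a little, which is the point: the aggregate is steadier than any individual poll.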


Clearly, Silver never sleeps, because all the while he was pumping out simulations of the presidential and US Senate races, he published, just before the election, an amazing book: ''The Signal and the Noise: Why So Many Predictions Fail--but Some Don’t''.  The reviews are glowingly positive, as befits his track record.  For instance, as [http://www.nytimes.com/2012/11/04/books/review/the-signal-and-the-noise-by-nate-silver.html?pagewanted=all Noam Scheiber] put it, “Nate Silver has lived a preposterously interesting life…It’s largely about evaluating predictions in a variety of fields, from finance to weather to epidemiology…Silver’s volume is more like an engagingly written user’s manual, with forays into topics like dynamic nonlinear systems (the guts of chaos theory) and Bayes’s theorem (a tool for figuring out how likely a particular hunch is right in light of the evidence we observe).”
<blockquote>
Along these lines, I believe people are seriously misstating what Silver achieved. It isn’t that he predicted the election right where others botched it. It’s that he popularized a way of thinking about polling, a way to navigate through conflicting numbers and speculation, that would still have remained invaluable even if he’d predicted the outcome wrong.
<br>
Many liberals relied exclusively on Silver. But his model was only one of a number of polling trackers — including Real Clear Politics, TPM, and HuffPollster — that were all worth consulting throughout, since they were doing roughly the same thing: tracking averages of state polls.
<br>
The election results have triggered soul-searching among pollsters, particularly those who got it wrong. But the failure of some polls to get it right doesn’t tell us anything we didn’t know before the election. Silver’s approach — and that of other modelers — has always been based on the idea that individual polls will inevitably be wrong.
Silver’s accomplishment was to popularize tools enabling you to navigate the unavoidable reality that some individual polls will necessarily be off, thanks to methodology or chance. People keep saying Silver got it right because the polls did. But that’s not really true. The polling averages got it right.
</blockquote>
===Discussion===


1.  The above quotation from Scheiber failed to mention some other fascinating statistical prediction topics in the book: chess, poker, politics, basketball, earthquakes, flu outbreaks, cancer detection, terrorism and, of course, baseball--Silver’s first success story.  By all means, read the book, which is both scholarly (56 pages of end notes) and breezy.  However, because the book is so U.S.-oriented, it may well be opaque to anyone outside of North America.


2.  The above link from the Washington Post has Silver claiming 332 electoral votes for Obama and 203 [misprint, should be 206] for Romney, which turns out to be the exact result.  However, on Silver’s blog itself, Obama gets only 313 electoral votes and Romney gets 225.  Explain the discrepancy.  Hint: Look at Silver’s prediction for Florida.


3. The above link from the Washington Post indicates that several other poll aggregators using similar methodology were just as accurate as Silver.  Speculate as to why they are less celebrated.


4. Silver also predicted the outcomes of the U.S. Senate races.  In fact, while he got all the others right, he was quite wrong about one of them and spectacularly wrong about another.  Which two were they?  Speculate as to why Silver was less successful predicting the Senate races than he was predicting the presidential race.


5.  Silver’s use of averaging to improve a forecast has a long history in statistics.  There is [http://en.wikipedia.org/wiki/Francis_Galton a famous example from Francis Galton] dating back over 100 years:


<blockquote>
In 1906, visiting a livestock fair, he stumbled upon an intriguing contest. An ox was on display, and the villagers were invited to guess the animal's weight after it was slaughtered and dressed. Nearly 800 participated, but not one person hit the exact mark: 1,198 pounds. Galton stated that "the middlemost estimate expresses the vox populi, every other estimate being condemned as too low or too high by a majority of the voters", and calculated this value (in modern terminology, the median) as 1,207 pounds. To his surprise, this was within 0.8% of the weight measured by the judges. Soon afterwards, he acknowledged that the mean of the guesses, at 1,197 pounds, was even more accurate.
</blockquote>


Presumably, those nearly 800 villagers in 1906 knew something about oxen and pounds. Suppose Galton had instead asked the villagers to guess the number of chromosomes of the ox. Why, in this case, would averaging likely be useless?
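Galton’s ox story can be explored with a small simulation. The Gaussian error model and its spread below are assumptions for illustration, not Galton’s data; the point is that individual guesses are noisy while their median and mean land close to the truth:

```python
import random
import statistics

random.seed(42)  # reproducible run
TRUE_WEIGHT = 1198  # pounds: the ox's actual dressed weight in Galton's story

# Simulate ~800 villagers whose guesses scatter around the true weight
# with independent individual errors (assumed Gaussian, sd 75 lb).
guesses = [TRUE_WEIGHT + random.gauss(0, 75) for _ in range(800)]

crowd_median = statistics.median(guesses)  # Galton's "vox populi"
crowd_mean = statistics.fmean(guesses)

# The typical individual is off by tens of pounds; the aggregates
# are off by only a few.
typical_error = statistics.fmean(abs(g - TRUE_WEIGHT) for g in guesses)
```

For the chromosome question, the error model breaks down: guesses would not scatter symmetrically around the true value, so no amount of averaging recovers it.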


6.  Although Silver devotes many pages to the volatility of the stock market, he barely mentions (only in the footnote on page 368) Nassim Taleb and his “black swans.”  Rather than black swans and fractals, Silver invokes the power-law distribution to explain “very occasional but very large swings up or down” in the stock market and the frequency of earthquakes.  For more on the power-law distribution, see [http://en.wikipedia.org/wiki/Power_law this interesting Wikipedia article].


7.  One of the lessons of the book is that in order to predict a specific phenomenon successfully, there needs to be a data-rich environment.  Therefore, ironically, weather forecasting is, so to speak, on much firmer ground than earthquake forecasting.


8. Another lesson of the book is that when it comes to the game of poker, now that most of the poor players have left the scene, it is easier to make money by owning the house than by being a participant.  Knowledge of Bayes’ theorem can only go so far.
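The Bayesian updating that runs through the book fits in a few lines. The poker numbers below are invented for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem: P(E|H) P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical poker read: prior 0.20 that the opponent holds a strong
# hand; a big raise arrives 70% of the time with a strong hand and 15%
# of the time without one.  The raise roughly triples the odds.
posterior = bayes_update(0.20, 0.70, 0.15)
```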


Submitted by Paul Alper
----
[https://www.nytimes.com/2018/09/11/climate/hurricane-evacuation-path-forecasts.html These 3 Hurricane Misconceptions Can Be Dangerous. Scientists Want to Clear Them Up.]<br>
[https://journals.ametsoc.org/doi/abs/10.1175/BAMS-88-5-651 Misinterpretations of the “Cone of Uncertainty” in Florida during the 2004 Hurricane Season]<br>
[https://www.nhc.noaa.gov/aboutcone.shtml Definition of the NHC Track Forecast Cone]
----
[https://www.popsci.com/moderate-drinking-benefits-risks Remember when a glass of wine a day was good for you? Here's why that changed.]
''Popular Science'', 10 September 2018
----
[https://www.economist.com/united-states/2018/08/30/googling-the-news Googling the news]<br>
''Economist'', 1 September 2018
 
[https://www.cnbc.com/2018/09/17/google-tests-changes-to-its-search-algorithm-how-search-works.html We sat in on an internal Google meeting where they talked about changing the search algorithm — here's what we learned]
----
[http://www.wyso.org/post/stats-stories-reading-writing-and-risk-literacy Reading, Writing and Risk Literacy]
 
[http://www.riskliteracy.org/]
----
[https://twitter.com/i/moments/1025000711539572737?cn=ZmxleGlibGVfcmVjc18y&refsrc=email Today is the deadliest day of the year for car wrecks in the U.S.]
 
==Some math doodles==
<math>P \left({A_1 \cup A_2}\right) = P\left({A_1}\right) + P\left({A_2}\right) -P \left({A_1 \cap A_2}\right)</math>
 
<math>P(E)  = {n \choose k} p^k (1-p)^{ n-k}</math>
 
<math>\hat{p}(H|H)</math>
 
<math>\hat{p}(H|HH)</math>
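The doodles above can be checked numerically; a minimal Python sketch, with arbitrary example events and parameters:

```python
from math import comb

def binom_pmf(n, k, p):
    """P(E) = C(n,k) p^k (1-p)^(n-k): exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# The pmf sums to 1 over k = 0..n, as any probability distribution must.
total = sum(binom_pmf(10, k, 0.3) for k in range(11))

# Inclusion-exclusion for two events:
# P(A1 ∪ A2) = P(A1) + P(A2) - P(A1 ∩ A2);
# for independent events, P(A1 ∩ A2) = P(A1) P(A2).
p1, p2 = 0.3, 0.5
p_union = p1 + p2 - p1 * p2
```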
 
==Accidental insights==
 
My collective understanding of Power Laws would fit beneath the shallow end of the long tail. Curiosity, however, easily fills the fat end.  I long have been intrigued by the concept and the surprisingly common appearance of power laws in varied natural, social and organizational dynamics.  But, am I just seeing a statistical novelty or is there meaning and utility in Power Law relationships? Here’s a case in point.
 
While carrying a pair of 10 lb. hand weights one, by chance, slipped from my grasp and fell onto a piece of ceramic tile I had left on the carpeted floor. The fractured tile was inconsequential, meant for the trash.
<center>[[File:BrokenTile.jpg | 400px]]</center>
As I stared, slightly annoyed, at the mess, a favorite maxim of the Greek philosopher, Epictetus, came to mind: “On the occasion of every accident that befalls you, turn to yourself and ask what power you have to put it to use.”  Could this array of large and small polygons form a Power Law? With curiosity piqued, I collected all the fragments and measured the area of each piece.
 
<center>
{| class="wikitable"
|-
! Piece !! Sq. Inches !! % of Total
|-
| 1 || 43.25 || 31.9%
|-
| 2 || 35.25 ||26.0%
|-
|  3 || 23.25 || 17.2%
|-
| 4 || 14.10 || 10.4%
|-
| 5 || 7.10 || 5.2%
|-
| 6 || 4.70 || 3.5%
|-
| 7 || 3.60 || 2.7%
|-
| 8 || 3.03 || 2.2%
|-
| 9 || 0.66 || 0.5%
|-
| 10 || 0.61 || 0.5%
|}
</center>
<center>[[File:Montante_plot1.png | 500px]]</center>
The data and plot look like a Power Law distribution. The first plot is an exponential fit of percent of total area. The second plot is the same data in a log-normal format. Clue: the data fit a straight line.  I found myself again in the shallow end of the knowledge curve. Do the data reflect a Power Law or something else, and if so, what does that reflect?  What insights can I gain from this accident? Favorite maxims of Epictetus and Pasteur echoed in my head:
“On the occasion of every accident that befalls you, remember to turn to yourself and inquire what power you have to turn it to use” and “Chance favors only the prepared mind.”
 
<center>[[File:Montante_plot2.png | 500px]]</center>
My “prepared” mind searched for answers, leading me down varied learning paths. Tapping the power of networks, I dropped a note to Chance News editor Bill Peterson. His quick web search surfaced a story from ''Nature News'' on research by Hans Herrmann et al., [http://www.nature.com/news/2004/040227/full/news040223-11.html Shattered eggs reveal secrets of explosions].  As described there, researchers have found power-law relationships for the fragments produced by shattering a pane of glass or breaking a solid object, such as a stone. It seems there is a science underpinning how things break and explode; potentially useful in forensic reconstructions.
Bill also provided a link to [http://cran.r-project.org/web/packages/poweRlaw/vignettes/poweRlaw.pdf a vignette from CRAN] describing a maximum likelihood procedure for fitting a Power Law relationship. I am now learning my way through that.
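As a rough complement to the maximum-likelihood approach in that vignette, one can regress log area on log rank for the ten fragments in the table above. Ordinary least squares on log-log data is a crude estimator compared with maximum likelihood, but it shows the straight-line behavior:

```python
import math

# Fragment areas in square inches, from the broken-tile table above.
areas = [43.25, 35.25, 23.25, 14.10, 7.10, 4.70, 3.60, 3.03, 0.66, 0.61]

# Rank-size check: a power law, area ~ C * rank^b, is a straight line
# in log-log coordinates, with slope b.
xs = [math.log(rank) for rank in range(1, len(areas) + 1)]
ys = [math.log(area) for area in areas]

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
```

With only ten points this cannot distinguish a power law from, say, a lognormal; that is exactly the question the vignette’s likelihood machinery addresses.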
 
Submitted by William Montante
 
----

Latest revision as of 20:58, 17 July 2019


==Forsooth==

==Quotations==

“We know that people tend to overestimate the frequency of well-publicized, spectacular events compared with more commonplace ones; this is a well-understood phenomenon in the literature of risk assessment and leads to the truism that when statistics plays folklore, folklore always wins in a rout.”

<div align=right>-- Donald Kennedy (former president of Stanford University), ''Academic Duty'', Harvard University Press, 1997, p. 17</div>

"Using scientific language and measurement doesn’t prevent a researcher from conducting flawed experiments and drawing wrong conclusions — especially when they confirm preconceptions."

<div align=right>-- Blaise Agüera y Arcas, Margaret Mitchell and Alexander Todorov, quoted in: The racist history behind facial recognition, ''New York Times'', 10 July 2019</div>

==In progress==

[https://www.nytimes.com/2018/11/07/magazine/placebo-effect-medicine.html What if the Placebo Effect Isn’t a Trick?]<br>
by Gary Greenberg, ''New York Times Magazine'', 7 November 2018

[https://www.nytimes.com/2019/07/17/opinion/pretrial-ai.html The Problems With Risk Assessment Tools]<br>
by Chelsea Barabas, Karthik Dinakar and Colin Doyle, ''New York Times'', 17 July 2019

==Hurricane Maria deaths==

Laura Kapitula sent the following to the Isolated Statisticians e-mail list:

:[Why counting casualties after a hurricane is so hard]<br>
:by Jo Craven McGinty, ''Wall Street Journal'', 7 September 2018

The article is subtitled: “Indirect deaths—such as those caused by gaps in medication—can occur months after a storm, complicating tallies.”

Laura noted that

:[https://www.washingtonpost.com/news/fact-checker/wp/2018/06/02/did-4645-people-die-in-hurricane-maria-nope/?utm_term=.0a5e6e48bf11 Did 4,645 people die in Hurricane Maria? Nope.]<br>
:by Glenn Kessler, ''Washington Post'', 1 June 2018

The source of the 4,645 figure is a [https://www.nejm.org/doi/full/10.1056/NEJMsa1803972 NEJM article].  That is the point estimate; the 95% confidence interval ran from 793 to 8,498.

President Trump has asserted that the actual number is [https://twitter.com/realDonaldTrump/status/1040217897703026689 6 to 18].  The ''Post'' article notes that Puerto Rican officials had asked researchers at George Washington University to produce an estimate of the death toll ([https://prstudy.publichealth.gwu.edu/ George Washington University study]).  That work is not complete.

:[https://fivethirtyeight.com/features/we-still-dont-know-how-many-people-died-because-of-katrina/?ex_cid=538twitter We still don’t know how many people died because of Katrina]<br>
:by Carl Bialik, FiveThirtyEight, 26 August 2015
