Rating 8/10

My Summary:

A treatise that explores our fascination with predictions, as well as the numerous shortcomings that bubble up when we humans attempt to predict the future. In short, the name of the game is Bayesian statistics. According to the author, no other method comes close to its accuracy. Simply put, Bayesian statistics relies on estimating the likelihood of an event, then continually updating that estimate as new information arrives. Bouncing from earthquakes to politics, baseball, and terrorism, Silver does a great job of sharing his obsession with the reader. This book also helped me discover my favorite Tufts grad, Eugene Fama (he called BS on mutual fund managers back in the 1960s).
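That "estimate, then update" loop can be sketched in a few lines. This is my own minimal illustration of a single Bayesian update, with made-up numbers (it is not an example from the book):

```python
# Minimal sketch of Bayesian updating: start with a prior belief,
# then revise it when a new piece of evidence arrives.
# All numbers below are illustrative, not from Silver's book.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a hypothesis after seeing evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Prior: 5% chance it rains today. Evidence: dark clouds, which show up
# on 80% of rainy days but only 20% of dry days.
posterior = bayes_update(0.05, 0.80, 0.20)
print(round(posterior, 3))  # 0.174 -- belief revised from 5% up to ~17%
```

The point Silver hammers on is that the prior matters: the same clouds move a 5% prior much less than they would a 50% one.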

Quotes:

[On the 2008 Financial Crash] That meant that for every dollar that someone was willing to put in a mortgage, Wall Street was making almost $50 worth of bets on the side.

Pure objectivity is desirable but unattainable in this world. When we make a forecast, we have a choice from among many different methods. Some of these might rely solely on quantitative variables like polls, while approaches like Wasserman’s may consider qualitative factors as well.

The way to become more objective is to recognize the influence that our assumptions play in our forecasts and to question ourselves about them. In politics, between our ideological predispositions and our propensity to weave tidy narratives from noisy data, this can be especially difficult.

Olympic gymnasts peak in their teens; poets in their twenties; chess players in their thirties; applied economists in their forties; and the average age of a Fortune 500 CEO is 55. A baseball player, James found, peaks at age twenty-seven. Of the fifty MVP winners between 1985 and 2009, 60 percent were between the ages of twenty-five and twenty-nine, and 20 percent were aged twenty-seven exactly. This is when the combination of physical attributes and mental attributes needed to play the game well seem to be in the best balance.

The average, like the family with 1.7 children, is just a statistical abstraction.

New Orleans does not move quickly, and New Orleans does not place much faith in authority. If it did those things, New Orleans would not really be New Orleans. It would also have been much better prepared to deal with Katrina, since those are the exact two things you need to do when a hurricane threatens to strike.

Improved computing power has not really improved earthquake or economic forecasts in any obvious way. But meteorology is a field in which there has been considerable, even remarkable, progress.

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

The National Weather Service gets by on just $900 million per year—about $3 per U.S. citizen—even though weather has direct effects on some 20 percent of the nation’s economy.

According to the agency’s statistics, humans improve the accuracy of precipitation forecasts by about 25 percent over the computer guidance alone, and temperature forecasts by about 10 percent. Moreover, according to Hoke, these ratios have been relatively constant over time: as much progress as the computers have made, his forecasters continue to add value on top of it. Vision accounts for a lot.

For instance, the for-profit weather forecasters rarely predict exactly a 50 percent chance of rain, which might seem wishy-washy and indecisive to consumers. Instead, they’ll flip a coin and round up to 60, or down to 40, even though this makes the forecasts both less accurate and less honest.

People notice one type of mistake—the failure to predict rain—more than another kind, false alarms. If it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic, whereas an unexpectedly sunny day is taken as a serendipitous bonus. It isn’t good science, but as Dr. Rose at the Weather Channel acknowledged to me: “If the forecast was objective, if it has zero bias in precipitation, we’d probably be in trouble.”

Earthquakes kill more people than hurricanes, in fact, despite seeming like the rarer phenomenon.

The official position of the USGS is even more emphatic: earthquakes cannot be predicted. “Neither the USGS nor Caltech nor any other scientists have ever predicted a major earthquake,” the organization’s Web site asserts. “They do not know how, and they do not expect to know how any time in the foreseeable future.”

The USGS estimates, on the basis of high death tolls from smaller earthquakes in Iran, that between 15 and 30 percent of Tehran’s population could die in the event of a catastrophic tremor there. Since there are about thirteen million people in Tehran’s metro area, that would mean between two and four million fatalities.

An oft-told joke: a statistician drowned crossing a river that was only three feet deep on average.

As Hatzius sees it, economic forecasters face three fundamental challenges. First, it is very hard to determine cause and effect from economic statistics alone. Second, the economy is always changing, so explanations of economic behavior that hold in one business cycle may not apply to future ones. And third, as bad as their forecasts have been, the data that economists have to work with isn’t much good either.

“The way we think about it is if you take something like initial claims on unemployment insurance, that’s a very good predictor for unemployment rates, which is a good predictor for economic activity,” I was told by Google’s chief economist, Hal Varian, at Google’s headquarters in Mountain View, California. “We can predict unemployment initial claims earlier because if you’re in a company and a rumor goes around that there are going to be layoffs, then people start searching ‘where’s the unemployment office,’ ‘how am I going to apply for unemployment,’ and so on. It’s a slightly leading indicator.”

If you compare the number of children who are diagnosed as autistic to the frequency with which the term autism has been used in American newspapers, you’ll find that there is an almost perfect one-to-one correspondence (figure 7-4), with both having increased markedly in recent years.

As the statistician George E. P. Box wrote, “All models are wrong, but some models are useful.”

If the future exists in shades of probabilistic gray to the forecaster, however, the present arrives in black and white.

Bayes’s theorem is concerned with conditional probability. That is, it tells us the probability that a theory or hypothesis is true if some event has happened.
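The compounding power of this is easy to see when updates are chained, with each posterior becoming the prior for the next observation. A sketch of my own, with invented numbers (not an example from the book):

```python
# Illustrative sketch of repeated Bayesian updating (numbers are made up):
# each posterior becomes the prior for the next independent observation.

def posterior(prior, p_obs_if_true, p_obs_if_false):
    """P(hypothesis | observation) via Bayes's theorem."""
    num = p_obs_if_true * prior
    return num / (num + p_obs_if_false * (1 - prior))

belief = 0.01  # start skeptical: 1% chance the hypothesis is true
for _ in range(4):  # four observations, each 3x likelier if the hypothesis holds
    belief = posterior(belief, 0.90, 0.30)
print(round(belief, 3))  # 0.45 -- four modest clues lift 1% to 45%
```

Each observation multiplies the odds by 3 (0.90 / 0.30), so four of them multiply the odds by 81, which is what drags a 1% prior up to 45%. No single clue is decisive; the accumulation is.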

Poe claimed that if this chess-playing machine were real, it must by definition play chess flawlessly; machines do not make computational errors. He took the fact that the Turk did not play perfect chess—it won most of its games but lost a few—as further proof that it was not a machine but a human-controlled apparatus, full of human imperfections.

These are astronomical numbers: as Diego Rasskin-Gutman has written, “There are more possible chess games than the number of atoms in the universe.”

In any long game of chess, it is quite likely that you and your opponent will eventually reach some positions that literally no two players in the history of humanity have encountered before.

The challenge for Campbell is that Deep Blue long ago became better at chess than its creators. It might make a move that they wouldn’t have played, but they wouldn’t necessarily know if it was a bug.

If you have strong analytical skills that might be applicable in a number of disciplines, it is very much worth considering the strength of the competition. It is often possible to make a profit by being pretty good at prediction in fields where the competition succumbs to poor incentives, bad habits, or blind adherence to tradition—or because you have better data or technology than they do. It is much harder to be very good in fields where everyone else is getting the basics right—and you may be fooling yourself if you think you have much of an edge.

PokerKingBlog.com has alleged that Guy Laliberté, the CEO of Cirque du Soleil, lost as much as $17 million in online poker games in 2008, where he sought to compete in the toughest high-stakes games against opponents like Dwan. Whatever the number, Laliberté is a billionaire who was playing the game for the intellectual challenge and to him this was almost nothing, the equivalent of the average American losing a few hundred bucks at blackjack.

In the United States, we live in a very results-oriented society. If someone is rich or famous or beautiful, we tend to think they deserve to be those things. Often, in fact, these factors are self-reinforcing: making money begets more opportunities to make money; being famous provides someone with more ways to leverage their celebrity; standards of beauty may change with the look of a Hollywood starlet.

The paradox reminds me of an old joke among economists. One economist sees a $100 bill sitting on the street and reaches to grab it. “Don’t bother,” the other economist says. “If it were a real $100 bill, someone would already have picked it up.” If everyone thought this way, of course, nobody would bother to pick up $100 bills until a naïve young lad who had never taken an economics course went about town scooping them up, then found out they were perfectly good and exchanged them for a new car.

This book advises you to be wary of forecasters who say that the science is not very important to their jobs, or scientists who say that forecasting is not very important to their jobs! These activities are essentially and intimately related. A forecaster who says he doesn’t care about the science is like the cook who says he doesn’t care about food. What distinguishes science, and what makes a forecast scientific, is that it is concerned with the objective world. What makes forecasts fail is when our concern only extends as far as the method, maxim, or model.

One reason there aren’t all that many terror attacks may be that there aren’t all that many terrorists. It is very difficult to get a head count of terrorists, but one commonly cited estimate is that Al Qaeda had only about 500 to 1,000 operatives at its peak. This figure includes hangers-on and wannabes, as well as the people engaged in all the nonviolent functions that groups like Al Qaeda must perform: some doofus has to reboot Al Qaeda’s server when their network goes down.

This is why events like Pearl Harbor and September 11 produce the sort of cognitive dissonance they do. Where our enemies will strike us is predictable: it’s where we least expect them to.

[In Silver’s acknowledgments] I hope to return all these favors someday. I will start by buying the first beer for anybody on this list, and the first three for anybody who should have been, but isn’t.

Header photo © uwaterloo.ca
Body photo © wikipedia.com