MEASURING ANYTHING
perceived to be immeasurable, and why all three
are mistaken.
2. Do the math.
A key point in every edition of the book was
that we measure to feed quantitative decision
models, and that even naïve quantitative models
easily outperform human experts in a variety of estimation and decision problems. In a meta-study of
150 studies comparing expert judgment to statistical models, the models clearly outperformed the
experts in 144 of the cases [Meehl, 1975]. More
and more research confirms this. The third edition
adds the findings of Philip Tetlock’s giant undertaking to track more than 82,000 forecasts of 284
experts over a 20-year period. From this, Tetlock
could confidently state, “It is impossible to find any
domain in which humans clearly outperformed
crude extrapolation algorithms, less still sophisticated statistical ones” [Tetlock, 2006]. The book
reviews additional research to show that, unless
we do the math, most people, even statistically
trained experts, are susceptible to common inference errors.
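To make the claim about naïve quantitative models concrete, here is a minimal sketch of one of the simplest such models: a unit-weight linear model that standardizes each predictor and adds them up with equal weights. The screening scenario, cue names, and data below are hypothetical illustrations, not taken from the book or the cited studies.

```python
def standardize(values):
    """Rescale a list of numbers to mean 0 and (population) SD 1."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def unit_weight_scores(predictors):
    """Sum equally weighted standardized predictors for each case.

    predictors: list of columns, one list of raw values per cue.
    Returns one composite score per case (higher = stronger case).
    """
    cols = [standardize(col) for col in predictors]
    return [sum(case) for case in zip(*cols)]

# Hypothetical screening example: three cues for five candidates.
test_score = [55, 70, 62, 90, 48]
experience = [2, 6, 4, 9, 1]
interview  = [3.0, 4.1, 3.5, 4.8, 2.2]

scores = unit_weight_scores([test_score, experience, interview])
best = scores.index(max(scores))  # candidate 3 leads on every cue
```

The point of the sketch is how little the model does: no fitted weights, no interactions, just standardized cues added together. Models of roughly this crudeness are the "statistical models" that the cited research pits against expert judgment.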
3. Just about everyone can be trained to assess odds like a pro.
Almost everyone can be trained to be an expert “probability estimator.” Building on the work of
others in decision psychology [Lichtenstein and
Fischhoff, 1980], HDR started providing “calibrated
probability assessment” training in the mid-1990s.
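One way to see what "calibrated" means is to score a set of 90% confidence intervals: a well-calibrated estimator's intervals should contain the true value about 90% of the time. The scoring function below is a minimal sketch of that idea; the intervals and answers are invented for illustration and are not HDR training data.

```python
def hit_rate(intervals, truths):
    """Fraction of (low, high) intervals that contain the true value."""
    hits = sum(1 for (lo, hi), t in zip(intervals, truths) if lo <= t <= hi)
    return hits / len(intervals)

# Ten hypothetical 90% confidence intervals from a trainee,
# paired with the actual answers to the estimation questions.
intervals = [(100, 500), (10, 40), (1, 9), (1900, 1950), (3, 12),
             (50, 80), (0, 5), (200, 800), (7, 20), (1000, 4000)]
truths    = [450, 25, 15, 1912, 6, 95, 2, 350, 11, 2500]

rate = hit_rate(intervals, truths)  # 8 of 10 intervals contain the truth
```

A hit rate of 0.8 on 90% intervals suggests mild overconfidence; calibration training works by repeating this feedback loop until stated confidence and actual hit rate line up.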
The third edition included data from more than 900
people calibrated by HDR. The data consistently
30 | ANALYTICS-MAGAZINE.ORG | WWW.INFORMS.ORG