Unreliable forecasts – assessing the predictors

By Toby Manhire In The Internaut

10th December, 2013

“North Korea will win an Emmy”

The year is spluttering to its close, accompanied by the annual chorus of predictions for the year to come. However, with a few exceptions, such as meteorology, it’s very difficult to gauge “predictive accuracy”, write Dan Gardner and Philip Tetlock in the Economist’s The World in 2014.
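One way tournaments of this kind make “predictive accuracy” measurable is to have forecasters state probabilities up front and then score them against what actually happens. The article doesn’t go into the mechanics, but a common yardstick in forecasting research is the Brier score; the sketch below uses it with invented numbers, purely as an illustration.

# A rough sketch of how probabilistic forecasts can be graded after the fact.
# The Brier score used here is a standard measure in forecasting research
# (it is not described in the article itself); the numbers below are made up.

def brier_score(forecasts, outcomes):
    """Mean squared gap between stated probabilities and what actually happened.

    forecasts: probabilities (0.0-1.0) assigned to "the event will occur"
    outcomes:  1 if the event occurred, 0 if it did not
    Lower is better; always guessing 0.5 scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: three yes/no questions, with these probabilities.
probabilities = [0.8, 0.3, 0.9]
what_happened = [1, 0, 0]
print(round(brier_score(probabilities, what_happened), 3))  # 0.313

A forecaster who simply guessed 0.5 on every question would score 0.25 here, which is roughly what “dart-throwing chimps” achieve; beating that baseline consistently is what separates the better forecasters from the rest.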

In an effort to remedy that, the authors urge readers to take part in a “forecasting tournament”. An earlier tournament, run by Tetlock in the 1980s and featuring economists, political scientists and journalists, showed that “the average expert did only slightly better than random guessing”.

Moreover: “Experts with the most inflated views of their own batting averages tended to attract the most media attention. Their more self-effacing colleagues, the ones we should be heeding, often don’t get on to our radar screens.”

Over recent years the reborn tournament has involved more than 5,000 forecasters, making more than a million predictions on over 250 questions. The most interesting discovery has been vindication of “the unabashedly elitist ‘super-forecaster hypothesis’”, they write. The top 2% of participants “showed that there is more than luck at play” – and their forecasts are improving.

The next tournament runs from 2014 to 2015. “Current questions include: Will America and the EU reach a trade deal? Will Turkey get a new constitution? Will talks on North Korea’s nuclear programme resume?” they write.

“We predict with 80% confidence that at least 70% of you will enjoy it—and we are 90% confident that at least 50% of you will beat our dart-throwing chimps.”

See also: the future industry – the growth in professional predictions