Philip Tetlock, whose seminal work on the "prediction problem" has shaped my views about punditry, has written an amazing follow-on article for The Edge (well worth reading).
This is a recurring theme in the psychological literature: the tension between human-based forecasting and machine- or algorithm-based forecasting. It goes back to 1954, when Paul Meehl wrote on clinical versus actuarial prediction, comparing the predictions of clinical psychologists and psychiatrists to various algorithms. Over the last 58 years there have been hundreds of studies comparing human-based prediction to algorithm- or machine-based prediction, and the track record doesn't look good for people. People just keep getting their butts kicked over and over again.

I am intrigued by the early results. While it seems reasonable that political predictions are inherently impossible, despite our relentless desire to believe otherwise, the idea that we could harness the best of human insight and mechanical advantage (algorithms) is slightly disconcerting. It's like all those time-travel/foreknowledge paradoxes in science fiction or Calvinist theology: if we could foretell the future, what would it mean?
We don't have geopolitical algorithms that we're comparing our forecasters to, but we're turning our forecasters into algorithms, and those algorithms are outperforming the individual forecasters by substantial margins. There's another thing you can do, though, and it's more the wave of the future. You can go beyond the human-versus-machine or human-versus-algorithm comparison, or Kasparov versus Deep Blue (the famous chess competition), and ask: how well could Kasparov play chess if Deep Blue were advising him? What would the quality of chess be there? Would Kasparov and Deep Blue have a FIDE chess rating of 3,500, as opposed to Kasparov's rating of, say, 2,800 and the machine's rating of, say, 2,900? That is a new and interesting frontier for work, and it's one we're experimenting with.
In our tournament, we've skimmed off the very best forecasters in the first year, the top two percent. We call them "super forecasters." They're working together in five teams of 12 each, and they're doing very impressive work. We're also experimentally manipulating their access to the algorithms: they get to see what the algorithms look like, as well as their own predictions. The question is: do they do better when they know what the algorithms are, or do they do worse? [More]
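To make Tetlock's idea of "turning forecasters into algorithms" concrete: one simple family of such algorithms pools each forecaster's probability estimate for a question and then "extremizes" the pooled value, pushing it toward 0 or 1. The sketch below is a minimal illustration under assumed choices (log-odds averaging and an exponent of 2.0 are my illustrative picks, not the tournament's actual method):

```python
import math

def extremized_log_odds_pool(probabilities, a=2.0):
    """Pool individual probability forecasts by averaging their log-odds,
    then extremize the result by scaling the pooled log-odds by `a`.
    The value a=2.0 is an illustrative assumption, not a fitted parameter."""
    # Clip away exact 0s and 1s so log-odds stay finite.
    clipped = [min(max(p, 1e-6), 1 - 1e-6) for p in probabilities]
    mean_log_odds = sum(math.log(p / (1 - p)) for p in clipped) / len(clipped)
    # a > 1 pushes the aggregate toward 0 or 1; a = 1 is a plain pool.
    return 1 / (1 + math.exp(-a * mean_log_odds))

# Four forecasters, all leaning the same way on one question:
forecasts = [0.60, 0.70, 0.65, 0.55]
pooled = extremized_log_odds_pool(forecasts)
```

Here the extremized aggregate (about 0.74) is more confident than the simple average (0.625), capturing the intuition that when many independent forecasters lean the same direction, the combined evidence justifies a stronger stance than any individual took.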
BTW, I applied to be on the team, but didn't make the cut.
Put your name in.