Passé, at least they should be. Unless they embrace something other than partisanship, such as:
http://www.nytimes.com/2013/03/22/op...-fox.html?_r=0
Most important, participants were taught to turn hunches into probabilities. Then they had online discussions with members of their team adjusting the probabilities, as often as every day. People in the discussions wanted to avoid the embarrassment of being proved wrong.
In these discussions, hedgehogs disappeared and foxes prospered. That is, having grand theories about, say, the nature of modern China was not useful. Being able to look at a narrow question from many vantage points and quickly readjust the probabilities was tremendously useful. The Penn/Berkeley team also came up with an algorithm to weigh the best performers. Let’s say the top three forecasters all believe that the chances that Italy will stay in the euro zone are 0.7 (with 1 being a certainty it will and 0 being a certainty it won’t). If those three forecasters arrive at their judgments using different information and analysis, then the algorithm synthesizes their combined judgment into a 0.9. It makes the collective judgment more extreme.
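The aggregation step described above is often called "extremizing": average the forecasters' log-odds, then push the result toward 0 or 1. The article doesn't give the actual formula, so this is a minimal sketch of one common approach; the scaling exponent here is an assumed illustrative value chosen to roughly match the 0.7 → 0.9 example, not the tournament's calibrated parameter.

```python
import math

def extremize(probs, a=2.6):
    """Combine independent probability forecasts by averaging their
    log-odds, then scaling by a > 1 to make the collective judgment
    more extreme. `a` is an assumed illustrative value, not the
    Penn/Berkeley team's actual parameter."""
    # mean log-odds of the individual forecasts
    mean_logit = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    # scale the log-odds, then map back to a probability
    return 1 / (1 + math.exp(-a * mean_logit))

# Three forecasters at 0.7, as in the article's example
print(round(extremize([0.7, 0.7, 0.7]), 2))
```

The intuition: if three forecasters reach 0.7 independently, from different information, their agreement is stronger evidence than any one estimate, so the combined probability should sit above 0.7.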
This algorithm has been extremely good at predicting results. Tetlock has tried to use his own intuition to beat the algorithm but hasn’t succeeded.