SciCast participated in the TechCast Webinar Series on May 7, 2015: "Forecasting in Turbulent Times: Tools for Managing Change and Risk."
The webinar covered the SciCast Prediction Market (Charles Twardy), Cybersecurity Markets (Dan Geer), and the near and far future of AI (Robin Hanson). Read the full description. There were a few questions after each segment, and some more at the end. (Hanson fans: note that Robin's talk was not about markets this time, but a particular scenario extrapolation using economic reasoning from some strong initial assumptions — the subject of his forthcoming book.)
In a column, Andrew Gelman and Eric Loken note that academia has a problem:
Unfortunately, statistics—and the scientific process more generally—often seems to be used more as a way of laundering uncertainty, processing data until researchers and consumers of research can feel safe acting as if various scientific hypotheses are unquestionably true.
They consider prediction markets as a solution, but largely reject them, for reasons both bad and not so bad. I'll respond to their article here in unusual detail. First, the bad:
Would prediction markets (or something like them) help? It’s hard to imagine them working out in practice. Indeed, the housing crisis was magnified by rampant speculation in derivatives that led to a multiplier effect.
Yes, speculative market estimates were mistaken there, as were most other sources, and mistaken estimates caused bad decisions. But speculative markets were the first credible source to correct the mistake, and no other stable source had consistently more accurate estimates. Why should the most accurate source be blamed for mistakes made by all sources?
Allowing people to bet on the failure of other people’s experiments just invites corruption, and the last thing social psychologists want to worry about is a point-shaving scandal.