
Scaled Continuous Question

The following shows an example of a Scaled or Continuous question:

[Image: screenshot of the scaled/continuous question interface]

Instead of estimating the chance of a particular outcome, you are asked to forecast the outcome itself in natural units such as dollars. Forecasts that move the estimate toward the actual outcome are rewarded; those that move it away are penalized. As with probability questions, moving toward the extremes is progressively more expensive: we have simply rescaled the usual 0%–100% range and customized the interface.
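To make the rescaling concrete, here is a minimal Python sketch, assuming a logarithmic scoring rule and a simple linear mapping of the question's answer range onto the usual 0–1 scale. The function names, the linear mapping, and the liquidity parameter b are illustrative assumptions, not the site's actual formula.

```python
import math

def to_unit_scale(value, lo, hi):
    """Map a forecast in natural units (e.g. dollars) onto the 0-1 scale.

    Hypothetical linear rescaling; the real interface may use another mapping.
    """
    return (value - lo) / (hi - lo)

def log_score_change(old_estimate, new_estimate, outcome, lo, hi, b=100):
    """Points for moving the estimate from old_estimate to new_estimate,
    scored against the realized outcome (all in natural units).

    Uses a logarithmic-scoring-rule style payoff with liquidity b: moves
    toward the outcome earn points, moves away lose them, and moves near
    the ends of the range cost progressively more.
    """
    p_old = to_unit_scale(old_estimate, lo, hi)
    p_new = to_unit_scale(new_estimate, lo, hi)
    p_out = to_unit_scale(outcome, lo, hi)

    def score(p):
        # Cross-entropy-style log score, maximized when p equals p_out.
        return b * (p_out * math.log(p) + (1 - p_out) * math.log(1 - p))

    return score(p_new) - score(p_old)

# Example: question range $0-$1000, estimate moved from $400 to $600,
# outcome resolves at $700 -> positive score for moving closer.
print(round(log_score_change(400, 600, 700, 0, 1000), 1))
```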



Why did my forecast do that?

Forecasters frequently want to know why their forecast had so much (or so little) effect. For example, Topic Leader jessiet recently asked:

I made a prediction just now of 10% and the new probability came down to 10%. That seems weird: that my one vote would count more than all past predictions? I assume it’s not related to the fact that I was the question author?

The quick answer is that she used Power mode, our market interface, and that’s how markets work: your estimate becomes the new consensus. Sound crazy? Note that markets have beaten most other methods over the past three years of live geopolitical forecasting in the IARPA ACE competition. For two years we ran one of those markets, before switching to Science & Technology. So how can this possibly work? Read on for (a) how it works, (b) why you should start with Safe mode, (c) the scoring rule underneath, and (d) an actual example.
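As a rough illustration of the scoring rule underneath, here is a hedged Python sketch of an LMSR-style market payoff. The function name, the liquidity parameter b, and the assumed prior consensus of 25% before jessiet's 10% forecast are all illustrative assumptions, not the site's actual parameters.

```python
import math

def power_mode_payoff(p_old, p_new, b=100):
    """Logarithmic market scoring rule (LMSR-style) payoff sketch.

    In a market interface, your trade moves the consensus from p_old all
    the way to p_new, no matter how many forecasts came before; how far
    you move it determines how much you stand to win or lose.
    b is a hypothetical liquidity parameter setting the point scale.
    """
    gain_if_yes = b * math.log(p_new / p_old)              # paid if the event happens
    gain_if_no = b * math.log((1 - p_new) / (1 - p_old))   # paid if it does not
    return gain_if_yes, gain_if_no

# Assumed example: consensus sits at 25% and a forecaster moves it to 10%.
if_yes, if_no = power_mode_payoff(0.25, 0.10)
print(f"if the event happens: {if_yes:+.1f} points; if not: {if_no:+.1f} points")
# The consensus is now 10% until the next trader moves it again.
```

The point of the sketch is that the payoff depends only on where the consensus was and where you move it, so a single forecast can shift the estimate as far as the forecaster is willing to risk points.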

