Market Accuracy and Calibration

Prediction market performance can be assessed in a variety of ways, and SciCast researchers have recently been taking a closer look at the market's accuracy. A commonly used scoring rule is the Brier score, which is essentially the mean squared error between forecasts and the outcomes of questions.
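
For readers who want a concrete definition, here is a minimal sketch in Python (the function and variable names are illustrative, not SciCast's actual code) of the Brier score as the mean squared difference between probability forecasts and 0/1 outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes; lower is better."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must have the same length")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Example: three forecasts on binary questions, the first two of which resolved as 1.
print(brier_score([0.7, 0.6, 0.2], [1, 1, 0]))  # ~0.097
```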

Calibration

One component of the Brier score is calibration, also known as reliability. Calibration refers to the agreement between the observed probabilities of outcomes and the forecast probabilities of those outcomes. Well-calibrated forecasts are good indicators of the chances that events will occur. For example, if our market is well calibrated and reports that the chance of a new invention is 0.6, the chance must really be about 0.6. That invention might not occur, and the forecast would then look inaccurate in some respects, but if we could observe the real probability and it were close to the estimated probability, we would say that the forecast is well calibrated.

Because we cannot observe the real probability of a one-time event, we cannot tease out calibration from other forms of accuracy without grouping forecasts. After the questions in a selected group have resolved, we can estimate the chance that they would occur as the average of their resolutions. For binary questions, this just means computing the proportion of questions in the group that resolved as 1 rather than 0. We then compute the average forecast on the group of questions and compare it to our estimate of the real probability.
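
As a toy illustration of the grouping idea (the numbers and names below are made up, not SciCast data), the observed probability for a group of binary questions is just the fraction that resolved as 1, which we then compare to the group's mean forecast:

```python
# One group of five binary questions: time-averaged forecasts and 0/1 resolutions.
avg_forecasts = [0.55, 0.60, 0.62, 0.58, 0.65]
resolutions = [1, 0, 1, 1, 0]

observed_probability = sum(resolutions) / len(resolutions)   # 3/5 = 0.6
mean_forecast = sum(avg_forecasts) / len(avg_forecasts)      # 3.0/5 = 0.6
print(mean_forecast, observed_probability)  # close values suggest good calibration
```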

The grouping approach to estimating calibration is not without drawbacks, but it has become standard practice. Rather than compute a single summary statistic, I'm presenting the calibration measurement in the graph below, which plots the observed probability of events as a function of the average forecast. Forecasts on continuous questions were rescaled onto the [0, 1] interval, as with binary questions and the options of multiple-choice questions. To form the groups, I created 20 equally sized bins on the range of forecasts and placed each question in a bin based on its average forecast over time.
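
For anyone who wants to reproduce the grouping behind the graph, here is a minimal sketch of the binning step (assuming "equally sized" means 20 equal-width bins over [0, 1], with each question reduced to its time-averaged forecast and a 0/1 resolution; the names are illustrative, not the actual SciCast analysis code):

```python
def calibration_points(avg_forecasts, resolutions, n_bins=20):
    """Bin questions by average forecast; return (mean forecast, observed
    frequency of positive resolutions) for each non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for f, r in zip(avg_forecasts, resolutions):
        idx = min(int(f * n_bins), n_bins - 1)  # map a forecast in [0, 1] to a bin
        bins[idx].append((f, r))
    points = []
    for members in bins:
        if members:  # skip empty bins
            mean_forecast = sum(f for f, _ in members) / len(members)
            observed = sum(r for _, r in members) / len(members)
            points.append((mean_forecast, observed))
    return points  # plot observed vs. mean forecast against the diagonal
```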

The diagonal line represents perfect calibration: forecasts and observed probabilities would be equal for every group of questions. On the other lines we should expect some noise unless we have thousands of resolved questions, but we should hope that the points aren't consistently on one side of that thin black line. However, the points for all types of questions tend to fall to the upper left of the diagonal, indicating underconfidence about the chances of the events in question occurring. That result is disappointing, but it could be worse: the market could be systematically overconfident, which tends to have more severe consequences.

A second result worth noting is that the underconfidence seems to increase as events become more likely. We have a problem: all of SciCast's long-run forecasts (on resolved questions) above 0.75 were for events that ultimately occurred. To be well calibrated, the market should have forecast close to 1.0 on those events, but there is not a single question on which it did so. Few questions averaged forecasts above 0.85 over their runs, and none averaged above 0.95.

There is a reasonable possibility that the low end of the blue line would look more like the red line if some of the options on multiple-choice questions were evaluated independently. Because SciCast strives for clean resolutions, every possible outcome of a multiple-choice question must be listed, even when any sensible user recognizes that its chance of occurring is quite low. Maybe being well calibrated on all those low-probability options is easy; if it weren't, the graph would show overestimation of low probabilities and underestimation of high probabilities for both the blue and red lines.

I'd love to hear suggestions in the comments on how to improve our calibration without post-hoc corrections. Some of the underconfidence might come from questions that resolved positively early; as the preset resolution dates of more questions arrive, the observed probabilities on groups of questions will decrease. That won't get rid of all of the underconfidence, though.

Edit: At @sflicht's request, a file with the forecasts on scaled continuous questions is available: avg_forecasts_continuous.

18 thoughts on “Market Accuracy and Calibration”

  1. relax852014

    I believe that you nailed it in the last paragraph. The resolved long-term questions are by their very nature unrepresentative in that the overwhelming majority of long-term questions ever posed must certainly still be “unresolved.” But the largest confounding factor in your quest to “get rid of all of the underconfidence” is limited liquidity combined with the prohibitive opportunity cost of matching the points forecast on long-term questions with one’s beliefs. I may be 99% confident that a result will occur “after 2022,” but I would be a fool to tie up the points necessary to reflect that in my forecast. Hence, forecasting a 70% likelihood shows up as “underconfidence.”

    1. kennyth0

      This is a great point. Do you have a suggestion on how to increase liquidity on long-term questions? I have the feeling that putting more assets into the overall market wouldn’t guarantee that more points go into long-term questions. A similar concern arises for conditional forecasts because they often require more than one question to resolve before points are released.

      1. relax852014

        I vaguely recall a cursory discussion about this somewhere, sometime. I believe @ted had some speculations. It comes down to creating a mechanism for awarding variable point premiums for long-term forecasts. It would have to be rather complicated, based on resolution date and the probabilities of time scales within the question. For example, a question that can only resolve on 12/31/2020 might offer a 25% discount for points used on a forecast. Another question whose “latest possible” resolution date is 12/31/2020 would offer a slightly lower discount, which would fluctuate with the forecast probabilities. For example, as the “will happen between 2015-2016” bucket increases, the discount decreases. The result is that forecasters get more “bang” in their overall score by investing points in the long term. But it will take a great deal of thought to make something like that work and ensure that such questions don’t turn into point-laundering engines.

  2. sflicht

    @relax is right about the liquidity premium aspect. In fact, I think it’s a theorem that (assuming less liquidity in a given regime entails worse calibration, which is pretty reasonable for the reasons @relax gives) an MSR market will be well calibrated EITHER in the center OR the tails, but NOT both. Cf. http://arxiv.org/ftp/arxiv/papers/1206/1206.5252.pdf.

    It would be a nice piece of field research to compare LMSR to QMSR on the same corpus of questions (assigning participants randomly to each market), to verify the theoretical result. This would need to have a pretty large number of participants to overcome the small ratio of “superforecasters” in the general population. Or you could attempt an intensive Tetlock-style training regime to try to deal with that issue.

    1. kennyth0

      Thanks for the interesting ideas. Unfortunately, there’s not much the researchers can do with them for a while, but they will be discussed.

  3. sflicht

    I find the miscalibration in the left tail for scaled continuous questions particularly interesting. I guess my intuition is that anchoring bias will be more severe for scaled continuous than discrete questions: any time you make a forecast you are staring at the bounds of the question writer’s “90%” confidence interval, which must have an effect. Could we simply have been unlucky in that the few scaled continuous questions which have resolved had their left lower bound set too low, causing us to overweight the odds of a very small outcome value? If you provide the q. numbers for all the resolved scaled continuous qs, I’ll take a look. (There’s no particularly convenient way to do this search with the site’s question view interface, and I’m a bit too lazy to write an api query for this purpose.)

    1. relax852014

      The scaled MH370 contract question resolving today is a perfect case in point. It seems to perfectly illustrate how a question writer’s idiosyncratically skewed “90% confidence” notions can completely distort the forecasting. Had the range been $1-$100 million, you would have had a much nicer Brier curve.

    2. kennyth0

      I’ve put this on my to-do list. I’ll post the question numbers for all continuous questions along with the bins they’re in when I have a chance.

      1. sflicht

        One experiment to run would be to take a scaled continuous question with natural logical bounds and put the limits beyond those bounds, so that (a) no one should be paying attention to the bounds, reducing anchoring bias, and (b) the “illiquidity in the tails” effect is minimized. For example, “How many games will Carlsen win in Sochi?” I assume the bounds are zero to 12. (In fact, I’m not sure if 12 is the right upper bound, because I don’t know the tiebreak procedure.) What if you made the bounds -6 to 18?

      2. relax852014

        (This is a response to @sflicht’s response to @kennyth0’s “to-do list” comment; not sure where this will appear in thread nest).

        @sflicht’s experiment will have the interesting side effect of affecting the liquidity issue. When the bounds are artificially extreme, forecasting within the “real” bounds is much cheaper. In this example, if I believed that a Russian femme fatale would get to Carlsen the night before the tournament, I could forecast down all the way to zero wins without needing to expend infinite points.

      3. relax852014

        Rereading @sflicht’s actual comment, I suppose what I just wrote in response was a huge part of the point, not a “side effect.” Duh.

      4. kennyth0

        @sflicht, if anchoring happens at the low end, shouldn’t it also happen at the high end of the scale?

        I’m adding the requested data to the post with average forecasts rather than bins in the file.

      5. sflicht

        @kenny, of course it should happen at the high end. But the sample size is (I think) pretty small, so the asymmetry could just be random. For these particular questions, the authors’ original confidence intervals happened to be worse on the low end than the high end.

  4. Lucien

    The under-confidence may partly reflect a psychological expectation among estimators. People will likely expect questions on a site like SciCast to have real probabilities less than 1.0, i.e., that they are truly uncertain. Otherwise there would be little value to prediction. This would account both for the under-confidence and for the increase in this under-confidence as real probabilities approach 1.0.

  5. Ben

    I think you can rescale the axes on the chart to make it better match the Brier score. When a forecasting agent consistently forecasts a 10% likelihood for events whose true likelihood is 1%, that’s much worse for its Brier score than the opposite error of 1% for 10% events. But the chart in this post uses linear axes, which makes the errors look identical.

    There’s also, I think, a considerably larger margin of error on the true likelihood of events at the edges of the prediction range.

    1. kennyth0

      I think you’re mostly right, @Ben. This is a standard calibration plot, but the effect of each bin on the calibration statistic varies along the scale. Being miscalibrated near the middle of the scale is worse than being miscalibrated near the ends (assuming the mistake is the same in each case, 9 percentage points in your example). Additionally, one can really only be miscalibrated in one direction at each end. Still, the way to achieve the best Brier score and calibration for the market or the individual is to honestly report beliefs about the probability and try to make those beliefs match the true probability.

      The bigger problem is that we’re seeing our market estimates pretty consistently fall below the observed probabilities even in the middle of the scale, from about 0.45 to 1.00. I’m still thinking this comes more from the duration and liquidity on some questions.

  6. disp_name_unavailable

    Part of the reason SciCast doesn’t have questions that average over 85% during their lifetime is the way questions are selected. Question authors aren’t interested in writing questions that the market would think are 90% likely to occur, e.g., “Will at least 10 people die from automobile accidents in the U.S. during 2015?” SciCast could add a question like that, but what’s the point? Also note that some questions with obvious answers are generally worded as longshots: “Will Facebook’s user base decline during 2014?” rather than “Will Facebook’s user base grow during 2014?”, and “Will we discover life on Mars by September 2014?” rather than “Will Earth remain the only known planet with life on it at the end of September 2014?”

  7. ctwardy

    Great discussion. Be on the lookout for updated calibration plots — the plots in this post averaged over questions instead of the more usual method of averaging over forecasts.

