HPV questions have resolved on SciCast

The HPV questions have resolved. There were 52 state-level questions (one per state or territory), plus six cluster questions and one question about the nation (see the trend graph below).

The state questions asked, “What will be the 2013 estimated vaccination coverage of HPV for females aged 13-17 in the state of <State name>?”

State HPV vaccination coverage likely depends on the cluster to which the state belongs. We asked forecasters to help fill in the conditional probability forecasts by using the “Related Forecasts” section after making a forecast on a cluster question.
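
Schematically (an illustration of how such conditional forecasts combine, not a description of the exact SciCast market mechanics), a state forecast decomposes over the clusters by the law of total probability:

$$P(\text{state} = x) \;=\; \sum_{c} P(\text{state} = x \mid \text{cluster} = c)\,P(\text{cluster} = c)$$

so filling in the conditional terms lets a forecast on a cluster question propagate down to its member states.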

For the state questions, there were 4318 forecasts made by 1521 unique users, summed per question (a user who forecast on several state questions is counted once on each). The question for the state of Maine had the most activity, with 139 forecasts and 78 unique users. Did you make a forecast? If so, log in and check your dashboard.

The six cluster questions were:

What will be the 2013 estimated vaccination coverage of HPV for females aged 13-17 in states with historically ‘very high,’ ‘somewhat high,’ ‘average,’ ‘somewhat low,’ and ‘very low’ HPV vaccination coverage?

Read this in-depth blog post for more information about the cluster analysis for the HPV-related questions on SciCast.

Data & Analysis

The national question, “What percentage of the US female population aged 13-17 will be vaccinated against HPV by the end of 2013?” has resolved. Nationally, the rate was 57.3%. Read the CDC report.

The Brier Score for a uniform distribution forecast (“UBS”) was 0.6. Our raw market Brier Score (before smoothing or adjustments) was 0.762.

[Trend graph for the question, “What percentage of the US female population aged 13-17 will be vaccinated against HPV by the end of 2013?”]

For the 52 state questions, the average Brier Score for a uniform distribution forecast (“UBS”) was 0.040. Our raw market average Brier Score (before smoothing or adjustments) was 0.025.

For the six cluster questions, the average Brier Score for a uniform distribution forecast (“UBS”) was 0.372. Our raw market average Brier Score (before smoothing or adjustments) was 0.392. Visit the HPV Resolved Questions spreadsheet to see all data related to these questions.

Note that no one other than administrators traded on Q546. Therefore, after removing admin trades, our score is the same as the uniform score: 0.619.

The Brier Score (Brier 1950) measures the accuracy of probabilistic predictions; lower is better. For dates, counts, and other ordered questions, we use the related Ranked Probability Score (Gneiting & Raftery 2007). Both range from 0 to 2, and we report them together as Briers. On a binary question, a 50-50 forecast yields a Brier of 0.5.
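
As a concrete illustration, here is a minimal sketch of the quadratic scoring rule described above. This is not SciCast's actual scoring code (published scores also involve the smoothing and adjustments mentioned earlier); it just shows how the raw numbers arise.

```python
def brier(probs, outcome):
    """Multi-outcome Brier score: the sum of squared differences between
    the forecast probability vector and the 0/1 indicator of the realized
    outcome. Ranges from 0 (perfect) to 2 (certain and wrong)."""
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(probs))

# A 50-50 forecast on a binary question scores 0.5, as noted above.
print(brier([0.5, 0.5], outcome=1))      # 0.5

# A uniform forecast over K bins scores 1 - 1/K on this rule, which is
# one way to form a "no confidence" (UBS) baseline for a question.
# (For ordered questions the Ranked Probability Score is analogous but
# uses cumulative probabilities, so the baselines differ.)
K = 5
print(brier([1.0 / K] * K, outcome=2))   # 0.8
```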

Three interesting issues stand out:

1) SciCast forecasters tended to underpredict vaccination rates.

2) Over all of these questions, there were only six conditional forecasts other than those made by administrators to set up the basic relations among the nation, clusters, and states. Still, the state questions on which we performed worst were linked to the cluster questions on which we performed worst, so the relations held. That is, our Brier Scores were typically worse than those of “no confidence” forecasts on states with historically “somewhat low” and “very low” vaccination rates, and on the clusters to which those states belonged.

3) There was also some “regression to the mean”: many states in the “very high” and “very low” vaccination-rate clusters, based on the previous three years of data, moved toward the national average in 2013 (see the toy sketch below).
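
For intuition about item 3, here is a toy simulation (hypothetical numbers, not the CDC data): if each state's observed rate is a stable underlying rate plus independent year-to-year noise, the states that looked most extreme in earlier years were partly extreme by luck, so their next observation tends to land closer to the overall average.

```python
import random

random.seed(1)

# Toy model: 52 "states" with true rates near a common mean of 50%,
# observed each year with independent noise (all numbers hypothetical).
N = 52
true_rates = [random.gauss(50.0, 3.0) for _ in range(N)]
year1 = [r + random.gauss(0.0, 4.0) for r in true_rates]
year2 = [r + random.gauss(0.0, 4.0) for r in true_rates]

# Take the ten states most extreme in year 1 and see where they land next year.
extremes = sorted(range(N), key=lambda i: abs(year1[i] - 50.0))[-10:]
gap1 = sum(abs(year1[i] - 50.0) for i in extremes) / len(extremes)
gap2 = sum(abs(year2[i] - 50.0) for i in extremes) / len(extremes)
print(f"mean distance from 50%: year 1 = {gap1:.1f}, year 2 = {gap2:.1f}")
# Year 2 is typically smaller: the year-1 extremes regress toward the mean.
```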

Comments

Participants left 53 comments on the HPV state questions. Here are a few:

I feel like the population of a state matters when deciding and predicting these statistics, because more populated states get priority on vaccines and less populated states only get vaccines occasionally. Every state has vaccines, but sometimes vaccines are only offered at certain times of the year, such as flu shots.

Looking at previous data for 2010 and 2012 on the US Virgin Islands’ vaccination of females in the stated age range, I see 22.5% for one shot in 2010, whereas in 2012, the most recent data, it was 28.7%. This suggests an upward trend, and there should be noticeable improvement once again in 2013. This follows the same trend as the United States as a whole, where the number of females aged 13-17 getting at least one HPV shot has been improving.

There are not that many vaccination centers in Idaho, but those that do have the vaccine will most likely be in big cities such as Boise, and maybe even Sandpoint.

@Miku: You don’t necessarily need a vaccination centre; all you need is a general practitioner, or more normally just a nurse. I think the uptake rate of this vaccine depends much more on factors other than convenience: specifically awareness, and cultural and religious issues.

Louisiana has shown a recent downtrend, with a 0.9% decrease from 2011 to 2012. However, the percentage of females who were vaccinated held steady from 2010 into 2011 and only recently dropped in 2012. Can anyone perhaps provide more explanation for this downtrend?

The general trend in recent years has been that more and more girls aged 13 to 17 are getting vaccinated, but there is no evidence that this year will see a greater increase than any other year. I do not recall, nor can I find, any recent campaigns rolled out within the last year that had a significant effect in educating the public; but, as the trend for Texas dictates, there should be an increase nevertheless. The chart at http://www.ncsl.org/research/health/hpv-vaccine-state-legislation-and-statutes.aspx#2013-2014 shows that Texas has not enacted any recent legislation to help enforce vaccination. The increase is most likely significant, but not incredible.

Did you like this post? Follow us on Twitter.


13 thoughts on “HPV questions have resolved on SciCast”

  1. ctwardy

    1. Minor: “Nevertheless” should be “Therefore”: with no edits, our forecast is the uniform distribution.
    2. We beat uniform about 2/3 of the time, though the average gain was very small, 0.08.
    3. Neither #edits nor #users is correlated with gain.

    1. sflicht

      That 0.08 improvement on uniform reeks of statistical insignificance… :)

      Personally, I just didn’t find this complex of questions very engaging, so that’s anecdotal evidence that the results here are not especially reflective of the market’s performance.

      1. ctwardy

        I think you mean it doesn’t reflect the market’s *potential*. But some legitimate forecasts may be less engaging. I’d like to see us increase our lead over uniform and over the unweighted average.

      2. sflicht

        1500 unique users seems high — must have been a lot of banner traffic. Do you have the metadata to compare banner ad forecaster performance to non-banner forecaster performance? Or to regress the Brier score of a forecast versus the experience of the forecaster (measured by # forecasts made)?

      3. relax852014

        @sflicht, even with the one-time visitors, I don’t believe the true number was anything more than a small fraction of the claimed 1521 unique visitors. Perhaps they counted the unique visitors per state and then added them up to get the 1521 (in which case I would be a “unique” visitor 52 times for the set).
        I believe that the movement on these questions was driven less by ads than by the promotion days. I would be interested in seeing what percentage of forecasts were made on “Activity” reward days. For this question set, I suspect it was well over half.

      4. kennyth0

        @sflicht and relax, the phrasing was misleading on the number of unique users. 1521 is the sum over the group of questions of unique users on each question.

        People coming in from the ads tended to make very few forecasts as individuals; however, some users (recruited by the ads or otherwise) were very motivated by the activity rewards. A large portion of forecasts occurred on activity reward days of the week, and not just on these questions. Because those forecasts tended to barely move the estimates, they had almost no effect on the accuracy of the market.

  2. PrieurDeLaCoteD'or

    Thanks for the summary, Jessie!

    I forecasted based on “regression to the mean” and did well on these questions. I think a number of the SciCast forecasters thought that the two data points listed in the question background constituted a trend. Instead, they looked like noise around a static average. Looking at the raw data, the confidence intervals around the historical state-level HPV rates were wider than would justify confidence in any sort of state-level trend.

    I think that’s an interesting example of anchoring bias that future question writers should consider.

  3. relax852014

    Just out of curiosity, how does SciCast pronounce “Brier?” Does it sound like “brie” + “err,” “brie” + “ay,” or like the Supreme Court Justice Stephen “Breyer?”

  4. relax852014

    What was the deal with question 546? Did you determine why it was never returned by searches on “HPV,” or never appeared on the question lists? Are there any more “invisible” questions out there?

    1. kennyth0

      I can’t be absolutely sure why, but three things happened: 1) initially our team didn’t click to make the question visible, 2) there was a misspelling in a question tag, and 3) after the other two problems were solved, inertia prevented users from forecasting. This seems to be an isolated case in which we bungled the publication of the question and didn’t notify forecasters of the fix.

      After the public references to six cluster questions, I’m surprised that devoted forecasters didn’t notice a problem.

  5. relax852014

    Final thought on this question set: did you account for the aberration of the last two days of forecasting? The results were published and known for over 48 hours before the questions were closed for forecasting. Would removal of the post-publication forecasts significantly affect your conclusions?

    1. kennyth0

      We did try the analysis without the last couple of days. Because the questions were open for nearly three months, the difference in performance was trivial.

