The final SciCast annual report has been released! See the “About” or “Project Data” menus above, or go directly to the SciCast Final Report download page.
Executive Summary (excerpts)
Registration and Activity
SciCast has seen over 11,000 registrations and over 129,000 forecasts. Google Analytics reports over 76K unique IP addresses (roughly 7 per registered user) and 1.3M pageviews. The average session duration was 5 minutes.
The SciCast 2015 Annual Report has been approved for public release. The report focuses on Y4 activities, but also includes a complete publication and presentation list for all four years. Please click “Download SciCast Final Report” to get the PDF. You may also be interested in the SciCast anonymized dataset.
Here are two paragraphs from the Executive Summary:
We report on the fourth and final year of a large project at George Mason University developing and testing combinatorial prediction markets for aggregating expertise. For the first two years, we developed and ran the DAGGRE project on geopolitical forecasting. On May 26, 2013, we renamed ourselves SciCast, engaged Inkling Markets to redesign our website front-end and handle both outreach and question management, re-engineered the system architecture and refactored key methods to scale up by 10x–100x, engaged Tuuyi to develop a recommender service to guide people through the large number of questions, and pursued several engineering and algorithm improvements, including smaller and faster asset data structures, backup approximate inference, and an arc-pricing model and dynamic junction-tree recompilation that allowed users to create their own arcs. Inkling built a crowdsourced question-writing platform called Spark. The SciCast public site (scicast.org) launched on November 30, 2013, and substantial recruiting began in early January 2014.
As of May 22, 2015, SciCast has published 1,275 valid questions and created 494 links among 655 questions. Of these, 624 questions are open now, of which 344 are linked (see Figure 1). SciCast has an average Brier score of 0.267 overall (0.240 on binary questions), beating the uniform distribution 85% of the time, by about 48%. It is also 18–23% more accurate than the available baseline: an unweighted average of its own “Safe Mode” estimates, even though those estimates are informed by the market. It beats that ULinOP (unweighted linear opinion pool) about 7 times out of 10.
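For readers unfamiliar with the scoring rule behind these numbers, here is a minimal sketch of the standard multi-outcome Brier score (the function name and example probabilities are ours, not from the report). It ranges from 0 (a confident, correct forecast) to 2 (a confident, wrong one), and a uniform forecast over k options scores 1 − 1/k, which is why 0.5 is the uniform baseline on binary questions:

```python
def brier(probs, outcome):
    """Multi-outcome Brier score: sum of squared errors across all options.

    probs:   forecast probabilities, one per option (should sum to 1)
    outcome: index of the option that actually occurred
    """
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(probs))

# A binary question that resolved to option 0:
print(brier([0.8, 0.2], 0))   # 0.08 -- a good forecast
print(brier([0.5, 0.5], 0))   # 0.5  -- the uniform baseline on binaries
```

Lower is better, so an average of 0.267 against a uniform baseline of roughly 0.5 is the "about 48%" improvement quoted above.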
You are welcome to cite this annual report. Please also cite our Collective Intelligence 2014 paper and/or our International Journal of Forecasting 2015 paper (if it gets published — under review now).
Congratulations to the winners of the Combo Edits Contest! We’ve awarded $16,000 proportionately to our top forecasters for their efforts during a month-long period. This contest encouraged SciCasters to add their own links.
SciCast participated in the TechCast Webinar Series on May 7, 2015, Forecasting in Turbulent Times: Tools for Managing Change and Risk.
The webinar covered The SciCast Prediction Market (Charles Twardy), Cybersecurity Markets (Dan Geer), and Near and far future of AI (Robin Hanson). Read the full description. There were a few questions after each segment, and some more at the end. (Hanson fans: note that Robin’s talk was not about markets this time, but a particular scenario extrapolation using economic reasoning from some strong initial assumptions, and the subject of his forthcoming book.)
We’re looking for forecasting tips from our top SciCasters to include in our training lessons and we would love to hear from you. What tips can you share with fellow forecasters? Possible topic areas include:
Question Selection
Background Research
Keeping up with News
Estimation & Calibration
Trading Rules
Safe Mode or Power Mode Tips
Website Tips
and more…
Feel free to comment here or send a message to [email protected]. Thank you!
Can you be the most accurate forecaster on SciCast? Look for the questions marked with a gold Au symbol or select the “Prize Eligible” topics when searching questions. Forecasts on these questions through March 6th will have their market scores calculated and added to a person’s “portfolio.” The best portfolios at a time shortly after March 7, 2015, will win big prizes.
We recently spoke with John Brownstein, Ph.D., co-creator of HealthMap (www.healthmap.org), a global leader in utilizing online informal sources for disease outbreak monitoring and real-time surveillance of emerging public health threats. SciCast partnered with HealthMap and the Discovery Analytics Center at Virginia Tech on a series of questions about the 2014–2015 flu season in the United States. Predict now: https://scicast.org/flu.
The market is better calibrated than we thought, but not perfect. In our previous calibration post, each question counted once. In the chart below, each forecast counts once, which is the usual method.
Because SciCast strives for continual improvement and also needs even more participants and forecasts than in previous years, we have been exploring the effectiveness of incentives, particularly monetary incentives, for increasing the quality of participation in a prediction market. This post is the first of a five-part series summarizing the first two incentives studies as we start a third. (Parts of this series are based on previous technical reports unavailable to the public.) This first post lays out the goals, hypotheses, and background of the incentives studies. If you’ve been participating in SciCast for a while, you might better understand some of your own experiences after reading this.