Tag Archives: prediction market

SciCast Final Report (Public)

The SciCast 2015 Annual Report has been approved for public release. The report focuses on Y4 activities, but also includes a complete publication and presentation list for all four years. Please click “Download SciCast Final Report” to get the PDF. You may also be interested in the SciCast anonymized dataset.

Here are two paragraphs from the Executive Summary:

We report on the fourth and final year of a large project at George Mason University developing and testing combinatorial prediction markets for aggregating expertise. For the first two years, we developed and ran the DAGGRE project on geopolitical forecasting. On May 26, 2013, we renamed ourselves SciCast, engaged Inkling Markets to redesign our website front-end and handle both outreach and question management, re-engineered the system architecture and refactored key methods to scale up by 10x to 100x, engaged Tuuyi to develop a recommender service to guide people through the large number of questions, and pursued several engineering and algorithm improvements, including smaller and faster asset data structures, backup approximate inference, and an arc-pricing model and dynamic junction-tree recompilation that allowed users to create their own arcs. Inkling built a crowdsourced question-writing platform called Spark. The SciCast public site (scicast.org) launched on November 30, 2013, and began substantial recruiting in early January 2014.

As of May 22, 2015, SciCast has published 1,275 valid questions and created 494 links among 655 questions. Of these, 624 questions are open now, of which 344 are linked (see Figure 1). SciCast has an average Brier score of 0.267 overall (0.240 on binary questions), beating the uniform distribution 85% of the time, by about 48%. It is also 18-23% more accurate than the available baseline: an unweighted average of its own “Safe Mode” estimates (a ULinOP, or unweighted linear opinion pool), even though those estimates are informed by the market. It beats that ULinOP about 7 times in 10.
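For readers unfamiliar with the metric, here is a minimal sketch of the Brier score cited above: the squared difference between a probability forecast and the realized outcome, summed over outcomes, where lower is better and the uniform forecast on a binary question scores 0.5. The example question and numbers below are hypothetical, not SciCast data.

```python
def brier_score(forecast, outcome_index):
    """Squared error between a probability forecast (one value per
    outcome) and the one-hot vector of the realized outcome;
    lower is better."""
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(forecast))

# Hypothetical binary question: forecast 80% "yes", and "yes" happens.
print(brier_score([0.8, 0.2], 0))   # 0.08
print(brier_score([0.5, 0.5], 0))   # 0.5, the uniform baseline
# Relative improvement over uniform here: (0.5 - 0.08) / 0.5 = 84%
```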

You are welcome to cite this annual report. Please also cite our Collective Intelligence 2014 paper and/or our International Journal of Forecasting 2015 paper (if it is published; it is under review now).

Sincerely,

Charles Twardy and the SciCast team

US Flu Forecast: Exploring links between national and regional level seasonal characteristics

For the flu forecasting challenge (https://scicast.org/flu), participants are required to predict several flu season characteristics, at the national level and for each of the 10 HHS regions. For some of the required quantities, such as peak percentage influenza-like illness (ILI) and total seasonal ILI count, one may argue that national-level values have some relationship with the regional-level ones. In other words, participants may be led to believe that national-level statistics can be obtained from the regional-level ones.
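As a toy illustration of that relationship: for any given week, the national ILI percentage is roughly a population-weighted average of the regional values. The regions, weights, and figures below are made-up placeholders, not CDC/HHS data.

```python
# Hypothetical sketch: imply a national ILI percentage from regional
# values via population weights. All numbers are placeholders, not
# actual CDC/HHS figures.
regional_ili = {"Region 1": 2.1, "Region 2": 3.4, "Region 3": 1.8}
pop_weight   = {"Region 1": 0.25, "Region 2": 0.45, "Region 3": 0.30}

national_ili = sum(regional_ili[r] * pop_weight[r] for r in regional_ili)
print(f"Implied national ILI: {national_ili:.2f}%")  # about 2.6%
```

Note that season-level characteristics such as peak week need not aggregate this simply, since regional seasons can peak at different times.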

Continue reading

SciCast WSJ Coverage: U.S. Intelligence Community Explores More Rigorous Ways to Forecast Events

SciCast has been featured in a Wall Street Journal article about crowdsourced forecasting in the U.S. intelligence community. We’re excited to share that SciCast now has nearly 10,000 participants, a 50% increase in the last two months and an important achievement for a crowdsourced prediction site.

Continue reading

SciCast Calls for Science, Technology Experts to Make Predictions

Contact:
Lynda Baldwin – 708-703-8804;
[email protected] 

Candice Warltier – 312-587-3105;
[email protected]

FOR IMMEDIATE RELEASE 

SciCast Calls for Science, Technology Experts to Make Predictions 

Largest sci-tech crowdsourcing forecast site in search of professionals and enthusiasts to predict future events 

FAIRFAX, Va. (June 19, 2014) – SciCast, a research project run by George Mason University, is the largest known science- and technology-focused crowdsourced forecasting site. So what makes a crowdsourced prediction market more powerful? An even bigger crowd. SciCast is launching its first worldwide call for participants to join the existing 2,300 professionals and enthusiasts, ranging from engineers to chemists and from agriculturists to IT specialists.

Continue reading

Fixing Academia Via Prediction Markets on Overcoming Bias

By Robin Hanson

When I first got into prediction markets twenty-five years ago, I called them “idea futures”, and I focused on using them to reform how we deal with controversies in science and academia (see here, here, here, and here). Lately I’ve focused on what I see as the much higher-value application of advising decisions and reforming governance (see here, here, here, and here). I’ve also talked a lot lately about what I see as the main social functions of academia (see here, here, here, and here). Since prediction markets don’t much help to achieve these functions, I’m not optimistic about the demand for using prediction markets to reform academia.

But periodically people do consider using prediction markets to reform academia, as did Andrew Gelman a few months ago. And a few days ago Scott Alexander, who I once praised for his understanding of prediction markets, posted a utopian proposal for using prediction markets to reform academia. These discussions suggest that I revisit the issue of how one might use prediction markets to reform academia, if in fact enough people cared enough about gaining accurate academic beliefs. So let me start by summarizing and critiquing Alexander’s proposal.

Alexander proposes prediction markets where anyone can post any “theory”, broadly conceived, like “grapes cure cancer.” (Key quotes below.) Winning payouts in such markets suffer a roughly 10% tax to fund experiments that test the posted theories, and in addition some such markets are subsidized by science patron orgs like the NSF. Bettors in each market vote on representatives, who then negotiate to pick someone to pay to test the bet-on theory. This tester, who must not have a strong position on the subject, publishes a detailed test design, at which point bettors could leave the market and avoid the test tax. “Everyone in the field” must make a public prediction on the test. Then the test is done, winners are paid, and a new market is set up for a new test of the same question. Somewhere along the line, private hedge funds would also pay for academic work in order to learn where they should bet.
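To make those mechanics concrete, here is a toy sketch of the settlement step as summarized above: losing stakes go to winners, and roughly 10% of each winning payout is skimmed to fund the test. The parimutuel framing, names, and numbers are illustrative assumptions, not part of Alexander’s proposal.

```python
TEST_TAX = 0.10  # roughly 10% of winning payouts funds the experiment

def settle(bets, outcome):
    """Toy parimutuel settlement: the whole pool is split among
    winners in proportion to their stakes, then the test tax is
    skimmed from each winning payout.
    `bets` maps bettor -> (side, stake)."""
    pool = sum(stake for _, stake in bets.values())
    winners = {b: s for b, (side, s) in bets.items() if side == outcome}
    winning_stake = sum(winners.values())
    payouts, test_fund = {}, 0.0
    for bettor, stake in winners.items():
        gross = pool * stake / winning_stake
        tax = gross * TEST_TAX
        test_fund += tax
        payouts[bettor] = gross - tax
    return payouts, test_fund

bets = {"alice": ("true", 60.0), "bob": ("false", 40.0)}
payouts, fund = settle(bets, "true")
print(payouts)   # {'alice': 90.0}  (100 pool, minus 10% tax)
print(fund)      # 10.0 available to pay the tester
```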

That was the summary; here are some critiques…

Read the full post at Overcoming Bias
