Tag Archives: crowdsourcing

[Figure: Google Analytics statistics for SciCast, as of May 22, 2015]

SciCast Final Report Released

The final SciCast annual report has been released! See the “About” or “Project Data” menus above, or go directly to the SciCast Final Report download page.

Executive Summary (excerpts)

Registration and Activity

SciCast has seen over 11,000 registrations and over 129,000 forecasts. Google Analytics reports over 76K unique IP addresses (suggesting roughly 7 per registered user) and 1.3M pageviews. The average session duration was 5 minutes.


SciCast Final Report (Public)

The SciCast 2015 Annual Report has been approved for public release. The report focuses on Y4 activities, but also includes a complete publication and presentation list for all four years. Please click “Download SciCast Final Report” to get the PDF. You may also be interested in the SciCast anonymized dataset.

Here are two paragraphs from the Executive Summary:

We report on the fourth and final year of a large project at George Mason University developing and testing combinatorial prediction markets for aggregating expertise. For the first two years, we developed and ran the DAGGRE project on geopolitical forecasting. On May 26, 2013, we renamed ourselves SciCast, engaged Inkling Markets to redesign our website front-end and handle both outreach and question management, re-engineered the system architecture and refactored key methods to scale up by 10x–100x, engaged Tuuyi to develop a recommender service to guide people through the large number of questions, and pursued several engineering and algorithm improvements, including smaller and faster asset data structures, backup approximate inference, and an arc-pricing model and dynamic junction-tree recompilation that allowed users to create their own arcs. Inkling built a crowdsourced question-writing platform called Spark. The SciCast public site (scicast.org) launched on November 30, 2013, and began substantial recruiting in early January 2014.

As of May 22, 2015, SciCast has published 1,275 valid questions and created 494 links among 655 questions. Of these, 624 questions are open now, of which 344 are linked (see Figure 1). SciCast has an average Brier score of 0.267 overall (0.240 on binary questions), beating the uniform distribution 85% of the time, by about 48%. It is also 18–23% more accurate than the available baseline: an unweighted average of its own “Safe Mode” estimates, even though those estimates are informed by the market. It beats that unweighted linear opinion pool (ULinOP) about 7 times out of 10.
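For readers unfamiliar with the metric: the Brier score is the squared error between a forecast and the actual outcome, so lower is better, and the uniform distribution makes a natural baseline. Here is a minimal sketch in Python of how such a comparison works, using invented numbers rather than SciCast data:

```python
# Minimal Brier-score sketch (invented numbers, not SciCast data).
# For a question with k mutually exclusive options, the multi-category
# Brier score is the sum of squared errors between the forecast vector
# and a one-hot outcome vector: 0 is perfect, 2 is the worst possible.

def brier(forecast, outcome_index):
    """Multi-category Brier score against a one-hot outcome."""
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(forecast))

# A four-option question on which option 2 actually occurred:
market  = [0.10, 0.10, 0.70, 0.10]
uniform = [0.25, 0.25, 0.25, 0.25]

print(brier(market, 2))   # 0.12
print(brier(uniform, 2))  # 0.75 -- the market beats uniform here
```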

You are welcome to cite this annual report. Please also cite our Collective Intelligence 2014 paper and/or our International Journal of Forecasting 2015 paper (currently under review).

Sincerely,

Charles Twardy and the SciCast team


HealthMap Co-Founder Dr. John Brownstein on the Value of Crowdsourced Forecasting in Public Health

We recently spoke with John Brownstein, Ph.D., co-creator of HealthMap (www.healthmap.org), a global leader in utilizing online informal sources for disease outbreak monitoring and real-time surveillance of emerging public health threats. SciCast partnered with HealthMap and the Discovery Analytics Center at Virginia Tech on a series of questions about the 2014-2015 flu season in the United States. Predict now: https://scicast.org/flu.


User Activity (1st Quarter, 2014)

By March 31, SciCast had 5,425 forecasts, 1,375 users, and 444 questions.

The graph below (click to enlarge) shows some user activity statistics through the end of March. Registrations have leveled off, but the number of daily forecasts per active user is rising. Since January, the average number of forecasts per day among people who make comments and forecasts on SciCast questions has roughly doubled (from 2.5 to 5).

[Figure: daily forecasts per active user, January–March 2014]

The number of registered users has increased over the same time frame, but most registration occurred early in the year. We had about 800 new users in January, but only about 200 new users in each of February and March. April will see some new outreach campaigns and incentives.

Please help the SciCast team by encouraging other people to join in our forecasting challenge. Our crowdsourcing approach to predicting science and technology benefits from having a crowd to forecast on every question.

The more competitive users might like to take advantage of the daily and weekly cycles in forecasting. Forecast timing shows we still have a strong U.S. bias: few forecasts occur during the U.S. night, and mornings also see fewer forecasts than afternoons and evenings. There are roughly half as many forecasts each hour from 07:00 to 11:00 as there are each hour from 11:00 to 19:00. (All times U.S. Eastern: GMT-5 in winter, GMT-4 during daylight saving time.)

Weekends also have slightly fewer forecasts. There are four forecasts per day on Saturday, Sunday, and Monday for every five forecasts per day on Tuesday through Friday.

by Ken Olson

Did you like this post? Follow us on Twitter.


Can crowdsourcing help find the missing Malaysia Airlines flight #MH370?

On SciCast, we’ve posted three questions about the missing plane. Can crowdsourcing help to locate it?

Dr. Charles Twardy, Project Principal, explains the different ways to crowdsource a search. “When a community turns out to help look for a lost child, that’s crowdsourcing,” he says. “The community volunteers typically aren’t as well-prepared as the search teams, but when directed by experienced Field Team Leaders, they can greatly extend the search effort. Similarly, experimental micro-tasking sites like TomNod.com let volunteers help search piles of digital images. Call it the effort of the crowd. SciCast is about the wisdom of the crowd: weighing the vast amounts of uncertain and conflicting evidence to arrive at a group judgment of, say, the relative chances of several regions or scenarios. This could be as simple as an average: a robust method with much to recommend it when judgments are independent. Or it could be something more advanced, like SciCast’s combinatorial prediction market. A market reduces double-counting, and may be better suited to the case where most of us are just mulling over the same information but a few have real insight. The trick is to find a large and diverse crowd, and persuade them to participate.”
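To make the “simple average” option concrete, here is a minimal sketch of an unweighted linear opinion pool over candidate search regions. The region names and forecaster probabilities below are invented for illustration; SciCast’s actual combinatorial market is far more involved:

```python
# Unweighted linear opinion pool ("simple average") over candidate
# search regions. All names and numbers are invented for illustration.

regions = ["Southern corridor", "Northern corridor", "Other"]

forecasts = [
    [0.60, 0.25, 0.15],   # forecaster A
    [0.70, 0.10, 0.20],   # forecaster B
    [0.40, 0.40, 0.20],   # forecaster C
]

# Average each region's probability across forecasters.
pooled = [sum(f[i] for f in forecasts) / len(forecasts)
          for i in range(len(regions))]

for region, p in zip(regions, pooled):
    print(f"{region}: {p:.3f}")
# Southern corridor: 0.567, Northern corridor: 0.250, Other: 0.183
```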

Following are the questions. Click any of them to make your forecast (register or log in first). Also, see the discussion and background tabs of each question for more details and links to news sources.

Where will the Malaysia Airlines Flight MH370 be found?

What happened to Malaysia Airlines Flight MH370?

Where will Malaysian Air Flight MH 370 be found (extended version)?

The extended search region uses this map.

[Image: map of the extended search region]

See this blog post for info on how to explore conditional probabilities.

Click here to read more about approaches to crowdsourcing Search & Rescue.


Decision Analysis Journal Article: Probabilistic Coherence Weighting for Optimizing Expert Forecasts

We’re excited to announce that the journal Decision Analysis has published “Probabilistic Coherence Weighting for Optimizing Expert Forecasts,” covering some work last year related to DAGGRE.

It’s natural to want to help forecasters stay coherent as we ask related questions. For example, what is your confidence that:

1. “Jefferson was the third president of the United States.”

2. “Adams was the third president of the United States.”

People are known to be more coherent when these questions are immediate neighbors than when they sit on separate pages with many unrelated questions in between. So it’s natural to think it’s better to present related questions close together.

We found that’s not necessarily a good idea. On a large set of general knowledge questions like these, we got more benefit by allowing people to be incoherent, and then giving more weight to coherent people. At least on general knowledge questions, coherence signals knowledge. We have yet to extend this to forecasting questions.
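As a toy illustration of the weighting idea (a sketch of the general approach, not the paper’s exact procedure): the two statements above are mutually exclusive, so a coherent respondent’s probabilities should sum to at most 1, and we can down-weight respondents by how badly they violate that constraint. All names and numbers below are invented:

```python
# Toy coherence-weighting sketch; not the paper's exact method.
# "Jefferson was the third president" and "Adams was the third
# president" are mutually exclusive, so P(J) + P(A) should be <= 1.

judgments = {
    # respondent: (P(Jefferson is 3rd), P(Adams is 3rd)) -- invented
    "ann":  (0.80, 0.10),   # coherent: sums to 0.90
    "bob":  (0.70, 0.60),   # incoherent: sums to 1.30
    "cara": (0.90, 0.05),   # coherent: sums to 0.95
}

def incoherence(p_j, p_a):
    """How far a pair of judgments violates mutual exclusivity."""
    return max(0.0, p_j + p_a - 1.0)

# Weight each respondent inversely to their incoherence.
weights = {name: 1.0 / (1.0 + incoherence(*ps))
           for name, ps in judgments.items()}
total = sum(weights.values())

pooled = sum(weights[n] * judgments[n][0] for n in judgments) / total
print(f"weighted P(Jefferson was third): {pooled:.3f}")
# Incoherent bob's judgments count for less in the pool.
```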

We found other cool and interesting things, too. Here’s the abstract, but be warned: it gets technical:

Methods for eliciting and aggregating expert judgment are necessary when decision-relevant data are scarce. Such methods have been used for aggregating the judgments of a large, heterogeneous group of forecasters, as well as the multiple judgments produced from an individual forecaster. This paper addresses how multiple related individual forecasts can be used to improve aggregation of probabilities for a binary event across a set of forecasters. We extend previous efforts that use probabilistic incoherence of an individual forecaster’s subjective probability judgments to weight and aggregate the judgments of multiple forecasters for the goal of increasing the accuracy of forecasts. With data from two studies, we describe an approach for eliciting extra probability judgments to (i) adjust the judgments of each individual forecaster, and (ii) assign weights to the judgments to aggregate over the entire set of forecasters. We show improvement of up to 30% over the established benchmark of a simple equal-weighted averaging of forecasts. We also describe how this method can be used to remedy the “fifty–fifty blip” that occurs when forecasters use the probability value of 0.5 to represent epistemic uncertainty.

Read the article!

Christopher W. Karvetski, Kenneth C. Olson, David R. Mandel, and Charles R. Twardy. Probabilistic coherence weighting for optimizing expert forecasts. Decision Analysis, 10(4):305–326, 2013.

[ At Decision Analysis | Local PDF ]
