Category Archives: Guest Posts

SciCast, Bluefin-21, and GeNIe

Reposted with permission from SciCast forecaster Jay Kominek. You can find his blog at hypercomplex.net.

I’m going to assume you’re familiar with SciCast; if you aren’t, the SciCast site is the place to start. Or maybe Wikipedia.

There has been an open question on SciCast, “Will Bluefin-21 locate the black box from Malaysian Airlines flight MH-370?”, since mid-April. (If you don’t know what MH370 is, I envy you.) The forecast dropped fairly quickly to a 10% chance of Bluefin-21 locating MH370. Early on, that was reasonable enough: there was evidence that pings from the black box had been detected in the region, so the search area had been narrowed from the entire Indian Ocean down to a relatively small area.

Unfortunately, weeks passed, and on May 29th Bluefin-21’s mission was completed, unsuccessfully. Bluefin-21 then stopped looking. At this point, I (and others) expected the forecast to plummet. But folks kept pushing it back up. In fact, I count about five or six distinct individuals who moved the probability up after the mission was completed. There are perfectly good reasons, related to the nature of the prediction market, for some of those adjustments.
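To make the arithmetic behind that expectation concrete, here is a minimal Bayes-rule sketch in Python. The numbers are my own illustration, not figures from the post or the market: a roughly 10% prior that the wreck lay in the searched area, and an assumed high detection probability for the sonar survey if it did.

```python
# Illustrative Bayes update: why a completed, unsuccessful search should
# push the forecast down sharply. All numbers are hypothetical.

prior = 0.10                # assumed prior that the wreck is in the searched area
p_detect_if_present = 0.90  # assumed chance the survey finds it if it is there

# P(in area | no find) by Bayes' rule
posterior = (prior * (1 - p_detect_if_present)) / (
    prior * (1 - p_detect_if_present) + (1 - prior) * 1.0
)

print(f"P(wreck in searched area | unsuccessful search) ~ {posterior:.3f}")  # ~0.011
# And since Bluefin-21 has stopped looking, the chance that *it* locates the
# black box is lower still -- essentially zero unless the mission resumes.
```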

I’m interested in the bad reasons.

Continue reading


Fixing Academia Via Prediction Markets on Overcoming Bias

By Robin Hanson

When I first got into prediction markets twenty-five years ago, I called them “idea futures”, and I focused on using them to reform how we deal with controversies in science and academia (see here, here, here, here). Lately I’ve focused on what I see as the much higher-value application of advising decisions and reforming governance (see here, here, here, here). I’ve also talked a lot lately about what I see as the main social functions of academia (see here, here, here, here). Since prediction markets don’t much help to achieve these functions, I’m not optimistic about the demand for using prediction markets to reform academia.

But periodically people do consider using prediction markets to reform academia, as did Andrew Gelman a few months ago. And a few days ago Scott Alexander, who I once praised for his understanding of prediction markets, posted a utopian proposal for using prediction markets to reform academia. These discussions suggest that I revisit the issue of how one might use prediction markets to reform academia, if in fact enough people cared enough about gaining accurate academic beliefs. So let me start by summarizing and critiquing Alexander’s proposal.

Alexander proposes prediction markets where anyone can post any “theory” broadly conceived, like “grapes cure cancer.” (Key quotes below.) Winning payouts in such markets suffer a roughly 10% tax to fund experiments to test their theories, and in addition some such markets are subsidized by science-patron orgs like the NSF. Bettors in each market vote on representatives who then negotiate to pick someone to pay to test the bet-on theory. This tester, who must not have a strong position on the subject, publishes a detailed test design, at which point bettors could leave the market and avoid the test tax. “Everyone in the field” must make a public prediction on the test. Then the test is done, winners are paid, and a new market is set up for a new test of the same question. Somewhere along the line, private hedge funds would also pay for academic work in order to learn where they should bet.
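As a rough sketch of the payout arithmetic in that summary: the stake, price, and outcome below are invented for illustration, and since the summary doesn’t say whether the roughly 10% tax falls on gross payouts or net winnings, this sketch taxes net winnings.

```python
# Hypothetical settlement of one of the proposed theory markets. A bettor buys
# YES shares on a theory like "grapes cure cancer"; if the funded test comes out
# in their favor, roughly 10% of the winnings is withheld to pay for the test.

TEST_TAX = 0.10  # the ~10% tax on winning payouts described in the proposal

def settle(stake: float, price: float, theory_survives_test: bool) -> float:
    """Payout for a binary YES contract bought at `price` with `stake` dollars."""
    shares = stake / price                    # each share pays $1 if the theory survives
    gross = shares if theory_survives_test else 0.0
    winnings = max(gross - stake, 0.0)        # profit above the original stake
    return gross - TEST_TAX * winnings        # tax applied to winnings only (an assumption)

# $100 on YES at a price of 0.20:
print(settle(100.0, 0.20, True))   # 500 shares -> $500 gross, $40 tax on $400 winnings -> 460.0
print(settle(100.0, 0.20, False))  # theory fails the test -> 0.0
```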

That was the summary; here are some critiques…

Read the full post at Overcoming Bias


User Activity (1st Quarter, 2014)

By March 31, SciCast had 5425 forecasts, 1375 users, and 444 questions.

The graph below shows some user activity statistics through the end of March. Registrations have leveled off, but the number of daily forecasts per active user is rising. Since January, the average number of forecasts per day among people who comment and forecast on SciCast questions has roughly doubled, from 2.5 to 5.

[Figure: user activity through the end of March (new registrations and daily forecasts per active user)]

The number of registered users has increased over the same time frame, but most registrations occurred early in the year. We had about 800 new users in January but only about 200 in each of February and March. April will see some new outreach campaigns and incentives.

Please help the SciCast team by encouraging other people to join in our forecasting challenge. Our crowdsourcing approach to predicting science and technology benefits from having a crowd to forecast on every question.

The more competitive users might like to take advantage of the daily and weekly cycles in forecasting. Timings show we still have a strong U.S. bias: few forecasts occur during our night, but mornings also have fewer forecasts than afternoons and evenings. There are roughly half as many forecasts each hour from 07:00 to 11:00 as there are each hour from 11:00 to 19:00. (All times U.S. Eastern, GMT-5/4).

Weekends also have slightly fewer forecasts: roughly four forecasts per day on Saturday, Sunday, and Monday for every five per day on Tuesday through Friday.
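For the curious, here is one way such hourly and weekday tabulations could be computed from a log of forecast timestamps. The data source and field layout are placeholders, not the actual SciCast export.

```python
# Sketch of the daily/weekly cycle tabulation from forecast timestamps
# (hypothetical input; not the real SciCast data format).
from collections import Counter
from datetime import datetime
from zoneinfo import ZoneInfo

EASTERN = ZoneInfo("America/New_York")  # the stats above are reported in U.S. Eastern time

def cycle_counts(timestamps_utc: list[datetime]) -> tuple[Counter, Counter]:
    """Count forecasts per hour of day and per weekday, in Eastern time."""
    by_hour, by_weekday = Counter(), Counter()
    for ts in timestamps_utc:
        local = ts.astimezone(EASTERN)
        by_hour[local.hour] += 1
        by_weekday[local.strftime("%A")] += 1
    return by_hour, by_weekday

# Two made-up forecasts: one on a Tuesday afternoon, one on a Saturday morning (Eastern).
sample = [
    datetime(2014, 3, 25, 18, 30, tzinfo=ZoneInfo("UTC")),
    datetime(2014, 3, 29, 13, 0, tzinfo=ZoneInfo("UTC")),
]
by_hour, by_weekday = cycle_counts(sample)
print(by_hour)     # Counter({14: 1, 9: 1})
print(by_weekday)  # Counter({'Tuesday': 1, 'Saturday': 1})
```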

By Ken Olson



Academic Stats Prediction Markets from Overcoming Bias

By Robin Hanson

In a column, Andrew Gelman and Eric Loken note that academia has a problem:

Unfortunately, statistics—and the scientific process more generally—often seems to be used more as a way of laundering uncertainty, processing data until researchers and consumers of research can feel safe acting as if various scientific hypotheses are unquestionably true.

They consider prediction markets as a solution, but largely reject them for reasons both bad and not so bad. I’ll respond here to their article in unusual detail. First the bad:

Would prediction markets (or something like them) help? It’s hard to imagine them working out in practice. Indeed, the housing crisis was magnified by rampant speculation in derivatives that led to a multiplier effect.

Yes, speculative market estimates were mistaken there, as were most other sources, and mistaken estimates caused bad decisions. But speculative markets were the first credible source to correct the mistake, and no other stable source had consistently more accurate estimates. Why should the most accurate source be blamed for mistakes made by all sources?

Allowing people to bet on the failure of other people’s experiments just invites corruption, and the last thing social psychologists want to worry about is a point-shaving scandal.

Read the full post at Overcoming Bias.
