Tag Archives: Robin Hanson

So Long, and Thanks for All the Fish!

SciCasters:

Thank you for your participation over the past year and a half in the largest collaborative S&T forecasting project ever. Our main IARPA funding has ended, and we were not able to finalize things with our (likely) new sponsor in time to keep the question management, user support, engineering support, and prizes running uninterrupted. Therefore we will be suspending SciCast Predict for the summer, starting June 12, 2015 at 4 pm ET. We expect to resume in the fall with the enthusiastic support of a big S&T sponsor. In the meantime, we will continue to update the blog and provide links to leaderboard snapshots and important data.

Recap

Over the course of this project, we’ve seen nearly 130,000 forecasts from thousands of forecasters on over 1,200 forecasting questions, averaging more than 240 forecasts per day. We created a combinatorial engine robust enough to allow crowdsourced linking, resulting in the following rich domain structure:

Near-final question structure on SciCast, with most of the live links provided by users. (Click for full size)
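
SciCast’s combinatorial engine was built around Hanson’s logarithmic market scoring rule (LMSR), extended across the network of linked questions shown above. As a rough, single-question illustration of the underlying mechanism only (not SciCast’s actual combinatorial implementation), here is a minimal LMSR sketch; the liquidity parameter b and all names are illustrative:

```python
import math

# Minimal single-question LMSR sketch. The liquidity parameter b and all
# names here are illustrative, not SciCast's actual code or parameters.

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices (current probabilities) implied by outstanding shares."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(quantities, outcome, shares, b=100.0):
    """Cost to buy `shares` of `outcome`: C(q_after) - C(q_before)."""
    after = list(quantities)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# Example: a binary question starting at 50/50.
q = [0.0, 0.0]
print(lmsr_prices(q))          # [0.5, 0.5]
print(trade_cost(q, 0, 50.0))  # cost of pushing the first outcome upward
```

The appeal of the market scoring rule is that the market maker always quotes a consistent probability distribution, so every trade both pays the trader for information and moves the public forecast.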

Some project highlights:

  • The market beat its own unweighted opinion pool (from Safe Mode) 7 out of 10 times, by an average of 18%, as measured by mean daily Brier score on a question (see the sketch after this list)
  • The overall market Brier score was about 0.29
  • The project was featured in The Wall Street Journal and Nature and many other places
  • SciCast partnered with AAAS, IEEE, and the FUSE program to author more than 1,200 questions
  • Project principals Charles Twardy and Robin Hanson answered questions in a Reddit Science AMA
  • SciCasters weighed in on news movers & shakers like the Philae landing and Flight MH370
  • SciCast held partner webinars with ACS and with TechCast Global
  • SciCast hosted questions (and provided commentary) for the Dicty World Race
  • In collaboration with the Discovery Analytics Center at Virginia Tech and Healthmap.org, SciCast featured questions about the 2014-2015 flu season
  • SciCast gave away BIG prizes for accuracy and combo edits
  • Other researchers are using SciCast for analysis and research in the Bitcoin block size debate
  • MIT and ANU researchers studied SciCast accuracy and efficiency, and were unable to improve on it using stock machine learning, a testament to our most active forecasters and their bots. [See links for Della Penna, Adjodah, and Pentland 2015, here.]
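
For reference, the Brier score is the squared error of a probabilistic forecast against the realized outcome. Below is a minimal sketch of a mean daily Brier score for one question, assuming the original multi-outcome Brier definition (which ranges from 0 to 2); the data layout and names are illustrative, not SciCast’s actual schema:

```python
# Minimal sketch of a mean daily Brier score for one question, using the
# original multi-outcome Brier definition (ranges from 0 to 2). The data
# layout and names are illustrative, not SciCast's actual schema.

def brier(probs, outcome_index):
    """Multi-outcome Brier score: sum_k (p_k - o_k)^2, where o_k is 1 for
    the realized outcome and 0 otherwise."""
    return sum((p - (1.0 if k == outcome_index else 0.0)) ** 2
               for k, p in enumerate(probs))

def mean_daily_brier(history, outcome_index):
    """history: {date: [probability vectors in time order]}.
    Score each day's closing forecast, then average across days."""
    day_scores = [brier(forecasts[-1], outcome_index)
                  for _, forecasts in sorted(history.items())]
    return sum(day_scores) / len(day_scores)

# Example: a 3-outcome question that resolved to outcome 0.
history = {
    "2015-06-01": [[0.5, 0.3, 0.2]],
    "2015-06-02": [[0.6, 0.25, 0.15], [0.7, 0.2, 0.1]],
}
print(mean_daily_brier(history, outcome_index=0))  # 0.26
```

On this definition, a constant 50/50 forecast on a binary question scores 0.5, so an overall market Brier near 0.29 is well ahead of the uniform baseline.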

What’s Next?

Prizes for the combo edits contest will be sent out this week, and we will be sharing a blog post summarizing the project. Although SciCast.org will be closed, this blog and the user group will remain open. Watch for announcements regarding SciCast’s future.

Once again, thank you so much for your participation!  We’re nothing without our crowd.

Contact

Please contact us at [email protected] if you have questions about the research project or want to talk about using SciCast in your own organization.


Webinar Recording: SciCast, Cybersecurity Markets and the Near & Far Future of AI

On May 7, 2015, SciCast participated in the TechCast Webinar Series with Forecasting in Turbulent Times: Tools for Managing Change and Risk.

The webinar covered The SciCast Prediction Market (Charles Twardy), Cybersecurity Markets (Dan Geer), and the Near and Far Future of AI (Robin Hanson). Read the full description. There were a few questions after each segment, and some more at the end. (Hanson fans: note that Robin’s talk was not about markets this time, but a scenario extrapolation using economic reasoning from some strong initial assumptions, the subject of his forthcoming book.)


SciCast, Cybersecurity Markets and the Near & Far Future of AI

Please join us for a live webinar tomorrow, May 7 at 12 PM ET. SciCast, Cybersecurity Markets and the Near & Far Future of AI is the second installment of TechCast Global’s webinar series. In the course of one hour, we will feature three thought-provoking segments and give attendees an opportunity to ask questions and interact with our panelists.


Join SciCast for a Reddit Science AMA and an ACS webinar this week!

Have you ever wondered what the next ‘big thing’ in technology will be? What if you could garner collective wisdom from your peers, those interested in the same topics as you, with global reach?

Don’t miss two unique opportunities to learn more about how you can do this on SciCast (www.scicast.org), the largest known science and technology-focused crowdsourced forecasting site.

SciCast will be the featured topic in a Reddit Science AMA and an American Chemical Society webinar this week! Don’t miss these opportunities to share your SciCast expertise and weigh in on the discussion. We also encourage you to share the information with your friends and colleagues.

Continue reading


Who’s predicting the next big thing?

SciCast comprises more than 7,000 science and technology experts and enthusiasts from universities, the private sector, and professional organizations such as AAAS, IEEE, and ACS. The SciCast team thought it would be fun to find out more about what motivates SciCasters to predict the next big thing.

Meet SciCaster Ted Sanders, 26, who resides in Stanford, CA and is pursuing his PhD in Applied Physics at Stanford University.

Q: How did you get involved as a SciCast participant?

I learned about SciCast when it evolved out of the DAGGRE project, which I had joined from reading Robin Hanson’s blog. However, I was not active on SciCast until recently, when SciCast announced gift card prizes and the College Bowl competition. My participation also stems from a desire to support the legalization of prediction markets in the United States.

Q: What do you find most interesting about SciCast?

Continue reading


SciCast Calls for Science, Technology Experts to Make Predictions


Contact:
Lynda Baldwin – 708-703-8804;
[email protected] 

Candice Warltier – 312-587-3105;
[email protected]

FOR IMMEDIATE RELEASE 

SciCast Calls for Science, Technology Experts to Make Predictions 

Largest sci-tech crowdsourcing forecast site in search of professionals and enthusiasts to predict future events 

FAIRFAX, Va. (June 19, 2014) – SciCast, a research project run by George Mason University, is the largest known science and technology-focused crowdsourced forecasting site. So what makes a crowdsourced prediction market more powerful? An even bigger crowd. SciCast is launching its first worldwide call for participants to join the existing 2,300 professionals and enthusiasts, ranging from engineers to chemists, from agriculturists to IT specialists.

Continue reading


Fixing Academia Via Prediction Markets on Overcoming Bias

By Robin Hanson

When I first got into prediction markets twenty-five years ago, I called them “idea futures”, and I focused on using them to reform how we deal with controversies in science and academia (see here, here, here, here). Lately I’ve focused on what I see as the much higher value application of advising decisions and reforming governance (see here, here, here, here). I’ve also talked a lot lately about what I see as the main social functions of academia (see here, here, here, here). Since prediction markets don’t much help to achieve these functions, I’m not optimistic about the demand for using prediction markets to reform academia.

But periodically people do consider using prediction markets to reform academia, as did Andrew Gelman a few months ago. And a few days ago Scott Alexander, who I once praised for his understanding of prediction markets, posted a utopian proposal for using prediction markets to reform academia. These discussions suggest that I revisit the issue of how one might use prediction markets to reform academia, if in fact enough people cared enough about gaining accurate academic beliefs. So let me start by summarizing and critiquing Alexander’s proposal.

Alexander proposes prediction markets where anyone can post any “theory” broadly conceived, like “grapes cure cancer.” (Key quotes below.) Winning payouts in such markets suffer a roughly 10% tax to fund experiments to test their theories, and in addition some such markets are subsidized by science patron orgs like the NSF. Bettors in each market vote on representatives who then negotiate to pick someone to pay to test the bet-on theory. This tester, who must not have a strong position on the subject, publishes a detailed test design, at which point bettors could leave the market and avoid the test tax. “Everyone in the field” must make a public prediction on the test. Then the test is done, winners paid, and a new market set up for a new test of the same question. Somewhere along the line, private hedge funds would also pay for academic work in order to learn where they should bet.
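
To make the flow of funds concrete, here is a minimal sketch of the settlement step in that proposal, modeled as a simple parimutuel pool for illustration; only the roughly 10% tax on winning payouts comes from the proposal, while the pool structure and all names are assumptions:

```python
# Sketch of settling one "theory" market under the proposal above, modeled
# as a simple parimutuel pool for illustration. Only the ~10% tax on winning
# payouts comes from the proposal; the pool structure and names are assumed.

def settle_market(stakes, winning_side, tax_rate=0.10):
    """stakes: {bettor: (side, amount)}. Losers' stakes flow to winners
    pro rata, and each winning payout is taxed to fund the experiment.
    Returns (payouts, experiment_fund)."""
    pool = sum(amount for _, amount in stakes.values())
    winners = {b: amt for b, (side, amt) in stakes.items()
               if side == winning_side}
    winning_total = sum(winners.values())
    payouts, fund = {}, 0.0
    for bettor, amount in winners.items():
        gross = pool * (amount / winning_total)  # pro-rata share of the pool
        tax = gross * tax_rate                   # tax funds the test
        payouts[bettor] = gross - tax
        fund += tax
    return payouts, fund

# Example: a market on "grapes cure cancer" that resolves "no".
stakes = {"alice": ("yes", 100.0), "bob": ("no", 50.0), "carol": ("no", 150.0)}
payouts, fund = settle_market(stakes, winning_side="no")
print(payouts)  # {'bob': 67.5, 'carol': 101.25}
print(fund)     # 18.75 available to pay the tester
```

In the fuller proposal, the tester is then chosen and paid out of this fund before a new market opens on the same question.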

That was the summary; here are some critiques…

Read the full post at Overcoming Bias


Academic Stats Prediction Markets from Overcoming Bias

By Robin Hanson

In a column, Andrew Gelman and Eric Loken note that academia has a problem:

Unfortunately, statistics—and the scientific process more generally—often seems to be used more as a way of laundering uncertainty, processing data until researchers and consumers of research can feel safe acting as if various scientific hypotheses are unquestionably true.

They consider prediction markets as a solution, but largely reject them for reasons both bad and not so bad. I’ll respond here to their article in unusual detail. First the bad:

Would prediction markets (or something like them) help? It’s hard to imagine them working out in practice. Indeed, the housing crisis was magnified by rampant speculation in derivatives that led to a multiplier effect.

Yes, speculative market estimates were mistaken there, as were most other sources, and mistaken estimates caused bad decisions. But speculative markets were the first credible source to correct the mistake, and no other stable source had consistently more accurate estimates. Why should the most accurate source be blamed for mistakes made by all sources?

Allowing people to bet on the failure of other people’s experiments just invites corruption, and the last thing social psychologists want to worry about is a point-shaving scandal.

Read the full post at Overcoming Bias.
