Cultivating a new field

Pioneered by Tetlock et al., the science of human foresight represents one of the fastest-growing interdisciplinary research agendas in the social sciences. Thanks to our partnership with Tetlock’s own Forecasting Research Institute, empirical work in this new field just got a lot easier.

Background: Experimental work has demonstrated that talent-spotting, teaming (Delphi-style collaboration), and aggregation reliably improve the accuracy of human judgments about the future. These general findings have planted the seeds of a new field of research that seeks to 

  • map the temporal boundaries of human foresight (what are the longest time horizons within which we can improve accuracy for a given domain?)

  • invent new ways of structuring collaborative analysis to optimize for the accuracy of forecasts

  • invent new scoring and other incentive systems to optimize for the persuasiveness of forecast rationales or to incentivize accuracy on unresolvable questions

  • develop aggregation algorithms to extract more signal out of team deliberations (a toy example follows this list)

  • pursue myriad combinations of the above research agendas, among others
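
To make the aggregation bullet concrete: a common baseline in this literature is to pool teammates' probabilities with a geometric mean of odds, optionally "extremizing" the result to counteract the dilution that averaging tends to introduce. The sketch below is a toy illustration in Python; the function name and the extremization exponent are ours, not a prescription from any published study.

    import math

    # Toy illustration of probability pooling; not a prescribed aggregation rule.
    def aggregate_probabilities(probs, extremize=1.0):
        """Pool individual probability forecasts via the geometric mean of odds.

        probs: probabilities in (0, 1), one per forecaster.
        extremize: exponent >= 1 applied to the pooled odds; values above 1 push
            the aggregate away from 0.5 (a common, if contested, adjustment).
        """
        odds = [p / (1.0 - p) for p in probs]
        pooled = math.prod(odds) ** (1.0 / len(odds))  # geometric mean of odds
        pooled **= extremize
        return pooled / (1.0 + pooled)

    # Three teammates' forecasts on the same binary question:
    print(aggregate_probabilities([0.60, 0.70, 0.65]))                 # ~0.65
    print(aggregate_probabilities([0.60, 0.70, 0.65], extremize=2.0))  # ~0.78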

The scope of productive inquiry is vast


Regrettably, the scope of feasible experimentation in this new field has been circumscribed by limited tooling in two key areas: 1) software for studying collaboration; and 2) services for recruiting qualified, motivated forecasters, without whom many attempts to validate new ideas remain hopelessly underpowered.

As Tetlock’s software specialists since 2018, we’ve had the privilege of witnessing first-hand the birth of a new discipline. Our inboxes have been flooded with clever research proposals from psychologists and economists, international relations scholars and computer scientists, statisticians and political scientists. From where we sit, it seems that everyone has a stake in advancing the science of human foresight.

So what’s the problem?

The limited tooling described above. Mindful of both gaps, the Forecasting Research Institute has sponsored the development of a system designed to close them, for its own cutting-edge research and (soon!) for yours, too.

Study templates & stage definitions: Choose from existing study templates, or define your study's stages by creating a custom template. Scheduled for release in July 2023. Request early access (and a fuller demo!) below.

Controlling collaboration

Forecasting researchers have distinctive goals, but the software required to pursue those goals often involves the same fundamental components. With few exceptions, forecasting researchers need software that facilitates multi-stage collaborative workflows and confers granular control over team composition, elicitation UIs, team content displays, and stage transition logic.
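
To picture how those components fit together, it can help to think of a study as an ordered list of stages, each pairing an elicitation UI with a team content display and a transition rule, on top of a team-composition policy. The data structures below are our own hand-rolled sketch of that architecture, not the platform's actual API; the sections that follow zoom in on each lever in turn.

    from dataclasses import dataclass, field

    # Illustrative data structures only; not the platform's actual API.
    @dataclass
    class Stage:
        """One phase of a multi-stage collaborative forecasting workflow."""
        name: str
        elicitation: str               # e.g. "probability", "time_series", "rich_text"
        teammate_display: str | None   # which teammate content is shown, if any
        transition: str                # "scheduled" or "prerequisite"

    @dataclass
    class StudyTemplate:
        """A reusable study definition: a team policy plus ordered stages."""
        name: str
        team_size: int
        team_assignment: str           # e.g. "random" or "stratified_by_expertise"
        stages: list[Stage] = field(default_factory=list)

    two_round_delphi = StudyTemplate(
        name="two-round Delphi",
        team_size=8,
        team_assignment="stratified_by_expertise",
        stages=[
            Stage("independent", "probability", teammate_display=None,
                  transition="scheduled"),
            Stage("deliberation", "probability", teammate_display="distribution_graph",
                  transition="prerequisite"),
        ],
    )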

Team composition

Who gets to see whose forecasts and comments—and when? Our flexible teaming features allow you to randomize team assignments, group forecasters according to background knowledge or expertise, and explore new ways of optimizing for viewpoint diversity within teams of any size.
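
As a toy example of what grouping by background knowledge can mean in practice, here is one way to stratify a forecaster pool by self-reported expertise and then deal members out round-robin, so that every team ends up with a similar mix of viewpoints. The helper and its field names are hypothetical, purely for illustration.

    import random
    from collections import defaultdict

    # Illustrative helper; field names ("expertise") are hypothetical.
    def assign_teams(forecasters, n_teams, seed=0):
        """Shuffle within expertise strata, then deal round-robin across teams."""
        rng = random.Random(seed)
        strata = defaultdict(list)
        for f in forecasters:
            strata[f["expertise"]].append(f)

        teams = [[] for _ in range(n_teams)]
        i = 0
        for members in strata.values():
            rng.shuffle(members)          # randomize within each stratum
            for f in members:             # spread each stratum across all teams
                teams[i % n_teams].append(f)
                i += 1
        return teams

    pool = [{"id": k, "expertise": e}
            for k, e in enumerate(["economics", "geopolitics", "technology"] * 8)]
    for team in assign_teams(pool, n_teams=4):
        print([member["expertise"] for member in team])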

Elicitation interfaces

What does the forecast input interface look like, and how do forecasters interact with it? With 13+ question types — including (of course) simple probability elicitation, probability and point-estimate time series elicitation, rich text open response, and even Google Docs-style collaborative editing interfaces — our platform offers unparalleled flexibility.
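
To give a flavor of what a handful of those question types might look like when declared in a study, here is a rough sketch; the type names and fields are our own shorthand, not the platform's schema.

    # Hypothetical question declarations, mirroring a few of the types named above.
    questions = [
        {
            "type": "binary_probability",
            "text": "Will X occur before 2025-01-01?",
            "bounds": (0.01, 0.99),          # keep forecasts off the hard extremes
        },
        {
            "type": "point_estimate_time_series",
            "text": "Monthly value of indicator Y through 2024",
            "horizon_months": 12,
        },
        {
            "type": "rich_text_rationale",
            "text": "Explain the reasoning behind your forecast.",
            "collaborative_editing": True,   # Google Docs-style shared editing
        },
    ]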

Content displays

Who sees what, when? Each of our 13+ question types comes with a set of team content displays, so you can decide how (and when) to display forecaster-generated data to other forecasters. Use graphs to summarize team submissions on quantitative questions, or option-rich comment streams for displaying text submissions — strictly when and where your experimental design dictates.
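
Put differently, a content display rule binds a stage and a question kind to a particular summary of teammates' submissions. A minimal sketch, with hypothetical stage and display names:

    # Illustrative display rules: which teammate-generated content is visible,
    # keyed by stage and by whether the question is quantitative or textual.
    DISPLAY_RULES = {
        "independent":  {"quantitative": None,                 "text": None},
        "deliberation": {"quantitative": "distribution_graph", "text": "comment_stream"},
        "final":        {"quantitative": "team_median_graph",  "text": None},
    }

    def teammate_content(stage, question_kind):
        """Return which display (if any) to render for teammates' submissions."""
        return DISPLAY_RULES.get(stage, {}).get(question_kind)

    print(teammate_content("deliberation", "quantitative"))  # distribution_graph
    print(teammate_content("independent", "text"))           # None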

Stage transition logic

When do forecasters get access to which question interfaces and team content displays? Our stage transition logic options let you vary both the user interface and user requirements for the same forecasting question over time, allowing you to observe how changes in the forecaster's information environment help (or hinder) their ability to update toward the truth. Use schedule-based stage transitions if, for example, you want to reveal teammate forecasts only after everyone has submitted an independent forecast; use prerequisite-based stage transitions if you want to let forecasters proceed at their own pace once they have fulfilled the requirements of prior stages.
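
Both modes reduce to a simple decision rule: either the clock advances everyone together, or each forecaster advances once the prior-stage requirements are met. A minimal sketch, assuming hypothetical field names:

    from datetime import datetime, timezone

    # Hypothetical stage fields ("transition", "opens_at", "prerequisites"), for illustration.
    def can_enter_stage(stage, completed_stages, now=None):
        """Decide whether a forecaster may enter a stage.

        stage: dict with "transition" set to "scheduled" or "prerequisite", plus
            either "opens_at" (ISO-8601 timestamp with offset) or "prerequisites"
            (names of stages that must already be completed).
        completed_stages: set of stage names this forecaster has finished.
        """
        now = now or datetime.now(timezone.utc)
        if stage["transition"] == "scheduled":
            # Everyone advances together, e.g. teammate forecasts unlock only
            # once the independent round has closed for the whole team.
            return now >= datetime.fromisoformat(stage["opens_at"])
        # Prerequisite-based: each forecaster proceeds at their own pace once
        # the requirements of prior stages are fulfilled.
        return all(req in completed_stages for req in stage["prerequisites"])

    deliberation = {"transition": "prerequisite", "prerequisites": ["independent"]}
    print(can_enter_stage(deliberation, {"independent"}))   # True
    print(can_enter_stage(deliberation, set()))             # False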

Intrigued? Contact us.

Our full suite of forecasting research tools is scheduled for limited public release in July 2023, but we are extending private beta invites to qualified research teams throughout Spring 2023.

Get in touch to tell us about your software needs. We’d be happy to share more about what we’re up to (including live demos!) over Zoom.