
Metaculus: Pandemics Track Record

One of the core ideas of Metaculus is that predictors should be held accountable for their predictions. In that spirit, we present here a track record of the Metaculus system.

The first of these graphs shows every resolved question, when it resolved, how it resolved, and its score (Brier or log). Note that:

  • You can select the score for the community prediction, the Metaculus prediction, or the Metaculus postdiction (what our current algorithm would have predicted if it and its calibration data were available at the question's close).
  • Lower Brier scores are better, while higher log scores are better; in both cases, better scores appear higher on the plot.
  • The bright line provides a moving average of the scores over time.
  • The dashed line shows the expected score for a maximally uncertain prediction, i.e. 50% on binary questions, or a uniform distribution for continuous predictions (the binary-question baselines are computed in the sketch after this list).
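
To make the two scoring rules concrete, here is a minimal Python sketch of both, along with the maximal-uncertainty baselines the dashed line marks. The baseline-relative form of the log score (log(q/0.5), so that a 50% forecast scores exactly 0) is an assumption chosen to match the histogram description below; Metaculus's exact scoring formulas may differ.

```python
import math

def brier_score(p, outcome):
    # Squared error of the forecast probability p against the binary
    # outcome (0 or 1). Lower is better; a 50% forecast scores 0.25
    # no matter how the question resolves.
    return (p - outcome) ** 2

def log_score(p, outcome):
    # Log score relative to the 50% baseline (an assumption here):
    # log(q / 0.5), where q is the probability assigned to the
    # realized outcome. Higher is better; a 50% forecast scores 0.
    q = p if outcome == 1 else 1 - p
    return math.log(q / 0.5)

# The dashed-line baselines for a binary question:
assert brier_score(0.5, 1) == 0.25
assert log_score(0.5, 0) == 0.0

# A confident, correct forecast beats both baselines:
print(brier_score(0.9, 1))  # 0.01  (below 0.25, i.e. better)
print(log_score(0.9, 1))    # ~0.59 (above 0, i.e. better)
```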

The second graph is a histogram that more clearly shows the distribution of scores. Bins to the right of 0 (for log scores) or 0.25 (for Brier scores) contain predictions better than complete uncertainty.
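
As a sketch of that bookkeeping, with made-up Brier scores on the raw scale (where lower is better; the plot simply orients its axis so that better scores sit to the right of the 0.25 edge):

```python
import numpy as np

# Hypothetical per-question Brier scores, purely for illustration.
scores = np.array([0.04, 0.11, 0.19, 0.31, 0.08, 0.22, 0.45, 0.02])

# Histogram the scores in 0.05-wide bins; 0.25 is the edge that
# separates forecasts beating maximal uncertainty from the rest.
counts, edges = np.histogram(scores, bins=np.arange(0.0, 1.05, 0.05))
for lo, n in zip(edges[:-1], counts):
    if n:
        print(f"[{lo:.2f}, {lo + 0.05:.2f}): {n} question(s)")

better = (scores < 0.25).mean()
print(f"{better:.0%} of these forecasts beat the chance baseline")
```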

The next two graphs break the predictions into bins and show how well-calibrated each bin is. See the details under each graph for exactly how this works.
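
The underlying idea is easy to sketch: group binary forecasts by the probability they stated, then compare each group's average stated probability with the fraction of those questions that actually resolved positively. The function and data below are illustrative stand-ins, not the site's implementation.

```python
import numpy as np

def calibration_bins(probs, outcomes, n_bins=10):
    # For a well-calibrated predictor, the average stated probability
    # in each bin should roughly match that bin's observed frequency.
    probs = np.asarray(probs)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi)  # p = 1.0 would need special handling
        if mask.any():
            rows.append((lo, hi,
                         probs[mask].mean(),     # average stated probability
                         outcomes[mask].mean(),  # observed resolution rate
                         int(mask.sum())))       # bin population
    return rows

# Hypothetical forecasts and 0/1 resolutions:
probs = [0.10, 0.15, 0.40, 0.45, 0.60, 0.80, 0.85, 0.90]
outcomes = [0, 0, 1, 0, 1, 1, 1, 1]
for lo, hi, avg_p, freq, n in calibration_bins(probs, outcomes):
    print(f"[{lo:.1f}, {hi:.1f}): forecast {avg_p:.2f} vs observed {freq:.2f} (n={n})")
```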

These graphs are interactive. You can click on individual data points to see which question they refer to, and you can click on the different calibration bins to highlight the data points. You can also filter by date and category to see the track record for a subset of questions.


Note: The Metaculus prediction, as distinct from the community prediction, wasn't developed until June 2017. At that time, all closed but unresolved questions were given a Metaculus prediction; after that, it was updated only for open questions.

For each question, the Metaculus postdiction uses data from all other questions to calibrate its result, even questions that resolved later.
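
In effect, that is a leave-one-out procedure. A minimal sketch, with hypothetical fit and predict functions standing in for the actual aggregation algorithm:

```python
def postdiction_scores(questions, fit, predict):
    # For each question, calibrate on *every other* question,
    # regardless of when it resolved, then score the held-out one.
    scores = []
    for i, q in enumerate(questions):
        others = questions[:i] + questions[i + 1:]
        model = fit(others)                     # calibration data excludes only q
        p = predict(model, q)                   # postdicted probability for q
        scores.append((p - q["outcome"]) ** 2)  # Brier score
    return scores

# Toy stand-ins: "fit" learns the base rate of the other questions,
# and "predict" shrinks the community forecast toward that base rate.
def fit(others):
    return sum(q["outcome"] for q in others) / len(others)

def predict(base_rate, q):
    return 0.5 * q["community_p"] + 0.5 * base_rate

questions = [
    {"community_p": 0.8, "outcome": 1},
    {"community_p": 0.3, "outcome": 0},
    {"community_p": 0.6, "outcome": 1},
]
print(postdiction_scores(questions, fit, predict))
```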