Coronavirus Forecast Model Accuracy on National Forecasts

This page provides forecast-by-forecast details of the accuracy of Covid-19 national death forecast models submitted to the CDC for inclusion in the Ensemble model, including models from Johns Hopkins (JHU), MIT, and IHME, as well as the Ensemble model itself and CovidComplete. For summary-level evaluations, see my main forecast evaluation page and the forecast model scorecards page. For deep dives on different types of evaluation graphs, see the deep dive forecast evaluation page. For evaluations of state forecast accuracy, see the state forecast evaluation page. For the most detailed coronavirus forecast accuracy information, check out the raw forecast evaluation data on my GitHub site.

Baseline Model Performance on National Forecasts

CovidComplete Performance on National Forecasts

Coronavirus Forecast Model Performance on National Forecasts

Models are listed in alphabetical order. The ideal model would have all forecasts on the red line; the further a point is from the red line, the higher the error. A model with accuracy that is good enough for practical purposes will have nearly all points within the blue lines. Some models did not provide forecasts for certain periods; those periods are indicated with a vertical dash on the red line. Only forecasts that were eligible for inclusion in the Ensemble model are shown.
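To make the graph conventions concrete, here is a minimal sketch of the kind of error measure these charts imply: each forecast is compared against the actual death count, and a forecast sits "within the blue lines" when its error falls inside a tolerance band around the actual value. The percentage-error formula, the band width, and the function names are my assumptions for illustration, not the page's actual definitions.

```python
def forecast_error(forecast: float, actual: float) -> float:
    """Signed percentage error: positive means the model overestimated."""
    # Assumed metric; the page does not state its exact error formula.
    return 100.0 * (forecast - actual) / actual

def within_band(forecast: float, actual: float, band_pct: float = 10.0) -> bool:
    """True if the forecast lies inside a +/- band_pct tolerance band
    around the actual value (the 'blue lines' in the graphs)."""
    # The 10% band width is a placeholder assumption.
    return abs(forecast_error(forecast, actual)) <= band_pct

# Example: a forecast of 1,050 deaths against an actual of 1,000
# is a +5% overestimate, inside an assumed 10% band.
print(forecast_error(1050, 1000))  # 5.0
print(within_band(1050, 1000))     # True
```

Under this reading, points above the red line correspond to positive (over-) errors and points below to negative (under-) errors, which is what the alternate view below makes easier to track over time.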

Check out the alternate view of forecast model performance below this set of graphs, which makes it easier to see each model's tendency toward overestimation or underestimation over time and by forecast date.

The overall accuracy of currently active models is shown on the graphs on the main forecast evaluation page.

Alternate View of Covid-19 Forecast Model Performance on National Forecasts

These charts present the same data as the charts above, but link each model's errors by date of forecast. Models are listed in alphabetical order. The ideal model would have all forecasts on the red line; the further a point is from the red line, the higher the error. The "actual" line will hide many of the forecasts made by an accurate model, which is by design in these graphs. Gaps in the red line indicate periods for which the model did not provide forecasts or was not included in the Ensemble model. The overall accuracy of currently active models is shown on the graphs on the main forecast evaluation page.