Interpreting the skill score form of forecast performance metrics
Performance measures of point forecasts are commonly expressed as skill scores, in which the performance gain from using one forecasting system rather than another is expressed as a proportion of the gain that would be achieved by forecasting the outcome perfectly. Scores of probabilistic forecasts are increasingly expressed in the same form; this paper presents three criticisms of that practice. First, initial-condition uncertainty (which is outside the forecaster's control) limits the capacity to improve a probabilistic forecast, so a ‘perfect’ score is often unattainable. Second, the skill score forms of the ignorance and Brier scores are biased. Finally, it is argued that the skill score form of a scoring rule destroys the useful interpretation in terms of the relative skill of two forecasting systems; indeed, it is often misleading, and useful information is lost when the skill score form is used in place of the original score.
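As a concrete illustration of the skill-score form discussed above, the sketch below computes the Brier score (one of the scores the abstract names) and its skill-score form against a reference forecast. The choice of a climatological (constant base-rate) reference and all numerical values are illustrative assumptions, not taken from the paper; for the Brier score a perfect forecast scores zero, so the skill score reduces to 1 − BS/BS_ref.

```python
import numpy as np

def brier_score(p, y):
    # Mean squared difference between forecast probabilities p
    # and binary outcomes y; a perfect forecast scores 0.
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.mean((p - y) ** 2))

def brier_skill_score(p, y, p_ref):
    # Skill-score form: the improvement over a reference forecast,
    # as a proportion of the improvement a perfect forecast (score 0)
    # would achieve. Equals 1 - BS/BS_ref because the perfect Brier
    # score is zero.
    return 1.0 - brier_score(p, y) / brier_score(p_ref, y)

# Illustrative data (assumed, not from the paper): eight binary
# outcomes, a probabilistic forecast, and a climatological reference
# that always issues the observed base rate.
y = np.array([1, 0, 0, 1, 0, 0, 0, 1])
p = np.array([0.8, 0.1, 0.2, 0.7, 0.1, 0.3, 0.2, 0.6])
p_ref = np.full(y.shape, y.mean())

print(round(brier_skill_score(p, y, p_ref), 3))  # → 0.744
```

A skill score of 1 would mean the forecast matched the outcomes perfectly; the abstract's first criticism is that, for probabilistic forecasts under initial-condition uncertainty, this ceiling is typically unattainable.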
| Field | Value |
|---|---|
| Item Type | Article |
| Copyright holders | © 2019 Crown Copyright |
| Departments | Centre for Analysis of Time Series |
| DOI | 10.1016/j.ijforecast.2018.11.010 |
| Date Deposited | 10 Dec 2018 09:44 |
| Acceptance Date | 2018-11-11 |
| URI | https://researchonline.lse.ac.uk/id/eprint/91134 |
Accepted Version available under Creative Commons: Attribution-NonCommercial-No Derivative Works 4.0.