Yes, this idea of comparing the spread of the ensemble members for the current forecast with the spread found in other years, for the same starting season and target period, is a reasonable way to assess this year's uncertainty relative to that of other years. It has been used in operational settings at forecast-producing centers. Your comment about the statistical robustness of the ensemble spread as a key to the true uncertainty is also right on the mark: the spread is technically a standard deviation (or variance), and those statistics require a larger sample to "settle down" than the mean does.

Besides the ensemble spread (or, what you have called RMOP), another way to assess the uncertainty is to look at the quality of past forecasts for the same starting month and the same target period. If the correlation coefficient is used to measure past forecast quality (the correlation between the forecasts and the corresponding observations), then the standard error can be computed from that correlation [technically, it is the square root of the quantity (1 - correlation-squared), in standardized units], and that standard error gives us the +/- 68% and +/- 95% uncertainty ranges. This latter method gives the same uncertainty for every year, whereas the ensemble spread, or RMOP, gives a different value for each year, which is more useful if it is trustworthy (which it probably is, at least to some extent).
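To make the two approaches concrete, here is a minimal sketch in Python. The correlation value and the ensemble-member anomalies are hypothetical, purely for illustration; the formulas are the ones described above (standard error of estimate sqrt(1 - r^2) in standardized units, and the sample standard deviation of the ensemble members).

```python
import numpy as np

# --- Method 1: uncertainty from past forecast skill (same every year) ---
r = 0.6                          # hypothetical hindcast correlation (forecast vs. obs)
se = np.sqrt(1.0 - r**2)         # standard error of estimate, standardized units
range_68 = (-1.0 * se, 1.0 * se) # ~68% range: +/- 1 standard error
range_95 = (-2.0 * se, 2.0 * se) # ~95% range: +/- 2 standard errors

# --- Method 2: uncertainty from this year's ensemble spread (varies by year) ---
# hypothetical standardized anomalies from an 8-member ensemble
members = np.array([0.3, 0.5, 0.1, 0.7, 0.4, 0.2, 0.6, 0.35])
spread = members.std(ddof=1)     # sample standard deviation of the members

print(f"skill-based standard error: {se:.2f}")
print(f"68% range: +/- {se:.2f}, 95% range: +/- {2*se:.2f}")
print(f"this year's ensemble spread: {spread:.2f}")
```

Comparing `spread` against the spreads computed the same way in past years (as suggested above) would indicate whether the current forecast is more or less certain than usual, while `se` gives the climatological, skill-based baseline.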