
This is an excellent question. In developing dynamical models, historical data is indeed used to validate and evaluate model performance, and also to tune some of the parameters used in the models. Dynamical models are therefore expected to do slightly better in hindcast mode than in real-time mode. But the difference between their performance during the two periods (hindcast vs. real-time) is normally smaller than that found in statistical models, which are trained entirely on hindcast data. A good portion of a dynamical model consists of explicitly and completely represented physics, and that part is not tuned. Only where the physics must be abridged, for example in representing tropical convection on grid cells larger than the spatial scale of the convection itself, is there wiggle room for some tuning.

We note in the graph that the errors during the last 2 to 3 years, the real-time period, look somewhat larger than those during the hindcast period. That should make us a bit more cautious about how much we can trust the model's current prediction of a weak El Nino developing over the next several months. If the error is similar to that of the last 2 years (when the outcome was cooler than the forecast), we could end up with only a marginal El Nino condition rather than the full-blown weak El Nino predicted. On the other hand, assuming the error will match that of the last 2 years is also questionable: a sample of two cases is far too small to support a firm conclusion. The bottom line is that we must wait and see, and should trust the model to at least a moderate extent.