RE: Details on the August 7th ENSO Discussion
There are ways to verify a probabilistic forecast. Let's use a coin with two outcomes (heads-tails) as an example. If you were to make a probabilistic forecast for heads, you would predict a 50% chance of heads, right? To verify this forecast against observations (i.e. to see how good the forecast was), you would count how many times you actually obtained heads. What you *should* see is that you obtained heads close to 50% of the time.
For example, if you predict heads with an 80% probability and then see heads occur only 50% of the time, your forecasting is "unreliable." The predicted probability should MATCH (or nearly match) the observed frequency. Forecast reliability is a score, and you can read more about it here:
http://www.metoffice.gov.uk/research/areas/seasonal-to-decadal/gpc-outl…
... there are many other probabilistic scores as well.
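To make the reliability idea concrete, here is a minimal sketch (the variable names and the 80%-forecast setup are illustrative, not from any particular verification package): a forecaster who always calls "80% chance of heads" on a fair coin gets exposed once you compare the predicted probability with the observed frequency.

```python
import random

random.seed(1)

predicted_prob = 0.8  # forecaster always says "80% chance of heads"

# Simulate observations: a fair coin, so heads should occur ~50% of the time
flips = [random.random() < 0.5 for _ in range(10_000)]
observed_freq = sum(flips) / len(flips)

# A reliable forecast has observed_freq close to predicted_prob;
# here the large gap shows the 80% forecast is unreliable.
gap = predicted_prob - observed_freq
print(f"predicted: {predicted_prob:.2f}, observed: {observed_freq:.2f}, gap: {gap:.2f}")
```

In a real verification exercise you would bin forecasts by their stated probability and check each bin the same way, which is essentially what a reliability diagram does.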
Note that you need a long record (lots of coin flips in observations) in order to verify how good your forecast is. Going back to the coin-flipping analogy, it is easy to randomly obtain "runs" of consecutive heads or tails (H - H - H - T - H), so using a short record to verify your probabilistic forecast is a no-no. After many coin flips, the observed frequency should settle down around 50%.
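You can see this settling-down behavior in a quick simulation (sample sizes here are arbitrary, just chosen to span short and long records): short records wander well away from 50%, while long records converge toward it.

```python
import random

random.seed(42)

# Observed heads frequency for records of increasing length:
# short records can easily sit far from 0.5; long ones settle near it.
for n in (5, 50, 5_000, 500_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} flips: observed heads frequency = {heads / n:.3f}")
```

This is just the law of large numbers at work, and it is why verifying a seasonal forecast system takes many years of hindcasts rather than a handful of cases.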
Keep in mind ENSO is a THREE outcome "coin" (see Tony's post: http://www.climate.gov/news-features/blogs/enso/why-do-enso-forecasts-u… ).