model stability related to sufficient numerical precision
I have a grad degree in modeling that is very ancient. We did not know about chaos back in the 1960s, BUT we did know that dynamic models could become unstable if insufficient precision was used, especially in floating point. In floating point one gets rounding and possibly truncation of results. In some cases A + (B + C) is not equal to (A + B) + C. This often means that at every step of the model one can introduce error, and the dynamics can amplify it.

Our check was to double the floating-point precision. If the results did not reasonably match the results from the previous precision, then the floating-point precision was doubled again and the model rerun. If the hardware or the compiler could not support any greater precision, then one could sometimes modify the code to fix the issue, or decide that the model was not useful and perhaps start over.

Have climate models been tested in this fashion?
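To make the idea concrete, here is a minimal sketch of both points in Python. The non-associativity line is a standard double-precision fact; the logistic map, its parameter values, and the "run at two precisions and compare" harness are my own illustrative assumptions standing in for a real dynamic model, not anything taken from an actual climate code.

```python
import numpy as np

# Floating-point addition is not associative: rounding at each step
# means (A + B) + C can differ from A + (B + C).
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False: 0.6000000000000001 vs 0.6

def run_model(x0, r, steps, dtype):
    """Iterate the logistic map x -> r*x*(1-x), a toy chaotic model,
    with every operation carried out at the given precision."""
    x = dtype(x0)
    r = dtype(r)
    one = dtype(1.0)
    for _ in range(steps):
        x = r * x * (one - x)
    return x

# The precision-doubling check: run once, rerun at doubled precision,
# and see whether the results "reasonably match".
single = run_model(0.5, 3.9, 100, np.float32)
double = run_model(0.5, 3.9, 100, np.float64)
print(single, double)  # in the chaotic regime these typically disagree badly
```

If the two runs disagree, one would double again (e.g. with np.longdouble where the platform supports it) and repeat; persistent disagreement at the highest available precision is the signal that individual trajectories of the model cannot be trusted.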