Butterfly Effect
1) Since slight changes in inputs can produce very divergent predictions, how does one guard against a modeller adjusting the inputs until they get a desired result?
(e.g. tuning the output by adjusting the inputs and the forcings, much like reverse engineering)
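The sensitivity the question refers to can be illustrated with a minimal sketch. This is not a climate model; it uses the Lorenz-63 system, a standard toy example of chaotic dynamics, to show how two runs whose inputs differ by one part in a billion diverge completely. All parameter values and step sizes below are the conventional textbook choices, assumed for illustration only.

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 equations (classic parameters)."""
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def rk4_step(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz(s)
    k2 = lorenz([a + dt / 2 * b for a, b in zip(s, k1)])
    k3 = lorenz([a + dt / 2 * b for a, b in zip(s, k2)])
    k4 = lorenz([a + dt * b for a, b in zip(s, k3)])
    return [a + dt / 6 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(s, k1, k2, k3, k4)]

# Two runs whose initial conditions differ by 1e-9 in one variable.
dt, steps = 0.01, 4000                     # integrate to t = 40
a = [1.0, 1.0, 1.0]
b = [1.0 + 1e-9, 1.0, 1.0]
max_sep = 0.0
for _ in range(steps):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

print(f"initial separation: 1e-09, max separation: {max_sep:.2f}")
```

By the end of the run the trajectories are as far apart as the size of the attractor itself, so the tiny input change has been amplified roughly fifteen orders of magnitude. This is exactly why single deterministic runs are uninformative for chaotic systems, and why modellers use ensembles of perturbed runs rather than tuning any one trajectory.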