An interview with Gavin Schmidt over on Edge explores the nature and development of climate modelling:
What we have decided, as a scientific endeavor, is to extrapolate as much as we can from our knowledge of the individual processes that we can measure: evaporation from the ocean, the formation of a cloud, rainfall coming from a cloud, changes in the wind patterns as a function of the pressure field, changes in the jet stream. What we have tried to do is encapsulate those small-scale processes, put them all together, and see if we can predict the emerging properties of that fundamental complex system.
He explores the sometimes contradictory predictions of different climate models:
In the same way that you can't make the average of a set of arithmetic answers more correct than the correct arithmetic, it's not obvious that the average climate model should be better than all of the other climate models. For example, if I wanted to know what 2+2 was and I just picked a set of random numbers, averaging all those random numbers is unlikely to give me four. Yet when you come to climate models, that is kind of what you get: all the climate models give you numbers between three and five, and their average is very close to four. Obviously, it's not pure mathematics; it's physics, it's approximations, there is empirical tuning that goes on.
…
You need to have some kind of evaluation. I don't like to use the word validation because it implies a kind of binary, true-or-false setup. But you need an evaluation; you need tests of the model's sensitivity compared to something in the real world that can give you some credibility that the model has the right sensitivity. That is very difficult.
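Schmidt's 2+2 analogy is easy to play with. Here is a quick Python sketch (my own, not from the interview; the ranges are illustrative assumptions): averaging genuinely random guesses converges on the middle of whatever range they were drawn from, not on the right answer, whereas averaging guesses already constrained to fall between three and five naturally lands near four.

```python
import random

random.seed(0)
truth = 2 + 2  # the "correct arithmetic"

# Guesses picked at random with no knowledge of the answer
# (the 0-100 range is an arbitrary choice for illustration).
random_guesses = [random.uniform(0, 100) for _ in range(20)]

# Model-like guesses: each one already constrained, by physics and
# empirical tuning, to land between three and five (Schmidt's range).
model_guesses = [random.uniform(3, 5) for _ in range(20)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"truth:                  {truth}")
print(f"mean of random guesses: {mean(random_guesses):.2f}")  # nowhere near 4
print(f"mean of model guesses:  {mean(model_guesses):.2f}")   # close to 4
```

The ensemble mean looks right not because averaging manufactures correctness, but because every model has already been built and tuned to sit in the right neighborhood, which is exactly the caveat Schmidt raises.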
It is a lengthy essay/video interview but well worth the read/watch, as it is refreshing to hear firsthand from a professional climatologist.