Sunday, October 25, 2009

Perils of modelling: initial assumptions 1

I just thought I'd nick this quote from solar scientist Douglas Hoyt on another blog, as it is a succinct summary of what can go wrong with models that have too many adjustable and vaguely constrained parameters and rely on hindcasts for validation - something I touched on before with respect to fatigue calculations.
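To illustrate the general point (this is not a claim about any particular climate model, and all the numbers below are made up): a model with enough adjustable parameters can be tuned to reproduce the past almost perfectly while telling you nothing useful about the future. A minimal Python sketch:

# Minimal illustration: a heavily-parameterised model "hindcasts" well
# but forecasts badly. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" record: a gentle trend plus noise
years = np.arange(1950, 2000)
x = (years - 1950) / 50.0
observed = 0.5 * x + 0.1 * rng.standard_normal(years.size)

# Over-parameterised model: a 10th-degree polynomial (11 adjustable knobs)
coeffs = np.polyfit(x, observed, deg=10)

# The hindcast fits the historical record very closely...
hindcast = np.polyval(coeffs, x)
print("hindcast RMS error:", np.sqrt(np.mean((hindcast - observed) ** 2)))

# ...but a "forecast" with the same tuned parameters goes badly wrong
x_future = (np.arange(2000, 2020) - 1950) / 50.0
print("forecast for 2019:", np.polyval(coeffs, x_future)[-1],
      "(the underlying trend would give about 0.7)")

The hindcast error looks impressively small; the extrapolation is typically wildly off, because the extra parameters were fitting noise rather than anything physical.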

"The climate modelers introduced a large upward trend in global aerosols because, without them, their models ran too hot, predicting a global warming of circa 2C in the 20th century, as opposed to the observed 0.6C warming.

As I have pointed out, there is no evidence that the claimed global trend in aerosols existed. At best there were a few regional aerosol clouds covering less than 1% of the globe.

The proper solution to their problem would have been to lower the climate sensitivity to 1C or less. In fact, Lindzen has a convincing paper out recently showing the climate sensitivity is about 0.6C for a CO2 doubling.

The scientific solution to the problem: no large global trend in aerosols and a low climate sensitivity.

The political “solution” is: unsupported claims of large aerosol increases, which allow the fiction of a high climate sensitivity to be maintained, leading to alarming and false predictions of catastrophic future warming."

Douglas Hoyt

I added: "I wonder if a third-party review might have fixed it. There are times when you get a weird modelling result and you can't find the problem, so you rationalise it or add a fiddle factor. Only later do you see where the mistake was. Also, sometimes throwing money at a group to investigate a problem can fail because of an overriding need to justify the money and claim more of it."

There's no conspiracy here - just hubris, group-think and self-preservation. Normal science, in fact. Just to be fair, I'll pick on a few other fields later.


Update: Here is a quote from Lindzen on modelling, in which he describes how he arrives at a climate sensitivity of about 0.5C for a doubling of CO2:

////////beginning of extract

"IPCC ‘Consensus.’

It is likely that most of the warming over the past 50 years is due to man’s emissions.

How was this arrived at?

What was done, was to take a large number of models that could not reasonably simulate known patterns of natural behavior (such as ENSO, the Pacific Decadal Oscillation, the Atlantic Multidecadal Oscillation), claim that such models nonetheless accurately depicted natural internal climate variability, and use the fact that these models could not replicate the warming episode from the mid seventies through the mid nineties, to argue that forcing was necessary and that the forcing must have been due to man.

The argument makes arguments in support of intelligent design sound rigorous by comparison. It constitutes a rejection of scientific logic, while widely put forward as being ‘demanded’ by science. Equally ironic, the fact that the global mean temperature anomaly ceased increasing by the mid nineties is acknowledged by modeling groups as contradicting the main underlying assumption of the so-called attribution argument (Smith et al, 2007, Keenlyside et al, 2008, Latif, 2009). Yet the iconic statement continues to be repeated as authoritative gospel, and as implying catastrophe.

Now, all projections of dangerous impacts hinge on climate sensitivity. (To be sure, the projections of catastrophe also depend on many factors besides warming itself.) Embarrassingly, the estimates of the equilibrium response to a doubling of CO2 have basically remained unchanged since 1979.

They are that models project a sensitivity of 1.5 to 5C. Is simply running models the way to determine this? Why hasn’t the uncertainty diminished?

There follows a much more rigorous determination using physics and satellite data.

We have a 16-year (1985–1999) record of the earth radiation budget from the Earth Radiation Budget Experiment (ERBE; Barkstrom 1984) nonscanner edition 3 dataset. This is the only stable long-term climate dataset based on broadband flux measurements and was recently altitude-corrected (Wong et al. 2006). Since 1999, the ERBE instrument has been replaced by the better CERES instrument. From the ERBE/CERES monthly data, we calculated anomalies of LW-emitted, SW-reflected, and the total outgoing fluxes.

We also have a record of sea surface temperature for the same period from the National Centers for Environmental Prediction.

Finally, we have the IPCC model calculated radiation budget for models forced by observed sea surface temperature from the Atmospheric Model Intercomparison Project at the Lawrence Livermore National Laboratory of the DOE.

The idea now is to take fluxes observed by satellite and produced by models forced by observed sea surface temperatures, and see how these fluxes change with fluctuations in sea surface temperature. This allows us to evaluate the feedback factor.

Remember, we are ultimately talking about the greenhouse effect. It is generally agreed that doubling CO2 alone will cause about 1C warming due to the fact that it acts as a ‘blanket.’ Model projections of greater warming absolutely depend on positive feedbacks from water vapor and clouds that will add to the ‘blanket’ – reducing the net cooling of the climate system.

We see that for models, the uncertainty in radiative fluxes makes it impossible to pin down the precise sensitivity because they are so close to unstable ‘regeneration.’ This, however, is not the case for the actual climate system where the sensitivity is about 0.5C for a doubling of CO2. From the brief SST record, we see that fluctuations of that magnitude occur all the time."

Richard Lindzen

Professor of Atmospheric Sciences, MIT

/////////////////////end of extract
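The flux-versus-SST regression Lindzen describes is simple enough to sketch. The following is a minimal, hypothetical version with made-up numbers - it is not the actual ERBE/CERES or NCEP analysis - but it shows the quantity being estimated: the slope of outgoing flux against sea surface temperature, from which the feedback, and hence the sensitivity, is inferred.

# Sketch of the flux-vs-SST regression idea (hypothetical data, not the
# real ERBE/CERES or NCEP records).
import numpy as np

rng = np.random.default_rng(1)

# Made-up monthly anomalies
sst_anom = 0.3 * rng.standard_normal(180)                        # K
assumed_slope = 6.0                                              # W/m^2 per K, chosen for illustration
flux_anom = assumed_slope * sst_anom + rng.standard_normal(180)  # outgoing LW+SW anomaly, W/m^2

# Least-squares slope: extra energy escaping per degree of surface warming
slope = np.polyfit(sst_anom, flux_anom, 1)[0]

# A slope above the no-feedback (Planck) value of roughly 3.3 W/m^2/K means the
# system sheds energy efficiently as it warms (net negative feedback, low
# sensitivity); a slope below it implies net positive feedback and high sensitivity.
forcing_2xCO2 = 3.7                                              # W/m^2, standard value for doubled CO2
sensitivity = forcing_2xCO2 / slope                              # K per doubling
print(f"slope {slope:.1f} W/m^2/K  ->  sensitivity about {sensitivity:.1f} C per CO2 doubling")

With an assumed slope of 6 W/m^2/K the recovered sensitivity comes out near 0.6C, in the range Lindzen and Hoyt quote; a slope of about 1 W/m^2/K would instead correspond to a sensitivity near 4C, at the upper end of the model range.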

Now just to summarise:

1) It is generally agreed that the basic, no-feedback warming from a doubling of CO2 should theoretically be about 1 degree C.

2) Model runs say it could be anywhere from about 1.5 to 5 degrees C. The extra warming above the basic 1 degree comes from a supposed positive feedback, mainly from water vapour evaporated from the sea adding to the CO2 "blanket".

3) This extra warming was needed to match the recent warming in the Hadley temperature record with the Hadley model output.

4) For this match-up, the Hadley modellers assumed that natural variability had a minimal warming effect over the period of study, hence any remaining anomaly must be man-made warming. However, they couldn't actually model natural variability - they just pretended that they could.

5) Subsequent non-warming has been blamed on natural variability by those same Hadley scientists, which is an admission that the previous assumption had no real foundation.

6) So Lindzen checks whether this supposed positive feedback from water vapour actually shows up in the real, measured satellite data. He finds a sensitivity of 0.5C from the real data, indicating that the net feedback must be negative, not positive (a short worked calculation of what that implies follows this list). One might postulate that this is due to the formation of clouds, as Lindzen suggested and as Dr Roy Spencer has observations and published papers to back up.
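For completeness, here is the arithmetic behind points 1, 2 and 6. The 3.7 W/m^2 forcing for doubled CO2 and the roughly 3.3 W/m^2/K no-feedback (Planck) response are standard textbook values, not numbers taken from the post itself:

# Back-of-envelope feedback arithmetic (standard textbook values assumed)
forcing_2xCO2 = 3.7      # W/m^2, forcing from a doubling of CO2
planck_response = 3.3    # W/m^2/K, no-feedback restoring rate

dT_no_feedback = forcing_2xCO2 / planck_response   # ~1.1 C: the basic warming of point 1

# With a net feedback factor f, the response is dT = dT_no_feedback / (1 - f)
def feedback_factor(dT):
    """Feedback factor implied by a given equilibrium sensitivity dT."""
    return 1.0 - dT_no_feedback / dT

print(feedback_factor(3.0))   # mid-range model sensitivity -> f of about +0.6 (positive feedback)
print(feedback_factor(0.5))   # Lindzen's satellite-derived value -> f of about -1.2 (negative feedback)

A sensitivity of 0.5C can only arise if the net feedback factor is negative, which is the point of item 6.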

That's what the real science tells us, i.e. comparison of the theory with observations - remember that? It's how science used to work. It's not true to say that the models couldn't achieve the same result. In fact they could - all they would need to do is adjust the natural variability to reflect real-world observations.

However, the latter number implicitly assumes that the extra warming is from the CO2 and not actually another long-term natural trend. That is fair but not necessarily so, because there is a well-known descent into a "Little Ice Age", at least in Northern Europe, which has not been adequately explained. If that cooling was natural and we started from that natural low point, then the warming can be natural too. That said, Northern Europe is quite tiny, so it's politically correct and probably sensible to assume that man is warming the planet by a small amount. In any event, the planet has warmed by a mostly natural 0.4 degrees C since 1950, the IPCC cutoff date. So what are the implications? That's for a future post.
