Is Earth's Temperature Governed by Physics Alone?

  • #71
sylas said:
There's one minor complication, because if you look at the literature you'll usually see slightly higher numbers for the Planck response; more like 1.1 or 1.2 K. You can get this with MODTRAN by locating your sensor at about the tropopause, rather than the 70km default. Try getting the radiation at an altitude of 18km with the tropical atmosphere. In this case, you should have something like this:
  • 288.378 W/m2 (375ppm CO2, Ground Temp offset 0, tropical atmosphere, 18km sensor looking down)
  • 283.856 W/m2 (750ppm CO2, Ground Temp offset 0, tropical atmosphere, 18km sensor looking down)
  • 288.378 W/m2 (750ppm CO2, Ground Temp offset 1.225, tropical atmosphere, 18km sensor looking down)

I think I can explain what is going on here. It's a minor additional detail to do with how the stratosphere works.

When you hold surface temperature fixed, MODTRAN will hold the whole temperature profile of the atmosphere fixed.

OK. I would actually object to doing that, except as a kind of work-around for a model error in MODTRAN, because what actually counts is of course what escapes at the top of the atmosphere, not what happens somewhere in between. So this is a kind of "bug fix" for the fact that MODTRAN apparently doesn't do "local thermodynamic equilibrium" (I thought it did) by adapting the temperature profile.


The cooling of the stratosphere is so immediate that it is not treated as a feedback process at all, but is taken up as part of the definition of a change in energy balance. Hence MODTRAN is not quite giving you what is normally defined as the Planck response. To get that, you would have to drop the stratosphere temperature, which would reduce the thermal emission you are measuring a little bit. By placing the MODTRAN sensor at the tropopause, you are avoiding worrying about the stratosphere at all, and getting a better indication of the no-feedback Planck response.

Ok. So that's the "bug fix", as normally the upward energy flux has to be conserved all the way up.
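
Just to make the arithmetic behind those three readings explicit, here is a minimal sketch using only the figures quoted above (the fluxes and the 1.225 K offset); nothing below is computed by MODTRAN itself:

```python
# Minimal sketch: turn the three 18 km tropical MODTRAN readings quoted
# above into a forcing and a no-feedback temperature response.
# The fluxes and the 1.225 K offset are the quoted values; nothing here
# is computed by MODTRAN itself.

flux_375ppm = 288.378     # W/m^2, 375 ppm CO2, no ground-temperature offset
flux_750ppm = 283.856     # W/m^2, 750 ppm CO2, no ground-temperature offset
restoring_offset = 1.225  # K, ground-temperature offset that restores 288.378 W/m^2 at 750 ppm

forcing = flux_375ppm - flux_750ppm      # drop in outgoing flux from doubling CO2
per_wm2 = restoring_offset / forcing     # K of surface warming per W/m^2, no feedbacks

print(f"Forcing at this sensor: {forcing:.3f} W/m^2")
print(f"No-feedback response: {restoring_offset} K ({per_wm2:.3f} K per W/m^2)")
```

This is a single tropical column with the sensor at 18 km, so the ~4.5 W/m2 forcing here is not the globally averaged 3.7 W/m2 figure; the point is only how the ~1.2 K number arises from the offset that restores the flux.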

PS. Just to underline the obvious. The Planck response is a highly simplified construct, and not at all like the real climate response. The real climate response is as you quoted from Xnn: somewhere from 2 to 4.5 K/2xCO2. It is the real response that you can try to measure empirically (though it is hard!). You can't measure the Planck response empirically, because it is a theoretical convenience.

I would think that you could if you could isolate a "column of atmosphere" in a big tube all the way up and measure the radiation spectrum upward at different altitudes. It's of course an expensive experiment :-)

The full response in reality is just as much physics as the simplified Planck response; real physics deals with the real world in all its complexities, and the climate feedbacks are as much as part of physics as anything else.

Yes. However, the point is that the MODTRAN type of physics response is "obvious" - it is relatively easy to model, as it is straightforward radiation transport, a difficult but tractable problem. So at a certain point you can say that you have your model, based upon elementary measurements (spectra) and "first principles" of radiation transport. You could write MODTRAN with a good measure of confidence, just using "first principles" and some elementary data sets. You wouldn't need to tune it to empirical measurements.

However, the global climatic feedback effects are way way more complicated (of course it is "physics" - everything is physics). So it is much more delicate to build models which contain all aspects of those things "from first principles" and "elementary data sets".

And clearly, the *essence* of what I'd call "dramatic AGW" resides in those feedbacks, which turn an initial ~1 K signal into the interval you quoted. So the feedback must be important, amplifying the initial drive by a factor of something like 3. This is the number we're after.

Now, the problem I have with the "confidence interval" quoted for the global temperature rise from CO2 doubling is that one has to deduce it from what I'd call "toy models". Maybe I'm wrong, but I thought that certain feedback parameters in these models are tuned to empirically measured effects without full modelling "from first principles". This is very dangerous, because a fitting parameter could then absorb other effects which are not explicitly modeled, so that it ends up with a different value (accommodating effects you didn't include) than the physical parameter you think it represents.

That was the main critique I had of the estimation method as I read it in the 4th assessment report: Bayesian estimates are only valid if you are sure that the family of models used in the technique contains "the real system" for some value of its parameters. Otherwise the estimated confidence intervals are of no value.

Now, this is problematic, because these models have to do the "bulk of the work", given that the initial signal (the "optical drive") is relatively small (~1 K). In other words, the whole prediction of a "strong temperature rise" and its confidence interval hinges on the idea that the computer models contain, for some set of fitting parameters, a correct physical description of the system (at the level we need here).

I'm not a climate sceptic or anything; I am just a bit wary of the certainty that is sometimes displayed in these discussions. I would naively think it extremely difficult to predict the things that are predicted here (climate feedback), and hence that one could only be relatively certain about them with a pretty good model that captures all the important effects that come into play.
 
  • #72
vanesch said:
OK. I would actually object to doing that, except as a kind of work-around for a model error in MODTRAN, because what actually counts is of course what escapes at the top of the atmosphere, not what happens somewhere in between. So this is a kind of "bug fix" for the fact that MODTRAN apparently doesn't do "local thermodynamic equilibrium" (I thought it did) by adapting the temperature profile.

Yes. It's not really a "bug fix" as such, because MODTRAN is not designed to be a climate model. It does what it is designed to do... calculate the transfer of radiation in a given atmospheric profile.

You can use this to get something close to the Planck response, but if you get numbers a little different from the literature it is because we're calculating something a little different. The hack I have suggested is a kind of work-around to get closer to the results that could be obtained from a more complete model.

Note that you can get the Planck response with a very simple model, because it is so idealized. You don't have to worry about all the weather related stuff or changes in the troposphere. But you do need to do more than MODTRAN.
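
For what it's worth, the very simplest such model is just the blackbody emission-temperature argument. A minimal sketch, assuming an effective emission temperature of about 255 K and the 3.7 W/m2 doubled-CO2 forcing quoted below (this is the idealized textbook estimate, not what MODTRAN computes):

```python
# Minimal sketch of the 'very simple model': treat the planet as radiating
# at its effective emission temperature T_e, so a forcing dF produces a
# no-feedback warming dT = dF / (4 * sigma * T_e^3). Idealized textbook
# estimate only; not a MODTRAN calculation.

SIGMA = 5.670e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
T_E   = 255.0      # K, approximate effective emission temperature of Earth
D_F   = 3.7        # W/m^2, forcing for doubled CO2 (the figure quoted below)

planck_parameter = 4.0 * SIGMA * T_E ** 3   # W/m^2 per K of warming
d_t = D_F / planck_parameter

print(f"Planck parameter: {planck_parameter:.2f} W/m^2/K")
print(f"No-feedback warming for {D_F} W/m^2: {d_t:.2f} K")   # about 1 K
```

More careful treatments, as noted above, land closer to 1.1 or 1.2 K.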

vanesch said:
Ok. So that's the "bug fix", as normally the upward energy flux has to be conserved all the way up.

Good insight! However, of course there is more to the energy flux than radiant fluxes. The equations used include terms for heating or cooling at different levels. At equilibrium, there is a net energy balance, but this must include convection and latent heat, as well as horizontal transports. MODTRAN does not attempt to model these heat flows; it simply takes a given temperature profile and ends up with a certain level of radiant heating, or cooling, at each level. This radiant heating is, of course, important in models of weather or climate.

I've learned a bit about this by reading Principles of Planetary Climate, by Raymond Pierrehumbert at the Uni of Chicago, a new textbook available online (draft). Not easy reading! The calculations for radiant energy transfers are described in chapter 4.

The radiant heating at a given altitude is in units of W/kg.

In general, you can also calculate a non-equilibrium state, in which a net imbalance corresponds to changing temperatures at a given level. This needs to be done to model changes in temperature from day to night, and season to season, as part of a complete model. For the Planck response, however, a simple equilibrium solution is sufficient, I think.
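
To illustrate what a heating rate in W/kg means, here is a minimal sketch using the hydrostatic relation H = g·dF/dp; the flux profile is invented purely for illustration and is not output of MODTRAN or any real model:

```python
# Minimal sketch of how a radiant heating rate in W/kg follows from the
# vertical divergence of the net upward radiative flux, via the hydrostatic
# relation H = g * dF_net/dp. The flux profile below is invented purely
# for illustration.

G   = 9.81     # m/s^2
C_P = 1004.0   # J/(kg K), specific heat of dry air

pressure = [100000.0, 80000.0, 60000.0, 40000.0, 20000.0]  # Pa, surface upward
net_flux = [60.0, 90.0, 120.0, 150.0, 180.0]               # W/m^2, net upward (invented)

for i in range(len(pressure) - 1):
    dF = net_flux[i + 1] - net_flux[i]   # W/m^2 change in net upward flux across the layer
    dp = pressure[i + 1] - pressure[i]   # Pa (negative going up)
    heating = G * dF / dp                # W/kg; negative means radiative cooling
    per_day = heating / C_P * 86400.0    # equivalent temperature tendency, K/day
    print(f"{pressure[i]/100:.0f}-{pressure[i+1]/100:.0f} hPa: "
          f"{heating:+.4f} W/kg ({per_day:+.2f} K/day)")
```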

vanesch said:
Yes. However, the point is that the MODTRAN type of physics response is "obvious" - it is relatively easy to model, as it is straightforward radiation transport, a difficult but tractable problem. So at a certain point you can say that you have your model, based upon elementary measurements (spectra) and "first principles" of radiation transport. You could write MODTRAN with a good measure of confidence, just using "first principles" and some elementary data sets. You wouldn't need to tune it to empirical measurements.

Sure. That's what MODTRAN is. The physics of how radiation transfers through the atmosphere for a given profile of temperatures and greenhouse gas concentrations is basic physics; hard to calculate but not in any credible doubt. The really hard stuff is when you let the atmosphere and the rest of the planet respond in full generality.

This is fundamentally why scientists no longer have any credible doubt that greenhouse effects are driving climate changes seen over recent decades. The forcing is well constrained and very large. There is no prospect whatever for any other forcing to come close as a sustained warming influence. And yet, we don't actually have a very good idea of the total temperature impact to be expected for a given atmospheric composition!

vanesch said:
However, the global climatic feedback effects are way way more complicated (of course it is "physics" - everything is physics). So it is much more delicate to build models which contain all aspects of those things "from first principles" and "elementary data sets".

Of course. That is why we have a very good idea indeed about the forcing of carbon dioxide, but the sensitivity is known only to limited accuracy.

The forcing for doubled CO2 is 3.7 W/m2. The sensitivity to that forcing, however, is somewhere from 2 to 4.5 degrees. There are some good indications for a narrower range of possibilities than this, around 2.5 to 4.0 or so, but the complexities are such that a scientist must realistically maintain an open mind on anything in that larger range of 2 to 4.5.
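
Combining those two numbers with the roughly 1.2 K no-feedback response discussed earlier gives the implied amplification. A minimal sketch of the arithmetic (the 1.2 K figure is the approximate Planck response from earlier in the thread):

```python
# Minimal sketch: the amplification over the no-feedback (Planck) response
# implied by the quoted forcing and sensitivity range. The ~1.2 K Planck
# response is the approximate figure discussed earlier in this thread.

FORCING = 3.7           # W/m^2 for doubled CO2
PLANCK_RESPONSE = 1.2   # K per doubling, approximate no-feedback response

for sensitivity in (2.0, 4.5):   # K per doubling, the quoted empirical bounds
    gain = sensitivity / PLANCK_RESPONSE       # amplification factor over Planck
    f = 1.0 - PLANCK_RESPONSE / sensitivity    # feedback fraction in dT = dT0 / (1 - f)
    print(f"Sensitivity {sensitivity} K: gain ~{gain:.1f}x, feedback fraction f ~{f:.2f}")
```

So the quoted range corresponds to an amplification of roughly 1.7 to nearly 4 over the bare Planck response, consistent with the factor of about 3 mentioned above.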

vanesch said:
And clearly, the *essence* of what I'd call "dramatic AGW" resides in those feedbacks, which turn an initial ~1 K signal into the interval you quoted. So the feedback must be important, amplifying the initial drive by a factor of something like 3. This is the number we're after.

Yes. The reference I gave previously for Bony et al (2006) is a good survey paper of the work on these feedback interactions.

vanesch said:
Now, the problem I have with the "confidence interval" quoted for the global temperature rise from CO2 doubling is that one has to deduce it from what I'd call "toy models". Maybe I'm wrong, but I thought that certain feedback parameters in these models are tuned to empirically measured effects without full modelling "from first principles". This is very dangerous, because a fitting parameter could then absorb other effects which are not explicitly modeled, so that it ends up with a different value (accommodating effects you didn't include) than the physical parameter you think it represents.

Well, no; here we disagree, on several points.

The sensitivity value is not simply given by models. It is constrained by empirical measurement. In fact, the range given by Xnn, and myself, of 2 to 4.5 is basically the empirical bounds on sensitivity, obtained by a range of measurements in cases where forcings and responses can be estimated or measured. See:
  • Annan, J. D., and J. C. Hargreaves (2006), http://www.agu.org/pubs/crossref/2006/2005GL025259.shtml, in Geophys. Res. Lett., 33, L06704, doi:10.1029/2005GL025259. (Looks at several observational constraints on sensitivity.)
  • Wigley, T. M. L., C. M. Ammann, B. D. Santer, and S. C. B. Raper (2005), Effect of climate sensitivity on the response to volcanic forcing, in J. Geophys. Res., Vol 110, D09107, doi:10.1029/2004JD005557. (Sensitivity estimated from volcanoes.)
The first combines several different methods, the second is a nice concrete instance of bounds on sensitivity obtained by a study of 20th century volcanoes. I referred to these also in the thread [thread=307685]Estimating the impact of CO2 on global mean temperature[/thread]; and there is quite an extensive range of further literature.

If you are willing to trust the models, then you can get a tighter range, of more like 2.5 to 4.0. The models in this case are no longer sensibly called toy models. They are extraordinarily detailed, with explicit representation of the physics of many different interacting parts of the climate system. These models have come a long way, and they still have a long way to go.

You speak of tuning the feedback parameters... but that is not even possible. Climate models don't use feedback parameters. That really would be a toy model.

Climate models just solve large numbers of simultaneous equations, representing the physics of as many processes as possible. The feedback parameters are actually diagnostics, and you try to estimate them by looking at the output of a model, or running it under different conditions, with some variables (like water vapour, perhaps) held fixed. In this way, you can see how sensitive the model is to the water vapour effect. For more on how feedback parameters are estimated, see Bony et al (2006) cited previously. Note that the models do not have such parameters as inputs.
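
To make the diagnostic idea concrete, here is a minimal sketch; the warming numbers are invented placeholders rather than output of any real model, and the simple equilibrium relation forcing = lambda * warming is used to back out a response parameter from each run:

```python
# Minimal sketch of the diagnostic idea described above: feedback parameters
# are not model inputs, they are estimated by comparing model runs.
# The warming numbers are invented placeholders, not real GCM output.

FORCING = 3.7            # W/m^2 for doubled CO2

warming_full = 3.0       # K, hypothetical run with all processes free to respond
warming_wv_fixed = 1.7   # K, hypothetical run with water vapour held fixed

# At equilibrium, forcing = lambda * warming, so each run yields a net
# feedback (response) parameter in W/m^2 per K:
lam_full = FORCING / warming_full
lam_fixed = FORCING / warming_wv_fixed

# The water-vapour feedback is diagnosed as the difference made by fixing it:
lam_water_vapour = lam_fixed - lam_full

print(f"Response parameter, full run:       {lam_full:.2f} W/m^2/K")
print(f"Response parameter, WV held fixed:  {lam_fixed:.2f} W/m^2/K")
print(f"Diagnosed water-vapour feedback:    {lam_water_vapour:.2f} W/m^2/K")
```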

Some people seem to think that the big benefit of models is prediction. That's just a minor sideline of modeling, and useful as a way of testing the models. The most important purpose of models is to be able to run virtual experiments with different conditions and see how things interact, given their physical descriptions. Obtaining feedback numbers from climate models is an example of this.

Personally, I am inclined to think that the narrower range of sensitivity obtained by models is a good bet. But I'm aware of gaps in the models and so I still quote the wider range of 2 to 4.5 as what we can reasonably know by science.

I'm not commenting on the rest, as I fear we may end up talking past one another. Models are only a part of the whole story here. Sensitivity values of 2.0 to 4.5 can be estimated from empirical measurements.

I don't think many people do express unwarranted confidence. The scientists involved don't. People like myself are completely up front about the large uncertainties in modeling and sensitivity. I've been thinking of putting together a post on what is known and what is unknown in climate. The second part of that is the largest part!

There's a lot of personal skepticism out there, however, which is not founded on any realistic understanding of the limits of available theory and evidence; but on outright confusion and misunderstanding of basic science. I have a long standing fascination with cases like this. Similar popular rejection of basic science occurs with evolutionary biology, relativity, climatology, and it seems vaccinations are becoming a new issue where the popular debate is driven by concerns that have no scientific validity at all.

Cheers -- sylas
 
  • #73
Ok, let me try to understand that precisely. From the way I understood things when I read about it in the 4th assessment report, I was of the opinion that there was what one could call "a methodological error", or at least an error in the interpretation of the applied methodology. Now, I can of course be wrong, but I have never received a sensible comment on this point, while I have, on the other hand, casually seen other people make similar remarks.

But first some simplistic "estimation theory" as I understand it.

You have a family of models mapping "inputs" onto "outputs" (say, human CO2 emissions, solar irradiation, volcanic activity... as inputs, and atmospheric and oceanic composition, temperature, etc. as outputs). They contain "free parameters" p. The fact that these parameters are free means that they are not calculated "from first principles"; instead, phenomenological sub-models try to establish the link between quantities, with tunable "knobs".

We call them Y = F(X,p)

Now, as you say, these parameters p are constrained by "empirical measurements", meaning that you have data sets (Xi, Yi) (paleo data, the observational record, ...) and that you want your model to "fit" them. Of course those data contain errors, and the models themselves make statistical predictions, so instead of a single value Y = F(X,p), what actually comes out of F is a probability distribution for Y, with some central value.

This means that for a given set of parameter values, say p0, you will get for Xi a certain probability of obtaining Yi. If p0 is "far off", this probability will be very low. If p0 is close to the "real values", then the probability assigned to Yi will be close to the "actual probability" that it was the response to Xi.

Now, the Bayesian estimation method allows you to turn these probabilities into "probabilities for the parameters p" (you can even include a priori probabilities for p, which play less and less of a role as you get better and better data). However, this is only true if the model family F(X,p) contains the "true model" for a certain value of p (say, p*), and moreover makes the correct probability predictions for Y over the whole range of p.

In fact, this amounts to using the posterior likelihood function of p as the probability distribution of p, from which one can then deduce a confidence interval. But this only works, as I said, if the probabilistic model Y = F(X,p) is "correct".

This means you have to be sure that your model is unbiased, and moreover that its error model is correct (it predicts the probability distribution of Y correctly), before you can do so.
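
To make that scheme concrete, here is a minimal sketch with a deliberately trivial F(X, p), Gaussian errors and a flat prior; everything in it is an invented illustration, and the point is that the resulting interval for p only means something if the real system really is F(X, p*) for some p*:

```python
# Minimal sketch of the Bayesian estimation scheme described above: a toy
# model Y = F(X, p), synthetic data with Gaussian errors, a grid of
# candidate parameter values, and a posterior interval. Everything here is
# an invented illustration, not a climate calculation.

import math, random

random.seed(1)

def F(x, p):
    """Toy 'model': response is linear in the input with 'sensitivity' p."""
    return p * x

# Synthetic 'observations' generated from a true parameter with noise.
p_true, sigma = 0.8, 0.3
data = [(x, F(x, p_true) + random.gauss(0.0, sigma)) for x in (1, 2, 3, 4, 5)]

# Posterior over a grid of p with a flat prior: proportional to the likelihood.
grid = [i / 100.0 for i in range(201)]   # p from 0.00 to 2.00
log_like = [sum(-0.5 * ((y - F(x, p)) / sigma) ** 2 for x, y in data) for p in grid]
peak = max(log_like)
post = [math.exp(l - peak) for l in log_like]
total = sum(post)
post = [w / total for w in post]

# 95% credible interval by accumulating posterior mass.
cum, lo, hi = 0.0, None, None
for p, w in zip(grid, post):
    cum += w
    if lo is None and cum >= 0.025:
        lo = p
    if hi is None and cum >= 0.975:
        hi = p

print(f"95% interval for p: [{lo:.2f}, {hi:.2f}]  (true value {p_true})")
# If the real system were NOT F(X, p) for any p, this interval would still
# come out looking respectable -- which is exactly the concern raised above.
```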

sylas said:
The sensitivity value is not simply given by models. It is constrained by empirical measurement. In fact, the range given by Xnn, and myself, of 2 to 4.5 is basically the empirical bounds on sensitivity, obtained by a range of measurements in cases where forcings and responses can be estimated or measured.

I interpret what you say as being about what I said above - is that right?

You speak of tuning the feedback parameters... but that is not even possible. Climate models don't use feedback parameters. That really would be a toy model.

No, but they do contain free parameters, which are fitted to data in order to determine them, no? And those data are then somehow empirical sensitivity measurements, like with those volcanoes, or am I wrong? So the free parameters are in a way nothing else but transformations of the empirical measurements using the Bayesian parameter estimation method, no?


Climate models just solve large numbers of simultaneous equations, representing the physics of as many processes as possible. The feedback parameters are actually diagnostics, and you try to estimate them by looking at the output of a model, or running it under different conditions, with some variables (like water vapour, perhaps) held fixed. In this way, you can see how sensitive the model is to the water vapour effect. For more on how feedback parameters are estimated, see Bony et al (2006) cited previously. Note that the models do not have such parameters as inputs.

No, not directly, but they do have free parameters which are fitted to sensitivity measurements, no?

Personally, I am inclined to think that the narrower range of sensitivity obtained by models is a good bet. But I'm aware of gaps in the models and so I still quote the wider range of 2 to 4.5 as what we can reasonably know by science.

I also think it is a "good bet". But I have my doubts about the confidence intervals because of the above-mentioned concern about the interpretation of the methodology - unless I'm misunderstanding what is actually done.

I'm not commenting on the rest, as I fear we may end up talking past one another. Models are only a part of the whole story here. Sensitivity values of 2.0 to 4.5 can be estimated from empirical measurements.

I don't see how you can measure such a thing "directly" without any model. I thought you always had to use modeling in order to determine the meaning of empirical data like this. Maybe I'm wrong here too.
 
  • #74
vanesch said:
Ok, let me try to understand that precisely. From the way I understood things when I read about it in the 4th assessment report, I was of the opinion that there was what one could call "a methodological error", or at least an error in the interpretation of the applied methodology. Now, I can of course be wrong, but I have never received a sensible comment on this point, while I have, on the other hand, casually seen other people make similar remarks.

I think you are making a general comment here that applies widely to confidence limits in general.

When a scientific paper gives some quantified account of any phenomenon, they should include some idea of uncertainty, or error bars, or confidence limits. Precisely what these things mean is not always clear; and any interpretation always includes the implicit precondition, "unless we are very much mistaken, ...". You can't really put probabilities on that. Science doesn't deal in certainty ... not even certainty on the basis for estimating confidence limits.

There are instances of genuine methodological error involved in such estimates from time to time. I've recently discussed two cases where IMO the confidence limits given in a scientific paper were poorly founded: the bounds on energy imbalance given in Hansen et al (2005) (0.85 +/- 0.15 W/m2) and the bounds on climate sensitivity of Schwartz (2007) (1.1 +/- 0.5 K/2xCO2). In both cases I have been a little mollified to learn that the main author has subsequently used more realistic estimates. (And in both cases, I personally don't think they've gone far enough, but we can wait and see.)

On the other hand, there are other cases where there's popular dispute about some scientific conclusion, where a sensible set of confidence limits is used that has implications people just don't like, for reasons having no credible scientific foundation.

An example of the latter case is the bounds of 2.0 to 4.5 on climate sensitivity.

I agree with you that it doesn't make much sense to interpret this as a probability range. The climate sensitivity is a property of this real planet, which is going to be a bit fuzzy around the edges (sensitivity may be something that varies a bit from time to time and circumstance to circumstance), but the range of 2.0 to 4.5 is not about climate sensitivity having a probability distribution. It's about how confidently scientists can estimate it. There are all kinds of debates on the epistemology of such bounds, and I don't want to get into that.

I don't think there's any significant problem with that bound of 2.0 to 4.5, other than the general point that we can't really speak of a "probability" of being wrong when giving an estimate for a particular value not taken from random samples. As you say, we might not be "correct" in the whole underlying approach. That's science for you.

vanesch said:
sylas said:
The sensitivity value is not simply given by models. It is constrained by empirical measurement. In fact, the range given by Xnn, and myself, of 2 to 4.5 is basically the empirical bounds on sensitivity, obtained by a range of measurements in cases where forcings and responses can be estimated or measured.
I interpret what you say as being about what I said above - is that right?
I guess so. Uncertainty bounds are estimated on the basis of assumptions that in principle might turn out to be wrong. I think that's the guts of it.

No, but they do contain free parameters, which are fitted to data in order to determine them, no? And those data are then somehow empirical sensitivity measurements, like with those volcanoes, or am I wrong? So the free parameters are in a way nothing else but transformations of the empirical measurements using the Bayesian parameter estimation method, no?
Sensitivity is not part of the data used as boundary conditions for climate models. So no, the data are not somehow empirical sensitivity measurements. The free parameters in models, other than boundary conditions, are mainly numbers used to get approximations for things that cannot be calculated directly, either because the model has a limited resolution, or because the phenomenon being modeled is only known empirically.

We've mentioned radiation transfers. A climate model does not attempt to do the full line by line integration across the spectrum which is done in something like MODTRAN. It would be much too slow to apply the full accuracy of theory available. Hence they use approximations, with parameters. The tuning in this case is to fit to the fully detailed physical theory; not to the desired results more generally.
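
As a caricature of that kind of tuning: a one-parameter approximation is fitted to a stand-in "line-by-line" reference calculation (the reference numbers below are invented to follow the usual logarithmic shape), not to the observed climate record:

```python
# Minimal sketch of the kind of tuning described above: a cheap one-parameter
# approximation is fitted to a detailed reference calculation, not to the
# observed climate record. The 'reference' numbers below are invented
# stand-ins, not real line-by-line output.

import math

co2_ppm  = [280, 350, 420, 560, 700, 840]          # CO2 concentrations, ppm
ref_forc = [0.00, 1.19, 2.16, 3.71, 4.90, 5.87]    # stand-in reference forcings, W/m^2

# Approximation with one free parameter a: F = a * ln(C / 280).
# Least-squares fit of a against the reference calculation.
xs = [math.log(c / 280.0) for c in co2_ppm]
a = sum(x * f for x, f in zip(xs, ref_forc)) / sum(x * x for x in xs)

worst = max(abs(a * x - f) for x, f in zip(xs, ref_forc))
print(f"Fitted a = {a:.2f} W/m^2, so F(2xCO2) ~ {a * math.log(2):.2f} W/m^2")
print(f"Worst mismatch against the reference: {worst:.2f} W/m^2")
```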

Another case is cloud effects. Part of the problem is resolution; the grid size in a climate model is much larger than a cloud, so they have to use abstractions, like percentage cloud cover, and then you need parameters to manage these abstractions. This is a bit like the issue with tuning radiative transfers. What's different about clouds, however, is that we don't actually have the fully detailed physical theory even in principle. The best physical theory of clouds is to a large extent simply empirical, with parameters of its own that get tuned to observations.

In this case as well, however, the tuning of parameters in the climate model is intended to get the best match possible to the underlying physics, rather than to match the final result.

Hence, for example, climate models get tested by attempting to reproduce what we have seen already in the 20th century. You most definitely don't do that by tuning the model to the 20th century record of observables! The whole idea of climate models is that they are independent physical realizations. If, perchance, a climate model gives too small a response to a known volcanic eruption, you do not just tune parameters until the match is better. You try to figure out which part of the underlying physics is failing, and try to tune that better... not to the volcano itself, but to your known physics.

In the end, a climate model will have an imperfect fit to observations. This could be because the observations are inaccurate (models have racked up a couple of impressive cases where theory clashed with observation, and it was observation that turned out to be wrong) or because there's something not modeled properly yet. It would be a bad mistake to try and overfit your model to the observations by tuning, and in general you can't anyway, because the model is not an exercise in curve fitting. The proper tuning is to your underlying physics, followed by a frank estimate of how well or badly the climate model performs under tests. This is what is done, as far as I can tell.

This is not a proper peer-reviewed reference, but it may be useful to look at an introductory FAQ on climate models, which was produced by NASA climate modelers to try and explain what they do for a popular audience. This is available at the realclimate blog, which was set up for this purpose. See FAQ on climate models, and FAQ on climate models: Part II.

Some people simply refuse to give any trust to the scientists working on this, or dismiss out of hand any claim even for limited skill of the models. That moves beyond legitimate skepticism and into what can reasonably be called denial, in my opinion.

No, not directly, but they do have free parameters which are fitted to sensitivity measurements, no?
No. We don't have sensitivity measurements. Sensitivity for the real world is something calculated on the basis of other measurements. The calculations presume certain models or theories, which are in turn physically well founded but which in principle are always open to doubt, like anything in science.

Sensitivity of a climate model is also calculated from its behaviour. It is not a tunable parameter and not an input constraint.

I don't see how you can measure such a thing "directly" without any model. I thought you always had to use modeling in order to determine the meaning of empirical data like this. Maybe I'm wrong here too.

Seems perfectly sensible to me... all measurement is interpreted in the light of theory, and all estimation requires both theory and data. This applies across the board in science.

Cheers -- sylas
 
  • #75
sylas said:
Science doesn't deal in certainty ... not even certainty on the basis for estimating confidence limits.

There is science and then there is climatology.

We are not certain how, but we know that the physics of aerodynamics works. We have demonstrated it time and again with minimal mishap.

It would not be wise for us to demolish the interstate highway system because cars are dangerous and we are promised that a theoretical anti-gravity engine is "right around the corner" based upon a display of magnetic levitation.

Giggles...
 
  • #76
sylas said:
I guess so. Uncertainty bounds are estimated on the basis of assumptions that in principle might turn out to be wrong. I think that's the guts of it.

Ok, that's what I always understood it to be. I didn't like the tone of the summary reports of the IPCC because that's the kind of phrase that was missing, IMO. In other words, there is no such thing as a "scientific certainty beyond doubt" that the sensitivity to CO2 doubling is within this or that interval, but rather, that "to the best of our current knowledge and understanding, the most reasonable estimate we can give of this sensitivity is within these bounds". And even "this can change, or not, depending on how our future understanding will confirm or modify our current knowledge".

It can sound like nitpicking, but there's a big difference between the two. The point is that if, after a while, one learns more and the actual value turns out to lie outside the specified interval, then in the first case "one has discredited some scientific claims made with certainty (and as such, science and its claims in general)". In the second case, that's just normal, because our knowledge of things improves, so what was reasonable to think some time ago has evolved.
 
  • #77
Sylas said:
We don't have sensitivity measurements. Sensitivity for the real world is something calculated on the basis of other measurements. The calculations presume certain models or theories, which are in turn physically well founded but which in principle are always open to doubt,
Then, what is so basic about these calculations? Look at it from the global warming potential point of view. What are the odds of a CO2 molecule staying aloft for a hundred years or more?

MrB.
"...Greenhouse Effects Within The Frame Of Physics"
http://arxiv.org/abs/0707.1161v4
I have downloaded this badboy. I gather, Sylas, you don't even think it should have been published! But now I think I know where I got the phrase "impact level," as far as various journals are concerned ...(and the talk of real greenhouses= thread id:300667). It comes in around one and a half megabytes...I have just done my tweeting for the day. :)
 
