29 October 2010
Note that the unicyclist above is a young male. I figured this might lift your spirits as we slouch toward another election.
19 October 2010
The Real Climate Gate
Today I attended a seminar given by the good folks at PCMDI, the Program for Climate Model Diagnosis and Intercomparison. It was about Uncertainty Quantification of the Community Atmosphere Model (CAM), one of the models that feeds into the assessments of the Intergovernmental Panel on Climate Change (IPCC).
What that really means is that CAM has lots of sub-models inside it - for things like cloud formation, for example. Now cloud formation is a physical process, but CAM can't afford to model the physics of each and every individual cloud on earth. In order to run in a reasonable time (say 7 hours on a thousand-processor supercomputer), CAM has to use an approximate model that predicts how much cloud will form in each 2-degree by 2-degree patch of atmosphere, based on that patch's relative humidity at each of 26 altitude levels. That cloudiness determines how much sunlight reaches the ground and how much is reflected back to space, which in turn helps set the temperature of the earth under that patch for that day.
But there is an adjustable parameter in the cloudiness model that determines at what percentage of relative humidity water vapor will be deemed to have condensed into cloud. And that's just one adjustable parameter. It turns out that CAM has hundreds of such parameters, as indeed it should. The point of Uncertainty Quantification is to determine how sensitive CAM is to these parameter settings.
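To make that concrete, here is a toy sketch of mine - a Sundqvist-style cloud-fraction scheme, not the actual CAM code - where the rh_crit argument plays the role of the adjustable parameter just described. Nudge it, and the cloudiness over the whole grid shifts:

```python
import numpy as np

def cloud_fraction(rel_humidity, rh_crit=0.80):
    """Toy Sundqvist-style cloud-fraction parameterization (illustrative).

    rel_humidity : relative humidity (0..1) on a lat x lon x level grid.
    rh_crit      : the tunable "critical" relative humidity at which
                   cloud is deemed to begin forming -- the kind of
                   adjustable parameter discussed above.
    """
    rh = np.clip(rel_humidity, 0.0, 1.0)
    # No cloud below the critical threshold; full overcast at saturation.
    frac = 1.0 - np.sqrt((1.0 - rh) / (1.0 - rh_crit))
    return np.clip(frac, 0.0, 1.0)

# A 2-degree global grid with 26 vertical levels, as described above.
rh = np.random.default_rng(0).uniform(0.4, 1.0, size=(90, 180, 26))
print(cloud_fraction(rh, rh_crit=0.80).mean())
print(cloud_fraction(rh, rh_crit=0.90).mean())  # same air, different tuning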
So, the PCMDI folks ran CAM about a thousand times, keeping the same input data (sea surface temperatures) for each run and varying only the parameters in the sub-models. They used the latest statistical sampling techniques, Morris One-At-a-Time (MOAT) among them, to construct response hyper-surfaces for the most important parameters - in this case, the ones that cause the greatest variation in predicted global temperatures 12 years into the future.
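For the curious, the Morris method itself is simple enough to sketch in a few lines. This is a bare-bones illustration on a cheap stand-in function, not the PCMDI workflow, which of course runs CAM itself:

```python
import numpy as np

def morris_screen(model, num_params, num_trajectories=10, levels=4, seed=None):
    """Minimal Morris One-At-a-Time (MOAT) screening sketch.

    Walks random trajectories through the unit hypercube, perturbing one
    parameter at a time, and returns mu* (the mean absolute elementary
    effect) per parameter -- a cheap ranking of which parameters matter.
    """
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))            # standard Morris step
    effects = [[] for _ in range(num_params)]
    for _ in range(num_trajectories):
        starts = np.arange(levels - 1) / (levels - 1)  # allowed start points
        x = rng.choice(starts, size=num_params)
        y = model(x)
        for i in rng.permutation(num_params):        # move one axis at a time
            x_new = x.copy()
            x_new[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
            y_new = model(x_new)
            effects[i].append(abs(y_new - y) / delta)
            x, y = x_new, y_new
    return np.array([np.mean(e) for e in effects])   # mu* per parameter

# Stand-in for a (very) cheap climate model: parameter 1 dominates.
toy = lambda p: 10.0 * p[1] + 1.0 * p[0] + 0.1 * p[2]
print(morris_screen(toy, num_params=3, seed=0))      # roughly [1, 10, 0.1]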
Near the beginning of this, however, they noticed a flaw. They ran the code twice with exactly the same model parameters and didn't get the same answer. After a lot of digging they found a subtle round-off error in one of the processors. When that processor was taken out of service, the flaw disappeared. To give you some idea of how sensitive CAM can be, that one processor's round-off error gave a 3-degree variation in some regional surface temperatures. That's about the same size as the regional variation caused by changing, by 10%, the humidity level at which clouds form.
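If it seems surprising that one processor's rounding could matter, remember that floating-point addition isn't associative: summing the very same numbers in a different order can give a slightly different answer, and a chaotic model amplifies that seed. A quick illustration (mine, not the PCMDI diagnosis):

```python
import random

# The same list of numbers summed in two different orders -- as happens
# when a parallel reduction is laid out across different processors --
# can round differently.
random.seed(42)
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(100_000)]

forward = sum(values)
backward = sum(reversed(values))
print(forward, backward, forward - backward)  # tiny, but nonzero, difference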
Now that's one of the results of running CAM with sea surface temperatures as a fixed input. Next year they want to run CAM with a "slab-ocean" model, in which the ocean can exchange heat and water vapor with the atmosphere, but can't circulate. Then, the year after next, they want to investigate model sensitivity when CAM is coupled to an Ocean General Circulation Model.
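The slab-ocean idea is easy to sketch: treat each ocean cell as a fixed-depth layer of water that stores and releases heat but doesn't move. Here is a toy energy-balance step with illustrative constants of my own choosing - and note that the slab depth is, naturally, yet another tunable parameter:

```python
RHO_WATER = 1025.0      # seawater density, kg/m^3 (illustrative)
CP_WATER = 3990.0       # specific heat of seawater, J/(kg K) (illustrative)

def slab_ocean_step(sst, net_surface_flux_wm2, depth_m=50.0, dt_s=86400.0):
    """Advance a slab-ocean cell's temperature by one time step.

    sst                  : sea surface temperature, K
    net_surface_flux_wm2 : net heat flux into the ocean from the
                           atmosphere, W/m^2 (positive warms the slab)
    depth_m              : slab depth -- itself a tunable parameter
    """
    heat_capacity = RHO_WATER * CP_WATER * depth_m   # J/(m^2 K)
    return sst + net_surface_flux_wm2 * dt_s / heat_capacity

print(slab_ocean_step(288.0, net_surface_flux_wm2=20.0))  # one day's warming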
If you haven't seen it coming by now, then here it is: We are only just beginning to understand which are the most sensitive parameters in our Climate Models, and to plan research efforts to reduce the uncertainty with which those most sensitive parameters are known. In other words, our Climate Models are still being tuned.
While prudence would indicate that we should reduce anthropogenic carbon emissions, and geopolitics would indicate that we should reduce fossil fuel consumption, it just doesn't make sense to institute draconian measures based on work at its current level of maturity.
Nevertheless, observational data indicate that the earth is getting warmer, and that part of the change is due to anthropogenic emissions; but we don't know how much change to expect, nor what part of it will have been caused by humans. Anthropogenic Global Warming may be "settled science." But the phrase "settled science" is political jargon, not scientific.
18 October 2010
For us old timers
Back in the late 1970s and early 1980s, before graphical user interfaces (GUIs) were available, computer users had to make do with command-line interfaces (CLIs). One of the first "role playing" games was called Adventure, in which the user found him/herself in a forest and could type one- or two-word commands to navigate the game space (no pictures, just text descriptions) and collect treasure. Now Iowahawk has posted a parody called Beltway Adventure that mocks the Obama Administration. I doubt you will agree with all of it, but it is clever.
The other thing we old timers remember is the era of nuclear explosives testing. At least we think we do. But GeekOSystem has posted a video showing every human-initiated nuclear explosion except for the most recent ones by North Korea. The way we humans were popping those things off might make you think that there was at least as much politics as science motivating that show. Except, of course, for explosions number two and three, over Hiroshima and Nagasaki respectively, which ended WWII without a much more devastating invasion of Japan. Those were motivated by war, which is one of the things that happens when politics fails.