Testimony of Stephen H. Schneider
Professor, Department of Biological Sciences
July 10, 1997
Climate Change: Causes, Impacts and Uncertainties
I. Does Natural Variability Explain All Climate Change?
Twenty thousand years ago, a mere blink in geologic time, a visitor to the
now-productive Corn Belt of Illinois would not have been standing in the heart
of the world's foremost granary, but rather in open spruce parkland, where many
of the tree species were the same kinds found today 500 to 1,000
miles north in the boreal forests of Canada. Similarly, if we could somehow
have been flying over the Great Basin we would have seen the massive fossil
lakes, some stretching hundreds of miles like former Lake Bonneville in Utah,
and the now-fossil beaches (currently visible flying into Salt Lake City Airport
or over Mono Lake) from those high water stands that date back ten to fifteen
thousand years ago. The Ice Age, which at its maximum some 20,000 years ago
was about 5 degrees to 7 degrees C (around 10 degrees F) colder than our current global
climate, disappeared in, what is to nature, a relatively rapid period of about
five to ten thousand years. The average rate of temperature change from the
Ice Age to the current 10,000-year period of relative climate stability, our
so-called Holocene Interglacial, is about 1 degree C for every thousand
years. Of course there were more rapid periods embedded within this time frame,
but I'm only giving the sustained average rates.
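As a quick arithmetic check of the sustained average rate just quoted (using only the ranges already given above, no new data), a short sketch:

```python
# Rough arithmetic check of the sustained average warming rate quoted above.
# The Ice Age was ~5-7 degrees C colder, and the deglaciation took ~5,000-10,000 years.
cooling_c = (5.0, 7.0)             # degrees C colder at glacial maximum
duration_yr = (5_000.0, 10_000.0)  # years for the transition

# Fastest case: largest change over shortest time; slowest case: the reverse.
fastest = cooling_c[1] / duration_yr[0] * 1000  # deg C per 1,000 years
slowest = cooling_c[0] / duration_yr[1] * 1000

print(f"average rate: {slowest:.1f} to {fastest:.1f} deg C per 1,000 years")
```

The range brackets the "about 1 degree C per thousand years" figure in the text.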
Not only did such change correspond with radical alterations to the ecosystems
of the earth, but it has also been implicated in the extinction of what is known as
the charismatic megafauna (woolly mammoths, saber-toothed tigers, etc.). Fossil
pollen evidence tells us that the vegetation habitats during the more "rapid"
parts of the transition from ice age to interglacial around ten to twelve thousand
years ago saw what paleoclimatologists call "no analog habitats," that is, combinations
of pollen abundances which do not exist on earth today. All of this change was
natural, of course, and there are two reasons for mentioning it in our context.
First, to remind us that the climate and ecosystems change by themselves, without
need of humans (the latter is what we call anthropogenic causation), and, second,
that climate change of about several degrees on a global average basis is a
very significant change from the point of view of natural systems.
Explanations of the Ice Age vary, the most popular one being a change in the
amount of sunlight coming in between (a) winter and summer and (b) the poles
and the equator. These changes in the distribution of seasonal or latitudinal
sunshine are due to slow variations in the tilt of the earth's axis and other
orbital elements, but these astronomical variations alone cannot totally explain
the climatic cycles. If these orbital variations and other factors (such as
the increased reflectivity of the earth associated with more ice) are combined,
our best climate theories (embodied in mathematical models built from
the physical laws of conservation of mass, energy and momentum) suggest that
the Ice Age should have been several degrees warmer than it actually was --
especially in the Southern hemisphere. What could account for this extra cold?
Perhaps the models are not sensitive enough; that is, they do not respond sufficiently
to a change in so-called "radiative climate forcing" -- the change in
the amount of radiant energy coming to the earth from external factors like
orbital variations or extra ice. Another (more likely, I think) possibility
is that something else also changed at the same time.
These theories can be better reconciled with what happened between ice ages
and interglacials if one assumes that several watts of energy over every square
meter of the earth were taken away in the ice age by some other mechanism at
a global scale. But what could be such a mechanism? The obvious candidate would
be a change in the composition of the earth's atmosphere which affects both
its reflectivity and its heat trapping capacity (e.g. decreases in the well-known
greenhouse effect or increases in atmospheric dust). But what evidence is there
that greenhouse gases, for example carbon dioxide, methane, nitrous oxide, or
water vapor, had lower concentrations 20,000 years ago than in the interglacial?
About fifteen years ago that evidence came through loud and clear from the ice
caps of the world. Air trapped in these glaciers provides a library of the history
of the earth's atmosphere back some 200,000 years. It shows that during the
past two ice ages carbon dioxide concentration was about 40% less and methane
half of the average value during the current and penultimate interglacials.
It also shows that since the Industrial Revolution carbon dioxide has increased
beyond any levels experienced in the past 150,000 years (at least) by nearly
30% and methane by 150% -- two figures that virtually no knowledgeable scientist
disputes are a result of so-called anthropogenic emissions which are driven
by increasing numbers of people pursuing higher standards of living and using
technology to achieve those growth-oriented goals.
If the carbon dioxide and methane decreases in the last ice age helped to explain
the ice age coldness, can they tell us something about how the anthropogenic
increase of these gases due to human activities might cause climate change in
the future? The answer is "not directly," for it is possible that there are
other factors we have not accounted for in the ice age story that could well
have been involved, and there are still many unanswered questions associated
with the Ice Age cycles. It is simply a circumstantial bit of evidence suggesting
that the ice ages are explained more consistently if the heat-trapping
power of the greenhouse effect exists at the magnitudes currently envisioned
by most scientists -- i.e., a doubling of CO2 would raise surface temperatures
by about 3 degrees C plus or minus 1.5 degrees C. This is known as the "climate
sensitivity range." The magnitude of climate sensitivity that helps to explain
the ice age coldness best is 2-3 degrees C. If the best estimate were ten degrees
warming, which is twice the value at the high end of the climate sensitivity
range thought by the mainstream of scientists today (e.g., IPCC, 1996a), then
the ice ages should have been even colder than they were. On the other hand,
if the earth would only warm up by half a degree or less if CO2 doubled, then
it would be tougher to explain the magnitude of the ice ages without finding
some other mechanism not yet understood. Of course, the latter is possible,
but what other lines of circumstantial evidence or direct evidence do we have
for estimating climate sensitivity?
We know from quite literally thousands of laboratory experiments and direct
measurements, millions of balloon observations and trillions of satellite data
bits, that the basic structure of the energy flows in and out of the earth's
atmosphere is relatively well understood. We know that water vapor, carbon
dioxide, and methane trap enough energy on the earth to warm the surface up about
33 degrees C (60 degrees F) relative to that which would occur in their absence.
This well known natural greenhouse effect is not under dispute, and has been
known for a century and a half. Nor is the 0.5 degrees C (plus or minus 0.2
degrees C ) globally averaged warming trend at the earth's surface over the
past century in dispute. In dispute is whether a small increment since the Industrial
Revolution in this envelope of greenhouse gases, which our calculations tell
us should have trapped about two extra watts of energy over every square meter
of Earth, would produce a noticeable response (i.e. a "climate signal"). The
debate over whether that signal has been detected has been intense lately and
this intensity has been based upon significant new pieces of evidence -- albeit
each piece is circumstantial -- and a few loud, well-publicized denials that
the totality of evidence has any meaning. In the absence of clear, direct empirical
evidence, one often has to use either circumstantial evidence, or incomplete
bits of direct evidence with uncertainties attached. When the preponderance
of such evidence gets strong enough, then most scientists begin to accept, tentatively
of course, the likelihood of causal connections. Some people shed their skepticism
at different levels than others, so naturally there will be a cacophonous debate
over whether a climate signal has been detected, let alone whether it could
be attributed to human activities. One can always find some scientist who will
want a 999 out of 1,000 probability of certainty, and others who will accept
the proposition at eight or nine chances out of ten. This is not science, but
a value judgment about the acceptability of a significant, but not conclusive,
body of evidence. The scientific job is to assess (A) what can happen, and (B)
what the odds are of it happening (see, for example, this discussion in Chapter
6 of Schneider 1997a). Let me discuss this process further.
I have mentioned the ice ages since this is a "natural experiment" that we
use, not to forecast the future, but to build understanding of climate processes
and to validate the tools that we do use to forecast the future -- that is,
our climate theories embodied in mathematical models. Are there any other such
natural experiments? The answer is "yes there are many," the two most prominent
being (1) episodic volcanic eruptions which throw dust in the stratosphere that
reflects for a few years a few watts per square meter of solar energy that otherwise
would have reached the lower atmosphere and (2) the seasonal cycle. Let's consider
volcanic eruptions first. Volcanic dust veils should cool the planet. In fact,
the last major eruption, Mt. Pinatubo in 1991, was forecast to cool the earth's
lower atmosphere on the order of several tenths of a degree by a number of climate
modeling groups -- in advance of the actual data to confirm -- and indeed, that
is roughly what happened. However, it could be argued that a few tenths of a
degree cooling, or warming for that matter, might be a natural fluctuation in
the earth's climate system, and indeed, fluctuations of that magnitude are a
part of the natural background "climatic noise." How then could we distinguish
the climatic signal of the volcanic eruption from the noise of the natural variability?
In any one eruption it is difficult to do so since the signal to noise ratio
is about one, i.e. the magnitude of the cooling expected is about equal to the
magnitude of the natural fluctuations in non-volcanic years, and therefore for
any one event we cannot have very much confidence that a signal has been observed.
So the fact that the Pinatubo results showed up about as predicted doesn't,
by itself, give a lot of confidence, although as a circumstantial bit of evidence
it is quite useful. However, another volcanic eruption, El Chichón in 1982, was
also followed by several tenths of a degree of cooling, as was the case after
Mt. Agung in 1963 and Krakatoa in the Victorian period.
In other words, by compositing the results of several volcanic eruptions,
a number of scientists (including Mass and Schneider, 1977) discovered
a clear correlation
which suggests that when a few watts of energy over every square meter of the
earth is removed by volcanic dust veils in the stratosphere, the lower atmosphere
will indeed cool by a few tenths of degrees -- the very magnitude predicted
by the same computer models that we use to forecast the effects of a few watts
per square meter of sustained heating from global warming.
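The compositing logic can be illustrated with a toy numerical sketch. The signal and noise magnitudes below are illustrative stand-ins for the "few tenths of a degree" figures above, not real temperature records; the point is simply that averaging N independent eruptions shrinks the natural noise by roughly the square root of N:

```python
import random

random.seed(42)

SIGNAL = -0.3    # assumed post-eruption cooling, deg C (illustrative)
NOISE_SD = 0.3   # assumed natural interannual variability, deg C
N_ERUPTIONS = 9

# One synthetic "observed" post-eruption anomaly per eruption:
# the true cooling plus a natural fluctuation of comparable magnitude,
# i.e. a single-event signal-to-noise ratio of about one.
anomalies = [SIGNAL + random.gauss(0.0, NOISE_SD) for _ in range(N_ERUPTIONS)]

composite = sum(anomalies) / N_ERUPTIONS

# Averaging N independent events shrinks the noise by ~1/sqrt(N),
# so the composite sits much closer to the true signal than any one event.
noise_after = NOISE_SD / N_ERUPTIONS ** 0.5

print(f"single-event signal-to-noise: {abs(SIGNAL) / NOISE_SD:.1f}")
print(f"composite of {N_ERUPTIONS} eruptions: {composite:+.2f} deg C "
      f"(residual noise ~{noise_after:.2f} deg C)")
```

With nine eruptions the residual noise drops to about a third of its single-event size, which is why a composite can reveal a cooling that no individual eruption could establish.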
What other natural experiments might we have to test climate sensitivity? My
favorite is one that happens every year -- the seasons. Winter predictably follows
summer, being some fifteen degrees colder in the Northern Hemisphere and five
degrees colder than summer in the Southern Hemisphere. The reason the Southern
Hemisphere has a smaller seasonal cycle is that it has much more ocean than
land, and water has a higher heat-retaining capacity than land or air. Since
a season is not long enough for the planet to reach an equilibrium temperature
change, the more land-dominated Northern Hemisphere, with its lower heat
capacity, exhibits a larger seasonal cycle of surface temperature. How well do
the climate models do in reproducing this change? The answer is "extraordinarily
well." Although the absolute temperatures the models simulate can be off
by as much as five or six degrees in some regions of the world for some seasons,
the models' capacity to reproduce the amplitude of the seasonal cycle of surface
air temperatures, by and large, is quite good. (It is less good for other variables,
however, particularly hydrological systems.) Now, if we were making a factor
of ten error by either overestimating or underestimating the sensitivity of
the climate to radiative forcing, it would be difficult for the models to reproduce
the different seasonal cycle surface temperature amplitudes over land and oceans
as well as they do. This is another piece of circumstantial evidence suggesting
that current estimates of climate sensitivity are not off by a factor of ten,
as some "contrarians" assert. Indeed, indirect evidence like ice ages, volcanic
eruptions and the seasonal cycle simulation skills of models are prime reasons
why many of us in the scientific community have for the past twenty years expected
that "demonstrable" (e.g., see p. 11 of Schneider and Mesirow, 1976, in which
I projected just such a change) anthropogenic climate change was not unlikely
by the 21st century.
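A minimal way to see the heat-capacity argument behind the seasonal test is a one-box energy balance with sinusoidal seasonal forcing, whose periodic solution has amplitude F0/sqrt(lambda^2 + (omega*C)^2). All parameter values below are illustrative assumptions (real hemispheres mix land and ocean), but the qualitative result -- lower heat capacity, larger seasonal swing -- is the mechanism described above:

```python
import math

# Toy one-box energy balance: C * dT/dt = F0*cos(w*t) - lam*T.
# The periodic solution has amplitude F0 / sqrt(lam**2 + (w*C)**2),
# so a larger heat capacity C damps the seasonal temperature swing.
# All parameter values are illustrative assumptions, not measurements.

lam = 2.0                 # climate feedback parameter, W/m^2 per K (assumed)
F0 = 100.0                # amplitude of seasonal forcing, W/m^2 (assumed)
w = 2 * math.pi / 3.15e7  # annual frequency, rad/s
rho_cp = 4.2e6            # volumetric heat capacity of water, J/m^3/K

def seasonal_amplitude(mixed_layer_depth_m):
    """Seasonal surface-temperature amplitude for a given effective depth."""
    C = rho_cp * mixed_layer_depth_m    # areal heat capacity, J/m^2/K
    return F0 / math.hypot(lam, w * C)  # deg C

land_like = seasonal_amplitude(2.0)    # shallow effective layer (land-dominated)
ocean_like = seasonal_amplitude(75.0)  # deep ocean mixed layer

print(f"land-like amplitude:  {land_like:.1f} deg C")
print(f"ocean-like amplitude: {ocean_like:.1f} deg C")
```

Even this crude sketch produces a much larger swing over the low-heat-capacity "land" box, which is the contrast between the Northern and Southern Hemisphere seasonal cycles that the models reproduce well.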
In summary, then, in my opinion it is unlikely that natural variability is
the explanation of all climate change, especially that which has been documented
in the 20th century. However, since much of the debate over detection and attribution
of human-caused climate change hinges on the projections of climatic models,
it is necessary to have at least a cursory understanding of how they work. Although
it is impossible to treat more than the highlights of the nature and use of
climatic models in a dozen pages, I nonetheless offer the following section
in the hopes of reducing somewhat the confusion that may exist in many peoples'
minds after listening to the often acrimonious and technically complex debate
over climatic models and their credibility.
II. Overview Of Climate Modeling Fundamentals
Engineers and scientists build models -- either mathematical or physical ones
-- primarily to perform tests that are either too dangerous, too expensive,
or perhaps impossible to perform with the real thing. To simulate the climate,
a modeler needs to decide which components of the climatic system to include
and which variables to involve. For example, if we choose to simulate the long-term
sequence of glacials and interglacials (the period between successive ice ages),
our model needs to include explicitly the effects of all the important interacting
components of the climate system operating over the past million years or so.
These include the atmosphere, oceans, sea ice/glaciers (cryosphere), land surface
(including biota), land sub-surface and chemical processes (including terrestrial
and marine biogeochemical cycles), as well as the external or "boundary forcing"
conditions such as input of solar radiant energy (e.g., see IPCC, 1996a).
The problem for earth systems scientists is separating out quantitatively cause
and effect linkages from among the many factors that interact within the earth
system. It is a controversial effort because there are so many sub-systems,
so many forcings and so many interacting complex sets of processes operating
at the same time that debates about the adequacy of models often erupt.
Modeling the Climate System. So how are climate models constructed? First,
scientists look at observations of changes in temperatures, ozone levels and
so forth. This allows us to identify correlations among variables. Correlation
is not necessarily cause and effect -- just because one event tracks another
doesn't mean it was caused by it. One has to actually prove the relationship
is causal and explain how it happened. Especially for cases where unprecedented
events are being considered, a first principles, rather than a purely empirical-statistical
approach is desirable. However, observations can lead to a hypothesis of cause
and effect -- "laws" -- that can be tested (for example, see Root and Schneider,
1995). The testing is often based on simulations with mathematical models run
on a computer. The models, in turn, need to be tested against a variety of observations
-- present and paleoclimatic. That is how the scientific method is typically
applied. When a model, or set of linked models, appears plausible, it can be
fed "unprecedented" changes such as projected human global change forcings --
changes that have not happened before -- and then be asked to make projections
of future climate, ozone levels, forests, species extinction rates, etc.
The most comprehensive weather simulation models produce three dimensional
details of temperature, winds, humidity, and rainfall all over the globe. A
weather map generated by such a computer model -- known as a general circulation
model or GCM -- often looks quite realistic, but it is never faithful in every
detail. To make a weather map generated by computer we need to solve six partial
differential equations that describe the fluid motions in the atmosphere. It
sounds in principle like there's no problem: we know that those equations work
in the laboratory, we know that they describe fluid motions and energy and mass
relationships. So why, then, aren't the models perfect simulations of the atmosphere?
One answer is that the evolution of weather from some starting weather map
(known as the initial condition) is not deterministic beyond about 10 days --
even in principle. A weather event on one day cannot be said to determine an
event 20 days in the future, all those commercial "long-range" weather forecasts
notwithstanding. But the inherent unpredictability of weather details much beyond
ten days (owing to the chaotic internal dynamics of the atmosphere) doesn't
preclude accurate forecasts of long-term averages (climate rather than weather).
The seasonal cycle is absolute proof of such deterministic predictability, as
winter reliably follows summer and the cause and effect is known with certainty.
Grids and Parameterization. The other answer to the imperfection of general
circulation model simulations, even for long-term averages, is that nobody knows
how to solve those six complex mathematical equations exactly. It's not like
an algebraic equation where one can get the exact solution by a series of simple
operations. There isn't any known mathematical technique to solve such coupled,
nonlinear partial differential equations exactly. We approximate the solutions
by taking the equations, which are continuous, and breaking them down into discrete
chunks which we call grid boxes. A typical GCM grid size for a "low resolution"
model is about the size of Colorado horizontally and that of a "high resolution"
GCM is about the size of Connecticut. In the vertical dimension there are from
two (low resolution) up to about twenty (high resolution) layers, typically
spanning the lowest 10 to 40 kilometers of the atmosphere.
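A back-of-envelope sketch of what those grid sizes imply for the number of boxes a model must carry (the state areas are rounded reference values):

```python
# Back-of-envelope count of horizontal grid boxes implied by the
# "Colorado-sized" vs "Connecticut-sized" resolutions mentioned above.
EARTH_SURFACE_KM2 = 5.1e8  # total surface area of the earth, km^2
COLORADO_KM2 = 2.7e5       # ~ one low-resolution grid box
CONNECTICUT_KM2 = 1.4e4    # ~ one high-resolution grid box

low_res_boxes = EARTH_SURFACE_KM2 / COLORADO_KM2
high_res_boxes = EARTH_SURFACE_KM2 / CONNECTICUT_KM2

print(f"low resolution:  ~{low_res_boxes:,.0f} boxes per atmospheric layer")
print(f"high resolution: ~{high_res_boxes:,.0f} boxes per atmospheric layer")
```

Multiplying by the number of vertical layers, and by the many variables carried in each box, shows why resolution is limited by computing power.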
Now, we've already noted that clouds are very important to the energy balance
of the earth-atmosphere system since they reflect sunlight away and trap infrared
heat. But because none of us have ever seen a single cloud the size of Connecticut,
let alone Colorado, we have a problem of scale -- how can we treat processes
that occur in nature at a smaller scale than we can resolve with our approximation
technique of using large grid boxes? For example, we cannot calculate clouds
explicitly because individual clouds are typically the size of a dot in this
grid box. But we can put forward a few reasonable propositions on cloud physics:
if it's a humid day, for example, it's more likely to be cloudy. If the air
is rising, it's also more likely to be cloudy.
These climate models can predict the average humidity in the grid box, and whether
the air is rising or sinking on average. So then we can write what we call a
parametric representation or "parameterization" to connect large scale variables
that are resolved by the grid box (such as humidity) to unresolved small scale
processes (individual clouds). Then we get a prediction of grid box-averaged
cloudiness through this parameterization. So-called "cumulus parameterization"
is one of the important -- and controversial -- elements of GCMs that occupy
a great deal of effort in the climate modeling community. Therefore, the models
are not ignoring cloudiness, but neither are they explicitly resolving individual
clouds. Instead, modelers try to get the average effect of processes that can't
be resolved explicitly at smaller scales than the smallest resolved scale (the
grid box) in the GCM. Developing, testing and validating many such parameterizations
is the most important task of the modelers since these parameterizations determine
critically important issues like "climate sensitivity." The climate sensitivity
is the degree of response of the climate system to a unit change in some forcing
factor: typically, in our context, the change in globally-averaged surface air
temperature to a fixed doubling of the concentration of atmospheric carbon dioxide
above pre-industrial levels. This brings us to one of the most profound controversies
in earth systems science, and one of the best examples of the usefulness, and
fragility, of computer modeling.
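To make the idea concrete, here is a schematic cloud parameterization in the spirit of widely used relative-humidity schemes. The functional form and the critical-humidity threshold below are illustrative choices for this sketch, not any particular GCM's scheme:

```python
def cloud_fraction(rel_humidity, rh_crit=0.6):
    """Diagnose grid-box cloud fraction from grid-mean relative humidity.

    A schematic relative-humidity parameterization: no cloud below a
    critical humidity, full cover at saturation, smooth in between.
    rh_crit is a tunable parameter -- exactly the kind of knob that
    makes parameterizations both necessary and controversial.
    """
    if rel_humidity <= rh_crit:
        return 0.0
    if rel_humidity >= 1.0:
        return 1.0
    return 1.0 - ((1.0 - rel_humidity) / (1.0 - rh_crit)) ** 0.5

# Humid grid boxes are cloudier, as the text's rule of thumb suggests.
for rh in (0.5, 0.7, 0.9, 1.0):
    print(f"RH = {rh:.0%} -> cloud fraction {cloud_fraction(rh):.2f}")
```

The model never "sees" an individual cloud; it sees only the grid-mean humidity and returns a grid-mean cloudiness, which is precisely what a parameterization does.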
The Greenhouse Effect. If the earth only absorbed radiation from the sun without
giving an equal amount of heat back to space by some means, the planet would
continue to warm up until the oceans boiled. We know the oceans are not boiling,
and surface thermometers plus satellites have shown that the earth's temperature
remains roughly constant from year to year (the interannual globally-averaged
variability of about 0.2 C or the 0.5 C warming trend in the 20th century, notwithstanding).
This near constancy requires that about as much radiant energy leaves the planet
each year in some form as is coming in. In other words, a near-equilibrium or
energy balance has been established. The components of this energy balance are
crucial to the climate.
All bodies with a temperature above absolute zero give off radiant energy. The earth gives off a
total amount of radiant energy equivalent to that of a black body -- a fictional
structure that represents an ideal radiator -- with a temperature of roughly
-18 C (255 K). The mean global surface air temperature is about 14 C (287 K),
some 32 C warmer than the earth's black body temperature. The difference is
due to the well-established greenhouse effect.
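The quoted black body temperature follows directly from the planetary energy balance: absorbed solar radiation equals emitted infrared. With standard reference values for the solar constant and planetary albedo:

```python
# Effective (black body) temperature of the earth from the planetary
# energy balance: absorbed solar = emitted infrared, i.e.
#   (S/4) * (1 - albedo) = sigma * T_e**4
# Standard reference values; the result matches the ~255 K quoted above.
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR_CONST = 1366.0  # solar irradiance at the earth, W/m^2
ALBEDO = 0.3          # planetary reflectivity

absorbed = SOLAR_CONST / 4 * (1 - ALBEDO)  # W/m^2, averaged over the sphere
T_e = (absorbed / SIGMA) ** 0.25           # effective temperature, K

print(f"absorbed solar flux: {absorbed:.0f} W/m^2")
print(f"black body temperature: {T_e:.0f} K ({T_e - 273.15:.0f} C)")
```

The division by 4 converts the solar constant, intercepted on the earth's cross-sectional disk, into an average over the whole spherical surface.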
The term greenhouse effect arises from the classic analogy to a greenhouse,
in which the glass allows the solar radiation in and traps much of the heat
inside. However, the mechanisms are different, for in a greenhouse the glass
primarily prevents convection currents of air from taking heat away from the
interior. Greenhouse glass does not primarily keep the enclosure warm by
blocking or re-radiating infrared radiation; rather, it constrains the
physical transport of heat by air motion.
Although most of the earth's surface and thick clouds are reasonably close
approximations to a black body, the atmospheric gases are not. When the nearly
black body radiation emitted by the earth's surface travels upward into the
atmosphere, it encounters air molecules and aerosol particles. Water vapor,
carbon dioxide, methane, nitrous oxide, ozone, and many other trace gases in
the earth's gaseous envelope tend to be highly selective -- but often highly
effective -- absorbers of terrestrial infrared radiation. Furthermore, clouds
(except for thin cirrus) absorb nearly all the infrared radiation that hits
them, and then they reradiate energy almost like a black body at the temperature
of the cloud surface -- colder than the earth's surface most of the time.
The atmosphere is more opaque to terrestrial infrared radiation than it is
to incoming solar radiation, simply because the physical properties of atmospheric
molecules, cloud and dust particles tend on average to be more transparent to
solar radiation wavelengths than to terrestrial radiation. These properties
create the large surface heating that characterizes the greenhouse effect, by
means of which the atmosphere allows a considerable fraction of solar radiation
to penetrate to the earth's surface and then traps (more precisely, intercepts
and re-radiates) much of the upward terrestrial infrared radiation from the
surface and lower atmosphere. The downward re-radiation further enhances surface
warming and is the prime process causing the greenhouse effect.
This is not a speculative theory, but a well understood and validated phenomenon
of nature. The most important greenhouse gas is water vapor, since it absorbs
terrestrial radiation over most of the infrared spectrum. Even though humans
are not altering the average amount of water vapor in the atmosphere very much
by direct injections of this gas, increases in other greenhouse gases which
warm the surface cause an increase in evaporation which increases atmospheric
water vapor concentrations, leading to an amplifying or "positive" feedback
process known as the "water vapor-surface temperature-greenhouse feedback."
The latter is believed responsible for the bulk of the climate sensitivity (IPCC,
1996a). Carbon dioxide is another major greenhouse gas. Although it absorbs
and re-emits considerably less infrared radiation than water vapor, CO2 is of
intense interest because its concentration is increasing due to human activities.
Ozone, nitrogen oxides, some hydrocarbons, and even some artificial compounds
like chlorofluorocarbons are also greenhouse gases. The extent to which they
are important to climate depends upon their atmospheric concentrations, the
rates of change of those concentrations, and their effects on depletion of
stratospheric ozone -- which, in turn, can indirectly modify the radiative
forcing of the lower atmosphere and thus change climate, currently offsetting
a considerable fraction of the otherwise expected greenhouse warming signal.
The earth's temperature, then, is primarily determined by the planetary radiation
balance, through which the absorbed portion of the incoming solar radiation
is nearly exactly balanced over a year's time by the outgoing terrestrial infrared
radiation emitted by the climatic system to space. As both of these quantities
are determined by the properties of the atmosphere and the earth's surface,
major climate theories that address changes in those properties have been constructed.
Many of these remain plausible hypotheses of climatic change. Certainly the
natural greenhouse effect is established beyond a reasonable scientific doubt,
accounting for natural warming that has allowed the coevolution of climate and
life to proceed to this point (e.g., see Schneider and Londer, 1984). The extent
to which human augmentation of the natural greenhouse effect (i.e., global warming)
will prove serious is, of course, the current debate.
Model Validation. There are many types of parameterizations of processes that
occur at a smaller scale than our models can resolve, and scientists debate
which type is best. In effect, the question is whether they are an accurate
representation of the large-scale consequences of processes that occur on
smaller scales than we can explicitly treat. These include cloudiness,
radiative energy transport, turbulent convection,
evapotranspiration, oceanic mixing processes, chemical processes, ecosystem
processes, sea ice dynamics, precipitation, mountain effects and surface winds.
In forecasting climatic change, then, validation of the model becomes important.
In fact, we cannot easily know in principle whether these parameterizations
are "good enough." We have to test them in a laboratory. That's where the study
of paleoclimates has proved so valuable (e.g., Hoffert and Covey, 1992). We
also can test parameterizations by undertaking detailed small-scale field or
modeling studies aimed at understanding the high resolution details of some
parameterized process the large-scale model has told us is important. The Second
Assessment Report of IPCC (IPCC, 1996a) Working Group I devoted more than one
chapter to the issue of validation of climatic models, concluding that "the
most powerful tools available with which to assess future climate are coupled
climate models, which include three-dimensional representations of the atmosphere,
ocean, cryosphere and land surface. Coupled climate modeling has developed rapidly
since 1990, and current models are now able to simulate many aspects of the
observed climate with a useful level of skill. [For example, as noted earlier,
good skill is found in simulating the very large annual cycle of surface temperatures
in Northern and Southern Hemispheres or the cooling of the lower atmosphere
following the injection of massive amounts of dust into the stratosphere after
explosive volcanic eruptions such as Mt. Pinatubo in the Philippines in 1991.]
Coupled model simulations are most accurate at large spatial scales (e.g., hemispheric
or continental); at regional scales skill is lower." [The sentence in square
brackets within the quotation is my addition.]
One difficulty with coupled models is known as "flux adjustment"-- a technique
for accounting for local oceanic heat transport processes that are not well
simulated in some models. Adding this element of empirical-statistical "tuning"
to models that strive to be based as much as possible on first principles has
been controversial. However, not all models use flux adjustments, yet nearly
all models, with or without this technique, produce climate sensitivities within
or near to the standard IPCC range of 1.5 to 4.5 C. Flux adjustments do, however,
have a large influence on regional climatic projections, even if they prove
not to be a major impact on globally-averaged climate sensitivity. Improving
coupled models is thus a high priority for climate researchers since it is precisely
such regional projections that are so critical to the assessment of climatic
impacts on environment and society (e.g., IPCC, 1996b; IPCC, 1997).
Transient versus Equilibrium Simulations. One final issue needs to be addressed
in the context of coupled climate simulations. Until recently, climate modeling
groups did not have access to sufficient computing power to routinely calculate
time evolving runs of climatic change given several alternative future histories
of greenhouse gases and aerosol concentrations. That is, they did not perform
so-called transient climate change scenarios. (Of course, the real Earth is
undergoing a transient experiment.) Rather, the models typically were asked
to estimate how the Earth's climate would eventually be altered (i.e., in equilibrium)
after CO2 was artificially doubled and held fixed indefinitely rather than increased
incrementally over time as it has in reality or in more realistic transient
model scenarios. The equilibrium climate sensitivity has remained fairly constant
for over twenty years of assessments by various national and international groups,
with the assessment teams repeatedly suggesting that, were CO2 to double, climate
would eventually warm at the surface somewhere between 1.5 and 4.5 C. (Later
on we will address the issue of the probability that warming above or below
this range might occur, and how probabilities can even be assigned to this sensitivity.)
Transient model simulations exhibit less immediate warming than equilibrium
simulations because of the high heat-holding capacity of the thermally massive
oceans. However, that unrealized warming eventually expresses itself decades
to centuries later. This thermal delay, which can lull us into underestimating
the long-term amount of climate change, is now being accounted for by coupling
models of the atmosphere to models of the oceans, ice, soils, and biosphere
(so-called earth system models -- ESMs). Early generations of such transient
calculations with ESMs give much better agreement with observed climate changes
on Earth than previous calculations in which equilibrium responses to CO2 doubling
were the prime simulations available. When the transient models at the Hadley
Center in the United Kingdom and the Max Planck Institute in Hamburg, Germany
were also driven by both greenhouse gases (which heat) and sulfate aerosols
(which cool), these time evolving simulations yielded much more realistic "fingerprints"
of human effects on climate (e.g., Chapter 8 of IPCC, 1996a). More such computer
simulations are needed to provide high confidence levels in the models, but
scientists using coupled, transient simulations are now beginning to express
growing confidence that current projections are plausible.
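The essence of this thermal delay can be sketched with a highly simplified zero-dimensional energy-balance model. The feedback parameter and effective ocean heat capacity below are merely illustrative round numbers, not values taken from the coupled models discussed:

```python
# Minimal zero-dimensional energy-balance model illustrating ocean thermal lag:
#   C * dT/dt = F - lambda * T
# All parameter values are illustrative assumptions, not outputs of the
# coupled models discussed in the text.

LAMBDA = 1.2        # climate feedback parameter, W/m^2 per C (assumed)
HEAT_CAP = 3.2e8    # effective ocean heat capacity, J/m^2 per C (assumed)
F_2X = 3.7          # radiative forcing from a CO2 doubling, W/m^2 (assumed)
SECONDS_PER_YEAR = 3.15e7

def transient_warming(years, dt_years=0.1):
    """Euler-integrate the model with CO2 forcing switched on at t=0."""
    temp = 0.0
    dt = dt_years * SECONDS_PER_YEAR
    for _ in range(int(years / dt_years)):
        temp += dt * (F_2X - LAMBDA * temp) / HEAT_CAP
    return temp

equilibrium = F_2X / LAMBDA       # eventual warming once the ocean catches up
after_20 = transient_warming(20)  # warming actually realized after 20 years
print(f"equilibrium warming: {equilibrium:.2f} C")
print(f"after 20 years:      {after_20:.2f} C")
```

The sketch shows the familiar exponential approach to equilibrium: after two decades only part of the eventual warming has been realized, with the remainder still "in the pipeline."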
Transients and Surprises. However, a very complicated coupled system like
an ESM is likely to have unanticipated results when forced to change very rapidly
by external disturbances like CO2 and aerosols. Indeed, some of the transient
models run out for hundreds of years exhibit dramatic change to the basic climate
state (e.g., radical change in global ocean currents). Thompson and Schneider
(1982) used very simplified transient models to investigate the question of
whether the time evolving patterns of climate change might depend on the rate
at which CO2 concentrations increased. For slowly increasing CO2 buildup scenarios,
the model predicted the standard model outcome: the temperature at the poles
warmed more than the tropics.
Any changes in equator-to-pole temperature difference help to create altered
regional climates, since temperature differences over space influence large-scale
atmospheric wind patterns. However, for very rapid increases in CO2 concentrations
a reversal of the equator-to-pole difference occurred. If sustained over time,
this would imply difficult-to-forecast transient climatic conditions during
the century or so in which the climate adjusts toward its new equilibrium state. In other
words, the harder and faster the enormously complex earth system is forced to
change, the higher the likelihood for unanticipated responses. Or, in a phrase,
the faster and harder we push on nature, the greater the chances for surprises
-- some of which are likely to be nasty.
Noting this possibility, the Summary for Policymakers of IPCC Working Group
I concluded with the following paragraph:
Future unexpected, large and rapid climate system changes (as have occurred
in the past) are, by their nature, difficult to predict. This implies that future
climate changes may also involve "surprises." In particular these arise from
the non-linear nature of the climate system. When rapidly forced, non-linear
systems are especially subject to unexpected behavior. Progress can be made
by investigating non-linear processes and sub-components of the climatic system.
Examples of such non-linear behavior include rapid circulation changes in the
North Atlantic and feedbacks associated with terrestrial ecosystem changes.
Of course, if the Earth system were somehow less "rapidly forced" by virtue
of policies designed to slow down the rate at which human activities modify
the land surfaces and atmospheric composition, this would lower the likelihood
of non-linear surprises. Whether the risks of such surprises justify investments
in abatement activities is the question that Integrated Assessment (IA) activities
are designed to inform (IPCC, 1996c). The range of potential climatic changes,
along with estimates of their probabilities, is the kind of
information IA modelers need from earth systems scientists in order
to perform IA simulations. We turn next, therefore, to a discussion of methods
to evaluate the subjective probability distributions of scientists on one important
climate change issue, the climate sensitivity.
Subjective Probability Estimation. Finally, what does define a scientific consensus?
Morgan and Keith (1995) and Nordhaus (1994) are two attempts by non-climate
scientists, who are interested in the policy implications of climate science,
to tap the knowledgeable opinions of what they believe to be representative
groups of scientists from physical, biological and social sciences on two separate
questions: first, the climate science itself and, second, impact assessment and
policy. Their sample surveys show that although there is a wide divergence of
opinion, nearly all scientists assign some probability to negligible outcomes
and some probability to very serious outcomes, with one or two exceptions,
like Richard Lindzen at MIT (who is scientist number 5 on Fig. 1 of Morgan and Keith).
In the Morgan and Keith study, each of the 16 scientists listed in Table 1
was put through a several-hour, formal decision-analytic elicitation of their
subjective probability estimates for a number of factors. Figure 1 shows the
elicitation results for the important climate sensitivity factor. Note that
15 out of 16 scientists surveyed (including several IPCC Working Group I Lead
Authors -- I am scientist 9) assigned something like a 10% subjective likelihood
of negligible (less than 1 C) climatic change from doubling of CO2. These scientists
also typically assigned a 10% probability for extremely large climatic changes
-- greater than 5 C, roughly equivalent to the temperature difference experienced
between a glacial and interglacial age, but occurring some hundred times more
rapidly. In addition to these low probabilities assigned to the mild and catastrophic
outcomes, most of the scientists interviewed (with the one exception) assigned
the bulk of their subjective cumulative probability distributions in the center
of the IPCC range for climate sensitivity. What is most striking about the exception,
scientist 5, is the lack of variance in his estimates -- suggesting a very high
confidence level in this scientist's mind that he understands how all the complex
interactions within the earth-system described above will work. None of the
other scientists displayed that confidence, nor did the Lead Authors of IPCC.
However, several scientists interviewed by Morgan and Keith expressed concern
for "surprise" scenarios -- for example, scientists 2 and 4 explicitly display
this possibility on Figure 1, whereas several other scientists implicitly allow
for both positive and negative surprises since they assigned a considerable
amount of their cumulative subjective probabilities for climate sensitivity
outside of the standard 1.5 to 4.5 C range. This concern for surprises is consistent
with the concluding paragraph of the IPCC Working Group I Summary for Policymakers.
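Elicitations of this kind are typically recorded as a handful of subjective quantile points per expert, from which a cumulative distribution function can be interpolated. The sketch below illustrates the bookkeeping with invented quantile values; they are not the actual Morgan and Keith results:

```python
# Working with elicited subjective quantiles for climate sensitivity.
# The (probability, value) points below are invented for illustration;
# they are NOT the actual Morgan and Keith (1995) elicitation results.

import bisect

expert_quantiles = {
    "expert_A": [(0.05, 0.5), (0.50, 2.5), (0.95, 5.0)],
    "expert_B": [(0.05, 1.0), (0.50, 3.0), (0.95, 6.5)],
}

def cdf(points, x):
    """Piecewise-linear CDF through the elicited points; the tails outside
    the elicited range are simply clamped to 0 and 1 in this sketch."""
    vals = [v for _, v in points]
    probs = [p for p, _ in points]
    if x <= vals[0]:
        return 0.0
    if x >= vals[-1]:
        return 1.0
    i = bisect.bisect_left(vals, x)
    # linear interpolation between the bracketing elicited points
    frac = (x - vals[i - 1]) / (vals[i] - vals[i - 1])
    return probs[i - 1] + frac * (probs[i] - probs[i - 1])

for name, pts in expert_quantiles.items():
    p_mild = cdf(pts, 1.0)          # subjective P(sensitivity < 1 C)
    p_large = 1.0 - cdf(pts, 4.5)   # subjective P(sensitivity > 4.5 C)
    print(f"{name}: P(<1 C) = {p_mild:.2f}, P(>4.5 C) = {p_large:.2f}")
```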
IPCC Lead Authors, who wrote the Working Group I Second Assessment Report,
were fully aware of both the wide range of possible outcomes and the broad distributions
of attendant subjective probabilities. After a number of sentences highlighting
such uncertainties, the Report concluded: "nevertheless, the balance of evidence
suggests that there is a discernible human influence on the climate." The reasons
for this now-famous subjective judgment were many, such as the kinds of factors
listed above. These include a well validated theoretical case for the greenhouse
effect, validation tests of both model parameterizations and performance against
present and paleoclimatic data, and the growing "fingerprint" evidence that
the horizontal and vertical patterns of climate change predicted to occur
in coupled atmosphere-ocean models have been increasingly evident in observations
over the past several decades. Clearly, more research is needed, but enough
is already known to warrant assessments of the possible impacts of such projected
climatic changes and the relative merits of alternative actions to mitigate
emissions and/or make adaptation less costly. That is the ongoing task of integrated
assessment analysts, a task that will become increasingly critical in the next
century. To accomplish this task, it is important to recognize what is well
established in climate theory and modeling and to separate this from aspects
that are more speculative. That is precisely what IPCC (1996a) has attempted to do.
III. Assessing The Impacts Of Climatic Change Projections
One of the most dramatic of the standard "impacts" of climatic warming projections
is the increase in sea level typically associated with warmer climatic conditions.
An EPA study used an unusual approach: combining climatic models with the subjective
opinions of many scientists on the values of uncertain elements in the models
to help bracket the uncertainties inherent in this issue. Titus and Narayanan
(1996) -- including teams of experts of all persuasions on the issue -- calculated
the final product of their impact assessment as a statistical distribution of
future sea level rise, ranging from slightly negative values (i.e., a sea level
drop) as a low probability outcome, to a meter or more rise, also with a low
probability (see Fig 2). The midpoint of the probability distribution is something
like a half-meter sea level rise by the end of the next century.
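The approach of propagating subjective parameter opinions through a model can be sketched as a simple Monte Carlo calculation. The distributions below are invented placeholders, not the EPA study's actual inputs:

```python
# Monte Carlo sketch of the Titus-and-Narayanan-style approach: propagate
# subjective parameter uncertainty into a distribution of sea level rise.
# Both triangular distributions below are invented for illustration; they
# are NOT the EPA study's elicited inputs.

import random
import statistics

random.seed(0)

def one_scenario():
    # Warming by 2100 (C): triangular subjective distribution (assumed).
    warming = random.triangular(0.5, 4.5, 2.0)
    # Sea level response in cm per degree C (assumed; can even be negative).
    cm_per_degree = random.triangular(-5.0, 40.0, 20.0)
    return warming * cm_per_degree / 100.0   # rise in meters

samples = sorted(one_scenario() for _ in range(20000))
median = statistics.median(samples)
p05 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(f"5th pct: {p05:+.2f} m, median: {median:+.2f} m, 95th pct: {p95:+.2f} m")
```

Even this toy version reproduces the qualitative shape of the result: a small probability of slight sea level drop, a small probability of a very large rise, and a bulk of outcomes in between.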
Since the EPA analysis stopped there, this is by no means a complete assessment.
In order to take integrated assessment to its logical conclusion, we need to
ask what the economic costs of various control strategies might be and how the
costs of abatement compare to the economic or environmental losses (i.e. impacts
or damages as they are called) from sea level rises. That means putting a value
-- a dollar value of course -- on climate change, coastal wetlands, fisheries,
environmental refugees, etc. Hadi Dowlatabadi at Carnegie Mellon University
leads a team of integrated assessors who, like Titus, combined a wide range
of scenarios of climatic changes and impacts but, unlike the EPA studies, added
a wide range of abatement cost estimates into the mix. Their integrated assessment
was presented in statistical form as a probability that investments in CO2 emissions
controls would either cost more than the losses from averted climate change
or the reverse (e.g., Morgan and Dowlatabadi, 1996). Since their results do
not include estimates for all conceivable costs (e.g., the political consequences
of persons displaced from coastal flooding), the Carnegie Mellon group offered
its results only as illustrative of the capability of integrated assessment
techniques. Its numerical results have meaning only after the range of physical,
biological and social outcomes and their costs and benefits have been quantified
-- a Herculean task. Similar studies have been made in Holland by a Dutch government
effort to produce integrated assessments for policy makers. Jan Rotmans, who
heads one of their efforts, likes to point out that such modeling of complex
physical, biological and social factors cannot produce credible "answers" to
current policy dilemmas, but can provide "insights" to policy makers that will
put decision-making on a firmer factual basis (Rotmans and van Asselt, 1996).
Understanding the strengths and weaknesses of any complex analytic tool is essential
to rational policy making, even if quantifying the costs and benefits of specific
activities is controversial.
William Nordhaus, an economist from Yale University, has made heroic steps
to put the climatic change policy debate into an optimizing framework. He is
an economist who has long acknowledged that an efficient economy must internalize
externalities (in other words, find the full social costs of our activities,
not just the direct cost reflected in conventional "free market" prices). He
tried to quantify this external damage from climate change and then tried to
balance it against the costs to the global economy of policies designed to reduce
CO2 emissions. His optimized solution was a carbon tax, designed to internalize
the externality of damage to the climate by increasing the price of fuels in
proportion to how much carbon they emit, thereby providing an incentive for
society to use less of these fuels.
Nordhaus (1992) imposed carbon tax scenarios ranging from a few dollars per
ton to hundreds of dollars per ton -- the latter of which would effectively eliminate
coal from the world economy. He showed, in the context of his model and
its assumptions, that these carbon emission fees would cost the world economy
anywhere from less than 1 percent annual loss in Gross National Product to a
several percent loss by the year 2100. The efficient, optimized solution from
classical economic cost-benefit analysis is that carbon taxes should be levied
just high enough that their cost to GNP is balanced by the value of the averted
climate change (i.e., the damage to GNP from climate change). He assumed that the impacts of
climate change were equivalent to a loss of about one percent of GNP. This led
to an "optimized" initial carbon tax of about five dollars or so per ton of
carbon dioxide emitted. In the context of his modeling exercise, this would
avert only a few tenths of a degree of global warming to the year 2100, a very
small fraction of the 4 C warming his model projected.
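The logic of such an optimizing framework can be sketched with a toy cost-benefit calculation. The functional forms and coefficients are invented for illustration; they are not taken from Nordhaus (1992):

```python
# Toy cost-benefit sketch in the spirit of Nordhaus's optimizing framework.
# Functional forms and coefficients are invented for illustration only.

def abatement_cost(a):
    """Cost of abating fraction `a` of emissions, as % of GNP (assumed convex)."""
    return 2.0 * a ** 2

def climate_damage(a):
    """Residual climate damage, as % of GNP; ~1% with no abatement (assumed)."""
    return 1.0 * (1.0 - a)

# Grid search for the abatement fraction minimizing total social cost.
best_a = min((i / 1000 for i in range(1001)),
             key=lambda a: abatement_cost(a) + climate_damage(a))
print(f"optimal abatement fraction: {best_a:.2f}")
```

With a convex abatement-cost curve and modest assumed damages, the "optimum" lands at a small abatement fraction -- illustrating why a low damage estimate drives a low optimal carbon tax and only modest averted warming.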
How did Nordhaus arrive at climate damage being about 1 percent of GNP? He
assumed that agriculture was the most vulnerable economic market sector to climate
change. For decades agronomists had calculated potential changes to crop yields
from various climate change scenarios, suggesting some regions now too hot would
sustain heavy losses from warming whereas others, now too cold, could gain.
Noting that the US lost about one third of its agricultural economy in the
heat waves of 1988, and that agriculture then represented about 3% of the US
GNP, Nordhaus felt the typically-projected climatic changes might thus cost
the U.S. economy something like 1% annually in the 21st century. This figure
was severely criticized because it neglected damages from health impacts (e.g.,
expanded areas of tropical diseases, heat-stress deaths, etc.), losses from
coastal flooding or severe storms, security risks from boat people created from
coastal disruptions in South Asia or any damages to wildlife, fisheries or ecosystems
that would almost surely accompany temperature rises at rates of degrees per
century as are typically projected. It also was criticized because his estimate
neglected potential increases in crop or forestry yields from the direct effects
of increased CO2 in the air on the photosynthetic response of these marketable
plants. Nordhaus responded to his critics by conducting a survey, similar to
that undertaken by Morgan and Keith, but this time focused on the impacts of
several scenarios of climatic change on world economic product -- including
both standard market sector categories (e.g., forestry, agriculture, heating
and cooling demands) and so-called non-market amenities like biological conservation
and national security.
When Nordhaus surveyed the opinions of mainstream economists, environmental
economists and natural scientists (I am respondent #10, in Nordhaus, 1994),
he found that the former expressed a factor of twenty less anxiety about the
economic or environmental consequences of climate change than the latter (see
Fig. 3 -- Scenario A is for 3 C warming by 2100 A.D. and Scenario C for 6 C by
2100 A.D.). However, the bulk of even the conservative group of economists Nordhaus
surveyed considered there to be at least a ten percent probability that typically
projected climate changes could still cause economic damages worth several percent
of gross world product (the current US GNP is around five trillion dollars --
about twenty percent of the global figure). And, some of these economists didn't
include estimates for possible costs of "non-market" damages (e.g., harm to
nature). One ecologist who did explicitly factor in non-market values for natural
systems went so far as to assign a ten percent chance of a hundred percent loss
of GNP -- the virtual end of civilization! While Nordhaus quipped that those
who know most about the economy are less concerned, I countered with the obvious
observation that those who know the most about nature are very concerned.
We will not easily resolve the paradigm gulf between the optimistic and pessimistic
views of these specialists with different training, traditions and world views,
but the one thing that is clear from both the Morgan and Keith and Nordhaus
studies is that the vast bulk of knowledgeable experts from a variety of fields
admits to a wide range of plausible outcomes in the area of global environmental
change -- including both mild and catastrophic eventualities -- under their
broad umbrella of possibilities. This is a condition ripe for misinterpretation
by those who are unfamiliar with the wide range of probabilities most scientists
attach to global change issues. The wide range of probabilities follows from
recognition of the many uncertainties in data and assumptions still inherent
in earth systems models, climatic impact models, economic models or their synthesis
via integrated assessment models (see Schneider, 1997a,b). It is necessary in
a highly interdisciplinary enterprise like the integrated assessment of global
change problems that a wide range of possible outcomes be included, along with
a representative sample of the subjective probabilities that knowledgeable assessment
groups like the IPCC believe accompany each of those possible outcomes. In essence,
the "bottom line" of estimating climatic impacts is that both "the end of the
world" and "it is good for business" are the two lowest probability outcomes,
and that the vast bulk of knowledgeable scientists and economists consider there
to be a significant chance of climatic damage to both natural and social systems.
Under these conditions -- and the unlikelihood that research will soon eliminate
the large uncertainties that still persist -- it is not surprising that most
formal climatic impact assessments have called for cautious, but positive steps
both to slow down the rate at which humans modify the climatic system and to
make natural and social systems more resilient to whatever changes do eventually
occur.
IV. Policy Implications
What Are Some Actions to Consider? Decision making, of course, is a value judgment
about how to take risks -- gambling, if you will -- in the environment-development
arena. Despite the often bewildering complexity, comprehending these value choices does not
require a Ph.D. in statistics, political science or geography. Rather, citizens
need the terms of the debate explained in common metaphors and everyday
language that ordinary people can understand.
The more the citizens of this planet become aware of the various tradeoffs involved
in trying to choose between business-as-usual activities and sustainable environmental
stewardship, the better will be the chances that the risk-averse common sense
of the "average" person may be thrust into the decision-making process by a
public that cares about its future and that of its planet, and knows enough
not to be fooled by simple solutions packaged in slick commercials or editorials
by any special interest.
What are the kinds of actions that can be considered to deal with global change
problems like climate change? The following list is a consensus from a multi-disciplinary,
business, university and government assessment conducted by the
National Research Council in 1991. It is encouraging that this multi-discipline,
ideologically diverse group (including economist Nordhaus, industrialist Frosch
and climatologist Schneider) could agree that the United States, for example,
could reduce or offset its greenhouse gas emissions by between 10 and 40 percent
of 1990 levels at low cost, or at some net savings, if proper policies are implemented.
Here is the Council's entire suggested list:
(1) Continue the aggressive phaseout of CFC and other halocarbon emissions
and the development of substitutes that minimize or eliminate greenhouse gas
emissions.
(2) Study in detail the "full social cost pricing" of energy, with a goal of
gradually introducing such a system. On the basis of the principle that the
polluter should pay, pricing of energy production and use should reflect the
full costs of the associated environmental problems.
(3) Reduce the emissions of greenhouse gases during energy use and consumption
by enhancing conservation and efficiency.
(4) Make greenhouse warming a key factor in planning for our future energy
supply mix. The United States should adopt a systems approach that considers
the interactions among supply, conversion, end use, and external effects in
improving the economics and performance of the overall energy system.
(5) Reduce global deforestation.
(6) Explore a moderate domestic reforestation program and support international
reforestation efforts.
(7) Maintain basic, applied, and experimental agricultural research to help
farmers and commerce adapt to climate change and thus ensure ample food.
(8) Make water supply more robust by coping with present variability by increasing
efficiency of use through water markets and by better management of present
systems of supply.
(9) Plan margins of safety for long-lived structures to take into consideration
possible climate change.
(10) Move to slow present losses in biodiversity.
(11) Undertake research and development projects to improve our understanding
of both the potential of geoengineering options to offset global warming and
their possible side-effects. This is not a recommendation that geoengineering
options be undertaken at this time, but rather that we learn more about their
likely advantages and disadvantages.
(12) Control of population growth has the potential to make a major contribution
to raising living standards and to easing environmental problems like greenhouse
warming. The United States should resume full participation in international
programs to slow population growth and should contribute its share to their
financial and other support.
(13) The United States should participate fully with officials at an appropriate
level in international agreements and in programs to address greenhouse warming,
including diplomatic conventions and research and development efforts.
This NRC (1991) assessment produced a remarkable list, considering the diversity
of the participants' backgrounds and their varying ideological perspectives.
But in the crucible of open debate that permeated that assessment activity,
self-interest polemics and media grandstanding are incinerated. This group didn't
assert that catastrophe was inevitable, nor that it was improbable. We simply
believed that prudence dictates that "despite the great uncertainties, greenhouse
warming is a potential threat sufficient to justify action now."
Integrated assessments of the policy options offered by the National Research
Council Report are actively being pursued with a variety of models.
It is interesting that this comprehensive list of 13 recommendations from the
National Research Council report still ignored two fundamental aspects: the
desperate need for (1) an intelligent, non-polemical public debate about global
change and (2) interdisciplinary public education that also teaches students
about whole systems and long-term risk management, not only traditional areas
of isolated specialization.
Environment and (or versus) Development? While the NRC report did acknowledge
the importance of international dimensions of global change policy making, it
was still largely a developed country perspective. Developing countries often
have very different perspectives. First of all, LDCs are struggling to raise
literacy rates, lower death rates, increase life expectancy, provide employment
for burgeoning populations and reduce local air and water pollution that pose
imminent health hazards to their citizens and environments. Protecting species
or slowing climate change are simply low on their priority lists as compared
to more mature economic powers like the OECD nations. It is ironic, even if
understandable, that LDCs put abatement of global change disturbances so low
on their priority lists despite the fact that nearly all impact assessments
suggest that it is these very countries that are most vulnerable to climatic
change.
There is a phrase in economics known as "the marginal dollar." In our context
it means that given all the complexity of interconnected physical, biological
and social systems, climate abatement may not be perceived as the best place
to invest the next available dollar so as to bring the maximum social benefit
to poor countries. I have heard many representatives of LDCs exclaim that until
poverty is corrected, preventable disease stamped out, injustice redressed and
economic equity achieved, they will invest their precious resources on these
priorities. My response has been that climatic changes can exacerbate all of
those problems they rightly wish to address, and thus we should seek to make
investments that both reduce the risks of climate change and help with economic
development (transfer of efficient technologies being a prime example). It is
a great mistake, I believe, to get trapped in the false logic of the mythical
"marginal dollar," for it is not necessary that every penny of the next available
dollar go exclusively to the highest priority problem whereas all the rest (particularly
problems with surprise potential and the possibility of irreversible damages)
must wait until priority one is fully achieved. To me, the first step is to
get that marginal dollar cashed into small change, so that many interlinked
priority problems can all be at least partially addressed. Given the large state
of uncertainty surrounding both the costs and benefits of many human and natural
events, it seems most prudent to address many issues simultaneously and to constantly
reassess which investments are working and which problems -- including global
change -- are growing more or less serious.
It takes resources to invest, of course, and since the bulk of available capital
is in developed countries, it will require international negotiations -- "planetary
bargaining" it has been called -- to balance issues of economic parity and social
justice with environmental protection. Such negotiations are underway under
U.N. auspices, and will likely take many years to work out protocols that weigh
the diverse interests and perceptions of the world's nations.
There is a lively debate among economists, technologists and environmentalists
about the most cost-effective strategies for abating carbon emissions
that can also reduce potential impacts of climatic changes to below the undefined
"dangerous" levels referred to in the Framework Convention on Climate Change
language. Most economists argue that some policy to "internalize the externality"
of potential climate damage is already appropriate, reflecting the recommendations
already published by the National Research Council in 1991. Environmentalists
usually argue that major efforts to spur immediate abatement of carbon emissions
are necessary if climatic change is to be held to less than one more degree Celsius
(warming beyond which they typically define as "dangerous"). Economists, on
the other hand, often argue that new technologies will be able to accomplish
carbon abatement more cheaply in the future as such technologies are discovered
and deployed (Wigley et al., 1996). Thus, their logic suggests that a cost-effective
time profile of abatement would be to postpone most carbon reductions until
later in the 21st century. This seemingly implacable debate will echo in Kyoto
chambers, I am sure, in December 1997.
My colleague, the Stanford University economist Lawrence Goulder, and I have
used state-of-the-art economic modeling tools to study this debate, and conclude
that the stereotypical environmentalist position (abate now) and economist
position (abate later) are actually not incompatible, but complementary! We
show (please see the Appendix in which our submitted Commentary to Nature magazine
is reproduced) that although the economist view that future abatement is likely
to be cheaper is probably correct, so too is the environmentalist argument that
current actions are urgently needed, since the technologies referred to in
economic cost-effectiveness studies won't simply invent themselves. In other
words, policy actions to help induce technological changes are needed now in
order to bring about a profile of cost-effective abatement in the decades ahead.
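The induced-technological-change argument can be illustrated with a standard learning-curve formula, in which unit abatement costs fall a fixed percentage with each doubling of cumulative deployment. The 20 percent learning rate and starting cost are assumed illustrative values, not figures from the analysis described above:

```python
# Learning-curve sketch of induced technological change: unit abatement cost
# falls as cumulative deployment grows. The 20% learning rate and the
# starting cost of 100 are assumptions for illustration only.

import math

LEARNING_RATE = 0.20                       # cost falls 20% per doubling (assumed)
EXPONENT = -math.log2(1.0 - LEARNING_RATE)

def unit_cost(cumulative, initial_cumulative=1.0, initial_cost=100.0):
    """Unit abatement cost after `cumulative` units have been deployed."""
    return initial_cost * (cumulative / initial_cumulative) ** (-EXPONENT)

# Early deployment, even if costly now, drives down costs for later decades:
print(f"cost at 1x deployment: {unit_cost(1):.1f}")
print(f"cost at 8x deployment: {unit_cost(8):.1f}")   # three doublings
```

The point of the sketch is that the cheap future abatement economists count on appears only after cumulative deployment has climbed the learning curve, which is why near-term policy action matters.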
We also address the relative economic efficiency of alternative policy instruments:
contrasting carbon taxes versus research and development subsidies. Although
we recognize the political reluctance of many to embrace any new taxes, in truth,
most economic analyses show that a fee for the use of the atmosphere (currently
a "free sewer") will reduce incentives to pollute, increase incentives to develop
and deploy less polluting technologies, and can be more economically efficient
than other policies -- particularly if some of the revenues generated by a carbon
tax were recycled back into the economy. R&D subsidies can be economically
efficient, our conventional economic analyses suggest, to the extent that current
R&D markets are already subsidized or otherwise not optimally efficient
-- a likelihood.
Therefore, it is my personal view that all parties should recognize that potential
damages to a global commons like the Earth's climate are not mere ideological
rhetoric, nor are solutions necessarily unaffordable. Moreover, "win-win" solutions
in which economic efficiency, cost-effectiveness and environmental protection
can happily co-exist are possible -- if only we put aside hardened ideological
positions.
V. Personal Observations On The Global Warming Media Debate
A very intense, too-often personal and ad hominem media debate has attended
the global warming problem in the past five years. As a participant in this
process, I can attest to the frustration one experiences in seeing a complex
scientific problem with many policy implications often trivialized into an ideological
boxing match in which polar extremes are pitted against each other and the work
of the vast bulk of the knowledgeable community is marginalized. A baffling
array of claims and counter claims appears, particularly in op-ed pieces, and
a general state of public confusion is fostered. It is my belief that this confusion
does not reflect the ordered state of knowledge, in which many aspects of the
climate change issue enjoy strong consensual views, other aspects are considered
plausible, whereas yet others are clearly (to insiders at least) highly speculative.
Public dialogue would be much richer if we all strove to separate out what is
well known from what is speculative, an effort not attempted often enough in
most public accounts of the issue. How is this best accomplished?
For twenty years the scientific community, or at least the broad cross section
of the scientific community represented by the deliberations of the National Research
Council, IPCC and other international assessment groups, has suggested that
if CO2 were to double and be held fixed, then at equilibrium (i.e. the change
in steady state after a few hundred years) the earth's temperature would warm
up some one and a half to four and a half degrees centigrade -- the uncertainty,
as noted earlier, in this climate sensitivity range largely being associated
with the well recognized processes that we treat crudely in our climate models,
mostly clouds and water vapor. The reason that very few scientists set the climate
sensitivity range above four and a half degrees or below one and a half degrees
is primarily because of natural experiments such as ice ages, volcanoes and
seasonal cycles, as well as other technical questions dealing with theory and
modeling (see IPCC 1996a for details). Nevertheless, a few have asserted, some
with very high confidence, that global warming from CO2 doubling would cause
only a few tenths of a degree C equilibrium temperature rise, and have even
argued that certain processes that they can name, but cannot demonstrate to
have global-scale effects, would be responsible for this diminished warming
(e.g. Lindzen, 1990). Such debates (e.g. see Schneider, 1990) are very difficult
for the lay public to penetrate, and even for relatively skilled but still non-professional
observers, they are hard to follow. It is for such reasons that groups like
the National Research Council or the World Meteorological Organization and the
United Nations Environment Program have convened communities of scientists holding
a spectrum of views, but all knowledgeable in the basic art, to meet together
to debate the relative merits of various lines of evidence and to provide assessments
which give the best guess as well as a judgment for the ranges of uncertainty
of a variety of climate changes, as well as their potential impacts on environment
and society and the costs of mitigation from alternative policies. Indeed, the
Intergovernmental Panel on Climate Change (IPCC 1996a, b, and c) is now the
premier such assessment activity and represents the effort of hundreds of directly
involved scientists and thousands of indirectly involved scientists, industrialists,
NGOs or policy makers who serve as reviewers and commentators.
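The equilibrium sensitivity figures above can be made concrete with a back-of-the-envelope calculation (my own illustrative sketch, not part of the testimony): using a standard simplified expression for CO2 radiative forcing, a doubling of CO2 yields roughly 3.7 watts per square meter, and scaling by a climate sensitivity anywhere in the assessed range reproduces the one-and-a-half to four-and-a-half degree spread.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic expression for CO2 radiative forcing (W/m^2)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(forcing_wm2, sensitivity_per_doubling):
    """Scale a forcing by the assumed equilibrium warming per CO2 doubling."""
    f2x = co2_forcing(2.0 * 280.0)  # forcing from doubled pre-industrial CO2
    return sensitivity_per_doubling * forcing_wm2 / f2x

f2x = co2_forcing(560.0)             # roughly 3.7 W/m^2
low = equilibrium_warming(f2x, 1.5)  # low end of the assessed range
high = equilibrium_warming(f2x, 4.5) # high end
print(round(f2x, 2), low, high)
```

Note that the entire scientific debate over clouds and water vapor is compressed into the single sensitivity parameter, which is why the assessed range spans a factor of three.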
The IPCC Peer Review Processes
Let me contrast the IPCC process with that of
some of its critics. In July of 1995 an extraordinary meeting of about six dozen
climate scientists from dozens of countries took place. It was the third installment
of a process to write the Second Assessment Report for the IPCC. This meeting,
in Asheville, North Carolina, was designed to make explicit the points of agreement
and difference among the scientists over exceedingly controversial and difficult
issues, including the signal detection and attribution chapter -- the most controversial.
Chapter 8 was controversial since new lines of evidence had been brought to
bear by three modeling groups around the world, each suggesting a much stronger
possibility that a climate change signal had been observed and that its pattern
(or fingerprint) was a much closer match to anthropogenically caused changes than
heretofore believed. Scientists are by nature a skeptical lot, and typically
submit their work for peer review before publishing. When scientists have new
ideas or new tests, as the dozen or so representing these modeling groups in
fact had, they typically write a journal article and submit it for publication.
The journals, peer reviewed of course, typically send the article out to two
or three peers, who write anonymous reviews (unless the reviewers choose to
sign them, as I, the editor of the journal Climatic Change, encourage
my reviewers to do). The authors then rewrite their article in response to the
reviewers and the editor serves as referee. The process usually goes back and
forth several times with several revised drafts of the article until a suitable
compromise is achieved among reviewers, authors and the editor.
Contrast this normal journal peer review process, in which a few people are
involved, with what happened in Asheville in 1995 at the IPCC's third workshop.
Ben Santer from Lawrence Livermore National Lab, who had assembled the results
of a number of modeling groups and was the first author of the submitted manuscript
(Santer et al, 1996) on climate signal detection and the Convening Lead Author
of Chapter 8 of the IPCC report (the controversial IPCC chapter on signal detection
and attribution), presented the results of his group's effort not to just the
half dozen Lead Authors of Chapter 8, as is typical in IPCC meetings, but to
the entire assembled scientific group at Asheville. Not only did Santer have
to explain his and his colleagues' work (many of those colleagues were there) to his
most knowledgeable peers, but also to scores of others from communities as diverse
as stratospheric ozone experts like Susan Solomon and Dan Albritton, to satellite
meteorologists like John Christy or biospheric dynamics experts such as Jerry
Melillo. Climatologists such as Tom Karl and myself were also present, along
with heads of weather services and other officials from several countries who
served on the IPCC's assessment team as members of their nations' scientific
delegations. Not everybody was equally knowledgeable in the technical
details of the debate, of course, but even these less familiar participants
served an essential role: witnesses to the process of honest, open debate.
Perhaps only twenty-five percent of those assembled had truly in-depth knowledge
of the full range of details being discussed. However, all understood the basic
scientific issues and most knew how to recognize slipshod work -- to say nothing
of fraud or a "scientific cleansing" -- when they saw it. This remarkable
session lasted for hours, was occasionally intense but always cordial, and
never turned polemical. As a result, words in Chapter 8 were changed, ideas
and concepts altered somewhat, but by and large basic conclusions were unchanged
because the vast bulk of those assembled (and no one proclaimed to the contrary)
were convinced that the carefully hedged statements the lead authors proposed
were, in fact, an accurate reflection of the state of the science based upon
all available knowledge -- including the new results. This was not only peer
review; it was peer review at ten times the normal scale. As the editor of a
peer-reviewed journal I could never hope to duplicate this process; a few
referees and I can only aim to serve the peer reviewing role half as well as
this remarkable, open process at Asheville did. Moreover, after the Asheville
meeting there were two more IPCC drafts written and reviewed by hundreds of
additional scientists, industrialists, policy makers, and NGOs from all over
the world.
Contrast this open IPCC process, then, with the harsh critics of the IPCC, who
allege "scientific cleansing" and a "herd mentality," and who first presented
their detailed technical counter arguments in such "refereed scientific literature"
as the editorial pages of the Wall Street Journal (Singer, 1996; Seitz, 1996). Some
had the temerity, although I do not understand how they could do it with a straight
face, to allege that Chapter 8's conclusions were all based upon non-peer-reviewed
work, despite the fact that the Asheville process was ten times normal peer
review, to say nothing of the hundreds of scientific reviewers of the next draft
of the IPCC report that followed. In the wake of all these reviews, minor textual
alterations needed to be made, and these were done over the course of time. The
last round of changes was made by the Convening Lead Author, Ben Santer. Some
interests subsequently alleged that these minor changes dramatically
altered the report and, with no evidence, asserted they were politically motivated
("scientific cleansing" one charged -- and launched a vicious personal attack
on one of the least political, most cautious scientists, Ben Santer). Any honest
evaluation will reveal that this irresponsible charge -- published in the unrefereed
opinion pages of a business daily -- is utterly absurd. In fact, the most famous
line in the IPCC report (that there is a "discernible" human effect on climate)
appeared as one sentence in a short paragraph that was 80% caveats! The IPCC
report essentially "drips" with caveats.
Moreover, the "discernible" line is not a radical statement, as it reflects
a lowest common denominator consensus view of the vast bulk of people exposed
to the evidence. It does not assert climate signal detection to be proven beyond
any doubt, nor do I or any other responsible scientists I know of make such
assertions. Nor can responsible scientists dismiss such evidence of human effects
as very probably wholly random -- except perhaps in the opinion pages of some
newspapers. To ignore such contrarian critics would
be inappropriate, I agree. However, to give them comparable weight in news
stories to a hundred-scientist, thousand-reviewer document, as if the small
minority of skeptical scientists somehow deserved equal weight, without
informing the readership or viewership that the contrarians represent a tiny
minority, is to mislead a public who cannot be expected to look up for themselves
the relative weights of conflicting opinions. And to publish character-assassinating
charges of "scientific cleansing" without checking the facts is simply unethical
-- at least in any system of ethics I respect.
VI. Concluding Remarks
A condensed summary of the principal conclusions I would like to draw is as
follows, beginning with the more narrowly technical issues and proceeding to
broader generalizations about impacts, uncertainties and policy choices:
Hierarchy of models. A hierarchy of models, ranging from simple zero- or one-
dimensional, highly parameterized models up to coupled three-dimensional models
that simulate the dynamics and thermodynamics of connected physical and biological
sub-systems of the earth system, is needed for climatic effects assessment.
The simpler models are more transparent -- allowing cause-and-effect processes
to be more easily traced -- and are much more tractable to construct, run and
diagnose, whereas multi-dimensional, dynamical models can provide the geographic
and temporal resolution needed for regional impact assessments and -- hopefully
-- more realistic and detailed simulations, even if at much higher costs
for construction, computation, diagnosis and interpretability. Since the real
climate system is undergoing a transient response to regionally heterogeneous
(patchy) forcings (e.g., aerosols and greenhouse gases combined, which both
vary over time and space), eventually it will be necessary to run fully-coupled
three-dimensional earth systems models in order to "hand off" their results
to a variety of regional impact assessment models. In the interim, lower resolution
"simple" climate models can be hybridized into more comprehensive models to
produce hybrid estimates of time-evolving regional patterns of climatic changes
from a variety of emissions and land use change scenarios. Such estimates may
be instructive to policy makers interested in the differential climatic impacts
of various climate forcing scenarios and/or various assumptions about the internal
dynamics of both climate and impact models.
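The simplest rung of the hierarchy just described can be sketched in a few lines (an illustration of the idea only; all parameter values are assumptions chosen for plausibility rather than taken from any particular model): a zero-dimensional, globally averaged energy-balance model stepped forward in time under a prescribed forcing ramp, showing the transient response lagging behind the equilibrium warming.

```python
# Zero-dimensional energy-balance model:  C * dT/dt = F(t) - lambda * T
# All parameter values below are illustrative assumptions.

HEAT_CAPACITY = 4.2e8     # J/m^2/K: roughly a 100 m ocean mixed layer
LAMBDA = 1.25             # W/m^2/K: feedback parameter (~3 C per CO2 doubling)
SECONDS_PER_YEAR = 3.15e7

def simulate(ramp_wm2_per_yr, years, dt_yr=0.1):
    """Euler-step the globally averaged temperature anomaly under a
    linearly increasing radiative forcing."""
    temp = 0.0
    history = []
    for step in range(int(years / dt_yr)):
        forcing = ramp_wm2_per_yr * (step * dt_yr)
        dtemp = (forcing - LAMBDA * temp) / HEAT_CAPACITY
        temp += dtemp * dt_yr * SECONDS_PER_YEAR
        history.append(temp)
    return history

traj = simulate(0.04, 100)   # forcing reaches ~4 W/m^2 after a century
equilibrium = 4.0 / LAMBDA   # warming if that final forcing were held fixed
print(round(traj[-1], 2), round(equilibrium, 2))
```

The transient value printed falls well short of the equilibrium value, which is exactly the lag that makes "hand-off" of transient, time-evolving results to impact models necessary.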
Sensitivity studies are essential. It is unlikely that all important uncertainties
in either climatic or impact models will be resolved to the satisfaction of
the bulk of the scientific community in the near future. However, this does
not imply that model results are uninformative. On the contrary, sensitivity
analyses in which various policy-driven alternative radiative forcing assumptions
are made can offer insights into the potential effectiveness of such policies
in terms of their differential climatic effects and impacts. Even though absolute
accuracy is not likely to be assured for the foreseeable future, the sensitivity
of the physical and biological sub-systems of the earth can be studied with
considerable precision via carefully planned and executed sensitivity studies
across a hierarchy of models.
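A differential sensitivity exercise of this kind can be illustrated with the closed-form transient response of a zero-dimensional energy-balance model to a linear forcing ramp (again my own sketch; the two forcing scenarios and the heat capacity are assumptions for illustration). The absolute warming varies strongly across the feedback range, but the scenarios keep their ordering and relative spacing, which is what policy-driven sensitivity analysis exploits.

```python
import math

HEAT_CAPACITY = 4.2e8      # J/m^2/K, assumed ocean mixed layer
SECONDS_PER_YEAR = 3.15e7

def ramp_response(ramp_wm2_per_yr, feedback_wm2_per_k, t_yr):
    """Closed-form transient warming for forcing F(t) = ramp * t:
    T(t) = (a / lambda) * (t - tau * (1 - exp(-t / tau))), tau = C / lambda."""
    tau_yr = HEAT_CAPACITY / feedback_wm2_per_k / SECONDS_PER_YEAR
    return (ramp_wm2_per_yr / feedback_wm2_per_k) * (
        t_yr - tau_yr * (1.0 - math.exp(-t_yr / tau_yr)))

# Two assumed policy scenarios: faster vs. slower forcing growth.
for feedback in (0.8, 1.25, 2.0):  # larger feedback = lower sensitivity
    bau = ramp_response(0.05, feedback, 100)     # "business as usual" ramp
    abated = ramp_response(0.03, feedback, 100)  # abatement ramp
    print(feedback, round(bau, 2), round(abated, 2), round(bau - abated, 2))
```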
Validation and testing are required. Although it may be impractical, if not
theoretically impossible, to validate the precise future course of climate given
the uncertainties that remain in forcings, internal dynamics and unpredictable
surprise events, many of the basic features of the coupled physical and biological
sub-systems of the earth can already be simulated to a considerable degree.
Testing models against each other when driven by the same sets of forcing scenarios,
testing the overall simulation skill of models against empirical observations,
testing model parameterizations against high resolution process models or data
sets, testing models against proxy data of paleoclimatic changes and testing
the sensitivity of models to radiative forcings of anthropogenic origin by computing
their sensitivity to natural radiative forcings (e.g., seasonal radiative forcing,
volcanic dust forcing, orbital element variation forcings etc.) comprise a necessary
set of validation-oriented exercises that all modelers should agree to perform.
Similarly, impacts models should also be subjected to an analogous set of validation
protocols if their insights are to gain a high degree of credibility.
Subjective probability assessment. In addition to standard simulation modeling
exercises in which various parameters are specified or varied over an uncertainty
range, formal decision-analytic techniques can be used to provide a more consistent
set of values for uncertain model parameters or functional relationships. The
embedding of subjective probability distributions into climatic models is just
beginning (e.g., Titus and Narayanan, 1996), but may become an important element
of integrated assessment modeling in future generations of model building (e.g.,
see the discussion of the hierarchy of integrated assessment models in Schneider,
1997b).
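A miniature version of such a subjective-probability exercise (in the spirit of Titus and Narayanan's Delphic Monte Carlo, though the lognormal distribution and every parameter here are purely illustrative assumptions) samples an expert-elicited distribution for climate sensitivity and propagates it through a simple warming calculation, yielding a distribution of outcomes rather than a single value.

```python
import math
import random

random.seed(1997)  # fixed seed so the illustration is reproducible

def sample_sensitivity():
    """Draw from an illustrative lognormal 'expert' distribution for the
    equilibrium warming per CO2 doubling (median 2.6 C; assumed shape)."""
    return random.lognormvariate(math.log(2.6), 0.35)

def equilibrium_warming(sensitivity, forcing_wm2, f2x_wm2=3.7):
    """Scale the per-doubling sensitivity to an arbitrary forcing."""
    return sensitivity * forcing_wm2 / f2x_wm2

samples = sorted(equilibrium_warming(sample_sensitivity(), 4.0)
                 for _ in range(10000))
median = samples[len(samples) // 2]
p05, p95 = samples[500], samples[9500]
print(round(p05, 2), round(median, 2), round(p95, 2))
```

The policy-relevant output is the spread, not the single best guess; Titus and Narayanan's actual analysis elicited distributions from twenty researchers and propagated them through a sea-level model.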
"Rolling reassessment." It is obvious that the projection of climatic effects
and related impacts will continue to change as the state-of-the-art in both
kinds of models improves over the next few decades. Therefore, the most flexible
management possible of a global commons like the Earth's climate seems a virtual
necessity, since the potential seriousness of the problem -- or even the perception
of that seriousness -- is virtually certain to change with new discoveries and
actual climatic and other environmental or social events. Therefore, a series
of assessments of climatic effects, related impacts, and policy options to prevent
potentially dangerous impacts will be needed periodically -- perhaps every five
years as IPCC has chosen for the repeat period of its major Assessment Reports
that treat climatic effects, impacts and policy issues as separable assessments.
It seems important that whatever policy instruments are employed (to either
mitigate anthropogenic forcings or help reduce damage from projected climatic
effects) be flexible enough to respond quickly and cost-effectively to the evolving
science that will emerge from this rolling reassessment process.
Consider surprises and irreversibility. Given the many uncertainties that still
attend most aspects of the climatic change and impacts debate, priority should
be given to those aspects which could exhibit irreversible damages (e.g.,
extinction of species whose already-shrinking habitat is further stressed by
rapid climatic changes) or for which imaginable "surprises" have been identified
(e.g., alterations to oceanic currents from rapid increases in greenhouse gases).
For these reasons, management of climatic risks needs to be considered well
in advance of more certain knowledge of climatic effects and impacts.
"Win-win" strategies. Economically efficient, cost-effective and environmentally
sustainable policies have been identified and others can be found to help induce
the kinds of technological innovations needed to reduce atmospheric emissions
in the decades ahead. Some mix of emissions "cap and trade," carbon taxes with
revenue recycling, or technology development incentives can provide "win-win"
solutions if all parties to the environment-development debate would lower the
intensity of their ideological preconceptions and work together for cost-effective
and equitable measures to protect the global commons.
References
Hoffert, M.I. and Covey, C. (1992) "Deriving global climate sensitivity from
paleoclimate reconstructions," Nature 360: 573-576.
Intergovernmental Panel on Climatic Change (IPCC), (1996a). Climate Change
1995. The Science of Climate Change: Contribution of Working Group I to the
Second Assessment Report of the Intergovernmental Panel on Climate Change. Houghton,
J.T., Meira Filho, L.G., Callander, B.A., Harris, N., Kattenberg, A., and Maskell,
K., eds. Cambridge: Cambridge University Press. 572 pp.
Intergovernmental Panel on Climatic Change (IPCC), (1996b). Climate Change
1995. Impacts, Adaptations and Mitigation of Climate Change: Scientific-Technical
Analyses. Contribution of Working Group II to the Second Assessment Report of
the Intergovernmental Panel on Climate Change. Watson, R.T., Zinyowera, M.C.,
and Moss, R.H., eds. Cambridge: Cambridge University Press. 878 pp.
Intergovernmental Panel on Climatic Change (IPCC), (1996c). Climate Change
1995. Economic and Social Dimensions of Climate Change. Contribution of Working
Group III to the Second Assessment Report of the Intergovernmental Panel on
Climate Change. Bruce, J.P., Lee, H., and Haites, E.F., eds. Cambridge: Cambridge
University Press.
Intergovernmental Panel on Climatic Change (IPCC), 1997. Workshop on Regional
Climate Change Projections for Impact Assessment, Imperial College, London,
24-26 September 1996.
Lindzen, R.S. 1990. "Some Coolness Concerning Global Warming". Bull. Amer.
Meteor. Soc. 71: 288-299.
Mass, C. and S. H. Schneider, 1977. "Influence of sunspots and volcanic dust
on long- term temperature records inferred by statistical investigations". J.
Atmos. Sci. 34 (12): 1995-2004.
Morgan, M.G. and H. Dowlatabadi. 1996. "Learning from Integrated Assessment
of Climate Change". Climatic Change 34 (3-4): 337-368.
Morgan, M.G. and D.W. Keith. 1995. "Subjective judgments by climate experts".
Environmental Science and Technology 29: 468A-476A.
National Research Council. 1991. Policy Implications of Greenhouse Warming.
National Academy of Sciences; Washington, D.C.
Nordhaus, W.D. 1992. "An Optimal Transition Path for Controlling Greenhouse
Gases", Science 258: 1315-1319.
Nordhaus, W.D. Jan-Feb 1994. "Expert opinion on climate change", American Scientist
Root, T. L. and S. H. Schneider. 1995. "Ecology and climate: research strategies
and implications", Science 269: 331-341.
Rotmans J. and van Asselt, M. 1996. "Integrated assessment: a growing child
on its way to maturity - an editorial". Climatic Change 34 (3-4): 327-336.
Santer, B.D., K.E. Taylor, T.M.L. Wigley, T.C. Johns, P.D. Jones, D.J. Karoly,
J.F.B. Mitchell, A.H. Oort, J.E. Penner, V. Ramaswamy, M.D. Schwarzkopf, R.J.
Stouffer, and S. Tett. 1996. "A search for human influences on the thermal structure
of the atmosphere." Nature 382: 39-46.
Schneider, S.H. 1990. Global Warming: Are We Entering the Greenhouse Century?
Vintage Books, New York, NY. 343 pages.
Schneider, S.H. 1997a. Laboratory Earth: The Planetary Gamble We Can't Afford
to Lose. Basic Books: New York.
Schneider, S.H. 1997b. "Integrated assessment modelling of global climate change:
Transparent rational tool for policy making or opaque screen hiding value-laden
assumptions?" Environmental Modelling and Assessment (submitted).
Schneider, S.H. and R. Londer. 1984. The Coevolution of Climate and Life. Sierra
Club Books, San Francisco, CA.
Schneider, S.H. and L.E. Mesirow. 1976. The Genesis Strategy: Climate and Global
Survival. Plenum, New York, NY. 419 pages.
Seitz, F. 1996. "A Major Deception on Global Warming". Wall Street Journal.
New York. June 12.
Singer, S.F. 1996. "Letter to the Editor". Wall Street Journal. New York. July
Thompson, S. L. and S. H. Schneider, 1982. "CO2 and Climate: The importance
of realistic geography in estimating the transient response", Science 217: 1031-
Titus, J. and V. Narayanan, 1996. "The Risk of Sea Level Rise: A Delphic Monte
Carlo Analysis in which Twenty Researchers Specify Subjective Probability Distributions
for Model Coefficients within their Respective Areas of Expertise". Climatic
Change 33 (2):151-212.
Wigley, T.M.L., R. Richels, and J.A. Edmonds. 1996. "Economic and environmental
choices in the stabilizations of atmospheric CO2 concentrations," Nature 379: