  "Mediarology"

The Roles of Citizens, Journalists, and Scientists
in Debunking Climate Change Myths

In reporting political, legal, or other advocacy-dominated stories, it is both natural and appropriate for honest journalists to report "both sides" of an issue. Got the Democrat? Better get the Republican!

[Photo: Gore-Schneider et al. 2007 Nobel Peace Prize announcement]

In science, it's different. There are rarely just two polar opposite sides, but rather a spectrum of potential outcomes, oftentimes accompanied by a considerable history of scientific assessment of the relative credibility of these many possibilities. A climate scientist faced with a reporter locked into the "get both sides" mindset risks getting his or her views stuffed into one of two boxed storylines: “we’re worried” or “it will all be OK.”  And sometimes, these two "boxes" are misrepresentative; a mainstream, well-established consensus may be "balanced" against the opposing views of a few extremists, and to the uninformed, each position seems equally credible. Any scientist wandering into the political arena and naively thinking "balanced" assessment is what all sides seek (or hear) had better learn fast how the advocacy system really functions. (See the Edwards-Schneider chapter, "Self-Governance and Peer Review in Science-for-Policy: The Case of the IPCC Second Assessment Report".)

Being stereotyped as either the "pro" or the "con" advocate on climate change action is not a quick ticket to a healthy scientific reputation as an objective interpreter of the science — particularly for a controversial science like global warming. In actuality, such stereotyping invites personal attacks and distortions (see the "double ethical bind" pitfall below).

This is all part of the problem I have, somewhat whimsically, called "mediarology." I will explore this problematic world of communications in some depth below. (See also the Epilogue and the chapter titled “Mediarology” in my book, Global Warming.)

[For an audio interview on aspects of 'Mediarology', listen to Journalism and Environmental Reporting, on The Environment Show, Greg Dahlmann, WAMC, March 8, 2002]


Courtroom Epistemology

"The fundamental question related to climate change, then, is: how can we make, or at least encourage, advocates to convey a balanced perspective when the 'judge' and 'jury' are Congress or public opinion."

Expert witnesses spouting diametrically opposing views — in Congress, courtrooms, or on editorial pages — often obscure an issue more than they enlighten the public about it. They often refuse to acknowledge that the issue of concern is multifaceted, presenting only their own argument and ignoring opposing views. This is really no big surprise, but what is shocking is how often that strategy is deliberate (see "Science Friction"). Stakeholders increasingly select information out of context to protect their interests, and clear exposition and balanced assessment have sunk even lower on their priority lists. I call this "courtroom epistemology." (For further details, see the article "Defining and Teaching Environmental Literacy" and see McCright and Dunlap's "Defeating Kyoto: The Conservative Movement's Impact on U.S. Climate Change Policy".)

The attitude that "It's not my job to make my opponent's case!" arises not only in courtroom histrionics, but also in most political debates and in much of the media. Scientists claim to be disdainful of this behavior, and they often pretend to be above such polemics in their "objective," detached, and dispassionate assessment of "the facts" — at least that is our official mantra. It's not that reporters, politicians, lawyers, and other advocates or their methods are wrong, or that "impartial" scientists are morally superior; the question is whether the techniques of advocacy-as-usual are suited for a subject like climate change. Indeed, just as it would be a breach of scientific ethics to elliptically spin the facts, it would be a breach of ethics for a professional advocate not to advance his or her client's interests, even if it means picking and choosing from the full range of the facts.

In fact, if I were on trial, I admit I'd want my lawyer to make defending me his number one priority, and I'd prefer that he didn't dwell on lofty abstractions about finding balanced truth. Indeed, courts of law, political forums, and much of the media are steeped in just such practices. So, I'm not accusing advocates of immorality; I'm just saying that standard advocacy (i.e., defining only one side of an issue) is a poor way to give non-specialists "full disclosure" of complex, controversial topics.

But the problem is that scientists tend to think that advocacy based on a "win for the client" mentality that deliberately selects "facts" out of context is highly unethical. Unaware of how the advocacy game is played outside the cloister of the scientific peer review culture, some scientists stumble, perhaps naively, into the pitfall of being labeled as an advocate lobbying for a special interest, even if they had no such intention.

When the scientist merely acknowledges the credibility of some contentious information or endorses actions that affect stakeholders differentially, opposing advocates often presume the scientist is spinning the information for some client’s benefit. Even when the expert (scientist) also admits that there is a wide range of possibilities and refers to extensive peer-reviewed assessments, the opposition accuses the expert of currying favor from some alleged funding agent (see remarks by Bjørn Lomborg, or Michael Parsons, quoting Sonja Boehmer-Christiansen). After all, isn’t that what everybody else is doing? (See Charles Krauthammer's op-ed attacking me in “Global Warming Fundamentalists” and my rebuttal.)

The fundamental question related to climate change, then, is: how can we make, or at least encourage, advocates to convey a balanced perspective when the "judge" and "jury" are Congress or public opinion, and the polarized advocates get only twenty-second sound bites each on the evening news or five minutes in front of a Congressional hearing to summarize a topic for which it would take hours just to outline the range of possible outcomes, much less convey the relative credibility of each claim and rebuttal? For over three decades, this has been my repeated frustration in dealing with the climate change debate, and it seems to be getting worse.

Is there a solution to this advocacy-truth conundrum? On the one hand, it is indeed an expert's responsibility to honestly report the range of plausible cases (what can happen?) and their associated subjective probability distributions (what are the odds?) and confidence levels. (See the Moss/Schneider “Uncertainties Guidance” paper and the Summer 2002 Nature story on it.) On the other hand, an expert could have a personal opinion on what society ought to do with a particular risk assessment. Can a scientist who expresses such value preferences about a controversial topic also provide an unbiased assessment of the factual components? This may be a feasible tightrope to walk, but even if one is scrupulously careful to separate factual from value-laden arguments, will the outside world of advocates and advocate institutions buy it? (See a Detroit News editorial and my rebuttal.)

The more we discuss our initial assessments with colleagues of various backgrounds, the higher the likelihood that we can illuminate unconscious biases. We may never reach the archetype of "pure objectivity" — but "pure objectivity" is, of course, a myth in science. The path to objectivity does not involve scientists holding back their opinions in order to maintain a pretense of some higher calling as "objective scientist." Rather, only an active effort to make our biases conscious and explicit via outside review is likely to keep our science-advocacy more objective and allow us to better manage the "advocacy-truth" conundrum (see Forums).


The Scientist-Advocate

Let’s unpack the advocacy issue a bit more. Is the scientist-advocate an oxymoron?

Before we address advocacy issues in policy agendas (like carbon emission reductions, in the case of global warming), we must ask: how do we define what is objective, or how do we discover "truth"? Doing this often uncovers the sources of many unconscious biases (see Table).

Table — Potential Scientific Biases (source: Schneider 2003, this website)

  • A favored theory
  • A familiar model or technique
  • A comfortable measurement or instrumental system
  • A crony
  • Our institution
  • A national report
  • A philosophical paradigm/epistemological construction of reality

Scientists often don't think about the categories in the biases Table as advocacy problems per se, but our experiences, relationships, and professional interests do influence not only our judgment, but also the very questions we ask. As an example, most scientists think that science entails a series of reductions in which experimental or empirical observations are used to construct frequency distributions of phenomena that can be used to reject a null hypothesis with some level of certainty. This is the basis for Karl Popper's famous aphorism that science is falsification of hypotheses (see the "Science always falsifies" pitfall, below). This is still widely believed to be the way in which science works today. (For an example, see the letter "Identifying Dangers in an Uncertain Climate", which was a response to "What is 'Dangerous' Climate Change?". See also my editorial in Climatic Change (March 2002), which was a further response.)

"The best safeguard for public participation in science-based policy issues is to leave subjective probability assessment to the larger scientific community rather than a few charismatic individuals."

But living in this frequentist paradigm based on falsification of hypotheses requires an infinite set of replicable experiments, which is itself an unobtainable abstraction. (See "Characterizing and Communicating Scientific Uncertainty" by Moss and Schneider and "Bayesian Approaches to Characterizing Uncertainty" by Berk; see also Environmental Literacy, a seminar session on climate change (Real Player)). In many important cases, direct empirical data are unobtainable in principle — most obviously, we cannot collect empirical data about future events. Instead, we must make inferences about the future by using past information to construct a simulation model that produces pseudo-frequentist data about a hypothesized future. Of course, these simulations are only as good as our model assumptions, and our estimates are valid only as long as future conditions are similar to the conditions used to build the model. This exercise necessarily entails subjective judgments, and not falsification, since the latter is possible only after the future occurs.

Fundamentally, the frequentist paradigm assumes that the underlying probability distribution is known and asks whether our observations are consistent with the known distribution. In reality, the underlying distribution is unknown (or only partially known), yet we want to know whether our hypothesis is likely to be true based on our observations — which are often incomplete. Thus, determining the likelihood of our hypothesis is easier said than done. An alternative is to use Bayesian, or subjective, probabilities that compile all the information we can possibly bring to bear on the problem, including, but not limited to, direct measurements and statistics on various components of the problem. Use of these methods can be extremely controversial. Some frequentist die-hards believe that if we can't measure it directly, it isn't science — a stance I playfully call "the tyranny of the null hypothesis." However, the belief that the frequentist paradigm is superior to the subjective paradigm is epistemological advocacy; in short, a bias. In fact, dogmatic adherence to a frequentist paradigm limits the dissemination of valuable expert judgment that doesn't fit into conventional evaluation of scientific knowledge, yet is crucial information for both scientific understanding and social processes like decision making.
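To make the contrast concrete, here is a minimal Python sketch of Bayesian updating: a subjective prior over competing hypotheses is revised by one noisy, indirect observation. The hypotheses, priors, and numbers are purely illustrative assumptions, not values from any assessment discussed here.

    # Minimal sketch: Bayesian (subjective) updating, as an alternative to
    # assuming the underlying distribution is already known.
    # All numbers are illustrative assumptions.
    import math

    def normal_pdf(x, mu, sigma):
        """Likelihood of observing x under a normal error model centered at mu."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    # Competing hypotheses about a quantity we cannot sample repeatedly,
    # e.g., warming (deg C) expected under one scenario.
    hypotheses = {"low": 1.5, "mid": 3.0, "high": 4.5}

    # Subjective priors: the expert judgment we start with.
    prior = {"low": 0.25, "mid": 0.50, "high": 0.25}

    # One noisy, indirect observation (say, a model-data comparison).
    observation, noise = 2.6, 1.0

    # Bayes' rule: posterior is proportional to prior times likelihood.
    unnorm = {h: prior[h] * normal_pdf(observation, mu, noise)
              for h, mu in hypotheses.items()}
    total = sum(unnorm.values())
    posterior = {h: p / total for h, p in unnorm.items()}

    for h in hypotheses:
        print(f"{h}: prior {prior[h]:.2f} -> posterior {posterior[h]:.2f}")

The bookkeeping, not the particular numbers, is the point: prior judgment and new evidence are combined explicitly rather than hidden.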

While scientific advocacy is typically subtle, political advocacy is usually more obvious (see the Table):

Table — Conflicting Political Values in Environmental Debates (source: Schneider 2003, this website)

  • Entrepreneurial rights transcend protection of global commons
  • “One dollar one vote” — cost/benefit efficiency is the best decision rule
  • The present is more valuable than the future (meaning a high discount rate is deemed appropriate)
  • The present generation has an obligation not to borrow from the future (meaning a low discount rate is deemed appropriate; see the discounting sketch just after this table)
  • Commons protection justifies curbs on individual, corporate or national actions
  • A risk aversion/precautionary principle is needed, especially for large-scale, potentially irreversible changes
  • Other species have intrinsic existence rights, even if they fall outside of traditional cost/benefit calculations for human welfare
  • The distribution of costs and benefits is as important as, or more important than, the values aggregated by traditional cost-benefit analyses (i.e., equity counts as much as efficiency)
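The two discount-rate entries above conceal a large quantitative disagreement. Here is a minimal sketch, with purely illustrative figures, of how the chosen rate (a value judgment, not a scientific finding) dominates the arithmetic of climate damages:

    # Present value of a far-future climate damage under two discount rates.
    # The damage figure and horizon are illustrative assumptions.
    def present_value(future_damage, rate, years):
        """Value today of a damage incurred `years` from now."""
        return future_damage / (1.0 + rate) ** years

    damage, horizon = 1_000_000_000.0, 100  # $1 billion of damage, 100 years out

    for rate in (0.07, 0.01):  # "present matters more" vs. "don't borrow from the future"
        pv = present_value(damage, rate, horizon)
        print(f"discount rate {rate:.0%}: worth ${pv:,.0f} today")

At 7 percent, the billion-dollar damage is worth roughly a million dollars today; at 1 percent, several hundred million. The choice of rate, not the climate science, drives whether mitigation spending looks worthwhile.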

In my view, the best safeguard for public participation in science-based policy issues is to leave subjective probability assessment to the larger scientific community rather than a few charismatic individuals. Some will say, as I noted above, that it's impossible for an expert to maintain his/her scientific objectivity in a value-laden public debate, but after thirty years of striving to do just that, I think that science-advocacy can be done honestly. Just because some people cheat doesn't mean all do. No one is exempt from prejudices and values, but the people who know when they are bringing in values and make their biases explicit are more likely to provide balanced assessments — and to be able to single out those who do not.


The Scientist-Popularizer

"Responsible advocacy and popularization are not, in my view, oxymoronic — but it takes discipline to minimize trouble."

Let's turn to that other "oxymoron" problem: the role of the scientist as popularizer. In the real world, we want to make a lasting impression and ensure that our ideas are heard and our suggestions are followed, yet none of us is granted unlimited time to explain the nuances of complex issues. We are forced to be selective in our disclosure of facts, or we risk being ignored. However, intentionally distorting the likelihoods of certain outcomes is just dishonest. Balancing the need to be effective in sound-bite situations with the responsibility to be "honest" (i.e., fully disclosing complexities) is what I call the "double ethical bind."

Scientists must take one additional step to more fully ensure their credibility. Those who make public statements should also produce a hierarchy of backup products ranging from op-ed pieces (see a few of my opinion editorials), to longer popular articles (see “The Evolution of the Earth” and “Degrees of Certainty”), which provide more depth, to full length books, which meticulously distinguish the aspects of an issue that are well understood from those that are more speculative. Books should also provide an account of how one’s views have changed as the scientific evidence has changed (see a series of my books, from The Genesis Strategy to Global Warming to Climate Change Policy). Even if only a minute segment of the public really wants this level of detail, this hierarchy of articles and books in the popular and scientific literature gives a scientist credibility in the popularization process. One excellent example of popularization is Richard Somerville's 1996 book, The Forgiving Air: Understanding Environmental Change. In it, Somerville discusses the ways in which humans have influenced various components of the environment, including the climate, in a style that is scientifically credible yet understandable to non-experts. He also details the interconnectedness of human technology and environmental change and suggests that citizens must educate themselves to make good judgments about such topics. The lengthy coverage of these subjects supports Somerville's shorter interviews and articles on them.

Since "full disclosure" (like archetypal "scientific objectivity") is simply not possible in time-constrained congressional or media debates, the hierarchy of back-ups is crucial for elaborated disclosure beyond these forums.

In summary, responsible advocacy and popularization are not, in my view, oxymoronic — but it takes discipline to minimize trouble. Scientists will never succeed in pleasing everyone, especially since many continue to think scientists should stay out of the public arena. But if we do avoid the public arena entirely, then we merely abdicate the popularization to someone else — someone who is probably less knowledgeable or responsible (See also “What Makes a Good Science Story” and “Interpreting Uncertainty”.) In my view, staying out of the fray is not taking the “high ground”; it is just passing the buck.

The Table below details my primary "rules" for minimizing the chances of being misrepresented in the "real world" out there (more on the Scientist/Advocate). I often summarize them as the "three know-thys": 1) Know thy audience; 2) Know thyself; 3) Know thy stuff!

Table — Advocacy/Popularization "Rules" (source: Schneider 2003, this website)

  • Understand your own values and biases — use the relevant scientific/technical communities to help you overcome your own dogmatism or denial
  • Make your values and biases explicit, and separate them from your scientific priors on probabilities and consequences
  • Do not allow personal value positions to distort your subjective priors on the probabilities of various outcomes or “facts”
  • Defend value positions separately from assessments of probabilities and consequences
  • Encourage popularizers who follow responsible practices, and censure those who are unclear, obscure or biased

Be forewarned that these guidelines are not without their dangers. Many have asserted that my disdain for advocates who don't make their values conscious and explicit, together with my willingness to maneuver in the sound-bite/advocacy world, is tantamount to promoting exaggeration. (See the "double ethical bind", the "scientist-advocate", the 1996 opinion piece by Julian Simon and my rebuttal, and a Detroit News editorial and my rebuttal).


The "Double Ethical Bind" Pitfall

"As scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but – which means that we must include all doubts, the caveats, the ifs, ands and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climate change. "

Would you trust a scientist who advises his/her colleagues to use scary scenarios to get media attention and to shape public opinion by making intentionally dramatic, overblown statements? Would you have confidence in his or her statements if the scientist said that “each of us has to decide what the right balance is between being effective and being honest”? Understandably, you’d probably be suspicious and wonder what was being compromised.

I confess: those were SOME of my words, yet their meaning is completely distorted when viewed out of context like this. You will find hundreds of places — especially on the web sites of industrial or economic growth advocates opposed to global warming policies that might harm their or their clients' interests — in which I am similarly (mis)quoted alongside a declaration that my environmental cronies and I should never be trusted.

I’ll spend a few paragraphs telling you what I really said and why, as I want to illustrate the sorts of pitfalls that will confront a scientist or other expert diving headlong into scientific popularization, media appearances, advocacy, or some combination of these. This example illustrates the risks of stepping from the academic cloister to the wide world out there. A scientist's likelihood of having his/her meaning turned on its head is pretty high — especially with highly politicized topics such as global warming.

First, consider a movie theater marquee selectively quoting a critic as having said a movie was "spectacular," when the critic might have actually written: "...the film could have been spectacular if only the acting wasn't so overplayed and the dialog wasn't so trite…" You get the idea. We see this kind of distortion in sales and advocacy, by citizens and politicians, from businesses and ideologists, in the public and private sectors.

My first experience in being misrepresented in the public debate began after the 1988 heat waves in the US, when global warming made daily headlines. I probably gave twenty interviews a day for several months that year. The global warming debate migrated from the ivy-covered halls of academia into the public policy spotlight via congressional hearings, daily media stories and broadcasts, pressure on the government from environmental groups pushing for control of CO2 emissions, and loud and angry denial by industries with high CO2 emissions of both their contribution to global warming and the credibility of the science behind climate change. I was — and still am — quite frustrated about the capricious sound-bite nature of the public debate. Typically, a scientist or other party in the global warming debate is given twenty seconds (maximum) on the evening news for his or her quote, which is supposed to represent either the “catastrophe” or the “no problem” side of the debate, for this is how the media have too often categorized it. If one decides to elaborate on the various complexities associated with the problem, one risks being overlooked or boxed in.

I expressed my frustration to Jonathan Schell, a Pulitzer-prize-winning writer doing a story on the contentious climate debate for Discover magazine. I guess my first mistake was to be a bit tongue-in-cheek — I painted a stark picture of the opposing viewpoints in the climate change debate: gloom-and-doom stories from deep ecology groups and others versus pontifications on uncertainties from big industry and others, who used that to argue against preemptive action. I complained that even though I always make a point in my interviews to discuss the wide range of possibilities, from catastrophic to beneficial, media stories rarely convey the entire range. All too often, a scientist's viewpoint is boxed into one extreme or the other. Usually, but not always, I am put in the "it is a big problem" box rather than the "it is too uncertain to do anything" box, even though I acknowledge both perspectives have some plausible arguments. (See the opening paragraph in my review of Lomborg for Scientific American).

I tried to explain to Schell how to be both effective and honest: by using metaphors that simultaneously convey both urgency and uncertainty, and also by producing supporting documents of all types and lengths (see the "scientist popularizer"). Unfortunately, this clarification is absent from the Discover article, and this omission opened the door for fifteen years of subsequent distortions and attacks. Ironically, this is the consummate example of my grievance about problems arising from short reports of long interviews.

Here is the published quote from that interview with Discover, from which selected lines have been used for over a decade as "proof" that I exaggerate environmental threats:

On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but – which means that we must include all doubts, the caveats, the ifs, ands and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climate change. To do that we need to get some broad based support, to capture the public’s imagination. That, of course, means getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This “double ethical bind” we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.

The Detroit News selectively quoted this passage, already not in full context, in an attack editorial on 22 November 1989:

On the one hand, as scientists we are ethically bound to the scientific method. On the other hand, we are not just scientists but human beings as well. To do that we need to get some broad based support, to capture the public’s imagination. That, of course, means getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. Each of us has to decide what the right balance is between being effective and being honest.

The most egregious omission in the Detroit News quotation is of the last line of the Discover quote, the one about being both honest and effective. The Detroit News clearly misquotes me, presumably since including the addendum would have weakened the effectiveness of their character attack. In response, I prepared a rebuttal containing the full quote and the context of my interview, which actually showed that I disapproved of the sound-bite system and the media's polarization of the climate change debate. (See the Detroit News editorial and my rebuttal).

While the Detroit News readers had an opportunity to see my true intent, albeit a month later, when the rebuttal was published, I simply cannot respond to and correct every article misquoting me, as such articles have proliferated and now number in the hundreds. Despite many attempts on my part — in my books, papers, talks, and other op-eds — to outline my opinions and dispel the media-propagated myths, the distortions continue to this day, even in "respectable" publications like the Economist, which ran a partial quote (also taken from the Discover article) without even calling me to see if it was valid. (See the quote from the Economist. The 'brave' editor of this attack does not even sign his polemic, but I am told it was Clive Crook.) The most egregious distortion I am aware of was in a 1996 opinion piece by Julian Simon (see also my rebuttal), a business professor at the University of Maryland, in which he not only used an out-of-context quote from the Discover article to "prove" that I advocate exaggeration in order to get attention, but also invented a preamble — that I advise people to "stretch the truth" — attributed it to me, and (of course) left off the last sentence of my actual remark.

Some friends have advised me to file lawsuits against such distortionists engaging in showcase journalism, but as a public figure, I have just learned to deal with character assassination and polemics as part of the "real world" of public policy debate. Moreover, lawyer friends have told me that partial quotes, even those that turn the original meaning of the full quote upside down, are generally protected by the First Amendment. In the face of this no-win scenario, I warn those who venture into this quagmire simply to expect such pitfalls and not to let them cause too much discouragement. It is difficult to correct these reporters and other media icons, who are the ones actually stretching the truth, since most people do not check the original quotes or stories for accuracy or fairness.


The “Science Always Falsifies” Pitfall

"In the case of climate change, where replicable experimentation is difficult, if not impossible, computer simulation models of past and future climate changes are essential. "

The dominant paradigm in science is to perform replicable experiments that test, or "falsify", existing hypotheses. If the test fails, the hypothesis is rejected. Sociologists of science have long pointed out the deep flaws in the exclusive use of falsification as a test of "truth." Objective science, based on collecting frequency information from observations, is indeed a good and necessary part of transforming speculative ideas into better hypotheses, but it is applicable only under very limited conditions. For starters, only past or present systems are observable. Falsification based on observation requires that an infinite set of replicable experiments be performed — an unobtainable abstraction in many important applications, like climate change. Furthermore, obtaining frequency data on future events is impossible before the fact. Some die-hard frequentists therefore deliberately avoid problems, like climate change projection, that rest on subjective judgments and on the non-falsifiability of future events. I was told in 1985 by a senior member of the atmospheric science community at a National Research Council assessment on "nuclear winter" that I was "irresponsible" for working on post-war climate change at all since it couldn't be "falsified." Before I could shut my dropped jaw to rebut, a social geographer delivered an eloquent oration, saying that it is the scientist who lets a professional paradigm impede him or her from helping society anticipate problems who is irresponsible, not the scientist trying to peer into the shadowy future with the best available knowledge. He also correctly noted that while such projections use subjective rather than objective science, they are still very important expert judgments.

In the case of climate change, where replicable experimentation is difficult, if not impossible, computer simulation models of past and future climate changes are essential. Empirical data play a major role, not as a simple basis for predicting the future, but rather in building the tools we use to make projections. Observations of the historical climate record are essential for deriving and testing simulation models in order to select those that best encapsulate our understanding of how the climate works. These models are then used to forecast future climate changes based on various scenarios of possible human activities. The validity of a model depends on how it deals with structural change — evolving functional relationships or parameters. Predictions based on past observations are valid only as long as future conditions replicate past conditions. This is unlikely to be the case for large climate changes, which are expected to arise from unprecedented rapid changes in the composition of atmospheric greenhouse gases, land surface changes, etc. When contrarian skeptics assert that an "objective" analysis of the "facts" indicates the climate will change negligibly (e.g., see a Lomborg quote), they often ignore the effects of structural changes that limit the ability to extrapolate statistics from past observations.
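To illustrate what such a projection exercise looks like in miniature, here is a toy zero-dimensional energy-balance model stepped forward under two hypothetical forcing scenarios. The governing idea (temperature responds to forcing minus a feedback term) is standard textbook material, but every parameter and scenario below is an illustrative assumption, not an output of any assessed model:

    # Toy energy-balance projection: dT/dt = (F - feedback*T) / heat_capacity.
    # Parameters and scenarios are illustrative assumptions.
    def run_ebm(forcings, feedback=1.3, heat_capacity=8.0):
        """Step the model one year at a time; feedback in W/m^2/K,
        heat_capacity in W*yr/m^2/K; returns the warming path in K."""
        temp, path = 0.0, []
        for f in forcings:
            temp += (f - feedback * temp) / heat_capacity
            path.append(temp)
        return path

    years = 100
    low  = [2.5 * min(1.0, t / 50) for t in range(years)]  # forcing ramps up, then stabilizes
    high = [6.0 * t / years for t in range(years)]         # forcing keeps rising

    for name, scenario in [("low", low), ("high", high)]:
        print(f"{name} scenario: ~{run_ebm(scenario)[-1]:.1f} K warming after {years} years")

The model's answer is only as good as its assumed feedback and forcing, which is exactly the text's point: the projection is expert judgment encoded in equations, not a falsifiable frequency count.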

When uncertainty is great, as in the case of climate change, the use of subjective probability assessments is particularly necessary -- and controversial. Richard Moss and I prepared a guidance paper on uncertainties to be used in association with the IPCC TAR (see “Uncertainties Guidance”) in which we advocate that the authors of the TAR assign confidence levels to each of their statements, and that authors distinguish explicitly the extent to which that confidence comes from direct observations or from expert judgments. Thereafter, Richard and I were nicknamed “the uncertainty cops” (see a Nature story), but in spite of the goofy nickname, we were indeed able to reduce authors' fears of using subjective probability assessments. (See Pittock and Jones: “Probabilities will help us plan for climate change”; see also Grubler and Nakicenovic). Unfortunately, many politicians and political bodies favor the objective approach (though it is impossible in principle in the case of future climate change), and they, too, prefer to avoid the speculative use of 'subjective' estimations derived from imperfect models. I can only reiterate that making predictions about an uncertain and complex future necessarily implies the use of models and subjective assessment.


Sticking Your Neck Out: Some Guidelines for Communication

So how do we scientists deal with this bubbling cauldron of special interests, paradigmatic misunderstandings, and time-honored and entrenched professional practices? While I don't have any simple answers, I do offer some guidelines that work for me — sometimes. First and foremost, we must drop any superiority judgments; they only stiffen the resolve of those who have been "toilet-trained" in their profession's paradigms. Next, we should thoroughly explain how we arrive at our conclusions to those asking us for expert opinion. This explanation should include an explicit accounting of our personal value judgments, such as how much of a carbon tax we think is 'appropriate' given some estimation of climate damages from carbon emissions. I do not hesitate to give such personal judgments when asked, as I, too, am a citizen entitled to preferences, but I always preface any such offerings by saying that my personal judgment is an opinion about how to take risks — not an expert assessment of the probabilities and consequences of future events. The latter is an assessment of "what can happen and what are the odds of it happening," and the former is a value judgment regarding what to do about those probabilities and consequences. Third, it is essential that scientists go into explicit detail on how they arrived at their risk estimates (with risk being probability times consequence). How did objective data contribute? How good were those data? What is subjective in the risk judgment? How did we arrive at the assessment?
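A minimal sketch of that risk bookkeeping, with the value judgment deliberately left outside the calculation; all probabilities and damages below are invented for illustration:

    # Risk = probability times consequence, summed over plausible outcomes.
    # Probabilities and damages are illustrative assumptions.
    outcomes = [
        # (label, subjective probability, damage in arbitrary units)
        ("mild warming",                0.3,   10),
        ("moderate warming",            0.5,  100),
        ("severe, irreversible change", 0.2, 1000),
    ]

    expected_risk = sum(p * damage for _, p, damage in outcomes)
    print(f"expected risk: {expected_risk:.0f} units")  # 0.3*10 + 0.5*100 + 0.2*1000 = 253

    # The expert assessment ends at the number above. Whether 253 units
    # justifies, say, a carbon tax is a separate value judgment about
    # how much risk society should bear.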

In addition, I often try to summarize what my colleagues say and publish, keeping in mind that scientific articles that have been through multiple rounds of peer review are far different from op-eds, which in turn are far different from individual congressional testimony. Perhaps most important, if I can put my "uncertainties cop" hat back on, I encourage scientists to explicitly state what confidence levels they assign to their risk assessments and how much subjective judgment went into assigning each confidence level.

It is also important, as noted, to acknowledge all sides of an issue, and especially to refute any contrarian opinions that are fictional or based on shaky assumptions or evidence. This is especially difficult in countries like the United States, where the current Bush Administration has decided against signing the Kyoto Protocol, supporting voluntary rather than mandatory emissions reduction measures — a policy that Raymond Bradley, director of the Climate System Research Center at the University of Massachusetts at Amherst, has called "ludicrous" (see an Associated Press article by Scott Sonner). The Administration has also weakened environmental laws, censored environmental research, denied scientifically based claims about climate change, and even gone so far as to "soften" its climate change vocabulary to make the issue appear less salient, as mentioned in a New York Times article and at Luntzspeak.com. (Also see Contrarian Science in the Climate Science section.) Because of this situation, some scientists are attempting to show that the Bush Administration has misused climate science in its formulation of environmental policy, as evidenced by web sites like scienceinpolicy.org, created by a group of graduate students, post-docs, faculty members, and other scientists to separate what they see as fact from fiction in the U.S. climate policy debate. The Union of Concerned Scientists, too, has been an excellent communicator and has documented a long list of examples of what it calls the Bush Administration's "misuse of science" (see a UCS press release). In February 2004, the UCS released a statement signed by 60 leading scientists urging the government to "restore scientific integrity to federal policymaking", along with a report, Scientific Integrity in Policymaking, that outlines the evidence for the Administration's distortions of science and makes suggestions for restoring scientific integrity to the U.S. policymaking process. In early April 2004, John Marburger III, director of the White House Office of Science and Technology Policy (OSTP), gave a statement to Congress claiming to have refuted the accusations, saying, among other things, that on the issue of climate change the White House has actually promoted public understanding and did not tamper with a 2003 EPA report on the environment (though evidence to the contrary is presented in Contrarian Science). The UCS has persisted, however, releasing a short "Analysis of White House Claims", which points out that the "White House document often offers irrelevant information and fails to address the central point of many charges of the UCS report."

Finally, when communicating with laypersons, I try to use accessible language and metaphors. Scientific jargon is effective for communicating with colleagues, but is often misunderstood in the public arena and increases the probability that a scientist will be "boxed in," misquoted, or ignored altogether. For me, metaphors that convey both urgency and uncertainty are best — particularly for controversial cases like climate change. For example, I often say climate is like a die: it has some hot faces, some wet faces, some dry faces, etc. I think our (in)action on global warming is loading the climatic die for more heat and intense drought and flood faces. Similarly, I might ask an audience: "If you put a pan full of water in the sun and another in the shade, which will evaporate first?" Since everybody knows the answer, such a metaphor for the intensification of the hydrological cycle that will occur with global warming adds to clear communication (even though in the global warming case it is infrared energy that is getting trapped near the surface, not more sunlight — in fact, sunlight reaching the Earth's surface appears to have decreased recently due to air pollution hazes, as discussed in Liepert, 2002; Roderick and Farquhar, 2002; "Is 'global dimming' under way?"; and "Look forward to a darker world"). The water pans metaphor is somewhat imprecise in its characterization of the effects of global warming on the hydrological cycle, but it drives the point home well enough, and I can live with that metaphor for mass consumption, supplemented with longer articles and books for those who really want to know more about the real physical processes.
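The loaded-die metaphor is easy to make concrete. A short sketch, with invented weights, showing that loading the die shifts the odds of the extreme faces without making any single outcome impossible:

    # Fair vs. "loaded" climatic die. The loaded weights are illustrative.
    import random

    faces  = ["hot", "wet", "dry", "mild", "cold", "stormy"]
    fair   = [1, 1, 1, 1, 1, 1]
    loaded = [3, 1, 2, 1, 0.5, 1.5]  # weight shifted toward hot and dry faces

    random.seed(0)
    for name, weights in [("fair", fair), ("loaded", loaded)]:
        rolls = random.choices(faces, weights=weights, k=10_000)
        print(f"{name} die: 'hot' on {rolls.count('hot') / len(rolls):.0%} of rolls")
    # Every face still comes up; what changes is how often the extremes do.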

This is a case of deciding "what the right balance is between being effective and being honest," as I told Discover magazine (see the double ethical bind, above, and a Detroit News editorial and my rebuttal). And as I also told Discover, I honestly hope that scientists strive to "do both". I hope my suggestions above for doing both will be debated and refined as more scientists decide to do just that as they enter the public debate.


Environmental Literacy and the Citizen-Scientist

"I hope that citizens will take responsibility for increasing their scientific, political, and environmental literacy and recognize the importance of the positive effect that an informed public will have on the policy process."

While we have emphasized that scientist-advocates have an important role to play in informing the public, it is also important to consider the citizen's role in evaluating the complex issues of climate change. Is there such a thing as a "citizen-scientist," or is that yet another oxymoron? (Read more on "Is The Citizen-Scientist an Oxymoron?") In my view, the citizen-scientist is a critical complement and counterbalance to the scientist-advocate. The Table below lists some of my views of the responsibilities of a citizen-scientist.

Table — Role of Citizen-Scientists (source: Schneider 2003, this website)

  • Citizens should demand that scientists answer the "three questions of environmental literacy": What can happen? What are the odds of it happening? And how are such estimates made?
  • Citizens must be informed enough that they feel comfortable making value judgments — that is, choosing policies — based on scientists' assessed risks and benefits.
  • Citizens must determine what constitutes fair burden-sharing related to paying for the implementation of policies that manage risks.
  • Citizens need to assure that the assessment process is open — that is, that all relevant stakeholders are heard. However, citizens should not be responsible for estimating the credibility of scientific arguments, given their lack of training in complex analysis and frequent bias for clients' interests. Citizens should be responsible, though, for finding out what the scientific consensus is about important claims; correlatively, scientists should be responsible for making clear what that scientific consensus is.
  • Citizens need to be sure that scientific assessment is being performed on issues that the public believes need such assessment.
  • Citizens should avoid being hypocritical by blaming others for climate damages while not themselves engaging in climate-friendly practices at the individual level. (For examples of some household "solutions," see Heede (2002).)

Sadly, a recent study by Brechin (2003) (see also the accompanying press release) has shown that, despite having better resources for dealing with climate change, citizens of developed nations, and especially of the U.S., are as uninformed or misinformed as people everywhere. The long-term solution to ensuring that citizen-scientists are informed involves the creation of entities like the National Research Council and the Intergovernmental Panel on Climate Change. These quasi-official bodies evaluate complex issues with a high degree of transparency, consider the input of numerous governments and stakeholders, and assess the relative credibility of conflicting claims. It is the hope of these organizations that the citizens consuming their assessments are scientifically literate, which means they should understand the scientific process, the policy options, and the role of media and advocacy. Achieving this competence involves education on both content and process, with the aim of attaining political, scientific, and environmental literacy. (See "Defining and Teaching Environmental Literacy", "Education and Global Environmental Change".)

Scientific literacy is not just knowledge of chemistry or ecology or economics, and in fact, it isn’t practical or necessary to teach detailed scientific content of a dozen or more relevant disciplines to all citizens. What citizens need to understand is the difference between a factual statement and a value judgment, the difference between objective and subjective probabilities, the difference between a paradigm and a validated theory, the difference between a law and a system, and the difference between a phenomenological model and a regression model (by that I mean the difference between a process-based theory and an association between data sets).

Perhaps the last distinction is the most important. Many people think that a correlation between two variables is synonymous with causation or predictive power, but it isn't. A correlation unaccompanied by a theory is not very convincing; even if a certain association holds several times in a row, that doesn't mean it always will. Coincidence and special circumstances need to be assessed, which is why using theory is part of the process of assessing confidence.
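A small sketch of that distinction: a straight-line regression summarizes the observed range well, then fails once conditions move outside it. The quadratic "process" standing in for the real physics is an arbitrary assumption:

    # Regression (association) vs. process: fit a line in-sample,
    # then extrapolate. The underlying quadratic process is illustrative.
    def true_process(x):
        return x * x  # the (unknown) underlying physics

    xs = [i / 10 for i in range(11)]  # observations span only 0.0 .. 1.0
    ys = [true_process(x) for x in xs]

    # Ordinary least-squares slope and intercept, computed by hand.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx

    for x in (0.9, 3.0):  # inside vs. far outside the observed data
        print(f"x={x}: regression {slope * x + intercept:.2f} vs process {true_process(x):.2f}")
    # Close at x=0.9 (0.75 vs 0.81); wildly wrong at x=3.0 (2.85 vs 9.00).

Correlation without theory cannot anticipate what happens when the system moves outside the conditions under which it was observed.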

Becoming a successful citizen-scientist (as defined in the Role of Citizen-Scientists) is challenging and requires a serious commitment on the part of both the individual citizen and the government. Just as popularization of potential probabilities and consequences will occur with or without input from scientists, policy decisions will be made with or without input from an informed citizenry. And just as I hope that scientists will join in the popularization process (see the 'scientist-popularizer', above), I hope that citizens will take responsibility for increasing their scientific, political, and environmental literacy and recognize the importance of the positive effect that an informed public will have on the policy process. In addition, the citizen-scientist must continuously repeat the evaluation process, as complex and uncertain problems like climate change require another look as new knowledge and understanding comes to the fore (see 'Rolling Reassessment').

Environmental literacy, which involves understanding the social process of knowledge transfer (e.g., media) and the political process through which decisions are made, is also an important tool for the citizen-scientist. This includes the ability to sort out the credibility of claims and counterclaims by “one fax, one vote” advocates who saturate the media and political institutions with their usual exaggerated claims — and that’s where the meta-institutions like the Intergovernmental Panel on Climate Change and the National Research Council come in.

Sadly, environmental literacy is almost nonexistent in formal education. Similarly, scientific literacy is rarely taught in schools, even though that is the purported goal of science distribution requirements. I would like to see elementary schools teach these concepts, like how to separate facts from values, the difference between objective and subjective probability, efficiency versus equity considerations, and conservation of nature versus economic development tradeoffs. It could be done through teaching by examples and via dialogues with students. (See the World Monitor article: A Better Way to Learn.) I think that scientific and environmental literacy can empower citizens to begin to pick scientific signals out of the political noise that all too often paralyzes the policy process.


Bringing it Together - Rolling Reassessment and the Interactions of Scientist-Advocates and Citizen-Scientists

What happens when our current understanding of a complex issue turns out to be incorrect, when we have either under- or overstated a potentially dangerous outcome or not pinpointed the correct outcome at all? To address this, I recommend employing "rolling reassessment." We should initiate flexible management schemes to deal with long-term issues that have potentially irreversible consequences, and also revisit each issue, say, every five years. The key word here is flexible. Knowledge is not static — there are always new outcomes to discover and old ones to rule out. New knowledge allows us to reevaluate theories and policy decisions and make adjustments to policies that are too stringent, too lax, or targeting the wrong cause or effect. Both scientist-advocates and citizen-scientists must see to it that, once we've set up political institutions to carry out policy, people do not become so vested in a certain process or outcome that they are reluctant to make adjustments, either to the policies or to the institutions.

Continuously updating our knowledge base is what society asks us scientists to do, but some scientists don’t like subjective assessment, for using it means we could be proven wrong at any moment (what economists have long called “the type I error”). However, the "answer" shouldn't be as important to a scientist as whether or not he or she gave his or her best judgment given everything that was known at the time. Science doesn’t assign credibility to people who arrive at the right answer using the wrong reasons or hypotheses; the process is more important than the product. Science wants to know why we reach certain tentative conclusions. So should citizen-scientists. I'll bet those who get the process right more often also get the answer right more often.

While scientists are more averse to Type I errors, from a citizen's point of view, the more important problem is the "Type II error." This occurs when, because of inherent uncertainty, we wait for more data and ignore a subjective forecast that turns out to be true. Oftentimes, both society and nature suffer the consequences of a Type II error. (See notes on Type I and Type II Errors.)
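A minimal simulation of the two error types, using a one-sided test at roughly the 5 percent level; the signal size and threshold are illustrative assumptions:

    # Type I (false alarm) vs. Type II (missed signal) error rates.
    import random

    random.seed(1)
    TRIALS, THRESHOLD = 100_000, 1.64  # one-sided test near the 5% level

    def rejects_null(z):
        return z > THRESHOLD

    # Type I: the "no signal" null is true, but noise crosses the bar anyway.
    type1 = sum(rejects_null(random.gauss(0.0, 1.0)) for _ in range(TRIALS)) / TRIALS

    # Type II: a real signal exists (mean 1.0), but the test misses it,
    # so we "wait for more data" while the forecast was in fact correct.
    type2 = sum(not rejects_null(random.gauss(1.0, 1.0)) for _ in range(TRIALS)) / TRIALS

    print(f"Type I rate (false alarm):    {type1:.2f}")  # about 0.05 by construction
    print(f"Type II rate (missed signal): {type2:.2f}")  # about 0.74 here

Holding the Type I rate at 5 percent says nothing about the Type II rate, which can be far larger; that asymmetry is what citizens, and nature, end up bearing.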


Responsible Reporting and the Journalist-Scientist-Citizen Triangle

"Journalists do indeed need to replace the knee-jerk model of 'journalistic balance' with a more accurate and fairer doctrine of perspective..."

Citizens' and scientists' unwillingness or inability to enter into the climate change debate has proven to be a mutually reinforcing and devastating pattern that contributes to false-dichotomy reporting and "in the box" or "balanced" journalism: polarizing an issue (despite it being multifaceted) and making each "side" seem equally plausible, mainly for the sake of simplicity but sometimes also to "sex up" a story by introducing bipolar conflict. (See a letter to the Wall Street Journal by Seitz and the Wall Street Journal op-ed, "A Major Deception on Global Warming"; then read "No Deception in Global Warming Report" and Self Governance and Peer Review in Science-for-Policy.) To redress the problem, all three groups must raise their consciousness. Journalists do indeed need to replace the knee-jerk model of "journalistic balance" with a more accurate and fairer doctrine of perspective that communicates not only the range of opinion, but also the relative credibility of each opinion within the scientific community. Just as a good scientist records and analyzes all relevant data before reaching conclusions, a good reporter will not just take a story at face value, but will delve deep into the issue to ensure accuracy and see how many varying opinions there truly are. Fortunately, most sophisticated science and environment reporters have abandoned the polarization of coverage into two "sides," but this model of reporting still exists, especially in the political arena. When political reporters cover science, they typically revert to form: equally credible polar opposites. Scientists can help remedy this by taking more proactive responsibility for the public debate. They should help journalists by agreeing to participate in the public climate change debate, and by using clear metaphors once they do so.

"... [scientists] should deliberately outline the consensus before revealing the contention."

If we scientists fail to address the public arena, claiming it is “dumbed down” and beneath our lofty "objectivity," we will only add to the miscommunication. We should go out of our way to write review papers from time to time and to present talks that stress well-established principles at the outset of our meetings before we turn to more speculative, cutting-edge science; we should deliberately outline the consensus before revealing the contention. Citizens should make sure that the public debates take into account all knowledge available on climate change, including the relative probabilities of various "sides".

It would be worthwhile for scientists, citizens, and reporters to better understand each other's paradigms. We could improve public dissemination of scientific knowledge if we required our science graduate students to take a survey course on the public communication process, including political advocacy and science policy formulation. Similarly, journalism schools could show the consequences of misapplying "balanced" reporting techniques to complex issues in which not all opinions deserve — or should receive — equal billing in a story. A perspectives approach that elaborates on the relative credibility of many views on complex issues — not just the extreme opposites — is what is needed to properly inform the public (see an American Scientist Macroscope). Literate citizens must take responsibility for educating themselves about all sides of the climate change debate so that they can see past biased media opinions or bipolar "dueling scientists".

We live in complex and confusing times, and rationality (that is, knowing enough about what might happen and how likely it is, and being willing to change our current beliefs given challenging new evidence) is the only way to clearly define our values when it is time to make policy — and that is the job of all citizens, including journalists and scientists.



Copyright 2011, Stephen H. Schneider, Stanford University