Science & Policy Roundtable
Enormous streams of data generated by government, industry, and university researchers drive the regulatory apparatus of the U.S. government. Sometimes the data reveal new risks; other times they frame the boundaries of known ones. Uncertainty permeates the process, both in the science identifying the risk and in the regulations issued to manage it.
In the following roundtable, Law School professors Jon Cannon, Jason Johnston, and Michael Livermore talk about science and research, its inherent uncertainties, and the government’s efforts to establish and promote good policy.
Jon Cannon served as the Environmental Protection Agency’s general counsel (1995-98) and assistant administrator for administration and resources management. He is the Blaine T. Phillips Distinguished Professor of Environmental Law and the Class of 1941 Research Professor of Law. He is also Director of the Law School’s Environmental and Land Use Law Program, which aims to develop leaders who combine knowledge in law, science, economics, ethics, psychology, and politics with the skills to put sound policy into practice at all levels of government and in the private sector. Cannon is completing a book on environmentalism and the Supreme Court for publication in spring 2015.
Jason Johnston is an expert in law and economics and is the Henry L. and Grace Doherty Charitable Foundation Professor of Law. He is currently working on a book that critically analyzes the foundations of global warming law and policy, a series of articles on the economics of regulatory science, and another series of articles on various aspects of the law and economics of consumer protection. Before coming to the Law School, Johnston was the founding Director of the University of Pennsylvania Program on Law, Environment and Economy.
Michael Livermore is an associate professor of law whose primary teaching and research interests are in administrative law, environmental law, cost-benefit analysis, and executive review of agency decision-making. Livermore spent five years as the founding executive director of the Institute for Policy Integrity at New York University School of Law, a think tank dedicated to improving the quality of government decision-making through advocacy and scholarship in the areas of administrative law, cost-benefit analysis, and regulation.
UVAL: Science will always have uncertainty, so how do we manage that uncertainty when translating it into policy?
Johnston: I think the problem is that we call for science-based regulation and act as if there isn’t uncertainty. But at the frontier of any science, there’s always uncertainty. I think the problems that we have now boil down to having institutions that don’t incentivize scientists to fully reveal the extent of the uncertainty and in not understanding that, because of uncertainty, scientific decisions are often inseparable from policy decisions. Different scientists are making different decisions about the policy consequences of different kinds of errors. Scientists are expressing their points of view in the decisions they make about what to publish and how to value different kinds of studies, normative decisions about consequences external to science, so you just can’t look to science and think that you’re going to get clear answers.
Cannon: I agree with Jason that we don’t have a systematic way of dealing with uncertainty. Depending on the policy preferences of those making decisions, I’ve seen attempts to downplay uncertainty and also attempts to exaggerate uncertainty. I think there should be a more systematic accounting for uncertainty. I also agree that policy judgments can be embedded in the science.
There’s a phenomenon among policymakers to try to disguise policy judgments as science, to say “the science makes me do it,” when in reality the science is too uncertain or indeterminate to drive a particular policy choice. Even if the science is very clear, there’s always a value judgment associated with a policy decision. Sometimes policymakers want to obscure the judgment component to reduce their own accountability and fob it off on the scientists. It’s important to keep clear the points at which value judgments enter the decision process and to understand the effect of those judgments.
Livermore: Part of the tricky thing with uncertainty in policymaking is how it intersects with public perceptions of risk. It would be great if we had a fully rational system with the right incentives that could use available information to maximize expected net benefits for society. But in a democratic political system, uncertainty quickly becomes politicized, both as a reason not to act and as a reason to act.
So in a political system where you have to think about communication to a broader public and where you have actors with all kinds of incentives—incentives to trump up risk, incentives to downplay risk—you really have to think not just in terms of what a fully rational decision-maker might do, but how we in our political discourse are going to deal with scientific uncertainty.
You also have concepts like the precautionary principle or burdens of proof where a lot turns on defaults. But if you’re going to default into some behavior in the face of uncertainty, then if everything’s uncertain all the time, your defaults are always going to govern.
UVAL: Is the system we have now working?
Johnston: I would say no.
Cannon: I would say it’s working. It’s not working as well as we would like it to work. I think it’s achieving some level of policy judgment based on scientific information, but it’s far from perfect.
Johnston: The reason I say no is because, for example in climate change policy, the science has been presented by the Intergovernmental Panel on Climate Change (IPCC) and by EPA in a way that conceals rather than reveals the enormous uncertainties. The IPCC makes statements that really aren’t even scientific. They say that some things about recent climate are “unequivocally” true and that they are “highly confident,” or “confident,” or “somewhat confident” that certain kinds of projections about future harm and future climate change will occur. Those aren’t scientific statements. The IPCC uses an invented terminology that I would say was designed to conceal the uncertainty.
Another example of the failure of current regulatory institutions when it comes to the use and evaluation of science is air pollution and the regulation of fine particulates. The entire system for calculating the costs and benefits of tightened standards for fine particulate regulation hinges on a set of scientific studies that are boiled down to a very simple number, a number that supposedly expresses the increase in expected mortality as concentrations of fine particulates increase, and that simple number really is meaningless when you look at what’s known about what actually causes increases in the kinds of mortality used in the fine particulate studies. If people actually saw what the data look like, they’d realize that there is no true linear relationship between the level of particulates and mortality, and that using the slope of such a line (a linear regression) to say “we will save x number of statistical lives if we reduce fine particulates by y amount” is highly, highly misleading, indeed almost surely false.
In the species area, the Fish and Wildlife Service looks mostly to the U.S. Geological Survey for its scientific information about whether, for example, a particular population of mice constitutes a subspecies, but there’s no definition of what a subspecies is. That’s not even a really useful scientific category, and so the underlying science is all over the place. When you then present a finding from an agency that a particular population of mice is a subspecies and therefore has to be protected under the Endangered Species Act, it’s not revealing. It pretends to be a scientific conclusion, when in fact a true representation of the science would lead any reasonable person to conclude that we can’t even make a decision without weighing the consequences – not just for the mouse population but for local economies – of categorizing the population as a subspecies or not.
Deciding not to decide
Livermore: I think that’s the problem, right? If the conclusion is that we can’t make a decision, not a lot of decisions are going to get made in the face of scientific uncertainty. Typically, we face situations where we have to make a decision. That’s kind of the starting place. We have the information that we have. A decision has to be made. Not making the decision turns out to be defaulting into a decision to keep the status quo.
So I think there are two questions. One is how ought the decision-maker take the relevant evidence into account in order to make the best decision possible. Then the second question, which I think is what Jason is touching on, is how a decision-maker should communicate the relationship between the final decision and the evidence the decision-maker used. I think the second question is at some level trickier.
For example, in the particulate matter pollution case that Jason mentioned, should EPA communicate very broad risk parameters and say that fine particulates might cause great damage, or very little damage, or damage somewhere in between? Or is it better to communicate just a central estimate? I think that’s a hard question to answer.
Johnston: I agree with that, though I don’t know if I would characterize the first question or problem the way Mike did, but it is true that deciding not to regulate is a decision just as much as deciding to regulate. But to presume that the decision has to be made at a particular time is already to embed in a normative criterion that it’s costly to wait. There is what we could call the regulator’s objectives, a sense that if we wait and don’t regulate and it turns out there’s some problem, there will be bad political fallout and public condemnation for the failure to regulate. But that’s not a product of science. That’s a product of the environment in which regulators operate. If we could take that away, I think a lot of times a regulator would postpone a decision until there’s more certainty. But the political environment often makes it very difficult to say that, and instead regulators feel they have to make an up or down decision.
UVAL: How would you solve that?
Johnston: There are various suggestions people have come up with. I personally think a lot of these problems come about because the regulatory agency that’s responsible for promulgating regulations also assesses the science, so one proposal is to separate those two functions: have a federal agency that just assesses science but doesn’t promulgate regulations. I don’t know if that would really work.
Another proposal is to create systems for selecting the people who serve on scientific advisory boards. When there’s a competition among scientists at the frontier of science, you could appoint a board that comprises people on both sides of the scientific controversy. We don’t really do that now.
You could also re-think the system that scientific assessment institutions currently use to assess the science. How should they aggregate the various scientific opinions? Nobody’s really tried to work on those problems yet, believe it or not. Nobody’s sat down and tried to formally think of how to best design these institutions to elicit from the scientists their honest opinion as to the strength and weakness of different studies. It’s not an easy problem.
Cannon: These are all good points. I think there’s some effort by agencies to move in a direction of more inclusive and more frequent review by outside scientists. This can enhance the credibility of agency science. EPA uses a Science Advisory Board of outside experts to review the quality of its science and research programs and relies on other groups of outside experts, like the Clean Air Science Advisory Committee, for guidance in particular program areas. Significant work by EPA scientists used in decision making is subject to peer review. When issues arise about scientific methods or conclusions, EPA may submit those issues to the National Academy of Sciences for review. Sometimes the responses are flattering, and sometimes not.
An agency like EPA is embattled. One of the battlefronts is the science that it produces or relies on, so to the extent that it can protect itself by getting broader acceptance of that science as good science, there is incentive for it to do so.
Having scientific assessments done outside EPA might improve acceptance, but EPA or other agencies would still have to interpret the assessments. You can’t totally insulate the scientific process from the policy process.
Separating science decisions from policy decisions
Livermore: This has been tried, and it has been discussed for decades, this idea of separating the scientific decision-making from the policy decision-making. Part of the problem is that you can’t intellectually separate the scientific from the policy judgments, so all you’ve done is create a different institution that will be making science/policy judgments. Where it’s been tried, like in the occupational health context, it’s not clear that it actually increases reliance on science because at some level, having the scientists in the political institution helps ground the decision-making in scientific norms.

Part of the issue, I think, is also that we often ask scientists within agencies the wrong kinds of questions, questions that have a statutory basis rather than a scientific basis. For example, in the context of the Clean Air standards, the statute requires the agency to set cost-blind standards that are adequate to protect public health and welfare. That’s set up as a scientific question, but it’s just not a scientific question. When you ask a bunch of scientists to answer a non-scientific question, you’re going to get gobbledygook back. We’ve set up the inquiry incorrectly. If we can figure out what the right inquiry is to be asking our scientific bodies and then use that as an input in the policymaking process, we will have taken an important step.
Scientific advisory board bias
Johnston: That makes a lot of sense to me. If I could just respond to a couple of things Jon said. There is a problem with these scientific advisory boards. When 70% of the people on the EPA’s Clean Air Act scientific advisory board have been funded by EPA for many years to work on precisely the problem that they’re asked to act on as peer reviewers, that’s a problem. Now, it would seem like that’s an easy problem to overcome. You just change the composition of the board. But it’s not that easy to do because some of the things the agency’s interested in, say the health effects of ozone, are not really considered cutting edge scientific issues, and not many scientists are interested in them. Most of the people working on an issue such as ozone do so because EPA, or perhaps industry on the other side, is paying them to work on it.
It’s a problem that the FDA has, too. When the FDA convenes a panel to consider the evidence for approval of a new drug, it’s very difficult for them to find people who aren’t paid consultants, either for that drug manufacturer or for a competitor. But if you disqualified everybody who’s being paid by somebody, then there wouldn’t be anybody in the room who knows anything about that drug or that category of drugs.
Here is something else I wanted to pick up on. We now have competing agencies. Let me give you two examples: Bisphenol-A is a chemical that is found in plastics, and there are two agencies concerned with it. One is the FDA, which has been funding people down at Research Triangle Park to do pretty much traditional animal toxicological studies. At the same time, the National Institute of Environmental Health Sciences (NIEHS) a few years ago approved spending $30 million to study exactly the same chemical. They are funding scientists who are using completely non-standard methods — such as injecting the substance directly into rodents, at dose levels that have no relationship to the actual level of human exposure. These methods have been rejected even by European regulators as having no scientific justification, and they seem designed quite intentionally to suggest to the public that Bisphenol-A is harmful even at extremely low doses.
So we have two agencies each funding separate work using separate methodologies exploring the potential health hazards from this compound, and with different agendas. The FDA is employing very standard toxicological methods to investigate the riskiness of Bisphenol-A; the NIEHS, by contrast, is funding methodologically weak studies that seem clearly aimed at creating a perception that this chemical is very risky even at low doses.
Another example of competing agencies involves EPA. We’ve got all this controversy over fracking. On the one hand, EPA is compiling a bunch of evidence on the various adverse effects of fracking, from contamination of groundwater to air pollution. The Department of Energy is doing its own separate thing, employing different methodologies but looking at a lot of the same questions, so now we have this problem with agencies competing.
Cannon: I don’t think diversity is a bad thing. EPA and NIEHS have had differing views about non-monotonic dose response in certain chemicals. These agencies have different interests or points of view that get reflected in different positions on the science, at least initially. But that diversity of views can be helpful; it can force fruitful deliberation and further research.
It’s also important to remember that much of the science that gets done relevant to policy is not done by federal agencies. It’s done by universities and companies. All of the data that EPA relies on in pesticides registration and toxic chemical review is industry-generated. You have a whole sea of data flowing on these different issues, and part of the challenge now, because the data is so huge, is systematically canvassing it and synthesizing it into some meaningful pattern that policy-makers can use.
Livermore: Just to return to the earlier point that Jason made about funding and outside bias. Most of the experts on the issue will have received funding at some point in their careers from agencies. I think this is an interesting area for institutional reform. For example, EPA has done some work with the National Science Foundation (NSF) where they jointly fund scientific research that EPA is ultimately going to rely on. NSF needs the agency involved because EPA knows the questions that need answers, but some separate entity, like the NSF or NIH, could be required to make some judgments about which researchers get the funding. That seems like a very promising institutional reform.
UVAL: What about public allegations that agencies or companies are funding studies to reach certain findings, so that is exactly what they’re going to find?
Johnston: Well, let’s put it this way. Nobody’s making stuff up. Even if it’s true that somebody receiving funding from a particular entity knows the answers the entity would like, and somebody sets up a study that does generate answers consistent with the funding entity’s preferences – and this is really important from my point of view – the results of those studies still have to be presented transparently so people can critique the study and understand how it was done. The biggest problem comes when things are hidden and it’s difficult to find out the methods used and what the data were. You really need transparency.
In a sense, one proposal is that everything should be presented in the way we’re accustomed to in the Law School, in the legal world, which is in the form of a law office memo: here’s this side and here’s the other side, and everything is presented very transparently and thoroughly. Maybe that would give the public more confidence at least. That’s a separate problem. How do you communicate these complex scientific studies?
Before science is used by an agency, there has got to be a discussion not only of the evidence that supports a particular regulatory decision, but also of the evidence and studies on the other side. There has to be an opportunity for rebuttal by the scientist whose work has been discounted by the agency.
Judicial deference
UVAL: Does peer review figure in this?
Livermore: The administrative process figures in this. At some level I think the “arbitrary and capricious” review does require the agency to respond substantively to adverse evidence that’s in the administrative record. If the agency fails to do that, it’s not a guarantee that the regulation’s going to get knocked down by a court, but they are subjecting themselves to substantial risk.
Now, we might ask whether a generalist Article III judge is the best person to make sure that the scientific exploration that Jason was talking about is actually carried out by the agency in a rigorous way, but at some level that is the idea behind “arbitrary and capricious” review as it relates to scientific inquiry.
Johnston: Well, here’s the problem with that, and it’s something I’m working on. I’ve got a theoretical paper that shows that even if the reviewing entity is really random and bad, that is still a good thing in terms of increasing incentives for the experts to be fully honest in presenting their evaluation of the science. My model would imply, for instance, that the fact that Article III judges are just generalists is not such a bad thing because if you know in advance that they’re going to do their own assessment of the science, and you’re the scientific expert contributing to the process, you know you have to be more credible, with more evidence, precisely because they are pretty random in their own assessment.
But right now the way it works for the most part in the courts is that they’re unbelievably deferential to agency scientific findings. It’s almost impossible to find a federal circuit court of appeals ever overturning an agency on the grounds that the science wasn’t adequate. It’s really just extremely deferential. Nobody reviews the agency’s science. Congress doesn’t. The courts don’t. Nobody does. And experts know a lot about their subject, but they also have policy preferences, and any time that’s true you cannot expect an expert to give you an unbiased, expert opinion. Our problem is that we act as if experts and scientists are saints, completely unbiased and unaffected by self-interest. The evidence defies that belief.
Cannon: But I think that’s inevitable. Policy-relevant science is inevitably in the political realm, so it’s going to be debated from the standpoint of different interests and how they’re affected by it. It’s already part of the adversarial process, whether scientists like it or not. EPA’s greenhouse gas endangerment finding is an example. The Agency’s assessment of the climate science drew from a lot of different peer-reviewed sources. That assessment was subject to internal federal scientific review and notice-and-comment rulemaking, in which adverse comments on the Agency’s assessment were made and became part of the record that the agency had to defend when it went up on appeal. The scientific process here was inextricably bound up with the policy decision whether to regulate greenhouse gas emissions, and has continued to be debated by those with an economic and political stake in the outcome – not only in the courts but in Congress and other public venues.
Reviewing the review process
Johnston: People on the other side of this, such as me, would look at the agency’s response to the outside comments critical of its assessment and read the agency’s dismissal of one or another comment and say, “That’s false, that’s just wrong. It’s a misstatement of the science.” But the way things are now, it doesn’t make any difference how many of us look at that and say the EPA said a bunch of things that are wrong. The court’s not going to look at it that way. As long as the agency responds, no one’s going to scrutinize the response.
Cannon: We should have review processes that are as balanced and objective as possible. But at some point, you’ve got to cut off the process and let decisions be made and implemented. I think that’s part of the tension here.
Johnston: And I shouldn’t overstate the problem. After all, members of Congress -- right now, given the political composition of Congress, primarily and most importantly the House Committee on Science, Space, and Technology -- do bring in experts on the other side and convene hearings with people from the other side. So Congress has an opportunity to hear about this, and they do hear about it.
Livermore: We’ve experimented with more formalized processes for overseeing agency decisions where experts submit testimony under oath and are subject to cross-examination. The downside of that is it’s just not used because it takes an enormous amount of time and agency resources, and it’s not obvious that you’re getting a better outcome. The famous example is when the FDA took several years to compile thousands of pages of documents to figure out the correct percentage of peanuts in peanut butter. You can really build out the regulatory processes, but it’s not obvious that you are improving outcomes, and there does have to be a point where you end analysis and make a decision.
But on the other hand, we don’t want to incentivize agencies to hide behind the science. Under President George W. Bush, EPA made a policy decision not to address greenhouse gases under the Clean Air Act. That wasn’t a scientific decision that climate change isn’t real, but a policy decision not to move forward with regulation. And the administration was very clear about that, which is why in Massachusetts v. EPA, the Supreme Court struck it down, because it wasn’t a scientific decision. If the Bush Administration had cloaked that policy choice as a scientific determination, they might have gotten more deferential review. I agree with the Court’s decision in that case, but there is a worry that it creates skewed incentives.
Judgment calls and managing risk
Johnston: Here’s my problem with the greenhouse gas endangerment finding. EPA doesn’t have a bunch of top-flight climate scientists. There are a lot of climate scientists, but not in the EPA. EPA relied primarily on the IPCC’s assessment report and some reports done by the United States Global Change Research Program. The problem is, those assessment reports are not prepared in a way that presents the evidence and the counter-evidence. One of the contributors to the book we put out a couple of years ago on institutions and incentives in regulatory science explicitly proposed that assessment reports be done differently. The contributor would do what Mike just explained about the peanut butter and require an opportunity for cross-examination: a trial-like procedure within the scientific assessment body, at least to the extent needed to ensure that the assessment report fully reflects the different views. It would’ve been a different discussion for EPA if it had used assessment reports that really were transparent in fully revealing areas of active scientific disagreement.
UVAL: But if they had done that—let’s assume they did – how would the result differ?
Johnston: Well, it wouldn’t have mattered in terms of judicial review because the courts are so deferential.
UVAL: So EPA gets the balanced reports and then they decide to make an endangerment finding. They’re getting into policy since they’ve been charged to make a decision.
Johnston: I think it would have forced EPA—maybe I’m wrong, and these guys can disagree with me— I think it would have changed the kind of document they would have produced.
Cannon: It might have. But endangerment, as your question suggests, is still a judgment. A lot of people have different opinions about what level of risk reaches the level of endangerment. If everyone agreed on the level of risk and the uncertainty surrounding it, it would be more straightforward.

Livermore: There are two pieces here. Would it have changed EPA’s judgment about endangerment, and would it have changed the public justification for the endangerment finding? My guess is that the folks at EPA are sophisticated enough that their judgment wasn’t clouded by the setup of the IPCC report. They were aware of the science, and I doubt that the particular way the IPCC packaged it had major consequences for the agency’s own internal deliberations.
UVAL: Maybe it gave them political cover, if nothing else.
Livermore: That’s a different question. I’m assuming that, at least initially, the agency’s just trying to figure out the right answer. But then there’s the political question, and it’s certainly plausible that if the IPCC report had looked different, the agency’s public justifications would’ve looked different, although I’m not sure that its internal deliberations would have been different.
UVAL: So science, however uncertain its findings, does have a role in defining risk? And policymakers have to make a decision on how to manage risk, but the content of the information they use to make that decision is not presented as being as uncertain as it is.

Johnston: The uncertainty in science and the actual process of science on the frontier with radically competing viewpoints is not very well understood or represented in the regulatory process. That’s what I think.
UVAL: Do you think that airing all of the competing evidence would have changed the debate?
Johnston: I think it would change the debate. I don’t know about outcomes, but I think it would change the debate.
Livermore: The question is whether it would change the debate in a good way or not. I take this back to the question of public discourse. When you want to communicate something to someone, you need to be mindful that what you say is not always what the audience hears. It’s not realistic to expect citizens to gain a high level of sophistication on every scientific question with policy implications. We have scientists and experts who specialize in these matters. If we’re not going to have a population, or even a Congress, that’s composed entirely of scientists, it will be impossible to effectively communicate all the nuances at the frontiers of scientific understanding. And trying to do so might end up causing more confusion than clarity.
UVAL: Thank you very much for your time.