by Cullen Couch
Solar and Wind
Moreover, because supply and demand on the broader transmission grid must stay in balance, the utilities supplying power must scale output up and down from moment to moment to keep pace with demand. Essentially, solar and wind energy must transform their intermittent supply into a steady one. Efficient and inexpensive fuel cell development and a “smarter” electric grid are critical to their success as next-generation energy sources.
Further, each requires enormous amounts of land to be effective. The largest wind farm in the country is in Texas, considered the “Saudi Arabia of wind.” The Roscoe wind farm, one of several dozen in Texas, has 627 turbines on 100,000 acres. Large solar arrays in California and Texas cover several hundred acres each.
“There’s been a lot of advancement in battery technology,” says Michael Alvarez ’80, president and CFO of First Wind, an independent North American wind energy company focused exclusively on the development, ownership, and operation of wind farms. “We are now deploying a very innovative battery energy storage system on the island of Oahu, the first wind energy project to gain a U.S. Department of Energy loan guarantee for innovative technologies.” The project, a 30-megawatt wind farm in Hawaii, will include a 10-megawatt battery designed to help maintain grid stability. “The technology itself is not new,” he says, “but it is the first time that anyone has taken it up to a larger scale. The Hawaiian Electric Company is excited because the state has a very ambitious goal to reduce its dependence on oil.” Hawaii has good reason: it uses a far higher percentage of oil in electricity generation than other states, between 75 and 90 percent depending on the island.
Then there is the need for patent protection of these innovations to keep investment dollars flowing. “The Patent Office is of crucial importance. If proprietary protection is not available, research dollars will not flow into the technology,” says Ed Baranowski ’71, a partner with Porter Wright in Columbus, Ohio, who specializes in patent law. “We’ve encountered three to five year wait times for examination in many early cases, an extraordinary delay, principally caused by insufficient staffing in the Patent Office, and an increase in the number of applications filed.”
According to Baranowski, technology development is limited by the number of qualified people available to enter the field. These numbers are now increasing because of market demand. “When I began,” he says, “I was a rare bird with degrees in physics and law, and I was not that easy to find. Researchers in fuel cells are now routinely expected to have at least a doctoral degree. My expectation is that technology development will increase faster now that a foundation is established. Whether investment funds will be available, and from where that money will come, are separate issues.”
Finally, wind farms and solar arrays are located remotely. Transmitting that power to users in distant cities without significant line loss is a challenge. Each needs to find the closest connection point to which it can build a transmission line. For example, in order to serve Los Angeles, First Wind secured a site to build a wind farm 88 miles away from the Intermountain Power Plant (IPP), a coal-fired power plant in Utah owned in part by the Los Angeles Department of Water & Power (LADWP). First Wind built a generation lead line to connect the wind farm to the IPP, which itself delivers power into Los Angeles on a line owned by the LADWP, effectively connecting the Utah wind farm to downtown Los Angeles.
Remote siting also raises fairness issues in the host community. Why, residents ask, must we have a wind farm here to serve others who live far away?
Nuclear power is a compelling energy source. Its fuel is dense and abundant, and it leaves no carbon footprint. Just six ounces of enriched uranium could power the entire city of San Francisco for one year. According to the International Atomic Energy Agency (IAEA) and the Organization for Economic Cooperation and Development (OECD), there is enough uranium in the world to meet present energy consumption for the next 100 years. Developing more efficient fast reactors could extend that period to more than 2,500 years. The United States has the fourth largest uranium reserves in the world, behind Australia, Canada, and Kazakhstan.
Anyone old enough to remember Three Mile Island (TMI) will easily understand why not a single nuclear reactor has been built in the United States since that unnerving March day in 1979. The catastrophe was at once terrifying (especially for anyone within 100 miles and downwind) and revealing. While it caused panic for those within a five-mile radius of the plant (140,000 people were evacuated), the crisis did show that a meltdown in a pressurized light water reactor would not cause a “China Syndrome,” the theory that molten reactor core products could burn clear through the floor of the containment building. It didn’t help public confidence that the hit movie, The China Syndrome, had opened in theaters nationwide just 12 days before the accident.
Reviewing the accident in an August 2009 report, Backgrounder on the Three Mile Island Accident, the Nuclear Regulatory Commission says that TMI “permanently changed the nuclear industry and the NRC. Public fear and distrust increased, NRC’s regulations and oversight became broader and more robust, and management of the plants was scrutinized more carefully. The problems identified from careful analysis of the events during those days have led to permanent and sweeping changes in how NRC regulates its licensees – which, in turn, has reduced the risk to public health and safety.”
Some of those changes involved retrofitting into the existing reactor fleet simpler, better, and standard designs, which help build an information library that all operators can use to improve plant safety. The results speak for themselves. An OECD summary of severe accidents causing five or more deaths in the fossil, hydro, and nuclear energy chains from 1969 to 2000 – which would include TMI (but not Chernobyl, which occurred in a non-OECD country) – shows that nuclear is by far the safest energy source. Even counting Chernobyl, an older reactor design operated under a loose management regime, nuclear power had one accident, with 31 fatalities. By comparison, coal had 75 accidents with 2,259 fatalities, and oil had 165 accidents with 3,713 fatalities.
“A number of new reactor model designs have simplified their plant systems, including operations, maintenance, inspections, and quality assurance,” says Farrell. “They have greatly reduced the number of valves, pumps, piping, HVAC ducting, and other complex components. The safety systems are predominantly passive and rely on the natural forces of gravity, circulation, convection, evaporation, and condensation, instead of AC power supplies and motor-driven components.” According to Farrell, Dominion produces about two-fifths of its power at its nuclear facilities, twice the national average, and he says the company aims to “have at least 50% of our power sourced from nuclear.”
The industry has emerged from the dark days following TMI to become a necessary player in the overall energy mix. It has not suffered a single significant accident since 1979, and the workforce and technology used are sophisticated to a degree unheard of 30 years ago. It is simply inaccurate to appraise the present viability of nuclear energy using a TMI mindset, although continued caution is certainly appropriate. The American public agrees. Gallup polls now show a growing number of Americans favor the use of nuclear power, 62% in the latest poll (March 2010).
A fleet of 104 commercial reactors currently operates in the United States, about 80% of which sit east of the Mississippi River. They have generated some 60,000 tons of nuclear waste stored at 121 facilities around the country within impermeable concrete bunkers that can resist the direct impact of a fully loaded commercial airliner (the same as the nuclear reactor containment vessel itself). Each year, these power plants add some 2,000 more metric tons to the pile. If you were to put all of the existing accumulated waste in one place, it would fit into UVA’s Scott Stadium rising to the height of the goal posts.
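As a rough sanity check on those figures, a sketch using only the numbers cited above: the accumulated waste divided by the annual addition implies about three decades of operation, consistent with a fleet built largely in the 1970s and ’80s.

```python
# Back-of-the-envelope check of the spent-fuel figures in the article.
# Both inputs come from the text; the arithmetic is the only addition.
accumulated_tons = 60_000   # total spent fuel stored at 121 U.S. sites
annual_tons = 2_000         # added by the reactor fleet each year

implied_years = accumulated_tons / annual_tons
print(f"Implied years of accumulation: {implied_years:.0f}")  # 30
```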
What do we do about this waste? And how dangerous is it? Decades ago, experts believed that there was a limited uranium supply to fuel nuclear power, prompting a drive for reprocessing and recycling spent nuclear fuel (SNF) into plutonium to use in fast breeder reactors. It is a complicated, expensive, and controversial process. While it can provide new fuel and reduce waste, the United States halted it during the Carter administration out of concerns that the weapons-grade plutonium might escape into the wrong hands.
Nonetheless, about a dozen countries around the world are reprocessing SNF safely and securely, obviating the concern. Moreover, any residual concerns may be moot because new reprocessing technologies under development would not yield any plutonium. Even if we wanted to reprocess SNF, renewing the program here would be very expensive. According to some estimates, it would cost $20 billion to build a plant capable of reprocessing annually 2,000 tons of nuclear waste, the same amount the U.S. currently generates every year.
A study released this September by the Massachusetts Institute of Technology adds a new wrinkle. It argues that the industry misjudged the size of domestic uranium supplies from the beginning. Knowing now that the U.S. has plentiful uranium reserves changes the calculus that created the drive for reprocessing in the first place. SNF is not weapons grade and poses little danger to the general public as it is presently situated. The report “strongly recommends that interim storage of spent nuclear fuel for a century or so, preferably in regional consolidated sites, is the best option.” According to the report, this would allow the fuel to cool, and most importantly, preserve our options for future SNF management.
“There are only three technologies for base load power: coal-fired generation, natural gas-fired generation, and nuclear power,” says Euclid Irving ’77, a Jones Day lawyer in New York with extensive experience in electric utility mergers and financing. “Of the three options, nuclear power is the only carbon-free technology. Today’s fleet of U.S. commercial reactors provides over 70% of the nation’s carbon-free energy. I have yet to find any serious proposal for mitigating greenhouse gas emissions that does not include a large and growing role for nuclear power in the energy mix.”
Natural gas has the lowest carbon footprint of all the fossil fuels, about half that of coal and a third less than that of oil. It can be easily transported via pipeline and tanker, and is relatively abundant. New horizontal drilling and hydraulic fracturing technologies (or “fracking”) have opened up for commercial development significant shale gas reserves, the most recent being the estimated 500 trillion cubic feet in the Marcellus Shale formation in the Appalachian basin. By comparison, New York alone uses about 1.1 trillion cubic feet per year. According to the EIA, the U.S. produces about 3% of the world total.
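To put the Marcellus estimate in perspective, a simple scale comparison using only the figures quoted above (a sketch, not an engineering estimate; the commercially recoverable fraction of a resource estimate is far lower than the total):

```python
# Scale comparison from the article's figures: how many years of
# New York's consumption would the estimated Marcellus resource cover?
marcellus_tcf = 500        # estimated Marcellus Shale gas, trillion cubic feet
ny_annual_tcf = 1.1        # New York's annual consumption, trillion cubic feet

years_of_ny_demand = marcellus_tcf / ny_annual_tcf
print(f"About {years_of_ny_demand:.0f} years of New York's demand")  # ~455
```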
The fracking used in shale drilling has raised a number of serious environmental concerns. The process calls for injecting large amounts of a pressurized, water-based chemical solution deep into the shale to crack the rock and release the gas. That solution can seep into the groundwater, potentially polluting it. Most of it does not stay underground, however: after fracking the well, the drillers pump the solution back out and dispose of it in another deep well drilled below the water table. Even so, the risk of some contamination remains.
A recent HBO documentary, Gasland, widely publicized the potential impact of this procedure, including a dramatic demonstration of a homeowner’s water faucet bursting into flame. The Appalachian basin borders the watersheds of major metropolitan areas such as New York City and Philadelphia, and many millions of people could be affected. Complicating the matter is the so-called “Halliburton Loophole,” a provision inserted into the Energy Policy Act of 2005 that stripped the EPA of its authority to regulate the process. Instead, regulatory authority is retained by the states in which the production occurs.
“The successful development of the Barnett Shale in Texas, the first shale resource developed and which underlies Ft. Worth and other cities in North Texas, demonstrates that the industry is able to respond to those challenges as they arise, even in urban areas,” says Brad Keithley ’76, partner and co-head of the oil and gas practice at Perkins Coie in Anchorage and Washington. “The Marcellus may present some different issues that require additional focus, but producers are diligently responding to those under the direction of state regulators.”
Moreover, natural gas is mostly methane, itself a greenhouse gas roughly 80 times stronger than carbon dioxide over a 20-year period. Care must be taken that un-combusted natural gas does not leak into the atmosphere during storage or transmission (this is why the methane released during oil drilling is burned off on-site). The recent San Bruno explosion in California also reveals the danger of gas leaks in an aging infrastructure.
Since the 1950s, oil has become the world’s most important energy source due to its high energy density, transportability, and relative abundance. Estimates vary, but known worldwide reserves amount to some 3.74 trillion barrels. The U.S. imports 12.9 million barrels each day, or about 69% of the 18.7 million barrels it uses daily. Worldwide consumption is about 84 million barrels per day.
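The consumption figures above are easy to check against one another; a quick sketch using only the numbers in this paragraph:

```python
# Verify the import share and U.S. share of world consumption
# implied by the article's figures (million barrels per day).
imports_mbd = 12.9   # U.S. daily imports
usage_mbd = 18.7     # U.S. daily consumption
world_mbd = 84.0     # worldwide daily consumption

print(f"Import share: {imports_mbd / usage_mbd:.0%}")          # 69%
print(f"U.S. share of world use: {usage_mbd / world_mbd:.0%}")  # 22%
```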
Oil is the product of prehistoric biomass, zooplankton and algae, heated and compressed over geologic time. An oil field arises from rock rich in hydrocarbons and deep enough to be cooked into oil, a surrounding permeable rock in which the oil gathers, and a rock seal that prevents it from rising to the surface (oil is lighter than both rock and water). Once a well is drilled, natural pressure forces the oil up to the surface. Eventually that pressure dissipates, requiring increasingly expensive recovery methods, such as waterflooding, steam injection, and the injection of carbon dioxide or other gases and chemicals.
Geologist M. King Hubbert introduced the concept of “peak oil,” the point at which we reach the maximum rate of global oil extraction, in 1956. Hubbert accurately predicted that peak oil would be reached in the U.S. between 1965 and 1970. Expanding his model to global production has been more problematic and controversial, and experts differ widely on whether the world has reached peak oil. Keithley is one who thinks we haven’t.
“I can recall at least three times during my career that there have been predictions we have reached peak oil or gas,” he says. “What happened then was that the higher price resulting from the anticipation of decreasing supply spurred technology gains which resulted, again, in increased supplies.”
Keithley cites the current development of shale gas in the U.S. as a good example. Ten years ago, analysts expected that, by now, the U.S. would have to import a significant amount of its natural gas supplies due to anticipated significant declines in domestic production. Instead, higher U.S. natural gas prices spurred investment in technology that could reach shale gas. Now, some estimate that the U.S. has a 100-year supply of recoverable domestic gas reserves. The same sort of thing is occurring with the development of deepwater drilling technology and the recovery of oil from the Canadian tar sands.
“Price has a way of generating solutions to challenges,” Keithley says. “While it may be at a higher price, I don’t believe we have run out of ideas of how – or areas – to explore and develop oil and gas. Until we do, I strongly doubt that we have hit peak oil.”
Whether or not we have reached peak oil, and even though the debate has quieted, the BP Gulf disaster certainly had an effect on the oil industry. No company or industry, no matter how big, can absorb that kind of environmental, financial, and public relations debacle without reassessing risks and rewards. Much like the events leading to TMI, the industry and the government had perhaps grown complacent after enjoying a long and fairly remarkable safety record (recall the government opening up further deep sea oil development only weeks before the Deepwater Horizon accident). We now know that problems did exist and that closer oversight and management may well have prevented the explosion.
“One of the developments that will occur in the wake of the Gulf spill is a change in the way that the industry manages the risks of offshore drilling,” says Keithley. “Indeed, that has already started with the recent formation of a joint venture among ExxonMobil, Chevron, Shell and ConocoPhillips to develop a company focused on responding to and containing any blowouts or other events that threaten to cause spills in the Gulf of Mexico.”
That will be a big challenge. The Gulf now has about 3,500 platforms, yielding about a third of total U.S. production. They are linked together by thousands of miles of underwater pipeline clustered along the Texas, Louisiana, and Mississippi coastline. When the first underwater well was drilled in 1938, it was in less than 15 feet of water. Since then, wells have gone ever deeper to find the oil, the deepest now being the Tiber well which reaches nearly six miles below the Gulf seafloor.
Like oil, coal comes from compressed prehistoric biomass, but from the remains of terrestrial plants instead of zooplankton and algae. According to the EIA, the U.S. holds about 23% of the world’s reserves. At current extraction rates, there is enough coal in the world to last up to 132 years.
Cheap and plentiful, coal is the world’s largest energy source for electricity, and the biggest emitter of carbon dioxide. Coal-fired power plants release into the atmosphere huge amounts of coal ash and heavy metals such as mercury, selenium, and arsenic that are harmful to human health and the environment. Strip mines cause significant environmental damage to surrounding communities.
A growing consensus in the industry and the general public sees carbon emissions as a problem, and the industry is promoting technology to alleviate it. Coal is a key component of the energy mix, and likely will be so for a very long time. Fifty percent of the electricity produced in the U.S. comes from coal-fired power plants. “It is not feasible for that share to be drastically reduced anytime soon,” says Irving.
The industry is developing carbon capture and sequestration (CCS) technology, a technique that captures the carbon dioxide at the stack and stores it underground. Commercially proven technology to gasify coal for a cleaner, more efficient combustion already exists, but CCS is still in the early stages of development.
“The time frame I have seen for large scale commercial deployment of CCS begins in the early to mid-2020s,” says Irving. “This means the economics are still uncertain, but if the technology is commercially available it will be in demand, if for no other reason than the abundance of coal.”
In the meantime, the demand for electricity continues to grow. “The industry faces enormous challenges in trying to balance responsible environmental protection with society’s need for reliable and affordable power supplies that fuel our economy and our way of life,” says Farrell. “We simply need more power plants to meet the country’s demand for electricity.”