Monday, January 30, 2012
Low Impact Development and Oaks III Project
The Oaks III project was approved by the Prince William County Board of Supervisors after a public hearing on January 10, 2012. This proposal to rezone almost 18 acres of land adjacent to the Oaks II development and near the Town of Occoquan is an example of how Prince William County is attempting to continue to grow under the demands of the Total Maximum Daily Load (TMDL) mandated by the EPA. Prince William County, along with a large portion of Virginia, Maryland, Washington DC and portions of several other states, needs to reduce the amount of runoff and better manage existing storm water to meet the goals of the Chesapeake Bay pollution diet, the TMDL mandated by the EPA.
Excessively high levels of nitrogen, phosphorus and sediment in the Chesapeake Bay cause algae blooms that consume oxygen and create “dead zones” where fish and shellfish cannot survive, block sunlight that is needed for underwater Bay grasses, and smother aquatic life on the bottom. The result is fish kills and murky water that threaten the fishing and shellfish industries and recreational use of the bay. The nitrogen, phosphorus and sediment enter the water from a variety of sources, including agricultural operations, urban and suburban runoff, wastewater treatment facilities, septic systems, air pollution, and a minor contribution from natural processes. However, the largest share of nutrient and sediment pollution results from human activity: suburban development, cars and roadways, the agriculture that feeds us, and human and animal waste.
The TMDL sets a total Chesapeake Bay watershed limit for the six states and Washington DC of 185.9 million pounds of nitrogen, 12.5 million pounds of phosphorus and 6.45 billion pounds of sediment per year, which represents a 25% reduction in nitrogen, a 24% reduction in phosphorus and a 20% reduction in sediment from current levels. The pollution limits are then partitioned to the various states and river basins based on the Chesapeake Bay computer modeling tools and monitoring data. The TMDL addresses only pollution from excess nitrogen, phosphorus and sediment and does not address toxic, carcinogenic or endocrine disrupting substances that may be present in the Watershed.
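For a sense of the scale of those caps, here is a small back-of-the-envelope sketch that works backward from the limits and the stated percentage reductions to the approximate baseline loads they imply. The arithmetic is mine, not EPA's, and it assumes the percentages apply to a single baseline load.

```python
# Rough sketch: back-calculate the baseline loads implied by the TMDL caps
# and the stated percentage reductions (all figures from the post, in lb/year).
caps = {"nitrogen": 185.9e6, "phosphorus": 12.5e6, "sediment": 6.45e9}
reductions = {"nitrogen": 0.25, "phosphorus": 0.24, "sediment": 0.20}

for pollutant, cap in caps.items():
    baseline = cap / (1.0 - reductions[pollutant])   # cap = baseline * (1 - reduction)
    print(f"{pollutant}: cap of {cap:,.0f} lb/yr implies a baseline near "
          f"{baseline:,.0f} lb/yr, a cut of about {baseline - cap:,.0f} lb/yr")
```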
The Virginia Watershed Implementation Plans (WIP) Phase I and II lay out a series of pollution control measures, called best management practices (BMPs), that need to be put in place by 2025, with 60% of the BMPs completed by 2017. While it will take years after 2025 for the Bay and its tributaries to fully heal, EPA expects that once the required BMPs are in place there will be gradual and continued improvement in water quality as the BMPs reduce nutrient and sediment runoff and better control storm water so that the Chesapeake Bay ecosystem can heal itself.
About 37% of Prince William County is served by the HL Mooney Waste Water Treatment Plant, which after its recent expansion and upgrade is state of the art in waste water treatment, with monthly discharge averages of less than 0.1 mg/l for phosphorus, 1 mg/l for total suspended solids (TSS), non-detect for BOD and currently about 3 mg/l for nitrogen. Nonetheless, Prince William County needs to reduce the amount of nitrogen, phosphorus and sediment released to the Chesapeake Bay each year to meet the demands of the Virginia Watershed Implementation Plan, because the cost to upgrade every waste water treatment plant and every municipal storm sewer system in the state was estimated at three times the cost of implementing best management practices throughout the state.
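To see how effluent concentrations like these translate into annual nutrient loads, here is a short illustrative sketch using the standard load conversion (lb/day = MGD x mg/l x 8.34). The 24 MGD flow below is a hypothetical figure chosen only for illustration, not a number from the plant's permit.

```python
# Illustrative only: annual nutrient load from an assumed plant flow and the
# effluent concentrations quoted in the post (lb/day = MGD * mg/l * 8.34).
flow_mgd = 24.0                                        # hypothetical flow, MGD
concentrations = {"phosphorus": 0.1, "nitrogen": 3.0}  # mg/l, per the post

for nutrient, mg_per_l in concentrations.items():
    lb_per_year = flow_mgd * mg_per_l * 8.34 * 365
    print(f"{nutrient}: roughly {lb_per_year:,.0f} lb/year at {mg_per_l} mg/l and {flow_mgd} MGD")
```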
So, Prince William County finds itself needing to reduce existing runoff from the remaining agricultural operations within the county, urban runoff from towns, suburban runoff, septic systems, and air pollution (if possible) and still be a vibrant community. The Oaks III is the first example of the steps that Prince William County is taking to implement low impact development (LID) features with new growth and to use the opportunity of new development to implement BMPs on older projects. LID is the latest catch phrase in ecologically friendly site development and consists of five elements: preserving open space and minimizing land disturbance; protecting natural drainage ways, soils and sensitive areas; incorporating natural site elements like wetlands, stream corridors, and woodlands as site features; reducing the size of traditional infrastructure; and decentralizing storm water management so that runoff is handled at its source.
The almost 18 acre parcel will be divided into four areas; the largest, 13.6 acres, will be a conservation area. Though public access to the area was not outlined in the proposal, this will serve to preserve open space and limit land disturbance on site. The reduced size of the development planned for the site will result in only 8.5% of the site being covered with impervious surfaces (roads, buildings, parking lots and sidewalks). The developer intends to use LID techniques to manage storm water and runoff on-site, including methods to slow storm water flow rates. Instead of designing the storm water management system so that it rapidly drains the site, low-impact development relies on design tools and control practices to preserve the natural hydrologic functions of the site. The specifics of the design will be addressed during site plan review in consultation with the developer’s engineer. In addition, the developer will use BMPs to restore 400 feet of the stream channel of the existing on-site intermittent stream. Then, the developer will be required to go back and improve the storm water management on the Oaks II development by installing a new stilling basin in the conservation area where the Oaks II storm water outfall is located. This is planned to slow the storm water flow during large storm events to allow water to infiltrate the soil. The Department of Watershed Management will approve the design to make sure that these BMPs improve the existing storm water management and generate “credit” under the TMDL.
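The site figures above work out roughly as follows, treating the "almost 18 acre" parcel as exactly 18 acres, which is an approximation on my part:

```python
# Quick arithmetic on the Oaks III site figures quoted above.
total_acres = 18.0            # "almost 18 acres", rounded for this sketch
conservation_acres = 13.6
impervious_fraction = 0.085   # 8.5% impervious cover

print(f"Conservation area: about {conservation_acres / total_acres:.0%} of the parcel")
print(f"Impervious cover:  about {total_acres * impervious_fraction:.1f} acres")
```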
Together, the development of Oaks III should result in additional commercial space, additional housing and a reduction in storm water peak flow by using on-site infiltration, on-site bio-retention ponds, grass swales, rain water cisterns and French drains, all tools in the LID toolkit to mimic natural drainage through distributed control of storm water throughout the entire site. The challenging soils and slope at the Oaks III project should test the effectiveness of implementing these strategies.
Thursday, January 26, 2012
Coal Production, the EPA and Atmospheric Pollution
Some of the nation's coal-fired power plants were originally built as petroleum fired. Both types of electrical power plants were built as the nation grew and industrialized in the first half of the 20th century, when coal and oil were the most abundant and cheapest available fuels. By 2009, coal burning power plants supplied 45% of the electricity produced, and petroleum supplied about 1%. After the end of World War II the use of coal for rail and water transportation and for heating declined. Coal demand started growing again in the 1960s with the post-war growth in American industry and increased use for electricity generation. In 1950, U.S. coal production was 508 million metric tons. In 2010, U.S. coal production was 1,050 million metric tons, but what appears as a smooth steady rise did not happen that way.
The use of coal rather than petroleum for electrical generation is a direct result of the 1973 Oil Embargo. In an attempt to regain energy independence after the gas rationing and oil shortages of the Embargo, the nation turned to its vast coal reserves. Between 1973 and 1976, coal production increased by 14.4%. In 1978, the Power Plant and Industrial Fuel Use Act mandated conversion of most existing oil-burning power plants to coal or natural gas. Thirty years later our point of view has changed.
Coal burning power plants emit 48 tons of mercury annually as well as particulates and other pollutants. In addition, coal combustion adds a significant amount of carbon dioxide to the atmosphere per unit of heat energy, more than the combustion of other fossil fuels. According to a combined report from the U.S. EPA and the Department of Energy, coal generates 2.1 pounds of CO2 per kWh while natural gas generates 1.3 pounds of CO2 per kWh. The U.S. Environmental Protection Agency (EPA) launched the Greenhouse Gas Reporting Program in October 2009, requiring the reporting of carbon dioxide (CO2) data from large stationary emission sources, as well as suppliers of fuel that would emit GHGs if used. EPA intends to promulgate CO2 regulations in the coming year based on the data collected, but in the meantime has guidance on CO2 emissions permitting.
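Those per-kWh figures imply the comparison sketched below; the 500 MW plant size and 70% capacity factor are hypothetical values chosen purely for illustration.

```python
# Compare the CO2 intensities quoted above (EPA/DOE figures cited in the post).
LB_PER_KG = 2.20462
coal_lb_per_kwh, gas_lb_per_kwh = 2.1, 1.3

ratio = coal_lb_per_kwh / gas_lb_per_kwh
print(f"Coal emits about {ratio:.2f}x the CO2 of natural gas per kWh")

# Illustrative only: annual CO2 from a hypothetical 500 MW plant at a 70% capacity factor.
kwh_per_year = 500_000 * 8760 * 0.70
for fuel, lb in [("coal", coal_lb_per_kwh), ("natural gas", gas_lb_per_kwh)]:
    metric_tons = kwh_per_year * lb / LB_PER_KG / 1000
    print(f"{fuel}: roughly {metric_tons:,.0f} metric tons of CO2 per year")
```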
In the past year, EPA finalized two regulations specifically targeting coal fired power plants. The Mercury and Air Toxics Standards (MATS), finalized on December 21, 2011, regulates mercury, arsenic, acid gas, nickel, selenium, and cyanide. The Cross-State Air Pollution Rule (CSAPR), which requires reductions of sulfur-dioxide and nitrogen-oxide emissions from coal fired plants, was made final in July, but at the end of last year the U.S. Court of Appeals for the District of Columbia Circuit granted a stay of the implementation of CSAPR pending resolution of the legal challenges. CSAPR, if eventually implemented, will reduce SO2 emissions by 73% from 2005 levels and NOx emissions by 54% at the approximately 1,000 coal fired electrical generation plants in the eastern half of the country. It should be clear that EPA’s goal is to reduce if not eliminate the use of coal for power generation.
The composition and total of the net summer generating capacity for electricity in the U.S. has changed in the past decade. Since 1999 the generating capacity for natural gas has more than doubled while the generating capacity of coal fired electrical generators has remained constant. In 2010 natural gas was used to produce 24% of U.S. electricity, and coal was used to produce 45%. However, the summer net generating capacity of natural gas now exceeds that of coal. With the EPA tightening and expanding regulation of coal powered generating plants under the Clean Air Act for carbon emissions, mercury, arsenic and acid gases, and with the Cross-State Air Pollution Rule, the federal government looks likely to end electrical generation from coal as a fuel source. This will only be accelerated by the recent fall in natural gas prices.
In 2010, U.S. coal production was 1,050 million metric tons, with 92.5% of the coal used to generate electricity. Without electrical generation there is little demand for coal. The EPA’s MATS and CSAPR regulations and the forthcoming greenhouse gas regulations will eliminate the economic feasibility of coal fired electrical generation plants and all but end coal mining in the United States (at least for this generation). However, our nation requires power, and in the foreseeable future that is not going to change. Regulation can also be used to limit other sources of energy: the Keystone XL pipeline and hydraulic fracturing (fracking) bans. The cost of power is a key factor in determining the cost of production and the cost of living. To a large extent we have exported manufacturing to China and other emerging economies. China used 1.29 billion metric tons of coal for electricity generation last year, but its pollution controls are weak.
The earth’s atmosphere is interconnected. That is accepted when it comes to carbon dioxide or the chemicals that erode the ozone layer, but it also applies to industrial pollutants. The EPA has estimated that just one-quarter of U.S. mercury emissions from coal-burning power plants are deposited within the contiguous U.S.; the remainder enters the global cycle. Conversely, current estimates are that less than half of all mercury deposition within the United States comes from American sources. According to data from the Mount Bachelor Observatory, China's other exports include the acid rain that falls on China, Korea, and Japan, and pollutants that enter the air stream, including sulfates, NOx, and the black carbon (soot) produced by cars, stoves, factories, and crop burning. It seems that EPA can reduce our economic growth without actually reducing the air pollution we experience.
Monday, January 23, 2012
Energy Consumption in the US 2010
According to the US Energy Information Administration, the statistics branch of the Department of Energy, the US used 98 quadrillion BTU last year. Energy sources are measured in different physical units depending on the type of energy source: barrels of oil, cubic feet of natural gas, tons of coal, kilowatt hours of electricity. In the United States the British thermal unit (Btu), a measure of heat energy, is the commonly used unit for comparing different types of energy. In 2010, U.S. primary energy use equaled 98 quadrillion (10^15, or one thousand trillion) Btu. If it helps to visualize this any better, that is equivalent to about 2,471 Mtoe (million tons of oil equivalent), the energy measurement standard used by the International Energy Agency (IEA), the keeper of world statistics. In a world with seven billion people, the United States is estimated to have 310 million people, about 4% of the world’s population and 7% of the land mass, and to use about 14% of the energy (depending on how fast China and India are growing, since the world energy data is about two years old).
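The Btu-to-Mtoe equivalence above can be checked in a couple of lines, using the IEA convention of roughly 39.68 million Btu per tonne of oil equivalent:

```python
# Unit check: convert 98 quadrillion Btu to million tonnes of oil equivalent (Mtoe).
BTU_PER_TOE = 39.68e6            # ~39.68 million Btu per tonne of oil equivalent

us_primary_energy_btu = 98e15    # 98 quadrillion Btu
mtoe = us_primary_energy_btu / BTU_PER_TOE / 1e6
print(f"98 quadrillion Btu is about {mtoe:,.0f} Mtoe")   # ~2,470 Mtoe
```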
In the United States the US Energy Information Administration collects and reports the energy statistics in quadrillion BTUs and has recently reported the summary data for 2010. These statistics paint a picture of who we are today. The major energy sources in the United States are petroleum (gasoline and oil, 37%), natural gas (25%), coal (21%), nuclear (9%), and renewable energy, primarily biomass and hydro power generation (8%). The United States only produces about 75% of the energy we consume; the shortfall is imported petroleum. The major users are heating of residential and commercial buildings (11%), industry (20%), transportation including cars, trucks, trains, planes and ships (27.4%), and electric power generation (40%).
The slightly complicated chart above shows the types of fuel and the sectors that consume them. Looking at petroleum, you can see that it supplies 37% of our energy needs. Transportation, including cars, trucks, trains, planes and ships, uses 71% of petroleum, and petroleum provides 94% of the total energy used in transportation. Industry uses 22% of the total petroleum consumed by the United States, and petroleum supplies 40% of the energy used by industry. Studying all the details of the chart tells you a lot about the United States in 2010. It will also allow you to understand the impact that policies, regulation and scientific advances might have on the country.
For example, 92% of coal mined in the United States is used to generate electricity, so regulations like the EPA’s Mercury and Air Toxics Standards and the Cross-State Air Pollution Rule that affect electricity generation are likely to impact coal use, the cost of electricity, mining and mining regions. In 2010, of the 1,085.3 million short tons of coal produced in the United States, about 7.5% was exported. So if the number of coal fired electrical plants decreases, the demand for coal to produce electricity is reduced, the amount of coal mined in the United States will decrease, the number of coal miners and employees of coal companies will decrease, the trains transporting coal and their employees will not be necessary, and the cost of electricity will increase as the electrical power industry builds new generation plants burning other fuels.
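Putting those coal figures together (the shares come straight from the numbers above):

```python
# Rough breakdown of 2010 U.S. coal production using the figures in the post.
production_mst = 1085.3    # million short tons produced in 2010
export_share = 0.075       # about 7.5% exported
electricity_share = 0.92   # about 92% of mined coal goes to electricity

print(f"Exported:               roughly {production_mst * export_share:.0f} million short tons")
print(f"Electricity generation: roughly {production_mst * electricity_share:.0f} million short tons")
```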
Some primary energy sources, such as nuclear and coal, are used almost entirely in one sector, electrical generation. Others, like natural gas and renewables, are more evenly distributed across sectors. Similarly, while transportation is almost entirely dependent on petroleum, electric power uses a variety of fuels. Because the United States is the world’s largest oil importer, it may seem surprising that it also exports about 2 million barrels a day of refined petroleum products. It seems we are also an excellent oil refiner on the easily accessed Gulf Coast. Petroleum is used primarily for gasoline for cars (55%), diesel for trucks and heating oil (23%), and propane and liquefied petroleum gases used in homes and on farms for cooking and heating, plus jet fuel (9%). The five biggest sources of net crude oil imported to the United States in 2010 were: Canada (25%), Saudi Arabia (12%), Nigeria (11%), Venezuela (10%) and Mexico (9%). Policy decisions about a future Keystone pipeline may change that. U.S. crude oil imports grew rapidly from the mid-20th century until the late 1970s, but fell sharply from 1979 to 1985 because of restructuring of the economy (manufacturing as a component of the economy was reduced), conservation, and improved efficiency. After 1985, the upward trend resumed, peaking at 10.1 million barrels per day in 2005, and falling to 9.2 million barrels per day in 2010.
Natural gas is the source of 25% of the energy consumed in the United States and in 2010 was used almost equally for industry, electrical generation and residential and commercial heating. Most, but not all, of the natural gas consumed in the United States is produced in the United States. Some natural gas is imported from Canada through older, existing pipelines, and natural gas is also shipped to the United States as liquefied natural gas (LNG). U.S. natural gas production and consumption were nearly in balance through 1986, though U.S. production of natural gas peaked in 1973. From 1986 to 2006 consumption of natural gas outpaced production, and imports rose. Then in 2006 U.S. production of natural gas began to increase as a result of the development of more efficient and cost effective hydraulic fracturing techniques. In 2010 natural gas production in the United States reached the highest recorded annual total since 1973. Regulation and control of hydraulic fracturing will impact the cost of natural gas production in the United States, the availability of gas and the environmental impact to our natural resources.
In truth I am an old time engineer who learned to look at the world with a slide rule (calculators were just coming in and were thought to be cheating). Through numbers I understand the world and its policies and see relationships.
Thursday, January 19, 2012
Keystone XL, Fracking, and the Price of Natural Gas
Last year, New York placed a moratorium on hydro fracturing in the New York portion of the Marcellus Shale while it assessed the effects of fracking. New York Department of Environmental Conservation’s draft environmental impact statement (EIS) on drilling was released almost four months ago and recommended that drilling be permitted, but with conditions. The comment period was scheduled to end on December 12, 2011, but was extended to January 11, 2012 and closed after having received more than 20,000 comments. In their press release at the close of the comment period the New York Department of Environmental Conservation stated: “Public input is an important part of establishing responsible conditions for high-volume hydraulic fracturing as well as determining whether it can be done safely. Many significant improvements were made to the 2009 draft based on comments DEC received. We expect additional improvements will be made to the 2011 draft based on the comments submitted during this comment period." The pressure is off on immediately ending the ban on hydro fracking in New York because the price of natural gas has hit a two year low, but the ban will be lifted. There is really no way to permanently prevent drilling to access the shale gas. Sooner or later it will be done, hopefully in a safe and environmentally sensitive manner.
The race to lock up leases on shale gas and a mild winter (so far) in significant parts of the United States have resulted in an oversupply of natural gas. Despite the fall in natural gas prices, fracking will continue, not because it is profitable at this price, but because drilling leases and agreements made when gas prices were higher require drilling within a certain period of time. If a company fails to drill, it will lose the lease and the money paid for it. So, for the next two years or so, no matter the price of natural gas, companies will drill where permits are available. In addition, natural gas is often a by-product of much more profitable oil drilling. With oil prices topping $100 a barrel, oil companies in Texas continue to produce natural gas. In Texas, where gas is often a by-product of oil production, about 40 billion cubic feet of natural gas has been flared off each year for the past several years as drilling has expanded. Texas requires oil wells to eventually hook up to gas pipelines, which will increase the supply of available natural gas as the hookups catch up with production.
The high oil prices driving the Texas tight oil boom are also making the crude bitumen contained in the Canadian oil sands highly profitable. The current price of oil, combined with threats from Iran to close the Strait of Hormuz and block oil shipments from the Middle East, has made the oil sands even more attractive. A provision attached to the recent payroll tax bill signed by President Obama requires a decision by February 21, 2012 on the construction of the controversial Keystone XL pipeline from Canada to the U.S. The proposed Keystone XL is an approximately 1,660 mile, 36 inch crude oil pipeline that would begin in Alberta and extend southeast through Saskatchewan, Montana, South Dakota and Nebraska, continuing through Oklahoma to an existing terminal not far from Port Arthur, Texas. The oil would arrive at the Texas refineries and ports for the American market and export. The U.S. State Department is the lead agency handling the issue because the pipeline crosses national boundaries, but President Obama has made it clear that he would make the final decision on whether to approve the pipeline, and the recent tax bill has forced a decision on an issue that had been delayed until 2013.
As expected, the State Department declined to approve the Keystone XL pipeline that would have provided a guaranteed oil supply from Canada. The project's critics argue that the mining and refining of oil sands would increase greenhouse gas emissions, pollute water and destroy the Canadian forests. Many Nebraska residents also opposed the Keystone XL pipeline because it originally would have crossed the Ogallala aquifer, the main source of drinking water in the upper Midwest. The administration decided in November to require bypassing the aquifer, but the increased carbon dioxide load associated with tapping the oil sands remains a problem for the administration. Proponents of the project point to lost jobs and energy security and worry that rejecting the Keystone XL project will push the Canadians to build the 730 mile Enbridge pipeline to a new port in British Columbia and ship the oil to China. However, building a pipeline through British Columbia's northern wilderness faces British Columbia environmental regulations, the stronghold of Canadian environmental regulation, and that project also faces resistance from an existing decades-old moratorium on oil tanker traffic along the British Columbia coastline. The rejection is about the carbon content of the fuel.
Like all petroleum production, oil sands operations can adversely impact the environment. In the past, open pit mining for oil sands projects has scarred the land as trees, brush and overburden were removed from the mining site. As a condition of licensing, projects are required to implement a reclamation plan, but reclamation is a slow process. In addition, large amounts of water are used in oil sands operations for the steam in the current method of extraction. Despite recycling, most of the water ends up in tailings ponds, though newer treatment methods have reduced the treatment and recovery time for tailings ponds as environmental regulations evolve along with the advances in extraction and refining technology that have allowed the profitable recovery of this oil. These advances and rising oil prices have altered the economics and have made the extraction of oil sands possible and inevitable. Still, the energy required to heat the oil sands so that they will flow increases the carbon footprint of each barrel of oil. The politics of energy security are not consistent with the overall goal of reducing greenhouse gas emissions, since the extraction and refining of oil sands reportedly produce more greenhouse gases than the extraction and refining of Iranian oil. The President has pledged to reduce U.S. greenhouse gas emissions to 17% below 2005 levels by 2020, and all regulatory and policy decisions have been consistent with that goal. The United States' thirst for oil is not going to abate and the Middle East is becoming increasingly unstable. Given his consistent record on reducing greenhouse gases, it is likely the administration will choose the geopolitical risk over the environmental risk of oil with a higher carbon footprint.
Monday, January 16, 2012
Emissions of Carbon Dioxide in the United States
Last Wednesday, the U.S. EPA released the list of facilities that emitted the most carbon dioxide in 2010. This is in preparation for later this year when the U.S. EPA is expected to promulgate new carbon dioxide standards for power plants. Power plants accounted for more than half of the greenhouse-gas emissions by the major emitters on the list, with refineries and chemical facilities also contributing large shares. Of the 100 largest emitters—defined by the EPA as facilities emitting more than 7 million metric tons of carbon dioxide equivalent—96 of them are power plants. Two are refineries and two are iron and steel mills. (Using government respiration data for mine collapse survival, the population of the United States emitted 170 million metric tons of CO2 by breathing last year.)
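As a sanity check on that breathing aside, the sketch below reproduces the number; the 1.5 kg of CO2 exhaled per person per day is an assumed rate (roughly what the post's total implies), not an official figure.

```python
# Back-of-the-envelope check on CO2 from respiration for the U.S. population.
population = 310e6
kg_co2_per_person_per_day = 1.5   # assumed rate for this sketch, not an official figure

annual_million_metric_tons = population * kg_co2_per_person_per_day * 365 / 1000 / 1e6
print(f"Roughly {annual_million_metric_tons:.0f} million metric tons of CO2 per year from breathing")
```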
According to the US Energy Information Administration, which collects and reports the nation's energy statistics, energy related carbon dioxide emissions in the United States totaled 5,426 million metric tons in 2009 (the most recent year available), down from a peak of 6,022 million metric tons in 2007. For the past ten years electrical generation has accounted for approximately 40% of the carbon dioxide emissions in the United States, up from 36% in 1990, when industrial sources accounted for a larger share of the economy and a significantly higher share of CO2 emissions.
EPA launched the Greenhouse Gas Reporting Program in October 2009, requiring the reporting of carbon dioxide data from large stationary emission sources, as well as suppliers of fuel that would emit GHGs if used. This is the first year that data was reported. Though EPA uses the term greenhouse gases in its press release and program title, it is only talking about carbon dioxide, even though the main greenhouse substances in the earth's atmosphere are water vapor and clouds. Carbon dioxide represents less than 0.04% (386 parts per million) of the atmosphere, and its significant increase over the past hundred years or so is attributed to man’s impact on earth. The other greenhouse gases are methane (1.8 parts per million), nitrous oxide (0.3 parts per million), hydrofluorocarbons (0.00025 parts per million), perfluorocarbons (0.00086 parts per million), and sulfur hexafluoride (0.000006 parts per million). The Greenhouse Gas Reporting Program (GHGRP) does not represent total U.S. emissions, only the major point sources, what EPA calls stationary sources.
The largest carbon dioxide generators on the U.S. EPA list are, generally speaking, the largest stationary combustion sources: the largest electrical generation plants, followed by large industrial furnaces (iron and steel making and refineries that flare excess gas) that were built during the era of massive plants. The rankings do not necessarily reflect how efficient, clean or dirty a plant is; the amount of carbon dioxide released is a function of the size of the facility and the type of fuel used. According to a combined report from the U.S. EPA and the Department of Energy, coal generates 2.1 pounds of CO2 per kWh while natural gas generates 1.3 pounds of CO2 per kWh. The major users of fuel are heating of residential and commercial buildings (11%), industry (20%), transportation including cars, trucks, trains, planes and ships (27.4%), and electric power generation (40%).
The largest stationary sources of CO2 are large power plants. With the exception of nuclear power, coal fired power plants are the largest electrical generation plants, and natural gas generates about 38% less carbon dioxide per kilowatt-hour than coal. Ninety-two and a half percent of the coal mined in the United States is used to generate 45% of the electricity produced in the United States. To protect the environment and meet President Obama’s pledge to reduce U.S. greenhouse gas emissions to 17% below 2005 levels by 2020, the U.S. EPA wants to eliminate coal as a fuel source for electrical power generation through increasing regulation of coal fired electrical generation plants and the new mileage and emission standards mandated for the automobile industry.
The Mercury and Air Toxics Standards (MATS) regulates mercury, arsenic, acid gas, nickel, selenium, and cyanide. MATS was finalized on December 21, 2011. This regulation will slash emissions of these pollutants, primarily from coal fired electrical generation plants. According to the EPA it will cost $9.6 billion annually to comply with the MATS regulations, and industry analysts believe that 10% to 20% of U.S. coal-fired generating capacity will be shut down by 2016. The combined benefit of MATS and the Cross-State Air Pollution Rule was estimated by the U.S. EPA to total up to $380 billion over the coming decades in the form of longer, healthier lives and reduced health care costs.
The Cross-State Air Pollution Rule (CSAPR), which requires reductions of sulfur-dioxide and nitrogen-oxide emissions from coal fired plants, is estimated to cost $2.4 billion annually. CSAPR was made final in July, but at the end of last year the U.S. Court of Appeals for the District of Columbia Circuit granted a stay of the implementation of CSAPR pending resolution of the legal challenges. CSAPR, if eventually implemented, will reduce SO2 emissions by 73% from 2005 levels and NOx emissions by 54% at the approximately 1,000 coal fired electrical generation plants in the eastern half of the country.
Now the U.S. EPA is preparing for the release later this year of CO2 regulations for power plants by releasing the list of industrial CO2 emitters. Electrical generation and automobiles and trucks account for 74% of the carbon dioxide emissions in the United States. Last summer the U.S. Environmental Protection Agency (EPA) and the Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) finalized new mileage and emission standards for automobiles and light trucks for model years 2012 through 2016. The EPA GHG standards require these vehicles to meet an estimated combined average emissions level of 250 grams of carbon dioxide (CO2) per mile in model year 2016, equivalent to 35.5 miles per gallon (mpg).
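The equivalence between the gram-per-mile standard and the mpg figure follows from the carbon content of gasoline. Here is a quick check using EPA's commonly cited factor of about 8,887 grams of CO2 per gallon of gasoline burned:

```python
# Convert the 250 g CO2/mile standard into an equivalent fuel economy.
G_CO2_PER_GALLON_GASOLINE = 8887.0   # EPA's commonly cited emission factor for gasoline

target_g_per_mile = 250.0
equivalent_mpg = G_CO2_PER_GALLON_GASOLINE / target_g_per_mile
print(f"250 g CO2/mile is equivalent to about {equivalent_mpg:.1f} mpg")   # ~35.5 mpg
```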
Since 1990, global CO2 emissions have gone from 21 billion tons of CO2 to 29 billion tons of CO2 in 2009, according to data from the International Energy Agency (IEA). Global emissions of CO2 increased 38% despite a 14.7% decrease below 1990 levels for the Kyoto participants and a U.S. increase of only about 7% above 1990 levels. The bulk of the increase has come from China, Africa, the Middle East, India and the rest of Asia. The United States and the 35 Kyoto participants represent less than half of world CO2 emissions, and that share is shrinking every year. Now the United States appears on track to reduce its CO2 emissions to over 1% below 1990 levels and fulfill the promise that President Obama made at the Copenhagen meeting in 2009, when the President pledged to reduce U.S. greenhouse gas emissions to 17% below 2005 levels by 2020.
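The 38% figure is simply the ratio of those two totals:

```python
# Arithmetic check on the global emissions growth quoted above.
emissions_1990 = 21.0   # billion metric tons of CO2 (IEA figure cited in the post)
emissions_2009 = 29.0

growth = (emissions_2009 - emissions_1990) / emissions_1990
print(f"Global CO2 emissions grew about {growth:.0%} between 1990 and 2009")   # ~38%
```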
Thursday, January 12, 2012
The Cost of Implementing the Virginia WIP to Meet the Chesapeake Bay TMDL
Last Friday, January 6, 2012, I attended the Potomac Watershed Roundtable meeting in Warrenton, VA. The Potomac Watershed Roundtable is a regional forum open to Virginia citizens, community groups and local governments to promote collaboration and cooperation on improving and maintaining the water quality of the Potomac Watershed. In recent years this has become about the Chesapeake Bay pollution diet, the Total Maximum Daily Load (TMDL) mandated by the EPA for the six Chesapeake Bay Watershed states (Virginia, Maryland, Delaware, New York, Pennsylvania and West Virginia) and the District of Columbia. The meeting’s first speaker was Jeff Corbin, Senior Advisor to the EPA Administrator for the Chesapeake Bay.
The TMDL addresses only pollution from excess nitrogen, phosphorus and sediment. The TMDL does not address toxic, carcinogenic or endocrine disrupting substances that may be present in the Watershed. The excess nitrogen, phosphorus and sediment in the Chesapeake Bay cause algae blooms that consume oxygen and create “dead zones” where fish and shellfish cannot survive, block sunlight that is needed for underwater Bay grasses, and smother aquatic life on the bottom. The result is fish kills and murky water that threaten the fishing and shellfish industries and recreational use of the bay.
The TMDL sets a total Chesapeake Bay watershed limit for the six states and Washington DC of 185.9 million pounds of nitrogen, 12.5 million pounds of phosphorus and 6.45 billion pounds of sediment per year, which represents a 25% reduction in nitrogen, a 24% reduction in phosphorus and a 20% reduction in sediment from current levels. The pollution limits are then partitioned to the various jurisdictions and river basins based on the Chesapeake Bay modeling tools and monitoring data. The Chesapeake Bay TMDL and the Watershed Implementation Plans (WIP) Phase I and II are designed to ensure that all pollution control measures needed to fully restore the Bay and its tidal rivers are in place by 2025, with at least 60% of the actions completed by 2017. While it will take years after 2025 for the Bay and its tributaries to fully heal, EPA expects that once the required best management practices (BMPs) are in place there will be gradual and continued improvement in water quality as BMPs reduce nutrient and sediment runoff and better control storm water so that the Chesapeake Bay ecosystem can heal itself.
Since 1985 the excess nutrient contamination reaching the Chesapeake Bay has decreased, but the Bay’s waters remain seriously degraded. Phosphorus, nitrogen and sediment are released to the Chesapeake Bay Watershed by the waste water treatment plants that serve the millions of residents of the watershed, by rainwater that percolates to the groundwater carrying excess nitrogen from septic systems, and by storm water that washes soil, nitrogen and phosphorus into the rivers, streams and other bodies of water that make up the Chesapeake Bay Watershed. Though control of nutrient contamination has improved in all areas of the region, the massive growth of the population and expansion of developed land since 1985 have contributed to the nutrient and sediment pollution problem, and the reductions in nutrient contamination have not come fast enough to meet the goals agreed to in the past. More needs to be done to have a healthy Chesapeake Bay, and federal action was taken to force faster progress.
The US EPA has taken control of the situation and can utilize what it calls “backstop measures,” which are simply reductions in the allowed (permitted) releases from point source permits (waste water treatment plants, municipal separate storm sewer systems, and confined animal feed lots) to achieve the TMDL. At this time EPA can regulate only point source contamination; it cannot regulate non-point source contamination, which is runoff from roads, parking lots, yards and agricultural fields. The point source reductions are the most expensive way to achieve the reductions in nitrogen, phosphorus and sediment in the bay and would serve as a penalty to the state for failure to meet the targets under the watershed implementation plans. While the Prince William County HL Mooney Advanced Waste Water Treatment Plant is state of the art, other plants in the state are not, and the cost to achieve the reduced effluent numbers would far exceed the estimated $7 billion cost of implementing BMPs for non-point sources.
The real plan is to implement (and maintain) enough BMPs to meet the reductions in the TMDL according to the Chesapeake Bay Model. The actual costs of BMPs are highly variable. For example, the cost to plant a cover crop is much less than the cost to fence a stream or stabilize an eroding river bank, and the $7 billion estimate may be the low-cost estimate for implementing a BMP on every agricultural acre in the Virginia portion of the Chesapeake Bay Watershed. The local communities in Virginia have been asked by the Virginia Department of Natural Resources to develop land use information and a BMP inventory to meet the local WIP Phase I effort level. With the help of some computerized tools they are going to use the EPA model to determine the least-cost method to reach the “acceptable level of effort” necessary to meet the EPA allocations.
Fundamentally, complying with the WIP is about spending enough money and putting in enough BMPs to have the Chesapeake Bay Model say that we meet our TMDL. BMPs are not always easy to see to the untrained eye. There is a long list of techniques to manage storm water to reduce runoff of nutrients and soil from urban, suburban and rural areas. A really expensive (and easy to see) BMP would be to repave roads and parking lots with pervious pavement so that storm water could soak into the roadway, thereby reducing runoff. This can be impractical as a retrofit because of the cost of replacing roads and parking lots. A more practical way to limit large volumes of storm water runoff would be to install rain garden systems along roadways and parking lots to infiltrate street runoff. Rain gardens look like landscaping. EPA has a long list of acceptable BMPs of various costs and effectiveness that can be used by communities to meet the requirements of the TMDL under the WIP. The challenge is determining what needs to be done, convincing people to do it (there is tremendous resistance to installing and maintaining BMPs by residents of communities, farmers and politicians) and paying for the BMPs.
Cost is a big issue. For FY 2012 EPA maintained the budget for the Chesapeake Bay Program at 2010 levels, $50 million, enough to monitor, advise and enforce the implementation of the seven WIPs, but clearly no money to pay for BMPs. In creating the Chesapeake Bay TMDL EPA has created an obligation of between $1,000 and $2,500 per person for everyone living in the Chesapeake Bay Watershed to meet the requirements of the WIP Phase I. The Virginia portion of complying with the WIP Phase I is estimated to cost at least $7 billion. As a conservationist, I fully support the common goal of a cleaner, healthier Chesapeake Bay watershed, but worry about the costs to implement the solution.
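As a rough consistency check on that per-person range, dividing the $7 billion Virginia estimate by an assumed watershed population of about 6 million Virginians (my assumption, not a figure from the meeting) lands within the range cited:

```python
# Rough per-person cost check for the Virginia WIP Phase I estimate.
virginia_wip_cost = 7e9               # at least $7 billion (per the post)
assumed_watershed_population = 6e6    # assumed number of Virginians in the watershed

cost_per_person = virginia_wip_cost / assumed_watershed_population
print(f"About ${cost_per_person:,.0f} per person, within the $1,000 to $2,500 range cited")
```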
Monday, January 9, 2012
Beijing and Bakersfield Air Quality Problems
Air pollution is once more in the news. The Chinese announced that they will begin publishing small particle, PM2.5, air pollution data for Beijing after January 23rd. Beijing has been reporting its air quality based on PM10, particles up to 10 micrometers in diameter, and has not separately reported particulate pollution of 2.5 micrometers or less. Combustion engines and coal burning power plants are key contributors to PM2.5 particles, and according to the US EPA and World Health Organization, the smaller, finer pollutants measured by PM2.5 are especially dangerous for human health. Studies have shown that people face an increased risk of asthma, lung cancer, cardiovascular problems, birth defects and premature death from particles smaller than 2.5 micrometers in diameter, which lodge deep in the lungs.
Despite the official Chinese government report of 286 “blue sky” days last year, the air pollution in Beijing, home to over 20 million people, is easily seen in the smog that wraps the city’s apartment complexes and office buildings many days and in non-government sanctioned reports. The US Embassy in Beijing has its own PM2.5 monitoring station atop the embassy building and has been reporting hourly PM2.5 pollution data via an open Embassy Twitter feed, to the chagrin of the Chinese government because it conflicts with official government data. The U.S. Embassy reported a series of readings beyond the scale of the air quality index, AQI (which goes to 500), in the fall of 2010 and levels over 300 this past fall, which sparked a public campaign over the internet for better government reporting. The current air quality reading from the US Embassy is an AQI of 325, which is “hazardous” according to the EPA scale.
In the United States, the Air Pollution Control District for Bakersfield, California, the city ranked by the American Lung Association as having the worst particulate matter (PM2.5) air pollution in the United States for the last several years, announced that it had the worst December air quality recorded in over a decade. The area has already had four times as many unhealthy days this season as in the entire 2010-2011 winter season. In Bakersfield and the rest of the San Joaquin Valley, PM2.5 concentrations are highest when cool stable weather and low wind speeds, coupled with the Valley’s topography, limit dispersion of pollutants and allow multi-day buildups of PM2.5 concentrations to occur along with the “Tule” fog that forms during this time of year. A lingering high-pressure system and dry La Nina conditions in the Pacific have created stagnant air in the valley. As a result, car exhaust, agricultural emissions and smoke from factories and chimneys have remained in place, and this morning’s reading from the San Joaquin Valley Air Pollution Control District was 25 ug/m3 at Bakersfield, an AQI of 74, which is “moderate” air quality.
According to the Lung Association, the two biggest air pollution threats in the United States are ozone and particle pollution. Other pollutants include carbon monoxide, lead, nitrogen dioxide, sulfur dioxide and a variety of toxic substances, including mercury, that appear in smaller quantities. The United States Environmental Protection Agency (U.S. EPA) requires states to monitor air pollution to assess the healthfulness of air quality and ensure that they meet minimum air quality standards. The US EPA has established both annual and 24-hour PM2.5 air quality standards (as well as standards for other pollutants). The annual standard is 15 ug/m3 (an AQI of 49). The 24-hour standard was recently revised to a level of 35 ug/m3 (an AQI of 99). The recently challenged Cross-State Air Pollution Rule (CSAPR) was intended in part to prevent pollution from one state from moving into other states and preventing them from meeting their goals, but the problems in the San Joaquin Valley are caused by local industry, the weather, and the Sierra Nevada mountains to the east and the Coastal Range mountains to the west that wall in the valley. The World Health Organization guidelines for PM2.5 are based on a different averaging method than the U.S. EPA standards.
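For readers who want to see how a concentration becomes an AQI value, the sketch below applies the linear interpolation EPA uses, with the PM2.5 breakpoints in effect at the time (it reproduces the 49, 99 and 74 values mentioned in this post); treat the breakpoint table as my reconstruction rather than an official citation.

```python
# Sketch: convert a 24-hour PM2.5 concentration (ug/m3) to an AQI value by
# linear interpolation within the breakpoint category it falls in.
PM25_BREAKPOINTS = [  # (conc_low, conc_high, aqi_low, aqi_high)
    (0.0,    15.4,    0,  50),   # Good
    (15.5,   35.4,   51, 100),   # Moderate
    (35.5,   65.4,  101, 150),   # Unhealthy for Sensitive Groups
    (65.5,  150.4,  151, 200),   # Unhealthy
    (150.5, 250.4,  201, 300),   # Very Unhealthy
    (250.5, 500.4,  301, 500),   # Hazardous (simplified to one band here)
]

def pm25_to_aqi(conc_ug_m3: float) -> int:
    for c_lo, c_hi, a_lo, a_hi in PM25_BREAKPOINTS:
        if c_lo <= conc_ug_m3 <= c_hi:
            return round(a_lo + (conc_ug_m3 - c_lo) / (c_hi - c_lo) * (a_hi - a_lo))
    raise ValueError("concentration is beyond the AQI scale")

print(pm25_to_aqi(15))   # ~49, the annual standard
print(pm25_to_aqi(35))   # ~99, the 24-hour standard
print(pm25_to_aqi(25))   # ~74, the Bakersfield reading above
```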
PM2.5 particles can be either directly emitted or formed via atmospheric reactions. Primary particles are emitted by cars, trucks, and heavy equipment, as well as residential wood combustion, forest fires, and agricultural waste burning. Secondary particulate matter is formed when pollutants like NOx and SO2 react in the atmosphere to form particles. In the San Joaquin Valley the primary secondary particles are ammonium nitrate and ammonium sulfate, which form when nitrogen oxide and sulfur dioxide emissions from cars, trucks, and industrial facilities react with ammonia from agricultural operations.
Bakersfield, at the southern end of the San Joaquin Valley with a population of about 350,000, has some of the worst air in the United States. The city’s economy relies on agriculture, petroleum extraction and refining, and manufacturing. Cutting through the valley are the state's two main north-south highway corridors, the routes for nearly all long-distance tractor trailer rigs and the No. 2 source of particulate pollution in the valley. Also in the mix are millions of acres of plowed farmland and 1.6 million dairy cows and the ammonia-laden manure they create. Without wind and rain, when the Tule fog forms, the air sits, trapped as if in a pot with a lid. With an air quality index that is a fraction of Beijing's, local groups in Bakersfield have condemned the Air Quality Control Board's failure to meet the federal standard. The board banned wood burning, the source of particulate pollution easiest to restrict, but even with the ban in place for most of December, air quality remained at unhealthy levels with an AQI over 100. Air quality has finally improved over the weekend.
Thursday, January 5, 2012
Ohio Fracking and Earthquakes
A series of eleven small earthquakes ranging in magnitude from 2.1 to 4.0 has taken place beneath Youngstown, Ohio since March 2011. Each earthquake is reported to have had its epicenter near the Ohio Works Drive injection well used by D&L Energy Inc. to dispose of waste water from nearby hydro fracking jobs. D&L began injecting the waste water from the fracking jobs, referred to as brine, into its Ohio Works well in December 2010.
The earthquakes early in the spring led the Ohio Department of Natural Resources (ODNR) to have Columbia University's Lamont-Doherty Earth Observatory install seismic monitoring equipment in the area to determine whether there was any relationship between fracking or water disposal activity and the earthquakes. A report is expected in the near future, but after the earthquakes on December 30th and 31st, use of the disposal well was halted. ODNR has halted deep well disposal of fracking waste water in the D&L Ohio Works Drive injection well and four other injection wells in the Youngstown area pending analysis of the data collected by the Lamont-Doherty scientists.
In hydraulic fracking, on average 2.8 million gallons of chemicals and water are pumped into the shale formation at 9,000 pounds per square inch, literally cracking the shale or breaking open existing cracks and allowing the trapped natural gas to flow. While geologists and engineers believe there is little risk that the fracking “water,” a mix of chemicals and water, will somehow infiltrate through the shale and the thousands of feet of rock to reach groundwater reserves through a fissure created by the fracking, there are other routes of contamination and impact. Now concern in Ohio is focused on the disposal of the flowback water that is not absorbed into the rock formations.
The water that is absorbed into the rock formations may change the formations in ways we do not yet understand, but it is the disposal of the flowback that is the focus of this investigation. Though the water injected in a hydro frack is exempted from federal underground injection regulation (by a 2005 act of Congress), the flowback, which contains “proprietary” chemicals and contaminants from the geological formation, is not and must be disposed of under state regulations. This is not the first study of earthquakes associated with the disposal of fracking water.
Researchers from the University of Texas at Austin were part of a team that studied a series of small earthquakes that struck near Dallas, Texas in 2008 and 2009, in an area where natural gas companies had used fracking. The epicenter of the quakes turned out to be about half a mile from a deep injection disposal well under the Dallas-Fort Worth International Airport used to dispose of the fracking fluid. The largest earthquake of the series measured 3.3 on the Richter scale, a very small earthquake. In a study published in the Bulletin of the Seismological Society of America, the researchers also reviewed records from US Geological Survey seismic-recording stations in Oklahoma and Dallas. The researchers concluded that the fracking itself did not cause the earthquakes, but that there appeared to be a relationship between the deep well injection of the fracking fluid and the earthquakes.
Columbia's Lamont-Doherty Earth Observatory scientists have the advantage of having placed seismic monitoring equipment in the area before the last few quakes, which included the strongest of the series, a 4.0 on the Richter scale on New Year’s Eve. The epicenter is expected to be located in the area of the Ohio Works Drive injection well, an area with no previous seismic activity. It has been speculated that the earthquakes were triggered by the fluid injected into the well permeating a previously unknown fault.
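For context on those magnitudes, the standard Richter-scale relations (ground motion scales by a factor of ten per unit of magnitude, radiated energy by roughly a factor of 10^1.5) give a quick comparison of the New Year's Eve 4.0 event with the 3.3 Texas event:

```python
# Compare a magnitude 4.0 quake with a magnitude 3.3 quake using standard scaling.
m_small, m_large = 3.3, 4.0
dm = m_large - m_small

amplitude_ratio = 10 ** dm          # ground motion amplitude
energy_ratio = 10 ** (1.5 * dm)     # radiated seismic energy

print(f"A M{m_large} quake has about {amplitude_ratio:.0f}x the ground motion "
      f"and {energy_ratio:.0f}x the energy of a M{m_small} quake")
```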
Our ability to recover natural gas buried a mile or more beneath the earth has increased. Advances in horizontal drilling, which allows a vertically drilled well to turn and run thousands of feet laterally through the earth, combined with advances in hydraulic fracking, the pumping of millions of gallons of water laced with thousands of gallons of chemicals into shale at high pressure, have increased our ability to recover natural gas from shale ahead of our knowledge of the consequences of the fracking and the disposal of the waste water. Wastewater from the hydraulic fracturing process must be disposed of safely, and deep injection wells have been the favored method. There are 177 similar injection wells around the state of Ohio that will remain in use. The Youngstown-area well has been the only site with seismic activity, according to the ODNR, and only five Youngstown area wells have been shut down.
The earthquakes early in the spring led the Ohio Department of Natural Resources, ODNR, to have Columbia University Lamont-Doherty Earth Observatory install seismic monitoring equipment in the area to determine whether there was any relationship between fracking or water disposal activity and the earthquakes. A report is expected in the near future, but after the earthquakes, on December 30th and 31st, use of the disposal well has been halted. ODNR has halted deep well disposal of fracking waste water in the D&L Ohio Works Drive injection well and four other injection wells in the Youngstown area pending analysis of the data collected by the Lamont-Doherty scientists.
In hydraulic fracking, on average 2.8 million gallons of water and chemicals are pumped into the shale formation at 9,000 pounds per square inch, which literally cracks the shale or breaks open existing cracks and allows the trapped natural gas to flow. Geologists and engineers believe there is little risk that the fracking “water,” a mix of chemicals and water, will infiltrate through the shale and the thousands of feet of rock above it to reach groundwater reserves through a fissure created by the fracking, but there are other routes of contamination and impact. Now concern in Ohio is focused on the disposal of the flowback water that is not absorbed into the rock formations.
The water that is absorbed into the rock formations may change the formations in ways we do not yet understand, but it is the disposal of the flowback that is the focus of this investigation. Though the water used in the hydro frack is exempted from the Safe Drinking Water Act (by a 2005 act of Congress), the flowback, which contains “proprietary” chemicals and contaminants picked up from the geological formation, is not exempt and must be disposed of under state regulations. This is not the first study of earthquakes associated with the disposal of fracking water.
Researchers from the University of Texas at Austin were part of a team that studied a series of small earthquakes that struck near Dallas, Texas in 2008 and 2009, in an area where natural gas companies had used fracking. The epicenter of the quakes turned out to be about half a mile from a deep injection disposal well under the Dallas-Fort Worth International Airport that was used to dispose of fracking fluid. The largest earthquake of the series measured 3.3 on the Richter scale, a very small earthquake. In a study published in the Bulletin of the Seismological Society of America, the researchers also reviewed records from US Geological Survey seismic-recording stations in Oklahoma and Dallas. They concluded that the fracking itself did not cause the earthquakes, but that there appeared to be a relationship between the deep well injection of the fracking fluid and the earthquakes.
Columbia's Lamont-Doherty Earth Observatory scientists have the advantage of having placed seismic monitoring equipment in the area before the last few quakes, which included the strongest of the series, a 4.0 on the Richter scale on New Year’s Eve. The epicenter is expected to be located in the area of the Ohio Works Drive injection well, an area with no previous seismic activity. It has been speculated that the earthquakes were triggered by the fluid injected into the well permeating a previously unknown fault.
Our ability to recover natural gas buried a mile or more beneath the earth has increased. Advances in horizontal drilling, which allows a vertically drilled well to turn and run thousands of feet laterally through the earth, combined with advances in hydraulic fracking, the pumping of millions of gallons of water laced with thousands of gallons of chemicals into shale at high pressure, have increased our ability to recover natural gas from shale ahead of our knowledge of the consequences of the fracking and the disposal of the wastewater. Wastewater from the hydraulic fracturing process must be disposed of safely, and deep injection wells have been the favored method. There are 177 similar injection wells around the state of Ohio that will remain in use. The Youngstown-area well has been the only site with seismic activity, according to the ODNR, and only five Youngstown-area wells have been shut down.
Monday, January 2, 2012
EPA and Power Generation in the US 2011
On Friday, December 30th 2011 the U.S. Court of Appeals for the District of Columbia Circuit granted a stay of the implementation of the EPA’s Cross-State Air Pollution Rule, CSAPR, pending resolution of the legal challenges brought by 30 parties, consisting of states, utilities, unions and others, that have been consolidated into a single legal challenge to the rule. The CSAPR was made final in July (and modified in October), and affects about 1,000 power plants in the eastern half of the United States. In 23 states coal-fired utilities would be required to reduce annual SO2 emissions in order to reduce downwind pollution, and in 25 states utilities would be required to reduce ozone-season NOX emissions. The October version of CSAPR made what EPA characterized as a technical correction, but it also eased the required reductions in the first two years. Nonetheless, all of the impacted states were required to begin reducing SO2 emissions in 2012. The stay prevents these requirements from taking effect now and delays the reductions. The CSAPR was to replace EPA's 2005 Clean Air Interstate Rule (CAIR); the 2005 CAIR will remain in effect pending legal resolution of the issue.
The Cross-State Air Pollution Rule (CSAPR) should not be confused with the recently finalized mercury, arsenic, and acid gas regulations, the Mercury and Air Toxics Standards (MATS). MATS regulates mercury, arsenic, acid gas, nickel, selenium, and cyanide and was finalized on December 21, 2011. That standard slashes emissions of those pollutants primarily from coal-fired electrical generation plants. The Cross-State rule was also aimed at coal-fired electrical generation plants, but was designed to slash smokestack emissions of SO2 and NOX that can travel into neighboring states. These pollutants react in the atmosphere to form fine particles and ground-level ozone and are transported long distances, making it difficult for downwind states to meet their fine particle and ozone requirements under the National Ambient Air Quality Standards (NAAQS).
According to the EPA, the final CSAPR rule yields $120 to $280 billion in annual health and environmental benefits, including the avoidance of 13,000 to 34,000 premature deaths, at a projected annual cost of $800 million in addition to the estimated $1.6 billion per year in capital investments already required under the 2005 CAIR. For a total annual cost of $2.4 billion, 240 million Americans will have cleaner air. Had the rule proceeded on schedule, EPA estimates that by 2014 CSAPR would have reduced SO2 emissions by 73% from 2005 levels and NOx emissions by 54%.
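The arithmetic behind these figures can be checked quickly. The short Python sketch below simply adds the two annual cost components quoted above and compares the total to EPA's stated benefit range; the numbers are the EPA estimates cited in this post, not independent calculations.

# Back-of-the-envelope check of the CSAPR cost and benefit figures quoted above.
# All values are EPA estimates cited in the text, in billions of dollars per year.

csapr_incremental_cost = 0.8   # new annual cost attributed to CSAPR
cair_capital_cost      = 1.6   # annual capital investment already required under the 2005 CAIR
total_annual_cost      = csapr_incremental_cost + cair_capital_cost

benefits_low, benefits_high = 120, 280   # EPA's range of annual health and environmental benefits

print(f"Total annual cost: ${total_annual_cost:.1f} billion")            # $2.4 billion
print(f"Benefit-to-cost ratio: {benefits_low / total_annual_cost:.0f}x "
      f"to {benefits_high / total_annual_cost:.0f}x")                    # roughly 50x to 117x

Even taking the low end of EPA's benefit range, the claimed benefits are on the order of fifty times the claimed annual cost, which is why the dispute centers on whether the industry's much higher cost estimates are closer to reality.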
The primary impact of the new rules will be on coal-fired plants more than 40 years old that have not yet installed state-of-the-art pollution controls. Many of these plants are inefficient and will be replaced by more efficient and cleaner-burning plants, probably combined cycle natural gas plants. The Edison Electric Institute, an industry trade group, claims the combined new rules will cost utilities up to $129 billion, not the $2.4 billion per year that the EPA estimates, and will eliminate one-fifth of America's coal-fired electrical capacity, though it is unclear what portion of that cost is associated with each rule.
Most of the electricity in the United States is produced using steam turbines, and coal is the most common fuel for generating that electricity. In 2010 coal produced 45% of the electricity used in the United States, nuclear power generated 20%, natural gas 24%, hydroelectric 6%, wind 1%, and oil, wood, biomass, geothermal, solar and other sources generated the rest. If the Edison Electric Institute is correct, 20% of the coal-fired electrical capacity will be eliminated. Because 92% of the coal mined in the United States is used to generate electricity, this will impact the coal mining industry. In 2010, 1,085 million short tons of coal were produced in the United States; about 7.5% was exported, and almost all of the rest was used to generate electricity. If the number of coal-fired electrical plants is decreased by 20% and the demand for coal to produce electricity falls accordingly, the amount of coal mined in the United States will decrease by about 200 million short tons, the number of coal miners and employees of coal companies will decrease, fewer trains will be needed to transport the coal, and the cost of electricity will increase (as reported by the EPA) as the electrical power industry builds new generation plants burning other fuels.
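The 200 million short ton figure is simply proportional arithmetic on the numbers quoted above. The minimal sketch below makes the working explicit; it assumes the claimed 20% loss of coal-fired capacity translates into a proportional 20% cut in coal demand for electricity, which is a simplification.

# Rough arithmetic behind the ~200 million short ton estimate in the paragraph above.
# Figures are the 2010 numbers quoted in the text; the 20% cut is the Edison Electric
# Institute's claimed loss of coal-fired capacity, treated here as a proportional cut
# in coal demand for electricity generation.

total_production   = 1_085   # million short tons of coal mined in the US in 2010
export_share       = 0.075   # about 7.5% of production was exported
domestic_use       = total_production * (1 - export_share)   # ~1,004 million short tons,
                                                              # nearly all burned for electricity
capacity_reduction = 0.20    # one-fifth of coal-fired capacity retired

lost_demand = domestic_use * capacity_reduction
print(f"Estimated reduction in US coal demand: {lost_demand:.0f} million short tons")  # ~201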
The Mercury and Air Toxics Standards and the Cross-State Air Pollution Rule appear designed to reshape the power generation industry by reducing coal-fired plants, but some fuel will still be needed to spin the turbines that produce electricity. The electric power sector has seen large changes in its fuel mix over the years, so this is not new. Half a century ago, nuclear energy played no role in electric power generation, but in 2010 it provided about 20% of the energy used to generate U.S. electricity. Oil provided 18% of the fuel for electric generation in 1973, but its share had declined to 1% by 2010. In the past, changes in the fuel mix were often accomplished by adding plants, not replacing them, as the economy grew. With much slower growth in the demand for electricity, the coming change in fuel mix will have to be accomplished almost entirely by replacing plants.
There will be economic impacts from the reduction in demand for coal in the United States, from the cost to convert, replace and upgrade power plants, and from the increased demand for natural gas. The costs of these changes will be borne in the present, while the benefits, lower health care costs and a higher quality of life, occur in the future. If the plan is to eliminate the use of coal to generate electricity, be upfront and clear about the goal, the benefits and the costs. You cannot eliminate coal without replacing that fuel with another. Natural gas from shale hydro fracking is the obvious substitute, but EPA has barely begun studying the environmental impacts of fracking. As a nation we need to decide whether we intend to abandon coal and embrace fracking without fully understanding the risks associated with fracking.