Thursday, June 28, 2012

Lake Lanier, Atlanta’s Water and the New Water Reality


Lake Sidney Lanier Reservoir, commonly known as Lake Lanier, was created by the U.S. Army Corps of Engineers when they constructed Buford Dam in 1956. According to Charles Fishman in "The Big Thirst," the city of Atlanta did not finance a share of the project when Lake Lanier was created in the 1950's, believing that a city that typically receives almost 50 inches of rain a year would never need the water. Atlanta has grown far beyond the expectations of those city fathers, and the downstream states of Alabama and Florida have sought through legal action to limit the quantity of water Georgia can retain for its use above Buford Dam. They argue that an adequate flow of water down the Chattahoochee River is needed for power production and drinking water supply in Alabama, and for maintaining enough fresh water flow to Apalachicola Bay to keep the salinity balance that sustains the estuary ecology, fishing habitats and breeding grounds in Florida. Georgia has single-mindedly sought to protect the ability of Atlanta-area water utilities to withdraw unlimited water from the reservoir to meet the unrelenting water demand of the Atlanta metropolitan area for lifestyle water (gardens and green lawns) and life-essential water, through litigation rather than through conservation and smart planning. A grassroots effort has been launched by local governments, water authorities, environmental groups, farm groups, industry and others-in short, the ACF stakeholders themselves-to try to achieve equitable water-sharing solutions that balance economic, ecological and social values, while ensuring sustainability for current and future generations.

In the Washington metropolitan area, where two states and the District of Columbia are dependent on the flows of the Potomac River, there is the Interstate Commission on the Potomac River Basin (ICPRB), which was authorized by Congress in 1940. ICPRB allocates and manages the water resources of the river through the jointly owned (and financed) Jennings Randolph Reservoir (built in 1981), the Potomac River Low Flow Allocation Agreement (1978) and the Water Supply Coordination Agreement (1982), which designated a section of the ICPRB as responsible for allocating water resources during times of low flow and for assisting in managing water withdrawals at other times. These steps improved the reliability of the water supply and ensured maintenance of in-stream flows to meet minimum aquatic habitat requirements. The task of cooperation may be more difficult for Georgia, Alabama and Florida, where distance creates different views of how much water is available and makes it difficult to see that they are joined in a regional watershed.

Back in Georgia, in 1989, after four years of drought, the U.S. Army Corps of Engineers recommended that 20% of the water used to generate hydroelectric power be diverted to Atlanta's water supply. Alabama and Florida objected and filed suit against Georgia and the U.S. Army Corps of Engineers in 1990, arguing that diverting water to Atlanta was environmentally harmful and economically problematic, and that in any case it required congressional approval because the purpose of Buford Dam was not to supply water to Atlanta. Though Atlanta has an average annual rainfall of almost 50 inches, rainfall varies tremendously: in 2007 it was less than 32 inches, and in 2009 it was over 69 inches.

Drought has always occurred in Georgia. Five times in the past 90 years Georgia has had multi-year droughts that were called "Droughts of the Century." An analysis of rainfall in Georgia by the U.S. Geological Survey found that normal and above-average rainfall years occurred 43% of the time in the past quarter century, while drought and severe drought years occurred 57% of the time. If weather patterns change the problem could be exacerbated, but what has really changed in Georgia to make the problem acute is that the population of the Atlanta metropolitan area has grown from about 2 million in 1970 to 5.5 million in 2010 without any thought given to water resources, which have not increased, and that unrelenting growth has impacted water infiltration and hydrology. While on average there may still be adequate water to sustain the region, it is clear that Georgia and Atlanta need to be proactive and plan for regular prolonged droughts occurring each decade. Georgia has not been at all proactive in protecting the hydrology and water infiltration or in regulating consumption of water in the Atlanta metropolitan area, preferring litigation to obtain more water from Lake Lanier. Georgia has encouraged unsustainable water usage through largely unregulated growth of population, industry and agriculture without any consideration given to historic drought experience and the ever-increasing demand for water.

Back in 2009 (a year that saw more than 69 inches of rain in Atlanta), as part of the never-ending litigation between Georgia, Florida and Alabama, Federal District Judge Paul Magnuson ruled that Georgia either had to reach an agreement with her neighbors by July 2012 or return to 1970s water withdrawal levels. Instead of working towards an agreement, Georgia once more chose litigation, and the Eleventh Circuit Court of Appeals found that the 1950s legislation approving the construction of Buford Dam (which, in turn, created Lake Lanier) anticipated that the metro-Atlanta area would need greater water withdrawals from the lake over time. The Eleventh Circuit overruled Magnuson's 2012 water-sharing deadline and sent the case to the Army Corps of Engineers, which controls Buford Dam, directing the agency to review Georgia's water needs against the environmental impact as well as Florida and Alabama's water demands.

Alabama and Florida appealed to the U.S. Supreme Court, which declined to hear the case on Monday, letting the decision of the Eleventh Circuit Court stand. Nonetheless, none of these decisions will create water in Lake Lanier or increase water resources enough to fully supply all needs during a prolonged drought, or even a shorter one, as demand for water continues to grow. Lake Lanier must be shared, and the demand for water during droughts has exceeded the resources available. No matter the outcome of the case, Georgia will have to take responsibility for managing its water resources. "More reservoirs" is not a rational response to drought, due to several factors, including the inevitable large-scale evaporation and the cost of construction. Drought is not only a part of our lives, but an increasingly recurring part of them, because of the impact that impervious ground cover and increased demand have had on the storage capacity of the watershed. Water usage must be rationalized against the complete hydrological cycle, with reliance on water conservation and reuse to stretch existing supplies for use during drought. Finally, litigation does not increase water supplies. Lives, livelihoods, food supply and cost, and lifestyles are all dependent on water; as a community, region and nation we need to understand that.

Monday, June 25, 2012

Hydraulic Fracking Poses Almost No Risk for Causing Earthquakes


The latest word from the National Research Council is that hydraulic fracking, whether in shale deposits or as a secondary stimulation for a traditional gas or oil well, has a very low risk of inducing earthquakes that can be felt by people, but underground injection of wastewater produced by hydraulic fracturing and geothermal wells has a somewhat higher risk of causing earthquakes. Although the vast majority of earthquakes that occur in the world each year have natural causes, earthquakes can be created by mankind. Induced earthquakes have been documented since at least the 1920s, when the first man-made large reservoirs were created behind dams. Other activities that can create (and have created) earthquakes are controlled explosions used in mining or construction, underground nuclear tests, and energy technologies that involve injection or withdrawal of fluids from the subsurface. Man-made earthquakes are caused by changes in pore pressure within the rock: impounding billions of gallons of water, or injecting or extracting fluid from a well, may change the stress acting on a nearby fault. This change in stress may result in slip or movement along that fault, creating a seismic event.

Historically, man-made earthquakes have not been very large, nor have they resulted in significant structural damage, but our ability to cause seismic events has increased over time as our technology to drill, pump and explode has advanced. Quantifying the hazard and risk from man-made earthquakes requires probability assessments, which may be either statistical (based on data) or analytical (based on scientific and engineering models). Although the general mechanisms that create induced seismic events are well understood, current computer modeling techniques cannot fully address the complexities of natural rock systems, in large part because the models generally lack information on local crustal stress, rock properties, fault locations and properties, and the shape and size of the reservoir into which fluids are injected or withdrawn. Earthquake probability is very site-specific; geology cannot be simplified or generalized to model it.

So the National Research Council's Division on Earth and Life Studies report titled Induced Seismicity Potential in Energy Technologies is a data-based analysis of earthquakes induced by mankind. The study, which compiles and analyzes all the available data, was requested by the Energy and Natural Resources Committee of the U.S. Senate to assess the potential of energy production and related activities to cause earthquakes, after small seismic events reported in Alabama, Arkansas, California, Colorado, Illinois, Louisiana, Mississippi, Nebraska, Nevada, New Mexico, Ohio, Oklahoma and Texas appeared to be related to hydraulic fracturing, energy development and (true) geothermal energy production. The National Research Council is a nonprofit based in Washington that provides scientific information for government decision-makers under the auspices of the National Academy of Sciences, the National Academy of Engineering and the Institute of Medicine. Its reports are based on data gathering and scientific analysis of the information gathered.

The report examines the potential for energy technologies -- including shale gas recovery using fracking, carbon capture and storage, geothermal energy production, and conventional oil and gas development -- to cause earthquakes. Hydraulic fracturing, commonly known as fracking, extracts natural gas by injecting huge volumes of water mixed with sand and chemicals in short bursts at very high pressure into deep underground wells. The process cracks the shale rock formation and allows natural gas to escape and flow up the well, along with some wastewater. The wastewater can be discarded in several ways, including injection at a separate disposal well. True geothermal energy harnesses natural heat from within the Earth by capturing steam or hot water from underground. The basic mechanisms by which these wells can induce earthquakes are fluid injection and extraction, and they are presently well understood. The report examined the data from over 35,000 fractured wells, 108,000 secondary oil and gas recovery wells, 13,000 tertiary oil and gas recovery wells, 6,000 hydrocarbon withdrawal wells, 30,000 wastewater disposal wells, 23 liquid-dominated geothermal well fields and 1 vapor-dominated geothermal field.

Analysis of the data collected at all these sites showed that the net fluid balance (the total balance of fluid injected and withdrawn) appears to have the most direct impact on changing pore pressure within the ground. Oil, gas and geothermal wells are typically designed to maintain a balance between the amount of fluid being injected and the amount being withdrawn, not only to prevent earthquakes but to maximize the life of the well. Geothermal wells appeared most likely to induce earthquakes, especially the wells at the vapor-dominated Geysers site in California, which had 300-400 earthquakes per year. In liquid-dominated geothermal wells, maintaining a constant fluid balance results in a fairly constant reservoir pressure, reducing the number of induced earthquakes significantly; the 23 liquid-dominated geothermal well fields experienced 10-40 earthquakes per year.

Only a very small fraction of the hundreds of thousands of oil and gas wells in the United States have induced earthquakes at levels that are noticeable to the public. An increase of rock pore pressure above ambient levels due to injection of fluids, or a decrease in pore pressure below ambient levels due to extraction of oil and gas, has the potential to produce earthquakes. However, analysis of the data showed that to create an earthquake, a combination of conditions has to exist simultaneously:
    A. Significant change in net pore pressure in a reservoir,
    B. A pre-existing near-critical state of stress along a fracture or fault, and
    C. Fault rock susceptible to brittle failure.

Oil and gas wells are designed to maintain a balance between the amount of fluid being injected and the amount of fluid being withdrawn to extend the life of the well. This fluid balance helps to maintain a fairly constant reservoir pressure and reduces the potential for induced earthquakes. In a conventional oil or gas reservoir, the hydrocarbon fluids and associated aqueous fluids in the pore spaces of the rock are usually under significant natural pressure. Fluids in the oil or gas reservoir flow to the surface when penetrated by a well bore, aided by pumping once the well is fully developed. The well or wells will produce until the reservoir reaches a point where insufficient pressure, even with pumping, exists to allow the wells to continue to produce at commercial volume. To extend the life of a spent well, various secondary and tertiary recovery technologies, referred to as enhanced oil recovery technologies, can be used to extract some of the remaining oil and gas. Both involve injection of fluids into the subsurface to push more of the trapped hydrocarbons out of the pore spaces in the reservoir and to maintain reservoir pore pressure. Secondary recovery often uses water injection or "water flooding," and tertiary technologies often inject carbon dioxide (CO2). Of the 108,000 oil and gas wells that used water flooding, only 18 have had one or more earthquakes. Of the 13,000 CO2 injection sites, none have experienced earthquakes.

Shale formations can also contain hydrocarbons, either gas or oil or both, depending on the formation. The extremely low permeability of these rocks has trapped the hydrocarbons as they developed in the rock and largely prevented them from migrating out of the rock over geologic time. These unconventional gas and oil reservoirs are developed by drilling wells horizontally through the rock and using hydraulic fracturing techniques to create new fractures in the reservoir that allow the hydrocarbons to migrate up the well bore. The water used to fracture the well is quickly released from the reservoir and does not impact the fluid balance. About 35,000 hydraulically fractured shale gas wells exist in the United States; only one instance of an induced earthquake has been identified in which fracking to access the shale gas is suspected, but not confirmed, as the cause.

Overall, hydraulic fracturing, or fracking, and traditional oil and gas wells have a very low risk of creating earthquakes. The wastewater disposal wells associated with fracking and secondary well development have been associated with 8 known earthquakes out of a total of about 30,000 disposal wells in use, but these earthquakes have captured the headlines and public concern. Wells used only for wastewater disposal normally do not have a detailed geologic review performed prior to injection, and the data are often not available to make a detailed review of these sites possible. The overall risk turns out to be small, but limited knowledge about the geology prevents modeling. Attempts at modeling pore pressure, temperature, and rock stress changes induced by injection and extraction to predict induced earthquakes have not been successful except where detailed knowledge of stress changes, pore-pressure changes, and fault characteristics is available for input, and those data are almost never available for disposal wells. The permanent addition of fluid to the subsurface without any fluid removal, and the heat gradient associated with geothermal production, appear to have the most direct impact on changing pore pressure in the subsurface over time and creating earthquakes.
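
The report's counts make the relative risk easy to see with some simple arithmetic. The short Python sketch below uses only the well counts quoted above to compute the fraction of wells of each type with documented induced seismicity; it is illustrative only, since the report's own counts and definitions are the authority.

    # Back-of-the-envelope rates of documented induced seismicity, using the
    # well counts quoted above from the NRC report. Illustrative only; the
    # report's own counts and definitions are the authority.
    well_data = {
        "hydraulically fractured shale gas": (1, 35_000),
        "secondary recovery (water flooding)": (18, 108_000),
        "tertiary recovery (CO2 injection)": (0, 13_000),
        "wastewater disposal": (8, 30_000),
    }

    for well_type, (events, wells) in well_data.items():
        print(f"{well_type}: {events} of {wells:,} wells (~{events / wells:.4%})")

Even the headline-grabbing disposal wells come out under three-hundredths of one percent.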


Thursday, June 21, 2012

Rio+20 UN Conference on Sustainable Development



The United Nations Conference on Sustainable Development (Rio+20) is taking place this week, June 20-22, 2012, in Rio de Janeiro, Brazil, to mark the 20th anniversary of the 1992 United Nations Conference on Environment and Development in Rio de Janeiro and the 10th anniversary of the 2002 World Summit on Sustainable Development in Johannesburg. Sustainable development's goal is to meet the needs of the present without compromising the ability of future generations to meet their own needs. It is an ideal that is seen as the guiding principle for long-term global development and consists of three pillars: economic development, social development and environmental protection.

The first UN Conference on Environment and Development was held in Rio de Janeiro in June 1992 and adopted an agenda for development in the 21st century, "Agenda 21: A Programme of Action for Sustainable Development." Agenda 21 is the integration of environment and development concerns to fulfill the basic needs of all people (very broadly defined), improve living standards for all, better protect and manage ecosystems and secure a safer, more prosperous future for all. The plan was to achieve this in a global partnership for sustainable development paid for by the first-world nations of 1992. The economic development principles detailed in the Agenda tend to sound like world socialism, but that happens when the goal is for every human being to share equally in food, water and wealth and to use the minimum resources to maximize the number of people who can live on the planet. I expect that the Rio+20 conference will fail, with European nations currently under economic stress backpedaling on the developing world's expectations for aid, and China and India not using their resources to solve their own problems. The U.S. is sending Secretary Clinton and hosting a U.S. Conference Center.

U.S. EPA Administrator Lisa Jackson is participating in the Rio+20 conference and has a US Center set up to hold events related to climate change, green economies, sustainable agriculture and sustainable cities. Events at the US Center will be live streamed at http://conx.state.gov/event/rio20/ if you want to watch the events the U.S. is hosting. The UN proceedings are being streamed at http://webtv.un.org/meetings-events/conferencessummits/rio20-13-22-june-2012-rio-de-janeiro-brazil/press-conferences/watch/forests-rio-dialogues-session-5-press-conference-rio20/1694285685001 if you prefer to watch those. Live streaming is a nifty opportunity for those of us who will never have the opportunity to attend the live conference.

While the stated objectives of Rio+20 are grand and broad, the most basic need is for every human to have access to clean drinking water and adequate sanitation. According to Rose George in her book "The Big Necessity," there are 2.6 billion people on earth without a toilet. The lack of sanitation is the real cause of most dirty drinking water, and results in the death of 1.8 million children each year. "The…2006 Human Development Report, an annual publication of the United Nations Development Programme (UNDP), wrote that 'when it comes to water and sanitation, the world suffers from a surplus of conference activity and a deficit of action.'" The need to improve the level of basic water and sanitation services and the management of the world's water resources and wastewater was emphasized at both previous Earth Summits, Rio (1992) and Johannesburg (2002). Those conferences called for actions to improve the way water is managed and used, but the message was diluted by other agendas. Water and especially sanitation services for the poor are lagging behind in key regions of the world and are being buried at the pre-conference meetings by other agendas.

Monday, June 18, 2012

EPA Goes After Particulate Air Pollution



U.S. Environmental Protection Agency (EPA) Assistant Administrator Gina McCarthy announced Friday that the agency is proposing new air quality particulate standards to go into effect in December 2012. Though EPA is required to review air standards every five years under the Clean Air Act and had apparently already decided on reducing the particulate level, it wanted to delay the new standards until 2013. However, a suit filed by eleven states (New York, Connecticut, Delaware, Maryland, Massachusetts, New Mexico, Oregon, Rhode Island, Vermont, Washington and California) and several environmental groups forced the EPA to act now. The court found that the EPA failed to adequately explain how the primary annual 2.5-micron particulate standard (PM 2.5) provided an adequate margin of safety for the most vulnerable: children, the infirm and the old.

A study of children in Southern California showed lung damage associated with long-term particulate exposure, and a multi-city study found decreased lung function in children associated with long-term particulate exposure. These two studies appeared to warrant a more stringent annual particulate standard, according to the court. United States particulate levels are a small fraction of the levels in the worst areas of the world: Beijing, New Delhi, Santiago (Chile), Mexico City, Ulaanbaatar (Mongolia), Cairo (Egypt), Chongqing (China), Guangzhou (China), Hong Kong, and Kabul (Afghanistan).

Currently, under the Clean Air Act the US EPA has established both annual and 24-hour PM2.5 air quality standards (as well as standards for other pollutants). The annual standard is 15 ug/m3 (an air quality index, AQI, of 49). The 24-hour standard was last revised to a level of 35 ug/m3 (an AQI of 99). These standards were last reviewed in 2006, but no change was made at that time. It was reported that the EPA's analysis found that a lower annual standard would have prevented almost 2,000 premature deaths each year. Combustion engines and coal-burning power plants are key contributors to PM2.5 particles, and according to the US EPA and World Health Organization, the smaller, finer pollutant particles measured by PM2.5 are especially dangerous to human health. Studies have shown an increased risk of asthma, lung cancer, cardiovascular problems, birth defects and premature death from particles smaller than 2.5 microns in diameter, which lodge deep in the lungs.
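
For readers curious how a concentration in ug/m3 becomes an AQI number, EPA uses linear interpolation between published breakpoint concentrations. Below is a minimal Python sketch; the breakpoints are my reading of the 2006-era 24-hour PM2.5 table, and EPA revises the table whenever the standards change, so computed values can differ from AQI equivalents quoted elsewhere.

    # EPA converts a PM2.5 concentration C to an AQI value by linear
    # interpolation within a breakpoint band:
    #   I = (I_hi - I_lo) / (C_hi - C_lo) * (C - C_lo) + I_lo
    # Breakpoints below are my reading of the 2006-era 24-hour PM2.5 table;
    # EPA revises the table when standards change, so check the current one.
    PM25_BREAKPOINTS = [
        # (C_lo, C_hi, I_lo, I_hi)
        (0.0, 15.4, 0, 50),
        (15.5, 40.4, 51, 100),
        (40.5, 65.4, 101, 150),
    ]

    def pm25_to_aqi(c: float) -> int:
        """Interpolate an AQI value for a PM2.5 concentration in ug/m3."""
        for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
            if c_lo <= c <= c_hi:
                return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
        raise ValueError("concentration outside this partial table")

    print(pm25_to_aqi(15.0))  # 49, the AQI quoted for the annual standard
    print(pm25_to_aqi(35.0))  # 89 with this table; quoted equivalents vary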

According to the Lung Association, the two biggest air pollution threats in the United States are ozone and particle pollution. Other pollutants include carbon monoxide, lead, nitrogen dioxide, sulfur dioxide and a variety of toxic substances, including mercury, that appear in smaller quantities. The EPA requires states to monitor air pollution to assess the healthfulness of air quality and ensure that they meet minimum air quality standards. The recently challenged Cross-State Air Pollution Rule (CSAPR) was intended in part to prevent pollution from one state from moving into other states and keeping them from meeting their goals, because several states have been unable to meet the current standard. PM2.5 particles can be either directly emitted or formed via atmospheric reactions. Primary particles are emitted from cars, trucks and heavy equipment, as well as residential wood combustion, forest fires and agricultural waste burning. The main components of secondary particulate matter are formed when pollutants like NOx and SO2 react in the atmosphere to form particles. Studies have also shown that air currents over the Pacific are carrying elevated particulate levels into California, presumably from China.

According to the American Lung Association State of the Air report, Pittsburgh had the highest particle pollution in the nation on an annual basis. Seven cities averaged particulate levels higher than the current 15 ug/m3 standard allows: Bakersfield, CA; Hanford, CA; Los Angeles, CA; Visalia, CA; Fresno, CA; Pittsburgh, PA; and Phoenix, AZ. The American Lung Association's latest report states that twenty cities have average year-round particle pollution below the current regulated level but above the proposed EPA air quality standard of 12-13 ug/m3. The 24-hour standard will remain unchanged at 35 ug/m3.

Thursday, June 14, 2012

Heat Pumps- Replace, Repair or Upgrade to Geothermal


The first sign of trouble was when I woke up one morning thinking that it smelled like rain. I was in bed with the air conditioning system on. I could think of several excuses why I might have had that thought and so ignored the first symptom; it would be several more weeks until the heat pump failed. It was a relatively long and cool spring, with nights in the 60's cooling the house, but come the first 90-degree day I knew my split system heat pump had failed.

My heating and cooling system, like those in a lot of newer homes in northern Virginia, is a split heat pump system that consists of an outdoor metal cabinet containing the condenser and compressor, and an attic unit containing the evaporator coil and the air handler that sends the cool air through the duct system in summer and hot air in winter. In the heating cycle, the air-source heat pump takes heat from the air outside the home and pumps it inside past refrigerant-filled coils. Inside the heat pump system are two fans, two refrigerant coils, a reversing valve and a compressor. The outdoor unit contains a coil, a fan and the compressor. The reversing valve switches the direction of refrigerant through the cycle, so the heat pump can deliver either heating or cooling.

The effectiveness of a heat pump is based on the temperature difference between the source and the sink, and on which cycle it is in. Heat pumps are more effective for heating than for cooling if the temperature difference is held equal. This is because the energy used to power the compressor is converted to useful heat when in heating mode and released into the house as extra heat; the condenser is normally outdoors, and during the cooling cycle the compressor's dissipated work is not put to a useful purpose. Air-source heat pumps are best suited to relatively warm climates, such as the southeastern U.S., because when temperatures are low a heat pump's Coefficient of Performance (COP) falls dramatically. According to the Department of Energy, a 7.5-ton rooftop heat pump that has a high-temperature COP of 3.0 can have a low-temperature COP of 2.0 or even lower. And at very low temperatures, a heat pump can require supplemental heat, typically in the form of electric resistance heat, just to function, further reducing effective heating efficiency.
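
COP is simply heat delivered divided by electricity consumed, so for a fixed heat load the electricity drawn scales as 1/COP. A quick Python sketch of the DOE example above (the 100 kWh heat load is just an illustrative number):

    # COP = heat delivered / electrical energy consumed, so for a fixed
    # heating load the electricity drawn scales as 1/COP. The 100 kWh heat
    # load is a hypothetical, illustrative number.
    def electricity_needed(heat_load_kwh: float, cop: float) -> float:
        """Electrical input (kWh) to deliver a given heat load at a given COP."""
        return heat_load_kwh / cop

    heat_load = 100.0  # kWh of delivered heat
    for cop in (3.0, 2.0):
        print(f"COP {cop}: {electricity_needed(heat_load, cop):.0f} kWh of electricity")
    # COP 3.0 -> 33 kWh; COP 2.0 -> 50 kWh. The same heat costs 50% more
    # electricity when cold weather drops the COP from 3.0 to 2.0.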

The most effective type of heat pump is the geothermal heat pump. In winter it collects the Earth's natural heat through a series of pipes, called a loop, installed below the surface of the ground or submersed in a pond, lake or well. The temperature six feet beneath the ground surface is cooler in summer and warmer in winter than the ambient air temperature, and fairly constant. Many loop systems are not installed deep enough in a suitable medium to maintain a constant temperature, but the loop nonetheless draws excess heat from the house in summer and allows it to be absorbed by the Earth. I had always assumed that when the time came I would replace my heat pump with a geothermal unit.

Despite the fact that my heat pump system is under 8 years old, and heat pumps should last 12-14 years, the evaporator coil corroded and leaked enough Freon (R22) that the system could no longer cool. The corrosion of the coil was obvious upon inspection, but the Freon level had been fine 2 months earlier when the system had been serviced, so I was taken a little by surprise to find myself having to decide whether to replace the coil, replace the entire system with an Energy Star heat pump, or upgrade to a geothermal heat exchanger now. A heat pump should last longer than 8 years. This is the first major repair the system has required, and I probably could get a couple more years out of it if I replaced the coil, but there is no guarantee, and the outdoor unit had started to show rust two years ago. If I replace the entire system, I will probably get another 8-10 years before I have any major problems.

In truth we were never happy with the system; it could never keep the master bedroom cool in summer. The master bedroom has unobstructed southern exposure, and though we installed drapery, window films and additional insulation as recommended by the Building Envelope Research program of the US Department of Energy's Energy Efficiency and Renewable Energy unit, the bedroom was still never cool enough in summer. The attic, crawl spaces and eaves were insulated with cellulose. The pipes, end caps, knee wall, sump pumps and all identified areas were sealed, and while my energy bills were reduced significantly, I could not get the bedroom cool on the hottest days. In the winter the passive solar helps, and I keep the house at 67 degrees Fahrenheit, which the heat pump has never had any problem maintaining. This is an opportunity to make sure that the heating and cooling system is sized and ducted optimally for my house and lot. The Manual J calculation showed my existing heat pump to be slightly undersized for the house. The Department of Energy has lots to say about ducting problems with air handling systems: in a typical home, about 20% of the air that moves through the duct system is lost due to leaks, holes, and poorly connected ducts. The result is higher utility bills and difficulty keeping the house comfortable, no matter how the thermostat is set. Heating and cooling represent 40%-50% of power use in the typical American home. An analysis of my electric bills showed that the heat pump operated on average about 7 months a year and that I spent about $1,260 annually operating the system. (My electric rates have been steady for over 5 years and my solar panels supply all my other electrical needs.)
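
If you want to repeat that bill analysis, the idea is to treat the shoulder months, when the heat pump is mostly idle, as a baseline and attribute the excess in the other months to the heat pump. A minimal sketch with made-up monthly figures (my own attribution was simpler, since solar covers the non-HVAC load):

    # Sketch of the electric-bill analysis: months when the heat pump runs
    # are compared against shoulder months to isolate its cost. All dollar
    # figures below are hypothetical, for illustration only.
    monthly_bills = {           # hypothetical $/month
        "Jan": 210, "Feb": 195, "Mar": 140, "Apr": 60, "May": 60,
        "Jun": 150, "Jul": 220, "Aug": 230, "Sep": 140, "Oct": 60,
        "Nov": 130, "Dec": 200,
    }
    shoulder = ["Apr", "May", "Oct"]  # heat pump mostly idle
    baseline = sum(monthly_bills[m] for m in shoulder) / len(shoulder)
    heat_pump_cost = sum(max(0, bill - baseline) for bill in monthly_bills.values())
    print(f"baseline ${baseline:.0f}/month, heat pump ~${heat_pump_cost:.0f}/year")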

Most manufacturers advertise energy savings of up to 35%-75% when converting from an average existing system to a geothermal system. DOE states that with an Energy Star system it is possible to save 10%-20% of the energy cost of an existing system, giving an implied savings of 15%-35% for a geothermal system versus a new Energy Star system. If I assume that the geothermal heat pump would save me 50% of the electricity used for operating the heat pump, that is about $600 per year. There are several calculators on manufacturers' web sites to perform better calculations. I found the Bosch calculator and used it for projecting savings from a geothermal system as compared to an EER 13 air-to-air heat pump. Using inputs for a well-insulated home in the Washington DC metropolitan area converting from propane-heated water and an air heat pump to a geothermal heat exchanger, the Bosch website calculator gave me a savings of about $971, with $295 of the savings from hot water heating. So my back-of-the-envelope calculation was not a bad guess, and the hot water heating cost is an important element in the cost calculation.

Until December 31, 2016, a 30% federal tax credit is available on the total cost of a qualified heat exchanger, reducing the capital cost. The largest hurdle to the widespread adoption of GHP technology is the one I am facing now: the capital cost of initial installation. The heat exchange loop portion of the GHP system can be half or more of the overall geothermal heat pump system cost (and equal to the total cost of a traditional furnace and air conditioner). The geothermal heat pump requires at least 75 feet of tubing (in my case either vertical wells or standing column wells) for each ton of size. The costs I have been quoted were $3,000-$4,000 per ton for installation of the heat exchange loop or well; the difference in cost was the amount of damage that would be done to my garden. If it is indeed a 4-ton system that the house needs, the additional cost of the geothermal heat pump would be a minimum of $12,000 and could be as much as $16,000, plus any costs to reconfigure piping in my completely finished basement. Even with a 30% federal tax credit for the entire system, the payback might take 10 or more years if the actual savings turned out to be 50% of the electricity used by the air heat pump system.
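
The payback arithmetic is simple enough to sketch. In the Python below I apply the 30% credit only to the extra geothermal cost, which is a simplification (the credit actually applies to the whole qualified system), and the annual savings figure is the assumption that drives everything:

    # Simple payback on the geothermal upgrade using the figures above.
    # Simplification: the 30% credit is applied only to the extra cost,
    # though it actually applies to the whole qualified system. The annual
    # savings figure is the key assumption: my rough guess was $600/year,
    # the Bosch calculator said about $971/year.
    def simple_payback(extra_cost: float, tax_credit: float, annual_savings: float) -> float:
        """Years to recover the net extra capital cost from annual savings."""
        return extra_cost * (1 - tax_credit) / annual_savings

    for extra_cost in (12_000, 16_000):
        for savings in (600, 971):
            years = simple_payback(extra_cost, 0.30, savings)
            print(f"${extra_cost:,} extra at ${savings}/yr saved: {years:.1f} years")
    # Ranges from about 8.7 years ($12,000 extra, $971/yr) to 18.7 years
    # ($16,000 extra, $600/yr), consistent with "10 or more years."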

It now appears that this decision is a close call, and I need to get detailed proposals to determine the actual cost, the damage to the house and garden, the Coefficient of Performance (COP), and the Energy Efficiency Ratio (EER) to obtain a better estimate of capital versus operating costs. I also need time to think about the benefits of an absolute reduction in energy usage while still maintaining my creature comforts. Installing the right size equipment for the home is essential to getting the best performance and comfort, and now is my opportunity to verify that the new system I install is sized correctly for the house and lot. A system that's too large will not keep your home comfortable because of frequent 'on/off' cycling, but a system that is too small will not be able to cool the house on the hottest of days. The duct system, which has already had all its leaks sealed, also needs to be evaluated for adequacy and optimal layout. The system selected will have an impact on reliability, at least according to Consumer Reports. Finally, I need to make sure that the HVAC contractor I hire has insurance, a contractor's license without complaints, and good references for similar sized and types of projects.

Monday, June 11, 2012

The Rural Crescent, the Occoquan and the Water Supply


The Board of Supervisors of Prince William County has allocated $60,000 for a study to examine whether the goals behind the creation of the Rural Area of the county have been met through the implementation and management of the Rural Crescent. In addition, the study is to identify other rural preservation tools that would allow the elimination of the Rural Crescent's restrictions on development density. This is a direct result of a motion by Supervisor Martin Nohe at the regular March meeting of the Prince William County Board of Supervisors.

The Rural Crescent was created in 1998 and was originally intended as an urban growth boundary for the county, designed to preserve the agricultural heritage and force redevelopment along the Route 1 corridor rather than development in the remaining rural areas. This was to be accomplished by limiting development to one home per 10 acres with no access to public sewers. The Rural Crescent has been chipped away at for years, but still contains 80,000 acres; however, active farming in Prince William continues to decrease. To adequately judge the utility of the Rural Crescent, the study must consider its impact on water resources and water ecology. While the Rural Crescent may have been the wrong policy to preserve our agricultural heritage, it has been a success at preserving water resources, protecting our groundwater and supporting the ecosystem of our estuary. In addition, continued redevelopment of areas with preexisting infrastructure will allow Prince William County to improve storm water management in those areas and score nutrient points for the EPA-mandated TMDL, as well as revitalize older areas of the county and support sustainable development. The Rural Crescent is about water, specifically groundwater.

The Rural Crescent in Prince William aligns roughly with the Culpeper groundwater basin, one of the more important watersheds in Virginia and essential to the health of the Occoquan Reservoir, which in turn is an essential element in the drinking water supply of Fairfax Water. The Prince William Service Authority (PWSA) obtains most of the drinking water it distributes in the county wholesale from Fairfax Water. Besides purchasing water from Fairfax Water, PWSA operates the Evergreen water wells that draw water directly from the Culpeper Basin, and thousands of home owners have private wells that also draw from the aquifer. The Virginia-American Water Company also distributes water purchased from Fairfax Water. Any changes in land use have the potential to negatively impact groundwater, the watershed and the Occoquan Reservoir, and should be considered in the study of "other preservation tools."

The Rural Crescent is located within the northeast and eastern quadrants of the Culpeper basin and consists of an interbedded sequence of sedimentary and basaltic rocks that is highly fractured and overlain by a thin cover of overburden. While groundwater flows, generally speaking, west to east, the fractures within the rock run predominately north-south. Contaminants can enter the groundwater at these fractures and zigzag through the aquifer, but the fractures also serve as recharge areas. Groundwater is typically protected against contamination from the surface by the soils and rock layers covering the aquifer, but there is inadequate overburden in much of the Rural Crescent. Once contaminated, groundwater is very difficult to clean, and in a fractured rock system there is limited if any natural attenuation; the aquifer could be polluted beyond our ability to remediate.

Generally, groundwater in the Culpeper Basin is replenished each year through precipitation. Groundwater recharge through precipitation requires adequate area for infiltration of rainwater, control of the sheet flow created by roads and paved areas, and protection of the most geologically favorable infiltration points. Precipitation and snow melt flow over the ground as surface runoff. Not all runoff flows into rivers; much of it soaks into the ground as infiltration. Some water infiltrates deep into the ground and replenishes the saturated subsurface rock of the aquifer, which stores huge amounts of freshwater for long periods of time. Some infiltration stays close to the land surface and can seep back into rivers, creeks, and ponds as base flow, and some ground water finds openings in the land surface and emerges as fresh water springs.

According to the U.S. Environmental Protection Agency, impervious cover levels of 10% can significantly impact watershed health by increasing stormwater runoff. When runoff volume increases, runoff velocity increases, and peak storm flows cause flooding and erosion. Increased stormwater velocity increases soil erosion, increases nutrient contamination and reduces water infiltration into groundwater. The groundwater is essential as the base flow to the streams and rivers that feed the Occoquan Reservoir during the dry months; the groundwater stored in the watershed can supply adequate water to maintain river flow during droughts. Maintaining open areas provides groundwater recharge and controls runoff. Decisions about the fate and management of the Rural Crescent will impact groundwater quantity and quality, and in turn the water flows to the Occoquan Reservoir during dry periods. Flow to the Occoquan Reservoir is essential in managing the drinking water withdrawals from the Potomac River. The Interstate Commission on the Potomac River Basin, ICPRB, manages the Potomac River drinking water allocations for the entire region by "suggesting" the quantity that Fairfax Water draws from the Occoquan and Potomac daily. Prince William County's decision on the fate of the Rural Crescent could impact drinking water supplies in Fairfax, Maryland, and DC as well as our own county.
 
The "rural preservation tools" to be investigated as part of the study are sustainable community concepts: high-density communities utilizing the strategies of Low Impact Development (LID), which include dedicated open space. While high-density communities built adjacent to dedicated open space or cute community farms, as is being done in Loudoun, might preserve our agricultural heritage, they will not guarantee the preservation of our ecosystem and water. When development disturbs more than 10% of the natural land by covering surfaces with roads, driveways, walkways, patios, and homes, the natural hydrology of the land is disturbed, irreparably disturbed. These developments, while much better than traditional developments, still disturb more than half the land area by significantly increasing building density.

The lack of overburden limits natural protection of the aquifer, but has allowed easy infiltration. The sedimentary rocks of the Rural Crescent are productive aquifers and feed not only the groundwater wells that provide drinking water to Evergreen and other communities, but also the tributaries to Bull Run, assuring the base flow to the rivers and streams that feed the Occoquan. Our freshwater resources need to be managed as a whole. Development that impairs the recharge of the aquifer can impact the entire region: decreasing water levels and aquifer storage, reducing stream base flow and lake levels, and causing loss of wetland and riparian ecosystems, saltwater intrusion and changes in groundwater quality. Our future and our children's future is our water. We can't allow it to be destroyed by paving roads and building houses for short-term gain.

Thursday, June 7, 2012

Sharing Our Water in the Potomac Watershed



The May 1st Potomac Basin Drought Monitor indicated that most (96.8%) of the Potomac River Basin was abnormally dry (D0). Stream flows measured at Point of Rocks and Little Falls were below median levels. Precipitation in the Basin was 1.2 inches below normal in April. Most of the groundwater monitoring wells were normal to low across the Basin. I was not the only one wondering if this would be the beginning of a drought, and I worried about my own water supply. Then the rains came.

For the past two weeks, as thunderstorms have rolled through the region, I checked the water level in the nearby U.S. Geological Survey (USGS) well 49 V1 almost daily and watched as the water level rose. For two weeks the regular downpours have kept my 15 new trees well watered while the groundwater level has risen 3 feet in USGS well 49 V1! My drinking water well is undoubtedly as flush as 49 V1, and I'm relieved. The water level has gone from the 10th percentile to the 90th in two fairly wet and stormy weeks. My garden is beautiful, and I can with a clear conscience plan to fill the "gator bags" on my new trees during the dry days of summer. I've seen first-hand how directly rainfall and its percolation into the ground impact groundwater.

The groundwater and rain also feed the river at the bottom of my land. That water flows into Bull Run at Sudley Springs and on to the Occoquan River. The rain also feeds the tributaries of the Potomac River. The Washington metropolitan area gets nearly 90% of its drinking water from the Potomac River. The remaining 10% of the region's supply comes from the Patuxent and Occoquan rivers, Goose Creek (a Potomac tributary that runs through Loudoun County), Lake Manassas (which feeds the Occoquan), the Jennings Randolph and Little Seneca reservoirs, and groundwater resources that serve small community supplies and private wells like mine. Though I fixate on the water resources in my little corner of the region, which I have no ability to supplement, the Potomac River is truly the lifeblood of the region. The Potomac is the region's major source of drinking water, accepts the clean effluent from waste treatment plants, cools power generation plants, and, with the C&O Canal, Lake Manassas, and the Occoquan Reservoir, provides water recreation and breathtaking scenery to our communities.

The Washington Aqueduct Division of the U.S. Army Corps of Engineers (WAD), the Fairfax County Water Authority (FCWA) and the Washington Suburban Sanitary Commission (WSSC) furnish about 95% of the metropolitan region's water. A number of distribution agencies like the Prince William Service Authority purchase some or most of their water wholesale from the big three and distribute that water in their communities. A number of smaller agencies, and the self-supplied portions of distribution agencies, supply the remaining 5% of the water.

For more than two centuries the waters of the Potomac seemed unlimited, so the region is not hampered and tied by water allocation agreements created centuries ago, the kind that bind many areas of the arid west to fixed and rigid allocations. Instead, the Interstate Commission on the Potomac River Basin (ICPRB), which was authorized by Congress in 1940 to address the pollution of the river, facilitated the creation of the Potomac River Low Flow Allocation Agreement in 1978 in response to the droughts of the 1960's and 1970's. Back in the days when the ICPRB was formed, raw sewage flowed directly into Four Mile Run, Hunting Creek, Hooffs Run, and the Potomac River. The river tributaries were putrid and clogged, a foul mix of bubbling, decomposing human waste in brown waters. Shorelines were devoid of wildlife, and tests showed dozens of disease-causing pathogens. Water pollution was so bad that propeller airplane passengers from D.C. (including the members of congress) could look down and see the sludge. The extent of the problem was documented in 1949 by the Izaak Walton League, one of the first conservation organizations in the U.S., in a film showing water conditions in Alexandria.

The ICPRB was one of the first organizations with a congressional mandate to consider water resources on a watershed basis, rather than along political boundaries. Now, however, the focus of the ICPRB has changed. Sewage is no longer released into the Potomac (unless the combined sewer systems in Baltimore or Washington overflow). We have reached the point in population density and development where, during times of drought, natural flows on the Potomac are not always sufficient to allow water withdrawals by the utilities (including power generation, which takes an awesome amount of water) while still maintaining a minimum flow in the river to sustain aquatic resources. ICPRB allocates and manages the water resources of the river through the jointly owned Jennings Randolph Reservoir (built in 1981), the Potomac River Low Flow Allocation Agreement (1978) and the Water Supply Coordination Agreement (1982), which designated a section of the ICPRB as responsible for allocating water resources during times of low flow. These steps improved the reliability of the water supply and ensured maintenance of in-stream flows to meet minimum aquatic habitat requirements as defined by the Maryland Potomac River study in 1981. The section of ICPRB responsible for all this is known as the Section for Cooperative Water Supply Operations on the Potomac (CO-OP), and is formally empowered in its duty by the Water Supply Coordination Agreement of 1982.

The ICPRB is intended to coordinate all the political entities, Maryland, Virginia, Fairfax Water, Washington DC, the federal government and the counties and cities within and dependent on the watershed, to address the basin's major challenges, including water quality impairments, water supply and restrictions, flooding, groundwater use, nonpoint source pollution and emerging contaminants. The ICPRB's role has been somewhat overshadowed by the recent EPA-mandated Chesapeake Bay TMDL, but the ICPRB remains primary in coordinating water supply management and spearheading coordination of effluent water quality issues as they impact drinking water supplies. In their most recent water supply update, on June 4th, ICPRB assures us that "there is sufficient flow in the Potomac River to meet Washington metropolitan area's water demands without augmentation from upstream reservoirs." After the recent rains it appears unlikely that the Washington metropolitan area's backup water supply, primarily the Jennings Randolph reservoir helped by the smaller supply at Little Seneca reservoir, will be needed during the summer of 2012. ICPRB also brings you the Potomac River Watch.
Thanks to Curtis Dalpra, Communications Manager at CO-OP, for his help.

Monday, June 4, 2012

Lightning Rods and Lightning Protection Systems-Should You Have One?


The image is from Lightning Prevention Systems, Inc. in New Jersey, a state that has only about 45,000 lightning strikes a year.

Though summer is the peak season for lightning, it does strike year round. According to the National Oceanic and Atmospheric Administration (NOAA), 25 million cloud-to-ground lightning strikes occur in the United States each year. Generally speaking, lightning strikes are geographically concentrated in the southeast, south and mid-west. Until we moved to Virginia (with an annual average of 344,702 lightning strikes) from California, I had not thought much about lightning. Texas is the state with the most lightning strikes, averaging almost 3 million a year, and much smaller Florida gets 1.4 million strikes a year! The creation of lightning is a complicated process. According to NOAA, we know what conditions are needed to produce lightning, but there is still debate about exactly how lightning forms; the exact way a cloud builds up the electrical charges that lead to lightning is not completely understood. A channel of negative charge, called a step leader, shoots to the ground in a zigzag of roughly 50-yard segments in a forked pattern. As it nears the ground, the negatively charged step leader is attracted to a channel of positive charge reaching up from the earth, a streamer, normally through something tall, such as a tree, house, or telephone pole. When the oppositely charged leader and streamer connect, a powerful electrical current begins flowing. A flash can consist of one or as many as 20 return strokes.

Cloud-to-ground (CG) discharges are the most common, but there are other known types of lightning that have no channel to ground. These cloud discharges are classified as in-cloud (IC), cloud-to-air (CA), or cloud-to-cloud (CC). There is also lightning that originates at the top of the thunderstorm, an area that carries a large positive charge. Lightning from this area is called positive lightning; it frequently strikes away from the rain core, either ahead of or behind the thunderstorm, and can strike as far as 5 or 10 miles from the storm. It typically has a longer duration, so fires are more easily ignited, and it usually carries a high peak electrical current, which is more likely to kill.

The air within a lightning strike can reach 50,000 degrees Fahrenheit, and one ground lightning stroke can generate between 100 million and 1 billion volts of electricity. Lightning is a major cause of building fires, even though highly effective (though not perfect) protection has long been available. In the 1700s Benjamin Franklin (remember the kite and key story) proposed a method of protecting structures from the effects of lightning using elevated rods and down-conductors. His ideas were furthered by the work of Nikola Tesla, Michael Faraday and other scientists to develop and document successful designs. In 1904, the National Fire Protection Association (NFPA) established the American standard for installation of lightning protection systems, now known as NFPA 780, the Standard for the Installation of Lightning Protection Systems. Historical documentation shows that fire losses to protected buildings were between 1.3% and 7% of the damage to unprotected buildings during the first two decades of the twentieth century, as the standard developed and our knowledge increased. The experience of the fire-insurance companies showed that if buildings were properly "rodded," they would be practically safe from damage by lightning. Remember, this is based on statistics: lightning is most likely to hit the highest object, and is also more likely to strike something with a good path to ground, such as a lightning protection system.

Installation of such a system in conformance with NFPA 780 is not a simple matter and can cost thousands of dollars depending on the size and shape of the house. Whether it makes economic sense for you depends on many factors, such as location, the value of the property in dollars and sentiment, insurance and the amount of the deductible, what you can afford and how you feel about lightning. My husband and his brother, who grew up in the same house that was struck by lightning, came to different decisions. To provide effective protection for structures, a lightning protection system must include a sufficient number of rods with tips exposed and extending above the structure. These lightning rods, now called air terminals, become the preferred strike receptor for a descending step leader from the thundercloud. The rapidly varying lightning current must then be carried away from the building into the earth through a down-conductor system that provides the path of least resistance; the impedance to the flow of current must be so low that "side flashes" to other objects in the vicinity of the system do not occur. It is essential that the system is designed to keep it from overheating or being dislodged from the structure by the force of the power surge. All nearby metal components of the structure (solar panels, generator, roof vents, water and sewer pipes, etc.) must be properly connected to the down-conductor system to minimize the probability of side flashes and ensure the flow of current to the earth. The connections from the down conductors to the earth must allow the lightning current to flow into the ground without the development of large electrical potential differences on the earth's surface and without creating other hazards.

To verify that an existing system is designed and installed correctly, you could have it certified. Certification of a lightning protection system assures that the system meets the standards set forth by NFPA 780 and Underwriters Laboratories (UL 96A). If, like me, you are going to install a lightning protection system, make sure your designer and installer has at least a Journeyman Installer certification from the Lightning Protection Institute (LPI) and/or is certified by Underwriters Laboratories. Journeyman Installers must pass two levels of tests for LPI certification. Master Installers must pass a series of four tests and carry a brass card issued by LPI; annual re-testing is required for continued certification. There is also a Master Installer and Designer designation, which might be the best choice. You can go to the LPI web site to obtain a list of certified installers in your area; likewise, the Underwriters Laboratories website has a list of certified installers. Many counties do not require a building permit to install a lightning protection system, so without the help of the county building department staff you will have to make sure that your system is designed and installed in conformance with NFPA 780 or UL 96A. There are differences between the standards, but they are similar in many ways. The American Society of Agricultural Engineers (ASAE) standards also have specifications for lightning protection; though they are geared to farms, the principles are the same. ASAE develops and publishes consensus standards for agricultural tractors and machinery, agricultural structures, turf and landscape equipment, irrigation and drainage equipment and systems, environmental aspects of food production and resources management. In addition, you might want to take a look at the installer's other jobs and make sure that they are as unobtrusive as possible. Aluminum cables are likely going to be run across your roof and down the sides of your house, in addition to a rod installed on every gable and at least every 20 feet along a roof span. Check for a valid contractor's license, references and the Better Business Bureau.

Though copper, with 98% conductivity when annealed, is the preferred material for lightning protection on farm structures, alloyed metals are typically used today. Aluminum, while having only 59% of the conductivity of copper, is acceptable as a substitute for copper in lightning protection when electrical-grade aluminum is used. Aluminum is subject to corrosion by ocean air or soil, but its resistance in other environments can be excellent due to a thin surface layer of aluminum oxide that forms when the metal is exposed to air, effectively preventing further oxidation. Aluminum is preferred on structures with aluminum trim to prevent the corrosion that results from aluminum coming in contact with copper wires; add rain to copper and aluminum and you are making a battery.

The lightning protection system must terminate into the earth to dissipate the charge, using a copper-clad steel cable, because the slightly acidic nature of the soil will corrode aluminum and impede system performance. The choice of material used on the structure is based on cost and the other materials of construction; copper has less resistance than aluminum, but it can have problems other than cost. The down conductors should be as widely separated as possible; each building must have at least two, and there should be at least one down conductor for each 100 feet of perimeter. Usually aluminum is used against the house because copper gutters are not as common these days and aluminum is much cheaper, so bi-metal connectors must be used to make the transition to the copper-clad steel cable in the earth to assure proper grounding and dissipation of the lightning. Bi-metal transition connectors must also be used if the air terminals are copper. Underground metallic piping, including water piping, well casings, sewer and septic lines, must be considered in the design or they are potential points of failure. Lightning arresters should also be installed on the lead-in wire or cable for the electrical supply and bonded to the lightning protection system directly or through a common ground. NOVEC, my power company, offers to install and maintain a "collar" system at the electrical meter for $10 per month, or you can purchase separate surge protectors.
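
Those rules of thumb (a rod on every gable and at least every 20 feet of roof span; at least two down conductors, one per 100 feet of perimeter) are enough to ballpark the scope of a quote. A minimal Python sketch, with a hypothetical 60-foot ridge, 2 gables and a 200-foot perimeter; a real NFPA 780 design has many more requirements:

    # Rough counts from the rules of thumb above; not a substitute for an
    # NFPA 780 design, just a way to sanity-check an installer's proposal.
    import math

    def min_air_terminals(ridge_feet: float, gables: int) -> int:
        """Rods along the ridge (one every 20 feet, fence-post count) plus one per gable."""
        return math.ceil(ridge_feet / 20) + 1 + gables

    def min_down_conductors(perimeter_feet: float) -> int:
        """At least two down conductors, and one per 100 feet of perimeter."""
        return max(2, math.ceil(perimeter_feet / 100))

    # Hypothetical house: 60-foot ridge, 2 gables, 200-foot perimeter.
    print(min_air_terminals(60, 2))   # 6 air terminals
    print(min_down_conductors(200))   # 2 down conductors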

In summary, all the science and experience of a century prove that properly designed and installed lightning protection systems work, though there is still some dispute about the most effective design theory. Each year, lightning is the cause of an estimated 17,400 fires, 55% of which occur outdoors and 41% inside structures. According to the Federal Emergency Management Agency (FEMA), the dollar loss per lightning fire is nearly twice that of all U.S. fires; in 1998 that was around $10,000, and it is probably close to double that now. Roofs, sidewalls, framing, and electrical wires are the areas most often ignited in lightning fires. Though a lightning protection system will have little or no effect on how likely it is that lightning will strike in the immediate area, the energy will be conducted directly to ground without having to go through your house, its internal wiring and its electrical equipment and appliances. Whether it makes sense to install one is an economic decision based on where you live, your insurance deductible and other factors. With our very high insurance deductible, and the irreplaceable paper book collection at the centerpiece of our lives that would not survive a fire caused by a lightning strike or the water and mold associated with fire response, even at a cost of thousands of dollars we will install one here in Virginia.