Monday, July 30, 2012

Dimock, Gasland and the EPA – Fracking and Water

Last Wednesday, July 25th, 2012, the U.S. Environmental Protection Agency announced that it had completed its sampling of private drinking water wells in Dimock, Pa. Based on the outcome of that sampling, EPA determined that the levels of contaminants present do not require additional action by the Agency; with the existing private well treatment systems, the water is safe to drink. Regional Administrator Shawn M. Garvin said, “The sampling and an evaluation of the particular circumstances at each home did not indicate levels of contaminants that would give EPA reason to take further action. Throughout EPA's work in Dimock, the Agency has used the best available scientific data to provide clarity to Dimock residents and address their concerns about the safety of their drinking water.” The EPA’s news release is intended to end the story of Dimock, but bureaucratic speak is never really clear. So, let’s see if we can bring clarity and accuracy to the end of the story of Dimock, PA.

The Safe Drinking Water Act (SDWA), which is how the EPA looks at water quality, defines a contaminant as “any physical, chemical, biological, or radiological substance or matter in water” (U.S. Code, 2002; 40 CFR 141.2). This very broad definition of contaminant includes every substance (including minerals) that may be found dissolved or suspended in water, everything but the water molecule itself. However, the SDWA has MCLs and secondary standards for only 91 contaminants. Groundwater aquifers are potentially vulnerable to a wide range of man-made and naturally occurring contaminants, including many that are not regulated in drinking water under the SDWA. The presence of a contaminant in water does not necessarily mean that there is a human-health concern. Whether a particular contaminant in water is potentially harmful to human health depends on the contaminant’s toxicity and concentration, as well as other factors including the susceptibility of individuals, the amount of water consumed, and the duration of exposure. EPA did a final round of testing of the private wells in the Dimock area to make sure that the water from the drinking water wells was safe to consume and that all identified contaminants were within the acceptable levels determined by a risk analysis. Most private well owners rarely test their well water quality, and very few ever consider testing for the entire suite of contaminants regulated under the SDWA, let alone the list of potential contaminants that EPA tested for here.

Dimock, Pennsylvania is located in Susquehanna County near the New York border, overlies the Marcellus Shale, and was one of the early areas developed with hydraulic fracturing, or fracking. Dimock was made famous by its appearance in the Josh Fox movie Gasland. In Dimock, Mr. Fox met families who demonstrated on camera how they were able to light their running tap water on fire due to the methane gas present in their wells. That was a rather spectacular display. Residents also claimed to be suffering from numerous health issues related to contamination of their well water. Methane is a simple asphyxiant that displaces oxygen from air. Methane released from water into an enclosed environment could cause serious symptoms. Exposure to low-oxygen environments produces symptoms of central nervous system depression, including nausea, headache, dizziness, confusion, fatigue and weakness. Even if there were no other contaminant of concern present in the water, the symptoms of central nervous system depression could be very frightening.

Cabot began natural gas fracking in the Dimock area in 2008. On January 1, 2009, an explosion was reported in an outside, below-grade water well pit at a home located in Dimock. In Pennsylvania, private drinking water wells are not regulated and are often shallow, dug wells housed in a pit. The Pennsylvania Department of Environmental Protection (PADEP) collected samples from wells that provide drinking water to 13 homes located near the Cabot fracked gas wells, and these samples contained elevated levels of dissolved methane gas. (During the year the number of impacted homes would expand from 13 to 18.) The presence of dissolved methane and/or combustible gas was noted in the private wells within six months of completion of drilling of the Cabot gas wells, and Cabot was presumed to be responsible for the pollution, pursuant to Section 208(c) of the PA Oil and Gas Act, 58 P.S. §601.208(c). None of the homes dependent on their private drinking water wells had done any extensive testing of their water quality before Cabot began fracking in the area, and all contaminants found (except for fecal coliform) are sometimes naturally present in groundwater. The two important questions raised were: Is the water safe to drink, and did Cabot cause any change in the water quality by fracking in Dimock? PADEP presumed Cabot responsible and cited them for improper or insufficient cementing of the well casings. In addition, there had been several other violations for improper storage of drilling mud, diesel spills, and failure to maintain records and drillers’ logs.

In November 2009 the PADEP entered into a consent agreement with Cabot for methane and metals removal systems for eighteen private wells in the Dimock area. The agreement was later revised several times. The revised agreement required Cabot to pay the impacted families settlements worth twice their properties’ assessed values, deposit the money into an escrow account, notify the residents that the money was available, and install a water treatment system (a filter or ion exchange system) in each impacted home. Each well owner had to enter into the agreement with Cabot to have the treatment system installed. Until the treatment systems were installed, Cabot was to provide delivered bottled water. There were no plans for confirmation testing to demonstrate the effectiveness of the filtration systems. There were eighteen private wells that were part of the PADEP/Cabot agreement. By 2011 only six well owners had signed agreements and had water treatment systems installed in their homes. However, most of these were buying bottled water because they did not feel confident that the treatment systems were effective. Water treatment systems are often simple and unimpressive in appearance, and verification sampling should have been performed. Twelve of the private well owners had not signed the agreement with Cabot, and instead eleven (I could not trace the 12th) had filed a civil suit against the company. These owners were being provided delivered water by Cabot. On November 30, 2011, with the approval of the PADEP, Cabot ceased delivering water to these homes. PADEP agreed to stopping the water deliveries because there had been sufficient time for residents to sign the agreement and a remedy for private well owners had been provided. Clearly, many of the homeowners were not satisfied with the remedy offered.

Very public protests took place, aided by environmental groups and anti-fracking grass roots groups, and resulted in the EPA stepping in and reviewing all the data for the 18 wells. In their summary, EPA notes that based on the maximum contaminant sampling results for the 18 wells sampled, levels of coliform bacteria, methane, ethylene glycol, bis(2-ethylhexyl) phthalate (DEHP), 2-methoxyethanol and aluminum were present. Coliform bacteria were found in half the wells and typically indicate that a pathway exists for disease-causing bacteria to contaminate the water supply, though coliform bacteria themselves are not necessarily a health threat. E. coli and fecal bacteria are a subset of coliform bacteria that only occur in animal and human waste and are a threat to human health. The level of coliform bacteria found in two of the wells was too high to measure. After the EPA and ATSDR (a part of the U.S. Department of Health and Human Services) reviewed all the sample data, information and residents’ concerns, the regulators identified a significant group of private wells in the nearby area that had not been tested and were not part of the existing PADEP/Cabot agreement. In addition, the level of concern and frustration of the residents who were party to the PADEP/Cabot agreement prompted EPA to temporarily supply water to four homes, perform follow-up environmental monitoring and water sampling, and have ATSDR perform a full public health evaluation on the data from the site area. Because many of these compounds affect the same organ systems, ATSDR used suitable methods to evaluate the potential for synergistic actions and the cumulative concentration of all substances, and dissolved combustible gases were considered to protect against the buildup of explosive gases in all wells in the area.

Between January and March of 2012, EPA collected 61 separate groundwater samples plus 6 duplicates for quality control testing, and performed 188 analyses on each sample; in some instances the samples were filtered and retested. These samples covered the water supply to 64 homes, and included two rounds of sampling at four wells where EPA was delivering temporary water supplies because prior sampling data found elevated levels of contaminants in those wells. EPA found an elevated level of manganese in untreated well water at one of the wells. Two homes that obtain their water from that well have water treatment systems that can reduce manganese to levels that, according to the EPA, do not present a health concern.

Many of the perceived problems with well water are caused by the presence of iron and manganese. Iron and manganese can give water an unpleasant taste, odor and color. Manganese causes brownish-black stains on household items washed with the water. In addition, water contaminated with iron and manganese often contains iron or manganese bacteria which feed on the minerals. These bacteria do not cause health problems, but can form a reddish-brown or brownish-black slime in toilet tanks and clog filters. Iron and manganese often occur together and are naturally occurring elements commonly found in groundwater in many parts of the country. At the levels naturally present in groundwater, iron and manganese do not usually present a health hazard. However, their presence in well water can cause unpleasant taste, staining and accumulation of mineral solids that can clog water treatment equipment and plumbing. In addition, a persistent coliform (non-fecal) bacteria problem may be caused by iron bacteria. Under guidelines for public water supplies set by EPA, iron and manganese are considered secondary contaminants. The Secondary Maximum Contaminant Level (SMCL) is 0.3 milligrams per liter (mg/L or ppm) for iron and 0.05 mg/L for manganese. These levels of iron and manganese are easily detected by taste, smell or appearance, and thumbing through the results of the EPA sampling I saw manganese levels high enough to see and taste in drinking water.
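For readers curious how these secondary standards work in practice, here is a minimal sketch in Python of flagging well samples against the iron and manganese SMCLs quoted above. The sample values are invented for illustration, not actual Dimock results.

```python
# EPA secondary standards (SMCLs) for iron and manganese, in mg/L.
SMCL_MG_PER_L = {"iron": 0.3, "manganese": 0.05}

# Hypothetical sample results, not actual Dimock data.
samples = {
    "well_A": {"iron": 0.12, "manganese": 0.04},
    "well_B": {"iron": 0.85, "manganese": 0.30},
}

def exceedances(sample):
    """Return the analytes whose concentration exceeds the SMCL."""
    return {a: c for a, c in sample.items() if c > SMCL_MG_PER_L[a]}

for well, sample in samples.items():
    over = exceedances(sample)
    if over:
        print(f"{well}: exceeds SMCL for {', '.join(sorted(over))}")
```

Remember that SMCLs are aesthetic guidelines (taste, odor, staining), not health-based limits; an exceedance flags nuisance levels, not necessarily a hazard.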

In addition to the elevated manganese, there were elevated levels of sodium, though not beyond what can occur naturally, and elevated levels of arsenic, also not beyond what can occur naturally, but in at least one case significantly elevated over the other samples and above the SDWA MCL. Methane was present in several samples and can also be naturally occurring. Fecal coliform bacteria, indicative of contamination from a septic system, were present in one sample (that water is NOT safe), and coliform bacteria were present in several samples. Only one of the sodium levels was higher than my own well’s, which is naturally occurring, safe to drink and tastes good.

ATSDR performed the risk analysis on the results. Overall during the sampling in Dimock, EPA found elevated arsenic, barium or manganese, all of which are also naturally occurring substances, in well water at five homes at levels that could present a health concern according to ATSDR. In all cases the private wells either now have or will have their own treatment systems that can reduce concentrations of those metals to acceptable levels at the tap.  EPA provided all the residents their sampling results and has no further plans to conduct additional drinking water sampling in Dimock or continue to provide drinking water. The water supply to these homes with their treatment systems is deemed to be safe by the EPA.

The bottom line is we really do not know definitively what impact, if any, Cabot caused to the groundwater. Cabot agreed that they failed to properly grout the gas wells, and certainly they did not properly store and contain the fracking fluid. Publicized photos show jugs of dirty-looking water reportedly from wells in the area; the discoloration could be manganese and iron, fecal contamination, or dirt that entered the groundwater through surface infiltration or loosening of fines within the aquifer. The EPA sampling is silent on water appearance. PADEP concluded that surface spills and shoddy construction practices by Cabot allowed natural gas from a shallow deposit above the Marcellus to drift into the drinking-water wells of residents. The non-quantified traces of chemicals sometimes used in fracking fluid, antifreeze and fuel that had been reported in previous sampling were not found in the EPA water samples. EPA found only naturally occurring heavy metals at levels of any concern.

For the past decade and a half, the US Geological Survey (USGS) has been studying groundwater quality in the United States. The presence of a contaminant in water does not necessarily mean that there is a human-health concern. Whether a particular contaminant in water is potentially harmful to human health depends on the contaminant’s toxicity and concentration in drinking water. Other factors include the susceptibility of individuals, the amount of water consumed, and the duration of exposure; that is why the ATSDR performed their risk analysis. In its survey testing of groundwater in the United States, the USGS has found most man-made contaminants both at trace levels and at concentrations exceeding human health screening levels or MCLs in groundwater samples from unconfined aquifers. These man-made contaminants originate at the surface, and the unconsolidated aquifers provide little natural protection from surface infiltration.

The shallow drinking water wells in Dimock are particularly susceptible to contamination. The residents of Dimock had not regularly tested their water quality historically. The bacterial concentrations found in early rounds of testing were troubling; they were unlikely to have been caused by the fracking, but were indicative of susceptible and potentially poorly maintained or constructed wells. The fecal bacteria found in one well were a health hazard very unlikely to have been caused by fracking, but likely caused by a failing septic system. Prior studies of private well water in Pennsylvania have found that approximately one third of private wells test positive for total coliform bacteria (Swistock et al 2009). The highest incidence of coliform bacteria tends to occur with snow melts and rains that carry the bacteria from the surface, but coliform can also occur with iron and manganese bacteria. Regularly testing your drinking water and maintaining any water treatment system in your home is an essential part of private well ownership.

Thursday, July 26, 2012

Blue Plains From the Past to the Future

AECOM Picture

The District of Columbia's sewage system, one of the oldest in the United States, began its story around 1810, when the first sewers and culverts were constructed to drain storm and ground water from the streets of Washington D.C. In 1815 the canal system was built and included the Washington Canal that ran down what is now Constitution Avenue. The canal system provided a convenient way to transport goods, provide access to water and also dispose of waste. Residents drew their water from a series of city-owned springs. In the first half of the 1800s, spring or well water began to be piped in for residents' use, and sewage was discharged into the nearest body of water, often the canals.

The Washington Aqueduct, bringing river water for potable use citywide, was built in 1859 when population growth required a dedicated clean drinking water supply. Lieutenant Montgomery C. Meigs is credited with planning and building the Washington Aqueduct. The surge in population during the Civil War quickly created a human waste problem in the city, and there were epidemics of smallpox, typhoid and malaria, which took many thousands of lives during the war years. From 1871 to 1874, the city’s Board of Public Works built an estimated 80 miles of sewers to remove the human waste from the city. Although the amount of construction was impressive, sewer engineering and hydrology were in their infancy and much of the work was poorly planned, structurally unsound and hydraulically inadequate, but the vitrified clay and brick sewers remain part of the sewer system to this day. In the early 1880s, when the Washington City Canal was covered over because it had become nothing more than a stagnant open sewer, the problem of open sewage was transferred to the marshes along the Potomac and Anacostia Rivers.

Up to this time, the sewer canals and pipes that served DC were a combined system that merely carried and discharged (without treatment) both sanitary sewage and stormwater into local creeks and rivers. In the 1890s, it was decided that the existing combined system should be retained (due to the costs of rebuilding the sewers as separate systems), but that any extensions of the system would be built using the more expensive approach of separate lines to carry stormwater and sewage flows. In order to protect the health of city residents, who were drinking the river water provided by the Washington Aqueduct, all the sewage discharges would be extended away from the city to a point far enough down the Potomac River to prevent the sewage being taken up in the Aqueduct for drinking water. The discharge point selected was Blue Plains, the southernmost tip of the District of Columbia. Pumping raw sewage into rivers was state of the art in the 1800s. Washington DC did not start treating the sewage until 1937, when the Blue Plains sewage treatment plant was built.

Today, sitting on the southernmost tip of Washington DC, across the river from Alexandria, is the Blue Plains Advanced Wastewater Treatment Plant. While there are larger sewage treatment plants that remove the solids and bacteria, the modern-day Blue Plains also has tertiary treatment to remove nitrogen and phosphorus, making Blue Plains the largest advanced treatment plant in the United States, at 150 acres and with a rated annual average day capacity of 370 million gallons per day (mgd) and a peak wet weather capacity of 1,076 mgd. The system needs such a large storm-rated capacity to accommodate the old central city section, which accounts for one third of the area of the District and still has the old combined sewer system that overflows with predictable regularity during large storms. The system has released excess storm flows averaging 54 million gallons per year to the Anacostia River. In addition, Blue Plains is under a consent order from the Environmental Protection Agency (EPA) to meet new effluent limits for total nitrogen released and to better control the system during storms.

Being an advanced wastewater treatment plant is not as modern as it sounds. At Blue Plains and other sewage treatment plants, primary treatment screens the wastewater and performs some rudimentary treatment to remove the crude solids of human waste and skim off grease, oil and fat. Wastewater sits in settling tanks, which are designed to hold the wastewater for several hours. During that time, most of the heavy solids fall to the bottom of the tank, where they become a thick slurry known as primary sludge. The material that floats is also skimmed from the surface of the tanks. Secondary (or biological) treatment involves feeding oxygen to bio-organisms that break down any organic matter still in the wastewater.

Tertiary treatment further treats the effluent water to remove nitrogen, phosphorus, fine suspended particles and microbes, and to kill or slow down disease-causing organisms and viruses. It is the tertiary treatment that makes Blue Plains an Advanced Wastewater Treatment Plant. At this time, Blue Plains cannot remove enough nitrogen (on average over the year, including storm periods) from the waste stream to meet the EPA-mandated limit of 4.689 million pounds of nitrogen per year. In addition, when it rains and the Blue Plains AWWT plant tries to provide complete sewage treatment to flows of 600-740 million gallons a day, the performance of the clarification units deteriorates because of the large flow of water, the turbulence, and not enough time to settle the solids and scum. The deterioration in performance cascades from the primary treatment to the secondary and on to the advanced treatment. This results in reduced treatment of the sewage that lasts not only during the rain, but can continue for several weeks. Blue Plains is currently engaged in a $7.8 billion, 20-year improvement program called the Clean Rivers Project that will meet the reduced total nitrogen release requirements of its operating permits and increase the control of the system during rain storms; in addition, sludge treatment will be improved and sewer piping will be upgraded in many areas.
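As a back-of-envelope illustration (my own arithmetic, not an EPA or DC Water calculation), the annual nitrogen cap can be converted into the rough average total-nitrogen concentration it implies at the plant's rated average flow:

```python
# Standard unit conversion factors.
LB_TO_MG = 453_592.37        # milligrams per pound
GAL_TO_L = 3.785411784       # liters per US gallon

nitrogen_cap_lb_per_yr = 4.689e6   # EPA-mandated annual nitrogen limit
avg_flow_gal_per_day = 370e6       # Blue Plains rated average day capacity

# Total volume treated in a year, in liters.
annual_volume_l = avg_flow_gal_per_day * 365 * GAL_TO_L

# Average concentration that would just meet the cap, in mg/L.
avg_conc_mg_per_l = nitrogen_cap_lb_per_yr * LB_TO_MG / annual_volume_l

print(f"Implied average TN limit: {avg_conc_mg_per_l:.1f} mg/L")  # about 4.2
```

In other words, the cap works out to roughly 4 mg/L total nitrogen averaged over the year, which is why storm-degraded treatment periods make the annual limit so hard to hit.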

The sludge separated from the wastewater during primary treatment is further screened and allowed to gravity thicken in a tank. Then the sludge is mixed with the solids collected from the secondary and denitrification units. The combined solids are pumped to tanks where they are heated to destroy pathogens and further reduce the volume of solids. With treatment, sludge is transformed (at least in name) to biosolids. Blue Plains biosolids are Class B; however, after the completion of the improvement project in 2014, utilizing thermal hydrolysis (heating to over 160 degrees under high pressure) followed by anaerobic digesters, the biosolids produced by the plant will be Class A and the methane captured will provide 20% of the power for Blue Plains. To ensure that biosolids applied to the land as fertilizer do not threaten public health, the EPA created the 40 CFR Part 503 Rule in 1989, which is still in effect today. It categorizes biosolids as Class A or B, depending on the level of fecal coliform and salmonella bacteria in the material, and restricts their use based on classification. The biosolids are tested for fecal coliform and salmonella, and composite sampling is done for metals and hydrocarbons; the presence of other emerging contaminants in the biosolids is not tracked.

The Clean Rivers project will maintain the peak flow rate of 1,076 mgd from the collection system to Blue Plains. Peak flow rates to complete treatment would be reduced to 555 mgd for the first 4 hours, 511 mgd for the next 24 hours and 450 mgd thereafter. A tunnel would be constructed between Poplar Point and Blue Plains; flows exceeding the complete treatment capacity would be diverted to the tunnel, and that tunnel would be connected to the other tunnel storage. The new tunnel would provide an additional 31 million gallons of storage, for a total storage in the system of 157 million gallons spread over the Anacostia River tunnel system and the new Blue Plains Tunnel. This facility will allow flow from the collection system that exceeds the complete treatment capacity of the plant to overflow to the tunnel. Flow captured in the tunnels would be dewatered through a new enhanced clarification facility (ECF) with a capacity of 225 mgd. Operating provisions would include arrangements to dewater the tunnels during and following rain and to convey ECF effluent to a direct outfall (after disinfection) and/or through the complete treatment facilities, depending on the capacity available at the time.
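To put the tunnel numbers in perspective, here is a quick sketch using the figures above. The dewatering-time arithmetic is my own simplification; actual drawdown time would depend on continuing inflow and on how much plant capacity is free.

```python
# Storage figures from the Clean Rivers plan, in millions of gallons.
new_tunnel_storage_mg = 31                 # Blue Plains Tunnel addition
total_storage_mg = 157                     # system total after the addition
existing_storage_mg = total_storage_mg - new_tunnel_storage_mg  # Anacostia tunnels

# The enhanced clarification facility (ECF) dewaters the tunnels.
ecf_capacity_mgd = 225

# Idealized time to empty completely full tunnels through the ECF alone.
days_to_dewater = total_storage_mg / ecf_capacity_mgd

print(f"Existing tunnel storage: {existing_storage_mg} million gallons")
print(f"Full tunnels drained in about {days_to_dewater:.1f} days via the ECF")
```

So even with every tunnel full, the ECF could in principle work through the stored volume in under a day, freeing the storage for the next storm.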

The Clean Rivers project will also include the construction of enhanced nitrogen removal (ENR) facilities. The new ENR facilities will have the capacity to provide complete treatment for flow rates up to 555 million gallons per day for the first 4 hours, 511 million gallons per day for the next 24 hours, and 450 mgd thereafter, to meet the new total nitrogen effluent limit mandated by the EPA. When completed, the Blue Plains Advanced Waste Water Treatment Plant will be able to meet the nitrogen release standard under the operating permit and reduce the number of uncontrolled storm-related releases of waste.

Monday, July 23, 2012

Endocrine Disruption and What’s in the Potomac River Watershed

Recently in the Susquehanna River in Pennsylvania, smallmouth bass have been found with benign skin tumors. Two skin samples of lesions from fish removed from the river were sent to Dr. Vicki Blazer, a marine biologist and researcher at the U.S. Geological Survey (USGS). Dr. Blazer, who received the American Fisheries Society 2010 Publications Award for her article investigating the mortality of fish in the Potomac River basin, is a fish biologist at the West Virginia Science Center studying the impact of contaminants of emerging concern in rivers and streams on the lifecycle and health of fish. She found that one of the samples tested positive for a type of benign skin tumor. These samples were sent to the USGS because an ongoing collaborative effort between the USGS, the U.S. Fish and Wildlife Service, state agencies in West Virginia, Maryland, and Virginia, and the Potomac Riverkeeper has been studying the impacts of trace contaminants on fish health. Areas of study have been endocrine disruption, immune system impact, cancer/neoplasia promotion, secondary sex characteristics, oxidative damage and behavior. I follow Dr. Blazer’s talks at conferences.

Endocrine disruptors are chemicals that may interfere with the body’s endocrine system and produce adverse developmental, reproductive, neurological, and immune effects in both humans and wildlife. Research evidence suggests that these chemicals can mimic hormones or interfere with the function of the body’s own hormones. Endocrine disruptors are found in many of the everyday products we use, including some plastic bottles and containers, food can liners, detergents, flame retardants, toys, cosmetics, and pesticides. These hormones and hormone-like substances are typically highly soluble in water and are easily transported in the blood. These compounds are of particular concern because they can alter the critical hormonal balances required for proper health and development. The glands that make up the endocrine system are the pituitary gland, thyroid gland, adrenal glands, pancreas, ovaries, testes, pineal gland and thymus.

The marine life work in this area began with the studies of fish kills more than 15 years ago. Preliminary analysis at that time did not find any chemicals or pesticides in concentrations that were sufficient to stress fish and be a cause of the fish kills. Yet there were fish kills. As Dr. Blazer pointed out at a recent conference, almost all of our knowledge about concentrations likely to cause a health impact is based on acute toxicity or gross impacts such as size. In most cases there are no criteria for sub-lethal effects such as immune modulation or endocrine disruption. Dr. Blazer and others believe that the methodology used to detect these chemicals in recent studies may not have been sensitive enough; the detection limits may in fact sit above the concentrations thought to impact these fish. Two examples given by Dr. Blazer at a recent conference: based on research studies in more than 25 fish species, scientists have suggested that 1 ng/L (parts per trillion) may be the “no effects level” for natural estrogen concentrations on fish, yet all the passive sampler studies performed in the Potomac River and Shenandoah River use 1.3 ng/L as the lower detection limit; and the detection limit in the Potomac and Shenandoah Rivers studies for ethinylestradiol is almost twice the level recently set as the aquatic “no effects” level. Studies along the Potomac and Shenandoah Rivers have found smallmouth bass that were intersex and had measurable amounts of vitellogenin (a protein that is a precursor to egg yolk) in their blood. Vitellogenin is normally only found in the blood of sexually mature egg-laying females, though males typically carry an inactive gene that is “turned on” by the presence of estrogen. Further research is necessary not only to determine if the problem is more widespread geographically and among species, but to identify the mode and mechanism of impact.
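The detection-limit problem Dr. Blazer raises can be stated very simply: a non-detect is only reassuring if the method can see down to the biologically relevant level. A tiny sketch using the values quoted above (the framing is my own, not hers):

```python
# Values quoted in the text, in ng/L (parts per trillion).
NO_EFFECTS_NG_PER_L = 1.0       # suggested estrogen "no effects level" for fish
DETECTION_LIMIT_NG_PER_L = 1.3  # lower detection limit in the river studies

def non_detect_is_conclusive(detection_limit, effects_threshold):
    """A non-detect rules out an effect only if the method can
    measure down to (or below) the effects threshold."""
    return detection_limit <= effects_threshold

print(non_detect_is_conclusive(DETECTION_LIMIT_NG_PER_L, NO_EFFECTS_NG_PER_L))
```

Here the check returns False: a sample reported as "non-detect" could still carry estrogen at 1.0 to 1.3 ng/L, a range the research suggests is already biologically active for fish.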

Fish health turns out to be a way to track ecosystem health. The smallmouth bass population has been presenting a variety of skin lesions; bacterial, viral and fungal infections; high parasite loads; and intersex in normally gonochoristic fish (where the embryonic gonad subsequently develops into either ovaries or testes). The findings are not at all consistent, but show widespread biological impact. Scientists like Dr. Blazer are looking to determine if these impacts to fish are being caused by something being put in or released from wastewater treatment plants, farms, or storm water runoff. Until the cause is identified, nothing can be done to stop these impacts and prevent harm to animal and human populations.
Slide taken from a USGS presentation by Dr. Vicki Blazer

The Potomac fish kills studied by Dr. Blazer and others suggest that there are stressed populations of fish that at some point are overwhelmed by environmental stressors such as increased water temperatures, low dissolved oxygen, excess nutrients, high pH, or chemicals that cause immunosuppression, leading to a wide variety of opportunistic infections and the large fish kills. There is increasing evidence that estrogenic chemicals and other endocrine disrupting substances modulate the immune response and disease resistance. In a study of female bass from the Shenandoah River south fork, scientists found that BDE (a flame retardant), triclosan (an antibacterial and antifungal agent used in a wide variety of consumer products including toothpaste, mouthwash, deodorant and cleaning supplies) and pesticides had accumulated selectively within the endocrine system, with lower concentrations in the brain, skin and kidneys. In talking about the bass, it is reported by Marcia Moore of the Daily Item that Dr. Blazer said, “The good news, for people anyway, is the muscle has the lowest concentration,” indicating that the fish could be eaten. “It’s not such good news for the fish because we’re finding it in the brain, ovaries, kidneys and skin.” The location of the increased concentrations seen in Dr. Blazer’s slide and the potential sources of the contaminants raise questions about potential human exposure.

All water on earth is part of the hydrologic cycle and is reused over the course of time. These traces of chemicals have managed to slip through the earth’s natural filtration, and some of them through treatment systems, to be released into rivers and consumed by humans. Finished and source water (as well as food and beverages) have been found to have low levels of these emerging chemicals, but whether this low level of exposure is bio-accumulating in humans and can cause any health or developmental effects is yet unknown. Endocrine disruptors can sometimes affect reproduction, development, and behavior; certainly these impacts on fish are being studied. These potentially endocrine disrupting chemicals come from a variety of sources and have diverse molecular structures. If these chemicals are introduced into water systems from human waste and food, then it is possible that human tissues might also contain detectable levels of contaminants. We might be experiencing subtle population impacts from chemical exposure during fetal and newborn development. Potential human effects from chemical contaminants in tissues of the endocrine system are cancer (particularly breast cancer and testicular cancer), infertility, disorders of sex development, asthma and other immune-related syndromes, autism, ADHD, learning and behavioral disorders, diabetes, thyroid disorders, and testicular dysgenesis syndrome (poor semen quality, testis cancer, undescended testis and hypospadias).

The endocrine system of fish bears some similarity to the human endocrine system, but we do not live our lives in the waters of the Potomac. Two million people rely on the Washington Aqueduct for their drinking water, and millions of people in other parts of the country drink source water with similar observed occurrences of endocrine disruption. The impact of these emerging contaminants on human life and the ecosystem is not known, but now is the time to find out the impact of the substances we’ve been allowing to enter the waters of the earth. We need to determine the impact and fate of these micropollutants before we implement the watershed cleanup plans, to make sure we are implementing the right strategies for the health of the entire ecosystem, which may include eliminating the use of certain chemicals.

Thursday, July 19, 2012

Less Rain Means a Cleaner Bay

Rainfall has been below normal. In the Washington metropolitan area, Virginia, Maryland and the District of Columbia depend on the flows of the Potomac River and the Occoquan for their water supply. The Potomac River basin has been abnormally dry this year, with the Eastern Shore of Maryland in a moderate drought and river flows below normal. Although the recent rainfall has eased drought in some areas, not enough rain has fallen to raise watershed stream flow to normal levels. Temperatures have been abnormally high, and there appears to be little chance of precipitation in the near term. But for now, the Interstate Commission on the Potomac River Basin, ICPRB, reports that from a water supply perspective there is sufficient flow in the Potomac River to meet both the Washington metropolitan area’s water needs and environmental flow needs without augmenting river flows by releasing water from the upstream reservoirs. So we can enjoy the clearer flows of the river with little worry or need to conserve water for now.

The ICPRB allocates and manages the water resources of the Potomac River through the management of the jointly owned Jennings Randolph and Little Seneca reservoirs, the Potomac River Low Flow Allocation Agreement, and the Water Supply Coordination Agreement adopted in 1982, which designated the ICPRB as responsible for allocating water resources during times of low flow and for assisting in managing water withdrawals at other times. The ICPRB limits water withdrawals by the local water utilities, coordinating Fairfax Water’s use of the Occoquan and the Potomac and limiting total withdrawals from the Potomac if necessary. In the event that Potomac River flow at Little Falls falls below 700 million gallons per day, the ICPRB releases water from the Jennings Randolph and Little Seneca reservoirs to make up the flow and to ensure that the balance of saline and fresh water necessary to maintain the oxygen levels for oyster, clam and crab populations is maintained. The reservoirs ensure in-stream flows sufficient to meet minimum aquatic habitat requirements and the drinking water needs of the region.
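The low-flow rule described above lends itself to a simple illustration. The sketch below (in Python, with names of my own invention; this is not the ICPRB's actual operations model) shows how a release from the upstream reservoirs would be sized against the 700 million gallon per day threshold at Little Falls:

```python
# Illustrative sketch of the low-flow release rule: if flow at Little Falls
# drops below 700 million gallons per day (MGD), the upstream reservoirs
# release water to make up the shortfall. Names are hypothetical.

LOW_FLOW_THRESHOLD_MGD = 700.0  # minimum flow at Little Falls

def release_needed_mgd(little_falls_flow_mgd: float) -> float:
    """Return the reservoir release (in MGD) needed to restore the minimum flow."""
    shortfall = LOW_FLOW_THRESHOLD_MGD - little_falls_flow_mgd
    return max(0.0, shortfall)  # no release needed if the river meets the threshold

print(release_needed_mgd(650.0))  # 50.0 - river is 50 MGD short
print(release_needed_mgd(900.0))  # 0.0  - ample flow, no release
```

In a dry year like this one, the reported flows remain above the threshold, so the rule returns zero and the reservoirs stay full.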

The July 5th Water Supply Outlook from the ICPRB reports that both groundwater and stream flow remain adequate for the short term but are below normal, as rainfall has been below normal for much of the early spring and June. Jennings Randolph (the big reservoir) is full, and we are not going to run out of water this year. The good news is that nitrogen and phosphorus contamination in the Chesapeake Bay could fall to the lowest levels since the droughts of a decade ago. Nitrogen and phosphorus contamination in the Bay is correlated with rainfall, as seen below, and we can enjoy this little preview of what a cleaner Bay might look like.
From the Chesapeake Bay Program 2012

Since the 1970s, large algae blooms have formed each summer in both the Potomac and upper Bay portions of the Chesapeake Bay watershed. Larger than normal blooms occurred in the upper Chesapeake Bay and its tributaries during August and September of 2000 and 2011. These blooms were probably the result of greater than normal amounts of fresh water and nutrients entering the Bay in those years, but there are still factors that need to be studied. The dead zones form in summers when higher temperatures reduce the oxygen-holding capacity of the water, the air is still, and, especially in years of heavy rains, runoff carries excess nutrient pollution from cities and farms. The excess nutrient pollution combined with mild weather encourages the explosive growth of phytoplankton, a group of single-celled algae. While phytoplankton produce oxygen during photosynthesis, excessive growth chokes out the light; the algae die and sink below the interface between the warmer fresh water and the colder, saltier water beneath. The dead phytoplankton are decomposed by bacteria, which consume the already depleted oxygen in the lower salt layer, leaving dead oysters, clams, fish and crabs in their wake. Thus the name: dead zone.

Although in a salt-wedge estuary such as the Chesapeake Bay the layers of fresh and salt water are not typically well mixed, there are still several sources of dissolved oxygen. The most important is the atmosphere. At sea level, air contains about 21% oxygen, while the Bay’s waters contain only a small fraction of a percent. This large difference causes oxygen to dissolve naturally into the water. The process is further enhanced by the wind, which mixes the surface of the water. Recent heavy wind storms may have increased oxygen levels in various water layers. ICPRB staff scientists will be working with Maryland and West Virginia natural resources scientists to survey algae blooms in the upper Potomac watershed. Researchers will visit numerous sites along the Potomac, its South Branch up to Moorefield, W.Va., the Cacapon River, and the lower Shenandoah River. The summer-long assessment will document the types and extent of algal blooms in this section of the watershed.

While the recent storm brought much damage, the powerful winds that took down trees also served to mix the waters of the Potomac. The waters of the Bay appear clearer than they have in recent years. While drought does improve nitrogen, phosphorus and sediment levels in the Bay in the short run, the cost of drought can be high (agricultural losses and water restrictions), and ultimately droughts end and the rains come. The Chesapeake Bay Foundation still judges the Bay to be “dangerously out of balance” despite progress made in the health of the Bay in the past 30 years and this year’s clear waters and healthy shad run. As the Washington Post reported recently, the District’s 45 miles of Potomac watershed streams and rivers are so tainted with bacteria from combined sewer overflows that the city prohibits swimming. The waters of the Potomac are the primary drinking water supply for the region; they should be clean enough to be safe for swimming and recreation. The Watershed Implementation Plans from the six states and Washington DC and the $2.6 billion sewage treatment plant upgrade for Washington DC under the Chesapeake Bay TMDL will further improve the waters of the Potomac and the Bay in the coming decades. Maryland, Virginia and the District estimate that it will cost more than $30 billion for them to meet the mandates of the Chesapeake Bay TMDL pollution diet over the next 13 years. Water is not free; we just do not often see many of the costs associated with it. We need to see and understand all the costs of guaranteeing 24/7 access to clean, abundant water.

Monday, July 16, 2012

My Well Has Stopped Working- What To Do

One day you turn on the faucet and nothing happens. If you have a private drinking water well, you will have to determine how to get your water back “on.” There are a number of reasons why a well might suddenly stop producing water, but basically they all break down into equipment failure, depletion of the aquifer or other groundwater problems, and failing well design and construction. There is one other possibility: that your pipes have frozen. (See February 19, 2015 article).

Equipment problems are not only the easiest to identify and fix, they are more common than groundwater problems (which were covered in a previous post) or well design and construction issues. First, check the power to the well and check to see if you have a short. If your well stopped working right after a thunderstorm, check to see if the well was struck by lightning; this is fairly common in the south. If there is a short in the pump system it will blow the circuit, and if there was a power surge as the pump was turning on, a circuit could have blown. So turn the pump’s circuit breakers off and on, or change the fuses. Pumps generally have two circuits tied together because a submersible pump draws a lot of power (240 volts). Make sure both circuits are on; a small water drizzle is one sign of a 240-volt pump getting only 120 volts. If the pump keeps turning off and it is not because of a dry well, then there might be a short.

Intermittent episodes of severe water pressure loss, or even no water at all, are usually a sign of a problem with the water supply. If you have water first thing in the morning and again when you get home from work, but the supply seems to run out especially when doing laundry or taking a shower, then you may have a groundwater problem. Sudden failure, or failure after a power outage, is probably a mechanical problem with components of the system. The first step in identifying the cause of a water failure is to check the equipment. The essential components of a modern drilled well system are: a submersible pump, a check valve (and an additional valve every 100 feet), a pitless adaptor, a well cap, electrical wiring including a control box, a pressure switch, and the interior water delivery system. There are additional fittings and cut-off switches for system protection, but the above are the basics. To keep the home supplied with water, each component in the system and the well itself must remain operational.

The components in the basement provide consistent water pressure at the fixtures in the house and include the electrical switch that turns on the pump. The pump moves water to the basement water pressure tank; inside the tank is an air bladder that becomes compressed as water is pumped in. The pressure in the tank moves the water through the house pipes so that the pump does not have to run every time you open a faucet. The pressure tank typically maintains the water pressure between 40 and 60 psi. When the pressure drops below 40 psi, the electrical switch turns on the pump and the pressure in the tank increases. A failure in any component, or a loss of electricity, can cause a well to suddenly stop producing water when a faucet is turned on.
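The cut-in/cut-out cycle described above can be sketched as simple hysteresis logic. This is an illustration of how a 40-60 psi pressure switch behaves, not firmware for any real switch; the function and constant names are my own:

```python
# Illustrative sketch of a pressure-switch cycle: the pump turns on when
# tank pressure falls below the cut-in point and off when it reaches the
# cut-out point. The 40-60 psi values are the common range from the text.

CUT_IN_PSI = 40.0
CUT_OUT_PSI = 60.0

def pump_should_run(pressure_psi: float, pump_running: bool) -> bool:
    if pressure_psi < CUT_IN_PSI:
        return True          # pressure too low: start (or keep running) the pump
    if pressure_psi >= CUT_OUT_PSI:
        return False         # tank recharged: stop the pump
    return pump_running      # between the set points: keep doing what we're doing

print(pump_should_run(38.0, False))  # True  - a faucet dropped pressure below 40
print(pump_should_run(50.0, True))   # True  - still filling toward 60
print(pump_should_run(60.0, True))   # False - cut-out reached
```

The gap between the two set points is deliberate: it keeps the pump from rapid-cycling every time a faucet nudges the pressure.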

If you have power (and I assume you do if you are reading this), the most common cause is a failure of the pump, but first remember to check your circuit breakers to make sure that the problem is not electrical. Next, read the pressure gauge on your pressure tank. If it is not showing a pressure in the normal 30-50 psi or 40-60 psi operating range (slightly left of center), that could be your problem. The electrical switch at the pressure tank (the grey box) is a fairly common point of failure in a private well water system, so check the switch to make sure it is working. Manually closing the pressure control switch should turn the pump on; if the pump cannot be heard turning on when you close the switch, the pump is your problem. The pump is the piece of equipment subject to the most wear and tear and the most likely to fail. When the pressure in the pressure tank falls to 35-40 psi, the switch at the pressure tank turns on the pump.
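The diagnostic order above can be summarized as a checklist. The sketch below is purely illustrative (the inputs and messages are hypothetical), but it captures the sequence: breakers first, then the gauge, then the manual switch test:

```python
# Hedged sketch of the troubleshooting order from the text. The inputs are
# things a homeowner can observe; the returned string is the next thing to
# check, not a definitive diagnosis.

def next_check(breaker_ok: bool, tank_psi: float,
               pump_audible_on_manual_switch: bool) -> str:
    if not breaker_ok:
        return "Reset the breakers or fuses; the problem may be electrical."
    if tank_psi >= 40.0:
        return "Tank pressure looks normal; check house plumbing and valves."
    if pump_audible_on_manual_switch:
        return "Pump runs but pressure is low; suspect the switch or tank bladder."
    return "Pump does not respond; suspect the pump and call a well contractor."

# Power is on, tank is at 20 psi, and closing the switch produces no sound:
print(next_check(True, 20.0, False))
```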

There are two types of pumps: jet pumps and submersible pumps. Most modern drilled wells are built with a submersible pump so that the groundwater is not exposed to potential contaminants before it reaches your home. In older pump installations and dug wells, above-ground jet pumps were often used, which potentially allowed the introduction of contaminants at the surface concrete well cap. A drilled well generally has a 6-inch-diameter pipe sticking out of the lawn somewhere. A dug well is wide enough for a man to fit down into it so that it could be dug by hand. Dug wells tend to have concrete or other large lids that can sometimes be confused with septic tank lids. Dug wells are more susceptible to contamination and tend to be older. The pump for a dug well is sometimes in a pit next to the well, or it may be located in the basement. Jet pumps are easier to check since they are not in the well and you can pretty much see if they are running. For a submersible pump in a drilled well, the power and the pressure switch can be checked with a volt meter, and the pump can be heard operating. The safety switch and control box for the pump should be on the basement wall near your pressure switch. If you needed me to tell you these things, now would be a good time to call the well driller to come out and diagnose the problem. Well drilling companies can generally replace pumps, pressure tanks and other well components. Something like a leaky valve at the bottom of the well can result in a pump losing its prime after a power failure. In addition, they can diagnose an improper well design. Private well construction was not regulated in Virginia until 1992 and is still not regulated in many places, including Pennsylvania; I have seen some funky well designs as a VAMWON private well volunteer. An electrical problem will require an electrician, but component replacement can be done by the well drilling company.
In Virginia, a license as a certified water well provider is necessary to work on a well. Plumbers generally do not have this certification, so do not call a plumber for a well problem.

The submersible pump is a long cylindrical unit that fits within the 6-inch-diameter well casing. The bottom portion consists of the sealed pump motor connected to a series of impellers, separated by diffusers, that drive the water up the pipe to the plumbing system through the pitless adaptor and a pipe that runs from the well beneath the ground to the basement. Submersible pumps should last 15 years or more, but silt, sand, algae and excessive mineral content can shorten their life. A submersible pump operating in high-sediment water may fail in 5 or 6 years (and several have in my neighborhood). The sediment and mineral content in groundwater act as an abrasive that wears out the pump bearings and other moving parts and causes the pump to fail prematurely. Often old wells produce less sediment than new wells, and a replacement pump may last longer. Any impact to the well (hitting the well pipe with a car or truck), or a bit of gravel broken loose from the formation, can cause the pump to knock against the sides of the well and fail. As a well owner you might want to consider a planned replacement of system components rather than waiting for system failure.

If you can hear the pump turn on, yet no water is delivered to the house, the problem might be a failure of the pipe leading from the well to the house. If, like mine, your pipe runs under a portion of the driveway, this turns out to be a fairly expensive but simple fix: excavating the pipe and replacing it. If the horizontal well piping between the well and the building does not slope continually upwards, or if it has a high spot, an air lock can form in the piping. If you need help with a well problem, the Wellcare® Hotline is staffed by the Water Systems Council (WSC), the only non-profit organization solely focused on private wells and small well-based drinking water systems. The Hotline operates Monday through Friday from 8:00 a.m. to 5:30 p.m. Eastern Time and can be reached at 888-395-1033.
Also, if you are in Virginia you can call or email the Virginia Master Well Owner’s Network for help. My name and email are near the bottom of the list with the volunteers and I am happy to help if I can.

Thursday, July 12, 2012

Using Your Water Well as a Standing Column Well for Geothermal

Adapted from Orio 1999

Through the Virginia Department of Health I was contacted by a homeowner interested in installing a ground source heat pump utilizing his current drinking water well as both a water supply well and a standing column thermal exchange well for the geothermal heat pump. He apparently had run across this idea in some materials by Carl Orio. In truth, Mr. Orio is the man. Mr. Orio has been working with geothermal heat pumps since 1974, when he founded WESCORP. Since that time he has expanded his knowledge and experience with geothermal heat pumps, leading the way and co-authoring practically everything, including the New York City "Geothermal Heat Pump Manual" and ASHRAE-funded research papers.

Though standing column wells have been around since the advent of geothermal heat pump systems, only recently have they received more attention because of their superior performance in regions with suitable hydrological and geological conditions (Orio 1994, 1995, 1999; Yuill and Mikler 1995; Spitler et al. 2002) and limited horizontal space. According to a 2005 paper by Mr. Orio et al., there were at that time only about 1,000 standing column well geothermal heat pump installations in the United States. Standing column well systems are groundwater heat pump systems that use groundwater drawn from wells in a semi-open-loop arrangement. The ground heat exchanger in these systems consists of a vertical borehole that is filled with groundwater up to the level of the water table. Water is circulated from the well through the heat pump and back to the well in an open-loop pipe circuit. The standing column can be thought of as a cross between a closed-loop vertical well system and an open-loop groundwater source system. The systems identified in the paper are in the colder climates of the Northeast and Northwest, where air heat exchangers could not do the job but a geothermal system can function in the extreme cold. In a survey of existing systems, the most significant parameters were found to be well depth, rock thermal/hydraulic conductivity, and bleed rate.

During much of the year, standing column systems operate by recirculating water between the well and the heat pump. The water is drawn from the bottom and returned to the top of the water column. However, during extreme cold or hot periods they can “bleed” some water from the well to maintain the temperature range of the standing column well by inducing groundwater flow into the well. This causes groundwater to flow into the column from the surrounding formation to make up the flow. This serves to cool the water column during heat rejection in the summer and to warm the column and surrounding ground during heat extraction in the winter, thus restoring the well-water temperature to the normal operating range and improving system performance. A bleed is especially important in colder climates, where the loss of temperature in the well could result in the system freezing in extreme cold. In warmer climates, an excess of temperature in the well during the hottest days would only result in a loss of efficiency, since the temperature increase would still leave the water significantly below the ambient temperature.

Most of the existing standing column installations are located in the Northeast and Pacific Northwest, in heating-dominated residential and light commercial applications. The vast majority exist in the northeastern Appalachian region, including Maine, Massachusetts, New Hampshire and New York, and in the northwestern states. Because air heat exchangers have improved so much in efficiency in the last few years, and because heating demands in Virginia and the rest of the south are much lower than in the Northeast, ground source heat exchangers have been less popular in the south.

The homeowner who contacted me lives on the Occoquan Reservoir right around the fall line; to be entirely accurate, the homeowner actually lives just west of the Fall Line, within the harder bedrock of the Piedmont. The Fall Line is so named because the meeting of the Piedmont and the Coastal Plain is marked by a line of waterfalls. The Fall Line is really a zone rather than a narrow line; the rapids and waterfalls characteristic of the Fall Line extend up to a mile wide in some locations. The waterfall on the Occoquan River near Lorton has been "dried out" by the construction of the dam that created the Occoquan Reservoir, but it still marks the line. You can see the exposed rocks at the Fall Line by walking upstream from the town of Occoquan when the path is open. The geology at the home is most likely a layer of generally unconsolidated, inter-bedded sands and clays, underlain by bedrock.

The homeowner’s well is 450 feet deep with the pump at 400 feet. The well is probably that deep because that is what it took to find an aquifer just west of the Fall Line. That is not a good well for this water-rich region of Virginia, but groundwater is not equally distributed throughout the region. The water level in the well is about 35 feet from the surface, creating water storage within the well of over 2,000 gallons. That storage has produced a consistent supply for the house, since the well is reported to recharge at only 5 gpm, not really enough to run a household. This is a non-robust aquifer, and they drilled 450 feet to reach it, which may indicate the difficulty of locating another well on the property or replacing this well if it is damaged. As a Virginia Master Well Owner volunteer, my first reaction is always that the drinking water well is sacred and should never be risked, especially in a location where there was difficulty drilling a producing well. The homeowner is aware of this concern.

The groundwater aquifer recharges this well at 5 gallons per minute, so this aquifer is not robust; even a 10% bleed at 0.6 gallons per minute would consume over 860 gallons per day. However, a bleed is not necessary. The thermal demands are not as high in Virginia as in Maine and other parts of cold New England, and the household’s actual water use is probably 100-150 gallons a day. The use of the water for the heat exchanger is non-consumptive (it will be returned to the well at the 4-6 gallons per minute at which it is drawn), and since the water in the well will turn over in less than a week and be constantly diluted with recharge, any impact from the metal portion of the heat exchanger should be negligible. The only true potential contamination would be a leak in the heat exchanger allowing the gallon or so of ethylene glycol to contact the well water, and that should be carefully watched.
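A quick check of the bleed arithmetic, assuming (as the numbers above suggest) a 10% bleed on a roughly 6 gpm circulation loop:

```python
# Bleed arithmetic for the standing column well discussed above.
# A 10% bleed on a ~6 gpm loop is 0.6 gpm; run continuously for a day
# that is far more water than the household's 100-150 gallons/day of use.

bleed_gpm = 0.6                         # 10% of a 6 gpm circulation rate
gallons_per_day = bleed_gpm * 60 * 24   # minutes/hour * hours/day
print(gallons_per_day)                  # 864.0
```

With only 5 gpm of recharge, a continuous bleed of that size would be a real draw on a weak aquifer, which is why skipping the bleed in Virginia's milder climate matters.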

An important design feature in converting the existing well to a dual-purpose well is that the well pump will have to be replaced with a variable-speed pump that can deliver at least 12-15 gallons per minute. A variable-speed pump is needed because the pump will now serve two purposes: when the heat pump is operating it requires flow at 4-6 gallons per minute, and the well must also be able to respond to household needs and simultaneously pump to the pressure tank. There will have to be a dual switching system to trigger the pump: the heat exchanger has to be able to trigger the pump and lower the demand when it shuts off, and the pressure tank needs to be able to trigger the pump to either turn on or increase flow. There also has to be a valve trigger to open or close the valves to the pressure tank or the heat exchanger. Typically a standing column well needs to be 250-500 feet deep, depending on the bleed. This well at 450 feet is perfectly sized for a 4-5 ton system, and while the recharge rate of 5 gallons per minute is below the ideal level for a household, this household has only two residents, so their daily water demand is low.
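The dual-trigger logic can be illustrated with a short sketch. The flow numbers below are assumptions drawn from the ranges above (a 4-6 gpm heat pump loop, a 12-15 gpm pump), and the function is a design illustration, not controller code for any actual product:

```python
# Illustrative sketch of the dual-trigger pump sizing: the variable-speed
# pump must serve the heat pump loop and the pressure tank at the same time.
# Flow values are assumed mid-range numbers, not manufacturer specs.

HEAT_PUMP_GPM = 5.0   # assumed mid-range heat exchanger loop flow (4-6 gpm)
HOUSEHOLD_GPM = 8.0   # assumed peak household draw to the pressure tank

def required_flow_gpm(heat_pump_on: bool, tank_calling: bool) -> float:
    """Total pump flow demanded by whichever triggers are active."""
    flow = 0.0
    if heat_pump_on:
        flow += HEAT_PUMP_GPM   # valve to the heat exchanger loop is open
    if tank_calling:
        flow += HOUSEHOLD_GPM   # valve to the pressure tank is open
    return flow

print(required_flow_gpm(True, True))   # 13.0 - worst case, within a 12-15 gpm pump
print(required_flow_gpm(True, False))  # 5.0  - heat pump loop only
```

The worst case, both triggers active at once, is what drives the 12-15 gpm pump requirement.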

The return line to the well should be buried 6-8 feet below grade to reduce heat loss or gain when returning the water to the well. The return line should deliver the water below the water table in the well to avoid introducing turbulence-induced mixing in the well column, allowing the length of the well to provide adequate time for the returned water to return to normal well conditions, which for this well is about 50 degrees F. In addition, the homeowner will have to obtain an Underground Injection Control, UIC, Individual Permit from the U.S. Environmental Protection Agency, EPA. The EPA issues its UIC permits under the Safe Drinking Water Act (SDWA), as amended, and implementing regulations at Title 40 of the Code of Federal Regulations, Parts 124, 144, 146, 147, and 148, and has some experience issuing permits for geothermal standing column installations in other parts of the country. In Virginia, EPA Region 3 manages the program and will walk the homeowner through the Class V permit process, filling out the required inventory data for the homeowner. Their single requirement is a monitor to identify whether there is a leak in the system coil that could release, according to the EPA, ethylene glycol into the homeowner’s well, though they do not require regular reporting on the monitoring. The EPA representative explained that they are just trying to save well owners from themselves. A Virginia Department of Health permit is not required, but plumbing permits are required at the county level.

Monday, July 9, 2012

The Ward Family Does Not Lose Power - the Generator and Lightning Protection

Like my husband, Stephen Moore is an economist. Mr. Moore is also a journalist and recently published an article, "When The Moore Family Lost Power." It is an interesting opinion piece, but I'm an engineer, and I think you should do something to ensure that we have electricity, sewage and water, not just talk about it.

When I lived in California I became obsessed with water (okay, water and earthquakes). I maintained a constantly rotated supply of 40 gallons of fresh water at all times and read the precipitation and snowpack levels daily. The average annual precipitation in California is about 23 inches (DWR 1998), but rainfall varies greatly across the state, from more than 140 inches in northwestern California to less than 4 inches in the southern cities where all the people live. California has 1,200 miles of canals and nearly 50 reservoirs, the largest water storage and transportation system in the world, which captures enough water to irrigate about four million acres and provide water to 23 million people. Even with this extensive management system there are limits to the water supply; Californians are facing the failure of their water network due to age and lack of maintenance, growth in population and demand, mining of the groundwater, and the potentially far-reaching effects of climate change. Each new drought is a crisis. For at least twenty years California has failed to plan for the inevitable and easily imagined future.

I could never convince my neighbors of the importance of planning for the future, preventative maintenance and maintaining our infrastructure. So, when my husband wanted to retire and suggested we look around for a place to live, my criteria were water, a location where a mild temperature increase would not be devastating, and high-speed Internet. My husband was born and bred in Virginia, and in truth there was little chance of us retiring anywhere else. Fortunately, based on several different predictions, the eastern slope of the Piedmont region of Virginia is a climate change sweet spot. It is predicted to get wetter and warmer (like the Carolinas), has a moderate four-season climate with lots of available water in the Culpeper Groundwater Basin, and receives average annual rainfall of over 44 inches a year. (Virginia's earthquake last year was quite the surprise, but did no damage here.) We found ourselves a foreclosure with a private well with an excellent recharge rate and good water ($1,600 of water tests before purchase verified those facts) and set to work improving the home and making it more sustainable, secure and self-reliant. I test my water annually to make sure that it remains good. I can control only my own behavior and my private infrastructure.
My Generac Guardian under my deck

Without electricity I have no water and no septic; my freezer containing a quarter of a cow (grass-fed, sustainably raised down the pike) is in danger of spoiling, my carefully laid-down wine is in danger of being damaged, and my life is generally disrupted by the loss of all the modern conveniences. So five years ago, when we first bought the house, I had a Guardian 16-kilowatt automatic generator manufactured by Generac installed. When the power to the house is cut, the generator automatically kicks in to power most of the house in about 20 seconds. (Generac advertises that the new generators come on-line in 10 seconds.) The generator runs on liquid propane from a tank buried in my yard that also powers my hot water heater, backup furnace, gas grill and stove. The generator can supply the house for 23 or more days, depending on whether the gas furnace is running, and is housed in a lovely insulated aluminum casing under my deck (muffling the sound), looking good as new even after five years of sitting outside. (Note that if the generator runs more than a few days, especially when new, it will need oil.) The generator works great; during a recent power outage in our area the DVR took a couple of minutes to reload the program we were watching, but the internet was back almost immediately. Over the years we've adjusted the load a few times, but we are never without power. The generator is serviced annually by the electrician who installed it, and my propane tank is never allowed to fall under 50% full; the tank has a very readable gauge on it. Consumer Reports has a buyer's guide for generators.

The house also has a large south-facing roof span. So in addition to the generator, I have 7.36 kW gross (6.2 kW PTC) of photovoltaic solar panels on my roof. However, the panels are connected to the grid, so when the grid goes down the solar panels do not supply power to the house; I would have to have a back-up battery and a different configuration for the inverters. The solar panels have proved very reliable and actually produce slightly more power than predicted by the PV-Watts program. If electrical power were to become unreliable in my little pocket of the NOVEC service area, I would consider converting my PV solar system to directly powering the house. It turns out that, except for the heat and air conditioning, the solar panels can pretty much power the house on most days.

When I finished my basement and installed the elevator that makes it possible for those who can no longer climb steps to live in this house, I installed a secondary sump pump utilizing the elevator shaft (which extends a couple of feet below the basement) as the natural drainage point. The elevator is one of many handicap-accessible features I've installed in this house. Each change or improvement is intended to be sustainable and accessible. Even if we did not need an elevator when we moved in, this is a retirement home, and we will all be old and infirm one day; plan for it. The sump pumps are also tied into the generator, since power is most likely to fail just when you need a sump pump. The sump pumps are tested and run each spring when I drain the hot water heater. The house has good natural drainage and I am not aware of the sump pumps ever needing to operate, but I have them. The elevator is greased, tightened and serviced twice a year, and the type of elevator was chosen for its durable design.
Tying the solar panels into the lightning protection system

Installing an Air Terminal

It is large storms that tend to bring down the power around here. Generally speaking, lightning strikes are geographically concentrated in the southeast, south and mid-west. Until we moved to Virginia (with an annual average of 344,702 lightning strikes a year, and likely to increase with climate change) from California, I had not thought much about lightning. However, the fire that resulted from a lightning strike at my neighbor's house convinced me that my husband was right and lightning protection (and a whole house surge protector) was something we should buy. The air within a lightning strike can reach 50,000 degrees Fahrenheit, and one lightning stroke can generate between 100 million and 1 billion volts of electricity, frying every computer and electrical appliance in the house. Lightning is still a major cause of building fires, even though highly effective (though not perfect) protection has long been available.

The National Fire Protection Association, NFPA, established the American standard for installation of lightning protection systems, now known as NFPA 780, in 1904. Installation of a system in conformance with NFPA 780 can cost thousands of dollars depending on the size and shape of the house. To provide effective protection, a lightning protection system must include a sufficient number of rods with tips exposed and extending above the structure. These lightning rods, called air terminals, become the preferred strike receptor for a descending stepped leader from the thundercloud. The rapidly varying lightning current must then be carried away from the building into the earth through a down-conductor system that provides the path of least resistance and impedance to the flow of current and prevents "side flashes" to other objects in the vicinity of the system. All nearby metal components of the structure (solar panels, generator, roof vents, water pipes, etc.) must be properly connected to the down-conductor system to ensure the flow of current to the earth. I will never really know if I needed a lightning protection system. So far, the major benefit is that I'm very relaxed and sleep well during lightning storms, and I am satisfied that preventing the small probability of losing all my appliances and electronics is worth the price.

In the United States we have failed to plan for the future, to properly value and maintain 24/7 water, sewer, electricity and phone. This infrastructure needs to be constantly maintained, improved and replaced; no mechanical component has an infinite life span. Water, sanitary sewers or septic, electricity, phone and Internet service are not a birthright. We have failed to spend our money on maintaining the infrastructure we have and to fund the commitments we have made. The likely future is one with more and longer power outages, water supply disruptions and other failures. Think about it. The Moore family might, but the Ward family does not lose power.

Thursday, July 5, 2012

Upgrading My Heat Pump and Ducts

Though I had always assumed that when the time came I would replace my heat pump with a geothermal heat pump, that’s not what happened. I am replacing my air heat pump with another air heat pump, a more efficient one, and re-ducting the attic to create a more efficient and effective system. The cost to reconfigure my finished basement ($5,000-$10,000) and install either a vertical coil or standing column well ($12,000-$18,000) combined with technical difficulties, limited cost savings and the potential I might impact the drinking water aquifer or damage my garden ended my plans to retrofit a geothermal unit into my existing home.

Lots of things have changed since this house was built in 2004 (with a builder grade system). First of all, an air heat pump is usually a split system consisting of two parts: an indoor (coil) unit and an outdoor (condensing) unit. Both units are designed to work together. Heat-pump systems manufactured today must, by law, have a seasonal energy efficiency ratio (SEER) of 13 or higher, while my heat pump has a SEER of 12 and an HSPF of less than 8. The Seasonal Energy Efficiency Ratio (SEER) and Heating Seasonal Performance Factor (HSPF) are the efficiency ratings on heat pumps; the higher the SEER/HSPF, the more efficient the equipment. The SEER is the average Btu of cooling output over the season divided by the watt-hours of electricity consumed, and is the standard measure of energy use efficiency. The Air-Conditioning, Heating and Refrigeration Institute (AHRI) defines the method to measure SEER. AHRI was formed in 2008 by a merger of the Air-Conditioning and Refrigeration Institute and the Gas Appliance Manufacturers Association. Generally, the higher the SEER/HSPF of a unit, the higher the initial cost and the lower the operating cost. For these new, high-efficiency systems to work properly, the outdoor unit and indoor unit must be perfectly matched, properly sized and correctly ducted to deliver the right air flow.
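To make the rating concrete, here is the SEER arithmetic in a few lines of Python; the seasonal totals are invented illustrative numbers, not AHRI test data.

```python
def seer(season_btu_cooling, season_watt_hours):
    """SEER: seasonal cooling output in Btu divided by electrical input in Wh."""
    return season_btu_cooling / season_watt_hours

# Illustrative season: 36 million Btu of cooling delivered on 2,400 kWh.
print(seer(36_000_000, 2_400_000))   # -> 15.0

# Relative savings from an upgrade: energy use scales with 1/SEER, so going
# from SEER 12 to SEER 14.5 trims cooling electricity by about 17%.
savings = 1 - 12 / 14.5
print(f"{savings:.0%}")              # -> 17%
```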

New Energy Star certified air heat pumps have minimum requirements of 14.5 SEER, 8.2 HSPF and 12 EER or higher. Air heat pumps are available with up to 20.5 SEER and up to 13 HSPF. Two-stage or variable-speed cooling makes this possible. The heat pump has a compressor with two or more levels of operation: high for hot summer days and low for milder days. Since the low setting is adequate to meet household cooling demands on all but the hottest days, a multi-stage unit runs for longer periods and produces more even temperatures. Longer cooling run cycles allow a two-stage or multi-stage heat pump to remove more moisture from the air and allow you to size the unit for hottest-day capacity without reducing efficiency. The indoor air handler (the fan) provides the energy to move air through the ductwork to the rooms of your house. The high efficiency units also have a variable speed motor that automatically changes speed based on air flow requirements to maintain temperature settings and eliminate the on/off cycling of the blower.

To properly size a system for a home there is Manual J from the Air Conditioning Contractors of America, ACCA. In truth, what there really is are several computer models and an iPhone app that do the calculations for you. The only problem is that the input factors that impact the calculation include the climate; the size, shape and orientation of the house; the home's air leakage rate; the amount of insulation installed; the window areas, window orientations, and glazing specifications; the type of lighting and major home appliances; and the number of occupants. Slight variations in the input assumptions produce different results. In the model I played with, baseline inputs were available based on square footage, orientation, age of home and zip code, and then adjustments could be made. The results were no better than my back of the envelope calculation, but I know my house (the square footage, orientation, the additional insulation and window films I installed), and I figure that the heat pump should be around 3.675 tons. My existing heat pump turns out to be 3.5 tons. Once the temperature reached 90 degrees in Virginia, the heat pump ran continuously and could not keep the master bedroom or the bonus room cool, which is probably one of the reasons why I am replacing an 8 year old system. A high efficiency two-stage or multiple-stage heat pump allows me to oversize the unit slightly so that it can handle the hottest days without sacrificing optimal performance on more temperate days, so the old rule that an oversized system will cycle on and off too frequently, greatly reducing its ability to control humidity and its efficiency, is no longer strictly true. If you are going with a multiple-stage system, round up.
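My back-of-the-envelope approach amounts to a square-footage rule of thumb, which can be sketched as below. This is emphatically not a Manual J calculation; the square-feet-per-ton figure and the house area are illustrative assumptions.

```python
# Crude cooling-load rule of thumb: NOT a Manual J calculation.
# SQFT_PER_TON and the house area are illustrative assumptions.

SQFT_PER_TON = 600      # rough figure sometimes used for mid-Atlantic homes
BTU_PER_TON = 12_000    # by definition, 1 ton of cooling = 12,000 Btu/hour

def tons_needed(square_feet):
    """Rough cooling capacity estimate in tons."""
    return square_feet / SQFT_PER_TON

area = 2_200            # assumed conditioned square footage
print(f"{tons_needed(area):.2f} tons, "
      f"or {tons_needed(area) * BTU_PER_TON:,.0f} Btu/h")  # -> 3.67 tons, or 44,000 Btu/h
```

A real Manual J run folds in leakage, insulation, glazing and occupancy; the rule of thumb only gets you in the neighborhood, which is why contractors still run the model.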

An essential element of the efficient and effective heating and cooling of your home is the duct system, and there is Manual D by ACCA intended to ensure a good design. Many homes built after 2000 have flexible ducting, and this could be a problem in the performance of your system. ASHRAE, founded in 1894, is the leader in research focused on building systems, energy efficiency, indoor air quality and sustainability. ASHRAE sponsors research at various universities to advance the sciences of heating, ventilating and air conditioning, and it sponsored a series of studies between 2002 and 2006 that found that the airflow loss in flexible ducting in real-world installations was 9-10 times the loss anticipated in the 1999 design standard used in Manual D for most homes built during the last building boom. In addition, the experimental results showed that with compression ratios exceeding 4% (the minimum compression found in the real world), duct performance varies considerably with slight variations in the installation. A low-skilled, inexperienced or sloppy worker does a poor job that will impact the performance of your system.
The ducts in my well insulated attic

An examination of my ducts in the attic found a poorly executed installation. I should not be surprised, since several of the ducts were not properly attached to the distribution boxes when we first bought the home from the lender. I had the ducts sealed when I added additional insulation to the home. The flexible ducts in my attic are R-6 with a black vapor barrier. Flexible ducts consist of three layers: an inner core of a metal helix encased in a plastic or foil film, an insulation layer, and an outer vapor barrier jacket. While fully extended, properly installed flexible duct can be as good at maintaining air pressure as a galvanized steel duct, but performance deteriorates as the ducts sag. In all the real-world tests there was some degree of compression or sag (more than anticipated) even in good installations. In poor installations there were sharp bends, excess lengths and significant restrictions due to squishing the duct into tight spaces. When a flexible duct is compressed (or sagging) the inner layer crumples (it is a soft spring) and the helix pops out. Instead of a smooth circular tube, the flexible duct becomes a bumpy pathway for the air that causes turbulent flow and a very significant pressure drop from the beginning to the end of the duct. In my case, almost no air flow reached the bonus room. The scientists at Lawrence Berkeley National Laboratory and Texas A&M found this effect to be orders of magnitude above the range provided in the ASHRAE design standards. The reason the drop was so great is that the ducts operate at very low pressure, and a small resistance due to a fitting or duct friction can have a very big impact on flow. The scientists calculated pressure drop correction equations so that system designers could correct for this effect.
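The flow consequence of that extra resistance can be sketched with a simple turbulent-flow model: for a fixed available static pressure, flow scales roughly with the square root of pressure over resistance. The numbers below are illustrative assumptions of mine, not the actual LBNL/Texas A&M correction equations.

```python
import math

# Sketch of why small added resistance crushes airflow at the low static
# pressures residential duct systems run at. Flow through a branch in
# turbulent flow scales roughly as sqrt(pressure / resistance).
# All numbers are illustrative assumptions.

def branch_flow(static_pa, resistance):
    """Relative airflow (arbitrary units) for a given branch resistance."""
    return 100 * math.sqrt(static_pa / resistance)

STATIC_PA = 50.0   # assumed available static pressure at the branch
taut = branch_flow(STATIC_PA, resistance=1.0)    # well-stretched duct
saggy = branch_flow(STATIC_PA, resistance=10.0)  # ~10x the design pressure loss

print(f"Sagging duct delivers {saggy / taut:.0%} of the taut duct's flow")  # -> 32%
```

A tenfold increase in resistance cuts the branch's flow to roughly a third, which is about what a starved bonus-room register feels like.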

I did not even bother to look for a Manual D computer program. The solution to improving my duct air flow was simply to install galvanized steel trunk lines and distribution boxes, properly sealed with UL 181 foil-backed butyl tape and wrapped with R-8 (or higher) reflective insulation. The trunk lines will have straight runs and gentle curves to the distribution boxes, but I am going to use flexible R-8 duct to tie into the last foot of the vent sleeves (to avoid replacing all the boots), keeping the transition as smooth as possible. I am going to use reflective insulation at a minimum of R-8 to take advantage of what little boost I can get from decreasing the emittance of the ducts. Radiant barriers on your ducts work in your attic to prevent some of the heat from the roof from being transferred into the ducts. The idea is to have the radiant barrier or coating reflect some of the heat of the attic space away from the ducts. Oak Ridge National Laboratory, ORNL, found in field experiments that radiant barriers installed in the attic could reduce air conditioning bills in the hottest parts of the country, so hopefully I will get some small boost from it. In addition, I will install a temperature-controlled attic fan to reduce peak temperatures in the attic, but allow the attic to benefit from southern exposure heat gain in the winter. The new insulated and sealed galvanized ducts and new, properly installed reflective flexible duct supply lines to the existing registers will add several thousand dollars to the cost, but should significantly improve the performance of the system, and the galvanized steel portion of the ducts will last for decades. Total cost: $16,400. After the work is done I will have to blow more cellulose into the attic to correct what has settled or was disturbed in the installation.

Monday, July 2, 2012

Carbon Capture- Will It Save Us?

Last week a three-judge panel of the U.S. Court of Appeals in Washington ruled that the U.S. Environmental Protection Agency, EPA, had "substantial record evidence" for its 2009 conclusion that greenhouse gases are pollutants that endanger human health and probably caused the climate to warm over the past several decades. Opponents of that determination had essentially asked the Court to re-weigh the scientific evidence before EPA and reach its own conclusion. However, the three-judge panel wrote in the opinion for the case that "(t)his is not our role."

Back in December 2009, the EPA officially found that greenhouse gases in the atmosphere threaten the public health and welfare of current and future generations, and the agency started on the path to regulating carbon dioxide, CO2, after the "American Clean Energy and Security Act," also known as the Waxman-Markey energy bill, was defeated in the Senate. After collecting CO2 emission data from industry, the EPA "found" in 2012 that the largest carbon dioxide generators are the largest stationary combustion sources. It was no surprise that these are the largest (coal) electrical generation and industrial plants in the nation; big furnaces generate more CO2. For the past decade electrical generation has accounted for approximately 40% of the carbon dioxide emissions in the United States and worldwide. At the end of March 2012, the EPA proposed the first Clean Air Act standard for CO2 targeted at power plants. The agency plans to phase in industrial facilities covered by the carbon rules through 2016. Under the new rule, new power plants will have to emit no more than 1,000 pounds of CO2 per megawatt-hour of energy produced. That standard effectively changes the fuel of choice for all future power capacity additions to natural gas, nuclear, or the renewable category (with government subsidies). All existing plants, and those permitted and built in the next 12 months, will be grandfathered and exempt from this new rule for now.

Coal electrical generation plants currently produce about 1,800 pounds of carbon dioxide per megawatt-hour of electricity. EPA describes the CO2 rule requiring new plants to produce no more than 1,000 pounds of CO2 per megawatt-hour as creating "a path forward for new technologies to be deployed at future facilities that will allow companies to burn coal, while emitting less carbon pollution." The EPA in its new regulations and the Department of Energy, DOE, in its research grants are pushing forward on the development of carbon capture. In June the International Energy Agency, IEA, released its preliminary 2011 estimates of world CO2 emissions from fossil fuel combustion. World CO2 emissions rose by 1 billion metric tons, a 3.2% increase over the previous year, to reach 31.6 billion metric tons. The worldwide level of CO2 is now higher than the worst-case scenario outlined by climate experts just five years ago and within 1 billion metric tons of the IEA point of no return. (That is the point beyond which mankind cannot hold global warming to 2 degrees Celsius.)
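The arithmetic behind that standard is simple enough to write down. The 1,800 lb/MWh figure for coal comes from the paragraph above; treating the rule as an implied capture requirement is my framing, not EPA's.

```python
# How much CO2 a typical coal unit would need to capture to meet the rule.

COAL_LB_PER_MWH = 1_800    # typical existing coal plant (from the text)
LIMIT_LB_PER_MWH = 1_000   # proposed EPA standard for new plants

capture_fraction = 1 - LIMIT_LB_PER_MWH / COAL_LB_PER_MWH
print(f"A new coal plant must capture about {capture_fraction:.0%} of its CO2")  # -> 44%
```

Combined-cycle natural gas plants already come in near or under 1,000 lb/MWh, which is why the rule effectively steers new capacity to gas.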
In 2011 the top four world generators of CO2 emissions from fossil fuels were (from highest to lowest) China, the United States, the European Union and India, which edged out Russia to take the number four slot. China's increase contributed almost three quarters of the global rise, with its emissions growing by 720 million metric tons, or 9.3%, to 8.46 billion metric tons of CO2, primarily due to higher coal consumption. India's emissions rose by 140 million metric tons, or 8.7%, to 1.75 billion metric tons. Since 2000, China has more than tripled its installed capacity of coal power plants, while India's capacity has increased by 50%. Neither country has used the most efficient designs and technologies available for those plants, and those plants will continue to operate 24/7 for decades to come.

CO2 emissions in the United States, in contrast, fell by 92 million metric tons in 2011, or 1.7%, to an estimated 5.32 billion metric tons. The European Union's CO2 emissions from fossil fuels fell by 69 million metric tons to approximately 3.56 billion metric tons. Japan's CO2 emissions increased by 28 million metric tons, or 2.4%, to approximately 1.19 billion metric tons, as a result of a substantial increase in the use of fossil fuels in power generation after the Fukushima tsunami. Russia and Canada reportedly remained fairly stable from the previous year. Nonetheless, the IEA still believes that it is possible to prevent the earth's temperature from rising more than 2 degrees Celsius if "timely and significant government policy action is taken, and a range of clean energy technologies are developed and deployed globally." One of the key technologies according to the IEA is carbon capture.

In 2009 DOE supported eleven projects to conduct site characterization of geological formations for CO2 storage. Carbon capture is really three activities: capturing CO2 at point sources (power plants, industrial plants, and refineries); transporting the captured CO2 to a geological storage site; and injecting the CO2 into the ground for permanent storage and monitoring the site for eternity. Capturing and transporting CO2 from industrial plants is technologically possible but currently prohibitively expensive, though DOE's National Energy Technology Laboratory and several universities are exploring ways to bring down the costs or raise the costs of other energy sources. A significant portion of the CO2 generated in the United States and the world comes not from large stationary point sources, but from cars, homes, and smaller sites. Only about a quarter of the CO2 generated from fossil fuel combustion annually is generated at large point sources, the only possible capture points. Storing even a portion of this amount of CO2 would require capturing the gas at many locations around the country and transporting it to facilities that could inject the CO2 into appropriate subsurface rock formations. According to the researchers, efficient underground storage of CO2 requires that it be in the supercritical phase (a dense, liquid-like state) to minimize the required storage volume.

In order for CO2 to remain in a supercritical phase, the pressure in the storage reservoir must be greater than about 68 atmospheres and the temperature above 31.1°C (Sminchak et al., 2001). These conditions require that the CO2 be injected at high pressures, which can only be achieved at depths greater than about 2,600 feet below the earth's surface. The supercritical CO2 will be injected into geologic formations that are overlain by appropriate sealing formations and geologic traps to prevent the CO2 from escaping while the injection well remains in continuous operation for years or decades. The volumes of supercritical CO2 envisioned for carbon capture are huge. A recent U.S. National Research Council report suggests that carbon capture and deep earth sequestering could potentially induce earthquakes because significant volumes of fluids are injected underground over long periods of time; however, insufficient data exist at this time to evaluate this risk. An IPCC Special Report on CO2 capture and storage suggests that between 73 and 183 million metric tons of CO2 could be captured and stored worldwide from both coal and natural gas energy plants each year (Metz, 2005). The IPCC envisions that carbon capture and well injection would take place at a number of locations, ideally near the power plants that produce the CO2, to avoid transporting it long distances under pressure.
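The ~2,600-foot depth figure can be sanity-checked against the 68-atmosphere threshold using an assumed freshwater hydrostatic gradient of about 0.433 psi per foot of depth; the gradient and the unit conversion are my additions, not from the cited paper.

```python
# Minimum depth at which hydrostatic pore pressure exceeds the ~68 atm
# needed to keep CO2 supercritical. Gradient is an assumed freshwater value.

ATM_TO_PSI = 14.696          # 1 standard atmosphere in psi
GRADIENT_PSI_PER_FT = 0.433  # assumed freshwater hydrostatic gradient

def min_depth_ft(pressure_atm):
    """Depth in feet where hydrostatic pressure reaches the given value."""
    return pressure_atm * ATM_TO_PSI / GRADIENT_PSI_PER_FT

print(f"{min_depth_ft(68):,.0f} ft")   # -> 2,308 ft
```

That lands around 2,300 feet, consistent with the roughly 2,600-foot figure once a margin for the temperature requirement and local pressure gradients is included.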

American Electric Power, AEP, participated in three DOE-funded projects to advance CCS technologies. All were conducted at the Mountaineer Plant in New Haven, West Virginia (from which some of my power is supplied within the PJM Interconnection). AEP planned to replace its pilot demonstration CO2 capture plant with a larger $668 million carbon capture and storage facility, which would have buried more than 1 million metric tons of CO2 a year, splitting construction costs evenly with the DOE, but the company failed to obtain the consumer rate increases necessary to fund the experiment. The project has been discontinued. In 2010 there were almost 1,400 coal-fired electrical generating units in the United States; if each were converted to carbon capture operation, the total cost would be almost a trillion dollars in construction (assuming no cost overruns) and would capture 1.4 billion metric tons of CO2 per year. That would represent 26% of the net annual CO2 emissions of the United States and would increase average electrical rates 25% nationally just to build the units. Electrical rates would have to increase further to cover the annual operating costs of the carbon capture units. Actual rate increases would vary by region.
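The back-of-the-envelope behind those national figures simply scales the Mountaineer project; treating every coal unit as Mountaineer-sized in cost and capture is a simplifying assumption on my part.

```python
# National CCS retrofit back-of-envelope, scaling the Mountaineer project.
# Treating every unit as Mountaineer-sized is a simplifying assumption.

UNITS = 1_400                    # coal-fired generating units in 2010
COST_PER_UNIT = 668e6            # dollars, Mountaineer-scale CCS build
CAPTURE_PER_UNIT = 1e6           # metric tons CO2 per year per unit
US_EMISSIONS_2011 = 5.32e9       # metric tons CO2 (from the IEA figures above)

total_cost = UNITS * COST_PER_UNIT
total_capture = UNITS * CAPTURE_PER_UNIT

print(f"Construction: ${total_cost / 1e9:,.0f} billion")                    # -> $935 billion
print(f"Capture: {total_capture / US_EMISSIONS_2011:.0%} of US emissions")  # -> 26%
```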

The AEP projects were demonstrations of Alstom's Chilled Ammonia Process for post-combustion CO2 capture. The process uses ammonium carbonate to absorb CO2 and create ammonium bicarbonate. The resulting ammonium bicarbonate is converted back to ammonium carbonate in a regenerator and reused to repeat the process. The flue gas, cleaned of CO2 but with the tell-tale smell of the ammonia reaction, flows back to the stack, and the captured CO2 is sent for storage. Once captured, the CO2 is compressed into a liquid state and injected 1.5 miles beneath the earth's surface. Several major pilot projects in Europe have also been cancelled in the last few years because of doubts over their financial and technical viability. Some are still under consideration for EU and government funding, but the need to rescue the Euro and European banks has consumed the financial resources of the European Union. Ayrshire Power in Scotland blamed its cancelled plans for a new carbon-capture power station at Hunterston on the recession and anxieties about winning funding from the government, and the same reasons were given for the cancellation of the Longannet power station in Fife.

Globally, only a few small-scale commercial carbon capture projects are in operation. The oil and gas fields in the North Sea are the site of the world's first offshore commercial CO2 capture and storage project. Carbon dioxide is captured at a plant located on the offshore natural gas platforms and is stored underground in a sandstone formation approximately 2,600 feet below the sea bed. The CO2 tax levied on offshore oil and gas operations by the Norwegian government made the project worthwhile, and the drilling rig and available aquifer made it possible. CO2 is removed from the natural gas produced at the Sleipner field in the North Sea and re-injected into a very porous, permeable sandstone saline aquifer above the oil and gas reserves. Approximately 1 million metric tons of CO2 have been stored each year since 2000, when the system went into operation. This is just a small fraction of the 31.6 billion metric tons of CO2 released into the atmosphere each year. It appears as if the United States has passed the point of peak CO2, but the atmosphere of the earth is interconnected, and China and India appear to be increasing their CO2 emissions by 860 million metric tons a year. It matters what kind of power plant, and how efficient a plant, is installed in China or India, since those plants will be sending particulates and CO2 into the atmosphere for decades. Nonetheless, we have no control over the growth of India and China's coal-fired power supply, nor over the abandonment of nuclear power by Germany, Belgium, Switzerland and Japan in the next decade in response to the damage to the nuclear reactors in the Japanese Fukushima tsunami.