Thursday, April 30, 2015

Algae Blooms to be Monitored from Space

Earlier this month the U.S. Geological Survey (USGS), the U.S. Environmental Protection Agency (EPA), NOAA and NASA announced that they are jointly developing an early warning system that uses historical and current satellite data to detect algal blooms. The $3.6 million project intends to create a reliable, standard method for identifying cyanobacteria blooms in U.S. freshwater lakes and reservoirs using satellite measurements of water color. “Algal blooms pose an expensive, unpredictable public health threat that can affect millions of people,” said Sarah Ryker, USGS Deputy Associate Director for Climate and Land Use Change. “By using satellite-based science instruments to assess conditions in water and on adjacent land, we hope to improve detection of these blooms and to better understand the conditions under which they occur.”

NOAA and NASA pioneered the use of satellite data to monitor and forecast harmful algal blooms, especially in the Gulf of Mexico and the Great Lakes. Satellites allow for more frequent and continual observations over broader areas than water sampling and could potentially provide an early warning of the formation of a toxic algae bloom. The Landsat satellite series, a joint effort of USGS and NASA, has provided a continuous record of land use and land cover conditions since 1972. The latest satellite, Landsat 8, has demonstrated promising new capabilities for water quality assessment.

Toxic algae blooms, which can create so-called dead zones, form in summers when higher temperatures reduce the oxygen-holding capacity of the water and the air is still, especially in years of heavy rains that carry excess nutrient pollution from cities and farms. The excess nutrient pollution combined with mild weather encourages the explosive growth of algae. The dead zones in the 1970’s were caused by phosphorus in what we would today consider only partially treated sewage released by wastewater treatment plants. Stronger regulations on wastewater treatment plants under the Clean Water Act seemed to alleviate that problem to a large extent. However, newer toxic algae blooms capable of affecting human health began to appear in this century.

Only certain species of blue-green algae form the toxin, for reasons that aren't fully understood. Toxic blooms were not a problem until the 21st century, though algae blooms have been a problem on Lake Erie, the Gulf of Mexico, the Chesapeake Bay and other areas for over half a century. Microcystin is a toxin produced by Microcystis, a type of blue-green algae (cyanobacteria) that spreads in the summer algae bloom. Toxic algae blooms had almost disappeared on the Great Lakes by the end of the 20th century, but there has been a recurrence, with some of the worst algae blooms seen in the Great Lakes occurring in the last six to eight years. EPA Administrator Gina McCarthy attributes this to climate change.

Last August the water supply for Toledo, Ohio tested positive for microcystin. A “Do Not Drink” order was issued for the city and the residents were without drinkable tap water for three days. On day three the drinking water from Toledo’s Collins Park Water Treatment Plant was declared safe to drink and life returned to normal in Toledo, but is the new normal safe drinking water only most of the time? The current project is intended to broaden our understanding of the formation of toxic algae blooms. EPA researchers are working to develop mobile apps that will help local communities respond quickly to real-time water quality issues.

The joint agency project also includes a research component to improve our understanding of the environmental causes and health impacts of cyanobacteria and phytoplankton blooms across the United States. Blooms in lakes and estuaries are produced when aquatic plants receive excess nutrients under suitable environmental conditions. Various land uses, such as urbanization and agricultural practices, change the amount of nutrients and sediment delivered within watersheds, which can influence cyanobacterial growth.

Monday, April 27, 2015

Virginia’s Voluntary Agriculture Programs are Reducing Pollution

“A nation that destroys its soils destroys itself.” - President Franklin Delano Roosevelt. Healthy soil leads to stable food production and healthy water resources. Though in President Roosevelt’s day ecology was an emerging field and we were just beginning to understand how these factors work together, today we have ecological models that can quantify the results.

Volunteers, partners, directors and staff from Region II of the Virginia Soil and Water Conservation Districts met for their regular spring meeting to discuss how their work to protect and preserve the soils and waters of the Commonwealth is helping the state achieve the Chesapeake Bay Total Maximum Daily Load (TMDL) targets for nutrient contamination and sediment.

The TMDL was mandated by the U.S. Environmental Protection Agency (EPA) and is really a pollution diet. The TMDL sets total Chesapeake Bay watershed limits for nitrogen, phosphorus and sediment that represent about a 25% reduction in pollution, to be achieved by 2025. Virginia, the other five Chesapeake Bay states and Washington DC were each required to submit, and have approved by the EPA, a detailed plan of how they intend to achieve the pollution reduction goals assigned to them. These plans, called Watershed Implementation Plans (WIPs), can be used in conjunction with the Chesapeake Bay Model to quantify performance. Russ Baxter, Virginia’s Deputy Secretary of Natural Resources for the Chesapeake Bay, was kind enough to update the group.

Under the mandate by EPA and the implementation plan approved by EPA, Virginia needs to reduce the nitrogen released into the Bay by 7.9 million pounds, the phosphorus by more than 1.1 million pounds and the sediment by 404 million pounds by the end of 2015, as compared to the 2009 baseline levels. These are just the first set of goals under the mandate and approved implementation plan, which requires further reductions in released pollutants until 2025, when we are targeted to meet the overall 25% reduction goal.

All the information we have on how we are doing is based on the Chesapeake Bay Model. The good news is that, according to the model, Virginia is ahead of schedule on reductions in nitrogen and phosphorus, the nutrients. Using the latest version of the watershed model (5.3.2), nitrogen reductions are ahead of the 2015 goal and phosphorus is ahead of the 2017 goal.

Unfortunately, according to the model the sediment levels have increased, though it is unclear why. It is hoped that revisions to the land use base data in the Chesapeake Bay Model may produce a better sediment result. Overall, though, the water quality of the Chesapeake Bay has been dismal, below 30% and falling over the past couple of years. However, Virginia is doing well at meeting its goals, helped by the billions of dollars Virginia had already spent on updating its wastewater treatment plants and the success of its animal management programs.

The U.S. Environmental Protection Agency recently completed an evaluation of Virginia’s animal agriculture programs and of Virginia’s implementation of those programs. This included reviewing agricultural census data (which found fewer animals than expected) and data collected by the Soil and Water Conservation Districts verifying the implementation and functioning of agricultural Best Management Practices (BMPs). Agricultural BMPs are approved and quantified methods of farming that ensure reductions in the amount of nitrogen, phosphorus, and sediment pollution going to waterways within the Bay watershed. Though we have reduced the nitrogen and phosphorus entering the Bay from Virginia, it is discouraging that the overall health of the Bay has gotten worse.

The EPA’s assessment looked at Virginia’s implementation of the federal and state regulatory programs that manage large-scale permitted concentrated animal operations, as well as the voluntary incentive-based programs for smaller animal operations and crop operations, to meet the nutrient and sediment reduction commitments in its TMDL Watershed Implementation Plan (WIP). The voluntary programs consist of agricultural BMPs implemented by farmers with the help of the Soil and Water Conservation Districts, which oversee the cost-share programs used to encourage farmers to adopt BMPs on their farms. There had been criticism that the agricultural programs were largely voluntary, but EPA found the programs to be effective, well implemented and monitored.

Reducing pollution from agriculture, converting acreage to stream buffers and restoring wetlands are the cheapest ways to reduce pollution, and the state’s WIP expects to get 75% of its nitrogen, phosphorus and sediment pollution reductions from agriculture. Even using agricultural lands to achieve most of the pollution reductions, the costs of the Watershed Implementation Plans are astronomical, about $13.6-$15.7 billion in Virginia alone.

The latest legislative session in Virginia maintained funding for the Conservation Districts, though some cost-share monies have been reduced. The Virginia Department of Forestry received $1.3 million from the Chesapeake Bay Funds. The NRCS (Natural Resources Conservation Service, part of the U.S. Department of Agriculture) is funded under the Farm Bill and has conservation programs and financial assistance programs that complement the work of the Districts. NRCS implements a long list of federal programs that you might want to take a look at. Together we all work to help Virginia achieve its pollution goals.

Thursday, April 23, 2015

Testing Finds Potential Coal Ash Contamination in NC Groundwater

DENR
According to the North Carolina Department of Environment and Natural Resources, the first results of well testing sent to residents and others near Duke Energy’s coal ash ponds show that most of the groundwater supplies contain contaminants that exceed state groundwater standards. Of the 117 results mailed to the homeowners, 87 (about 74%) exceeded state groundwater standards. Information mailed from the state to the residents included the test results, health risk evaluations and, as necessary, some potential well treatment options to remove or reduce contaminants from well water.


On February 2, 2014, the second largest coal ash spill in U.S. history occurred in North Carolina when a stormwater pipe under a coal ash impoundment at Duke Energy’s retired Dan River plant ruptured. An estimated 140,000 tons of coal ash and contaminated wastewater was released into the Dan River. This disaster drew public attention to a significant problem and uncovered questionable practices in the handling of coal ash in the state. North Carolina is home to 14 coal-fired power plants and a total of 50 coal ash impoundments. According to the EPA, North Carolina’s impoundments have enough capacity to hold 19 billion gallons of coal ash.

Coal ash is a byproduct of the combustion of coal at power plants to make electricity, and contains more than a dozen heavy metals and chemicals, including arsenic, mercury, lead, boron, cadmium, selenium, chromium, nickel, thallium, vanadium, zinc, nitrogen, chlorides, bromides, iron, copper and aluminum. On April 19, 2013, EPA signed a notice of proposed rulemaking to revise the effluent limitations guidelines and standards for coal-fired power plants that would strengthen the existing controls on discharges from all steam electric power plants. The proposal sets the first federal limits on the levels of toxic metals in wastewater that can be discharged from power plants, based on technology improvements in the steam electric power industry over the last three decades. No final action has been taken on those regulations.


Following the 2014 Dan River disaster the NC General Assembly passed legislation (N.C. Session Law 2014-122) aimed at addressing the state’s coal ash sites. Though many groups felt it was not strong enough, it required that all water supply wells within 1,000 feet of any of Duke Energy’s active or retired coal-fired power plants in North Carolina be sampled as part of a groundwater assessment conducted in accordance with the law. Homeowners were contacted and wells were sampled and tested for a number of metals and other constituents that are both naturally occurring and associated with coal-burning activities: aluminum, antimony, arsenic, barium, beryllium, boron, cadmium, calcium, cobalt, chromium, copper, iron, lead, magnesium, manganese, molybdenum, mercury, nickel, potassium, selenium, sodium, strontium, thallium, vanadium, zinc, chloride, sulfate, alkalinity, bicarbonate, carbonate, total dissolved solids, total suspended solids and turbidity. Testing was also done for pH, temperature, water level, and other factors.

Now the first batch of test results is back, though the full data will not be released until all homeowners are notified. The first batch includes results for well owners near Duke Energy’s Allen, Asheville, Belews Creek, Buck, Cliffside, Marshall, Roxboro and Sutton facilities. Based on the laboratory results, North Carolina regulators found that the most common parameters exceeding state regulatory standards were iron, manganese and pH, all of which can reflect conditions found naturally in North Carolina soils and groundwater as well as in coal ash.


State officials will continue to collect and analyze the results of water samples from wells near other Duke Energy coal ash facilities and will make those results available to affected residents and the public.

Monday, April 20, 2015

Is it Time to Replace Your Smoke Alarms?

Currently, it is recommended that smoke alarms be replaced after about 10 years. There are two basic types of residential smoke detectors: ionization and photoelectric. Ionization models are excellent at detecting the small particles typical of fast, flaming fires, but tend to be poor at detecting smoky, smoldering fires. Ionization units are also prone to false alarms from burnt food and steam, the classic causes of annoying nuisance alarms. Photoelectric smoke alarms are excellent at detecting the large particles typical of smoky, smoldering fires, but poorer at detecting fast, flaming fires. Photoelectric units are less prone to false alarms from burnt food and steam, so you can mount them closer to kitchens and bathrooms.

By far, most residential smoke alarms are ionization sensor models, though I’m not sure that is the best choice. These smoke detectors contain a very tiny amount of radioactive material, americium-241, embedded in a thin gold foil inside an ionization chamber. An ionization chamber is very simple: it is basically two metal plates a small distance apart, one carrying a positive charge, the other a negative charge. The radioactive material is contained within a laminated material thick enough to completely retain the radioactive material, but thin enough to allow the alpha particles to pass. The alpha particles ionize the air between the plates, allowing a small current to flow. Small particles from fires and smoke attach to the ions and interfere with that current; when the current drops, the smoke detector alarms.

Photoelectric smoke alarms use a T-shaped chamber fitted with a light-emitting diode (LED) and a photocell. The LED sends a beam of light across the horizontal bar of the chamber, while the photocell, which generates a current when exposed to light, sits at the base of the vertical leg, out of the direct beam. When smoke enters the chamber it scatters some of the light down onto the photocell, and the resulting current triggers the alarm. Photoelectric units can, however, be insensitive to very small particulates.
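To make the two sensing principles concrete, here is a minimal sketch of the decision logic each sensor type applies to its reading. It is written in Python purely for illustration; the threshold values and function names are my own assumptions, not figures from any real alarm, which implements this in analog circuitry or firmware.

```python
# Illustrative sketch of the two sensing principles described above.
# Threshold values are hypothetical; real alarms calibrate this in hardware.

ION_CURRENT_ALARM_THRESHOLD = 80.0     # clean-air ion current ~100 (arbitrary units)
SCATTERED_LIGHT_ALARM_THRESHOLD = 5.0  # photocell sees ~0 light in clean air

def ionization_alarm(ion_current: float) -> bool:
    """Smoke particles attach to ions and REDUCE the chamber current."""
    return ion_current < ION_CURRENT_ALARM_THRESHOLD

def photoelectric_alarm(photocell_current: float) -> bool:
    """Smoke scatters the LED beam onto the photocell, RAISING its current."""
    return photocell_current > SCATTERED_LIGHT_ALARM_THRESHOLD

# Fast, flaming fire: many small particles disrupt the ion current first.
print(ionization_alarm(60.0), photoelectric_alarm(2.0))    # True False

# Smoldering fire: large smoky particles scatter light first.
print(ionization_alarm(95.0), photoelectric_alarm(12.0))   # False True
```

The asymmetry in the two comparisons is the whole story: one sensor alarms when its signal falls, the other when its signal rises, which is why each excels at a different kind of fire.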

Fire-safety officials have long believed that the leading cause of smoke-detector failure is a power-source problem, primarily dead or missing batteries, since most detectors are battery powered. The result has been campaigns to get consumers to change their batteries twice a year when they reset their clocks. But many of those experts are increasingly concerned that some detectors may fail to work because they are simply too old. According to the National Fire Protection Association, smoke detectors’ sensitivity to smoke tends to change over time, sometimes becoming more sensitive and causing more nuisance alarms, sometimes becoming less sensitive and failing to alarm.

There have been very few studies to determine the actual failure rate, though it is widely believed to be 3% per year regardless of age, based on a small Canadian study done 30 years ago, when smoke alarms were still a new invention. Thus in theory the electronic components in a smoke detector should last at least 30 years. But a smoke detector could fail at any time, and fire safety officials recommend changing them every 10 years because that provides a reasonable margin of safety, and after that time their sensors can begin to lose sensitivity. The test button you have been dutifully pressing each year only confirms that the battery, electronics, and alert system are working; it doesn’t mean that the smoke sensor is working. To really test the sensor, you need to use an aerosol can of smoke alarm test spray that simulates smoke.
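Taking that oft-cited 3% annual rate at face value, a little compounding shows why a 10-year replacement interval leaves a reasonable margin. This is a back-of-the-envelope sketch that assumes a constant failure rate, which the sparse research described above does not really establish:

```python
# Back-of-the-envelope: chance an alarm still works after N years,
# assuming a constant (and much-debated) 3% failure rate per year.
annual_failure_rate = 0.03

for years in (5, 10, 20, 30):
    still_working = (1 - annual_failure_rate) ** years
    print(f"after {years:2d} years: {still_working:.0%} chance still working")

# after  5 years: 86% chance still working
# after 10 years: 74% chance still working
# after 20 years: 54% chance still working
# after 30 years: 40% chance still working
```

In other words, even under this optimistic constant-rate assumption, roughly one alarm in four has failed by year 10.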

The U.S. Fire Administration (part of the Department of Homeland Security), the National Fire Protection Association (NFPA), the National Electrical Manufacturers Association (NEMA) and the Red Cross agree: after working for 87,000 hours, about 10 years in normal environmental conditions in the home, it is time to replace your smoke alarms.

Every home should have smoke alarms, and all homes with oil, natural gas or propane burning appliances such as a furnace, water heater, stove, cooktop or grill should have a carbon monoxide monitor. If you have an all-electric home you do not really need a carbon monoxide alarm unless you operate a generator during power outages. If you are replacing your smoke alarms, it is a good time to consider your options.

Combination smoke and carbon monoxide alarms. These alarms can detect smoke as well as carbon monoxide. Typically the smoke sensor is an ionization unit, and the carbon monoxide monitor uses an electrochemical sensor with a predicted life of 7 years. If you buy a combination ionization and carbon monoxide alarm, it is recommended that you also get a separate photoelectric unit to be fully protected.

Dual-sensor smoke alarms. These combine ionization and photoelectric technology to save you the hassle of installing two separate smoke detectors. Fire protection authorities recommend that both ionization and photoelectric smoke alarms be used together to help ensure maximum detection of the various types of fires that can occur within the home. Ionization sensing alarms detect invisible fire particles (associated with fast, flaming fires) sooner than photoelectric alarms. Photoelectric sensing alarms detect visible fire particles (associated with slow, smoldering fires) sooner than ionization alarms. With this combo alarm you’ll still need separate carbon monoxide units if you have any oil, natural gas or propane burning appliances such as a furnace, water heater, stove, cooktop or grill, or use a generator adjacent to the home.


So what am I doing? I replaced my old hard wired smoke alarms with combination smoke and carbon monoxide alarms and then added three separate photoelectric smoke alarms.



Thursday, April 16, 2015

Solar Repairs- Is the Fourth Try the Charm?

For almost two and a half years I have struggled to have my solar photovoltaic system repaired. Finally, today, I have all 32 solar panels working and reporting to the Enphase system, the system has been rewired, and I think we are good.

Almost two and a half years ago one of my solar panels appeared to fail. When I called the original installer, I discovered that he had gone out of the solar business; however, the other business owned by the same man appears to be successful. It took us a while to connect, but he referred me to a Maryland and Washington DC based installer, Lighthouse Solar.

Lighthouse Solar struggled working with Enphase and Sharp; they replaced one solar panel and two Enphase inverters and moved two panels. This did not fix the problem or even relocate it; nonetheless they remained convinced that the problem with the system was a faulty solar panel or panels. However, the more they struggled to solve the problem, the more it seemed to expand. While the problems with my solar system seemed to get worse with each repair they tried to make, the franchisee of Lighthouse Solar that I worked with went out of business or moved on. During the months while I was attempting to have my solar system serviced, a second panel and then a third panel failed. By December of last year a fourth panel or inverter appeared to fail and I was down to 28 panels reporting.

A solar photovoltaic system consists only of panels, inverters and wiring; to me it was becoming increasingly clear that the problem was not originating in the panels or the inverters. There had been difficulty with the original installation: it had failed the electrical inspection three times before finally passing. Now it was clear to me how critical good electrical design and installation are to a solar system. I was fortunate that when the Lighthouse Solar franchisee exited the business, he arranged with Prospect Solar to install a second solar panel that he believed would fix my ever-growing problems (at that time my Enphase system had only 29 inverters reporting). When the field superintendent for Prospect Solar came out, he pointed out all the visible problems with the wiring that he observed and took several pictures so that I could see the problems, too.
One of the many wiring problems identified by Prospect Solar
Prospect Solar would ultimately propose to perform the repairs on the system for $9,200. I contacted the original installer and sent him the notes and pictures that Prospect Solar had sent me. He surprised me: he took full responsibility and found another subcontractor to correct his installation (I assume one less expensive than Prospect Solar). Just before Christmas 2014 Jose arrived. Since Jose and his crew were hired by the original installer, I could only ask a few questions and hope he knew his stuff, which he assured me was true. There was no certification I could ask to see, and though he had some experience with solar installations, it appeared limited. Jose assessed the situation and explained to me that the electrical wiring was all wrong, many of the components used in the installation were only rated for interior use, and the system was not set up correctly.



New and hopefully improved wiring

And so between snow storms and frigid weather Jose and the crew rewired the system using rain-tight fittings (see photos above), replaced conduit with prefabricated fittings and added proper grounding. When he was done and the system turned back on, I had only 28 panels reporting. It would take until spring for Jose to get back here with 4 new Enphase inverters. After installing those I had 24 panels reporting; he had replaced the wrong inverters. During the rewiring the physical locations of the reporting inverters had changed and no longer matched the map provided by Enphase. So once more Jose was on the roof replacing inverters, using a hand mirror to read the serial numbers to make sure he had the right ones this time. After two days of that I once more had 28 panels reporting and producing power. Today, Enphase updated the system and added the new inverters. Jose called, I refreshed the system, and I had all 32 solar panels producing and reporting, just in time for a sunny spring. It appears that, just as Jose promised, the system is repaired.
In January no power was produced by the system while they were rewiring
I was impressed by how honorably the original solar company treated this problem and stood behind the warranty of the original work. The truth is I could not even find someone to do a thorough inspection or a repair without the assist from the original installer and Lighthouse Solar. Prospect Solar and Jose De Jesus seem to actually have the in-house technical expertise to repair a system, in case you need help. In the end I think I was lucky to have it all work out, and I now have both phone numbers for future problems with the system. Right now everything is working, though I still need to have the new conduit painted and a few broken roof tiles replaced. Jose says he’ll be back to take care of those in the next few weeks. His ladder and safety equipment are still at my house.

Monday, April 13, 2015

One Year of Water Left

Phillips Snow Field 2015

On April 1, 2015 there was no snow at an elevation of 6,800 feet in the Sierra Nevada Mountains at the Phillips Snow Field. Four times each winter the California Department of Water Resources manually surveys Phillips and several other locations to estimate the water supply for the coming year. The snowpack traditionally is at its peak by early April, before it begins to melt. California’s water year is determined by the measurement of the snowpack, and this ceremonial measuring has continued even as electronic readings have become more accurate. For the first time in 75 years there was no snow at the Phillips Snow Field.

California’s climate is dominated by the Pacific storm track. The mountain ranges cause precipitation to fall mostly on the western slopes, and these storms leave tremendous accumulations of snow in the Sierra Nevada during a wet winter. While the average annual precipitation in California is about 23 inches, annual rainfall varies greatly, from more than 140 inches in the northwestern part of the state to less than 4 inches in the southeastern part in an average year. Snowmelt and rainfall in the mountains create the annual flow into creeks, streams, and rivers. California’s surface water infrastructure is designed to capture a portion of these flows; what is not captured makes its way into the valleys, where much of the water percolates into the ground. However, April 15 to October 1 is the dry season, when there is essentially no rain, and what is in the reservoirs is all there is for the rest of the water year. After four years of drought, there is about a year of water left. If this drought doesn't end this year, California will have to begin to face a new water reality.

Though the dry and snow-free Phillips meadow was a show-and-tell media event, and the first time I can recall a Governor joining the Department of Water Resources team, earlier electronic readings had confirmed that the statewide snowpack holds only 1.4 inches of water content, just 5% of the historical average of 28.3 inches for April 1. The previous low for the date was 25%, in 2014 and 1977. Governor Jerry Brown held a news conference at Phillips Snow Field to order cities and towns across California to cut water use by 25% as part of a sweeping set of mandatory drought restrictions, the first in California’s history. Though it is often quoted that agriculture uses about 80% of surface water diversions in a full allocation year, that is not technically true. Approximately half of all rain and snowmelt is left to flow naturally in the state; the other half is diverted, primarily for human use, though a certain baseline flow is necessary to maintain the environment. Forty percent of the water goes to agriculture through the Central Valley Project (CVP), State Water Project (SWP), the Colorado allocation, local reservoirs and groundwater basins. The final 10% goes to cities. The farmers dependent on the CVP and SWP will have their allocations set later this year, and the allocation is widely expected to be zero, again. The drought has not affected the state uniformly: farmers depending on the Colorado Compact have water, and farmers can also draw from groundwater.
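The arithmetic behind these percentages is worth a quick check. The sketch below simply restates the figures given above, showing how the often-quoted 80% agriculture figure follows from the 50/40/10 split; no numbers beyond those in the paragraph are assumed:

```python
# Restating the water-split figures quoted above (shares of all rain and snowmelt)
environmental_share = 0.50   # left to flow naturally
agriculture_share = 0.40     # CVP, SWP, Colorado allocation, local supplies
cities_share = 0.10

# The often-quoted "80%" is agriculture's share of the DIVERTED half only
diverted = 1.0 - environmental_share
print(f"agriculture: {agriculture_share / diverted:.0%} of diverted water")  # 80%

# And the April 1, 2015 snowpack reading
print(f"snowpack: {1.4 / 28.3:.0%} of the 28.3-inch historical average")     # 5%
```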

These water use restrictions aimed at the cities follow recent steps to try to preserve what water California has on hand. Governor Brown declared a drought State of Emergency on January 17, 2014 and directed state officials to take all necessary actions to prepare for water shortages. He called on all Californians to voluntarily reduce their water usage by 20%, which most of the state failed to achieve. Now the restrictions are mandatory and the reduction required is 25%. If the drought does not end soon, restrictions will become rationing. In California the average daily use of water by individuals on public supply is 181 gallons a day, more than two and a half times the per capita use in places like Virginia. Much of this use must be exterior water use, watering gardens and lawns, and should offer simple areas of reduction without giving up showering and laundry.
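For scale, here is a minimal sketch of what the mandated cut implies for the per-capita figures quoted above; the Virginia number is inferred from the "two and a half times" comparison rather than taken from any published statistic:

```python
# Implications of the mandated 25% cut for the per-capita figures quoted above
ca_gallons_per_day = 181
mandated_cut = 0.25

print(f"California target: {ca_gallons_per_day * (1 - mandated_cut):.0f} gallons/day")
# California target: 136 gallons/day

# Inferred from "two and a half times the per capita use in places like Virginia"
print(f"implied Virginia use: {ca_gallons_per_day / 2.5:.0f} gallons/day")
# implied Virginia use: 72 gallons/day
```

Even after the cut, the target of roughly 136 gallons a day would remain well above the implied Virginia figure, which is why outdoor watering is the obvious place to look for savings.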

Californians have seen droughts before, and each crisis has been averted by the rains finally coming before a water crisis happened. In 1995, the Pacific Institute published a report that summarized the condition of the water supply in California, stating that “California’s current water use is unsustainable. In many areas, ground water is being used at a rate that exceeds the rate of natural replenishment…” In 2005 the Pacific Institute published another report, pointing out that water demand and use continued to exceed sustainable supply. California was unmoved by these warnings.

The truth is that California has been using more water than is sustainably available to support the population, businesses and agriculture of the state, with the majority of the water (estimated at 80% of all water use, though groundwater is not tracked) going to agriculture. For more than half a century the Central Valley of California has been one of the most productive agricultural regions of the world. This has been made possible by the ample supply of cheap water used for irrigation. The limit to California’s agriculture is water availability, which is a combination of surface water diversions, the Colorado allocation, and groundwater pumping. Approximately one sixth of the irrigated land in the United States is in the Central Valley, and approximately one eighth of all groundwater pumped in the United States is pumped in the Central Valley. California uses almost 31 billion gallons of water a day for irrigation in a “typical” year.

Only in drought years is the true stress on the system obvious. Precipitation varies widely from year to year. Multi-year droughts have occurred throughout the state’s history, as have devastating floods; in California’s varied climate it’s possible to have both floods and drought in the same year. California’s water system was developed over decades to address that variability and provide more reliable water supplies year-round. The original intent to smooth the variations in annual precipitation was corrupted to divert water to the most powerful, and California has a long history of water wars over water rights and diversions. State officials recently projected that California’s population will reach 50 million by 2032 and 60 million by 2050. There simply is not enough water. California’s local water agencies have invested in water recycling, conservation, groundwater storage and other strategies to stretch supplies, but demand has outstripped supply for over 50 years, as evidenced by the groundwater usage in the Central Valley.
Phillips Snow Field 2014

Even without imported water Los Angeles will continue to have some water. For 30 years Los Angeles County has recycled the water from its wastewater treatment plants. This water, from both secondary and tertiary treated wastewater, is discharged into spreading basins to recharge groundwater. This recharged groundwater will remain available to the city to prevent disaster. However, the time will soon be here for California to make some hard decisions about the allocation and ownership of water resources and the future of the state.

Thursday, April 9, 2015

Hydraulic Fracking: What It Is and Why Virginia Cares

Hydraulic fracturing is a well stimulation technique commonly used by oil and natural gas producers to increase the amount of oil and natural gas that can be extracted from wells. Hydraulic fracturing has proved to be particularly effective in enhancing oil and gas production from shale gas or oil formations. Until quite recently, shale formations rarely produced oil or gas in commercial quantities because shale does not generally allow the flow of hydrocarbons to the wellbores. Hydraulic fracturing induces physical changes to the properties of the rock.
from EPA


Some simple types of hydraulic fracturing techniques have been used on a small scale in oil and gas production for decades. However, hydraulic fracturing operations in recent years have become more complex, involving the exploration of and production from significantly deeper formations and across much larger subsurface areas through the use of horizontal drilling techniques. The development of horizontal drilling, combined with hydraulic fracturing, has made the production of oil and gas from shale feasible.

Hydraulic fracturing, or fracking as it is more commonly known, involves the pressurized injection into a geologic formation of fluids made up mostly of water, with 1-2% chemical additives to change the viscosity of the water. These chemicals can serve many functions in hydraulic fracturing, including limiting the growth of bacteria and preventing corrosion of the well casing, and the formulation used varies. The water and chemicals are injected at a pressure that exceeds the rock strength, and the fluid opens or enlarges fractures in the rock. As the formation is fractured, a “propping agent,” such as sand or ceramic beads, is pumped into the fractures to keep them from closing as the pumping pressure is released. The fracturing fluids (water and chemical additives) are partially recovered and returned to the surface, then recycled, stored or deep-well injected for disposal. Natural gas or oil flows from the pores and fractures created in the rock into the well, allowing enhanced access to the methane or oil reserves. While fracking techniques used decades ago involved 100,000-300,000 gallons of water mixed with chemical additives, a modern frack job can use 3,000,000-5,000,000 gallons.
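Comparing the two eras of water volumes quoted above gives a sense of how much larger modern frack jobs are; this is simple arithmetic on the ranges in the paragraph, nothing more:

```python
# Water per well: decades-ago treatments vs. a modern frack job (gallons)
old_low, old_high = 100_000, 300_000
new_low, new_high = 3_000_000, 5_000_000

print(f"{new_low / old_high:.0f}x to {new_high / old_low:.0f}x more water per well")
# 10x to 50x more water per well
```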

Over the past 10 years, there have been significant technological advances in horizontal drilling. Hydraulic fracturing and horizontal drilling combined can release significant quantities of oil and gas from large shale deposits, and have led to oil and gas production in parts of the country that had not previously produced significant amounts of oil or gas. This has produced a rapid expansion of fracking across the United States, including into areas without adequate regulation and safety protocols for widespread oil and gas drilling. Regulation of gas and petroleum exploration has remained primarily with the states.

Fracked oil and gas can result in an economic boom as they generate income. If fracking is done carefully and properly, the safely extracted gas can reduce air pollution and even water use compared with other fossil fuels. However, the availability of vast quantities of natural gas is likely to slow the adoption of renewable energy sources and, if fracking is done poorly, toxic chemicals from fracking fluid could be released into our water supplies and methane could be released to the air.

There is a moratorium on fracking in New York State based on health and environmental concerns, and Maryland is considering a 3-year moratorium. The moratoriums respond to concerns that include air pollution from the operation of heavy equipment; human health effects for workers and people living near well pads from chemical exposure, noise and dust; induced seismicity from the disposal of fracking fluids; and increased greenhouse gas emissions from poor wellhead control and continued use of hydrocarbons. However, the biggest health and environmental concern remains the potential for drinking water contamination from fracturing fluids, natural formation waters, and stray gases.

A significant concern is the amount of water needed to hydraulically fracture a well: on average it takes 3.8 million gallons of water for each well. Though about half the water will be returned, the recovered water will contain chemical and radiological contaminants, and this water needs to be properly treated and/or disposed of. Surprisingly, though, shale-gas extraction and processing are less water intensive than many other forms of energy extraction. The water intensities for coal, nuclear, and oil extraction are approximately 2 times, 3 times, and 10 times greater than for shale gas, respectively. Corn ethanol production uses substantially more water because of the evapotranspiration of the plants, and 1,000 times more water than shale gas if the plants are irrigated. Conventional natural gas uses less water, and renewable forms of energy such as wind and solar consume almost no water.
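The relative intensities quoted above are easier to compare tabulated side by side. Note that the comparison is only in multiples of shale gas; no absolute gallons-per-unit-of-energy baseline is given here or assumed:

```python
# Water intensity of energy extraction relative to shale gas, per the figures above
relative_to_shale_gas = {
    "shale gas": 1,
    "coal": 2,
    "nuclear": 3,
    "oil": 10,
    "corn ethanol (irrigated)": 1000,
}
for source, multiple in relative_to_shale_gas.items():
    print(f"{source:>25}: {multiple:>4}x")
```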

Maintaining well integrity, reducing surface spills and preventing improper wastewater disposal have been found to be the ways to minimize contamination from the chemicals used in fracking fluid and from naturally occurring contaminants such as salts, metals, and radioactivity found in the oil and gas wastewaters that are returned to the surface. However, there have been few definitive studies of the frequency, consequences, and severity of well integrity failure. Studies done in Ohio and Texas over a 25-year period on a mix of traditional and shale-gas wells found an extremely low incidence of groundwater contamination. In Ohio they found 185 cases of groundwater contamination, caused primarily by failures of wastewater pits or well integrity, out of about 60,000 producing wells, for an incident rate of about 0.1%. The rate for Texas was found to be even lower, at about 0.02%. The Texas study included 16,000 horizontal shale-gas wells, with none reporting groundwater contamination.

A significant concern among the public is that hydraulic fracturing could open small cracks thousands of feet underground, connecting shallow drinking-water aquifers to deeper layers and providing a pathway for the chemicals used in fracking and naturally occurring geological formation brines to migrate upward. In practice, according to research performed in Pennsylvania, this is unlikely because the depth of most (but not all) shale formations tends to be 3,000-10,000 feet below ground level, and man-made hydro-fractures rarely propagate more than 2,000 feet. According to scientists who have studied fracking, a more plausible scenario would be for man-made fractures to connect to a natural fault or fracture, an abandoned well, or some other underground pathway, allowing fluids to migrate upward. A simpler pathway for groundwater contamination, though, is through poor well construction and integrity, which was found in drinking water contamination overlying the Marcellus Shale in Pennsylvania.

The number of peer-reviewed studies that have examined potential water contamination is surprisingly low, barely a handful. Wastewater from oil and gas exploration is generally classified into flowback and produced waters. Flowback water is the fluid that returns to the surface after the hydraulic fracturing and before oil and gas production begins, primarily during the days when the well is completed. Typically it consists of 10-40% of the injected fracturing fluids and chemicals pumped underground, which return to the surface mixed with an increasing proportion of natural brines from the shale formations over time. Produced water is the fluid that flows to the surface during extended oil and gas production; it primarily reflects the chemistry and geology of the deep formation waters. These naturally occurring brines are often saline to hypersaline and can contain toxic levels of elements such as barium, arsenic, and radioactive radium. Little is actually known about how naturally occurring brines flow through formations.

In response to public concern, the US House of Representatives requested in 2009 that the US Environmental Protection Agency (EPA) examine the relationship between fracking and drinking water resources. In 2011, the EPA began a series of research projects into the impacts and potential impacts of fracking on water. Also, in April 2012 EPA released the first federal air rules for natural gas wells that are hydraulically fractured, requiring operators of new fractured natural gas wells to use “green completions,” a set of technologies and practices to capture natural gas and other volatile substances that might otherwise escape the well during the completion period, when most volatile releases take place. The latest reports found this to be very effective.

Whether the EPA will regulate oil and gas exploration nationally or leave the oversight in the hands of the states is an open question. However, the Bureau of Land Management has taken steps to regulate fracking on public land. On Friday, March 20, 2015, U.S. Secretary of the Interior Sally Jewell announced the final rule for hydraulic fracturing on federal and tribal lands. This rule establishes new requirements to ensure wellbore integrity, protect water quality, and enhance public disclosure of chemicals and other details of hydraulic fracturing operations. This final rule will supplement the existing regulations for drilling on federal lands. The George Washington National Forest is the largest protected forest in the eastern United States at 1.1 million acres in the mountains of Virginia. Approximately half of the forest sits atop the Marcellus shale deposit. Oil and gas drilling using hydraulic fracturing or any other approved method is allowed in the 16% of the forest with existing leases and privately owned oil and gas rights.

Virginia has entered this debate. Natural gas deposits are located within certain areas of the Commonwealth, and the use of fracking has a long history in Virginia, going back to the 1950s. A nitrogen-based foam has historically been used in the fracking process here. Currently, there are more than 7,700 natural gas wells in Virginia’s portion of the Appalachian Plateau where drilling required fracking in the extraction process. To date, there have not been any reports of adverse effects on water quality from the fracking. The other environmental impacts are those of any industrial process, not much different from coal mining: dust, constant truck traffic, and noise. The expansion of coal bed methane production has been in rural Buchanan and Dickenson counties.

However, there are other areas in Virginia that have methane reserves that could be accessed by fracking. The Taylorsville Basin is located north of Richmond and extends across the Virginia Coastal Plain in the tidewater region of the state. The U.S. Geological Survey estimated that the area could contain up to 1.06 trillion cubic feet of natural gas, not huge, but economically worthwhile. Shore Exploration, a Texas-based energy company, has reportedly leased the mineral rights on more than 80,000 acres in Virginia’s Northern Neck and Middle Peninsula, spanning large sections of King George, Caroline, Westmoreland, Essex, and King and Queen Counties.

Currently, Virginia law prohibits drilling in the Chesapeake Bay waters and all of the tidal tributaries, but outlines the path for drilling to proceed in the non-prohibited areas of the tidewater region. Whether or not to allow drilling in areas not identified as part of the Chesapeake Bay waters and tidal tributaries is a regulatory decision, controlled by the Virginia Department of Mines, Minerals and Energy (DMME). Basically, in order to grant a permit, DMME must undertake an environmental impact assessment in consultation with the Virginia Department of Environmental Quality (DEQ). However, DMME is only obligated to consider the findings of the assessment, and ultimately maintains the full authority to issue the permit. Local communities could be significantly impacted by truck traffic: there is no pipeline and no local source of water for hydraulic fracturing, so thousands of truckloads would have to run on small rural roads.

Monday, April 6, 2015

The Bi-County Parkway a Zombie Once More


On Tuesday, April 1, 2015 Delegate Tim Hugo (Virginia House of Delegates R-40th) along with Delegate Bob Marshall ( R-13th) and State Senator Richard H. Black (R-13) held a news conference in front of Sudley Methodist Church on Sudley Road within the Manassas Battlefield. The delegates were there to announce that Delegate Hugo had received a letter from the Virginia Department of Transportation (VDOT) stating that they are “not actively working on this project including pursuing the Programmatic agreement or the environmental approvals from the Federal Highway Administration.”


VDOT Secretary Layne went on to write, “Over the past several years legislation passed by the General Assembly and signed by the Governor have significantly changed how projects will be developed and funded.” The new House Bills require a quantitative evaluation of potential projects and change how funds are distributed.

What this really means is that no decision has been made on the proposed Bi-County Parkway. Until the Bi-County Parkway has been through the screening and scoring process later this year, it’s premature to assume it is dead. However, the so-called “programmatic agreement” with several federal agencies and the Environmental Impact Report are both necessary for the project to proceed, because the latest route to connect Interstate 66 in Prince William County and Route 50 in Loudoun County went through the Manassas National Battlefield Park.

The night before, the Chairman of the Prince William County Board of Supervisors, Corey Stewart, while affirming that the Bi-County Parkway project could not move forward at this time, said there was still a need for an outer beltway connecting Loudoun and Prince William Counties. The scoring of the projects will determine if it will proceed. For now the project remains a zombie, not dead.

The Bi-County highway corridor is approximately 45 miles in length, and is essentially a more direct route for north/south commuters and for cargo and truck traffic connecting I-95 to Dulles Airport and Route 7. The North South Corridor portion alone would cost over $1,000,000,000, run through Prince William County’s Rural Crescent (potentially damaging our watershed and impacting our groundwater resources), and eliminate one of three corridors in our green infrastructure; in addition, once the segment of the Bi-County Parkway between I-66 and VA 234 was completed, U.S. 29 and VA 234 through the Park were planned to be closed.

As far back as 2005 the Prince William Board of Supervisors had passed a resolution in support of locating a connecting parkway east of the Battlefield (which would move it out of the Rural Crescent and away from the direct watershed of Bull Run), and the parkway, without a route specified, remains part of the PW County Comprehensive Plan. Supervisor Pete Candland (R-Gainesville) plans to introduce a resolution to remove the Bi-County Parkway from the Comprehensive Plan at the next supervisors’ meeting on Tuesday, April 7th 2015. This resolution will trigger a vote by the Board of County Supervisors the following Tuesday, April 14, on the motion to amend the Comprehensive Plan to remove the Bi-County Parkway. The Supervisor needs as many citizens as can attend these meetings on April 7 (2 PM and 7:30 PM at the McCoart Government Center) and April 14 to speak out in opposition to the Bi-County Parkway and to support his motion to remove this road project from the Comprehensive Plan.

Thursday, April 2, 2015

WHO finds Roundup to be a Probable Carcinogen

from product website

Glyphosate (N-phosphonomethylglycine), the active ingredient in the herbicide Roundup manufactured by Monsanto, has been labeled a probable carcinogen by the International Agency for Research on Cancer (IARC), the cancer research arm of the World Health Organization. The IARC recently considered the status of five insect and weed killers, including glyphosate; all were found to be potential carcinogens.

There were no additional studies involved in the IARC determination, only a re-examination of some of the research done almost 20 years ago. The U.S. Environmental Protection Agency, which makes its own determinations, said it would consider the IARC’s evaluation, but as recently as 2012 the EPA’s assessment of glyphosate concluded that it was not a carcinogen and could "continue to be used without unreasonable risks to people or the environment." EPA is currently conducting the standard registration review of glyphosate to determine if its use should be limited. The EPA was expected to complete that review by 2015, but it appears to be delayed.

The new classification of glyphosate by IARC as a potential carcinogen is aimed mainly at industrial use of glyphosate; its use by home gardeners is not considered a risk by the IARC. Details of the review have not yet been published; only a short paper announcing the decision was published in The Lancet Oncology (a British medical journal). The IARC said there was "limited evidence" in humans that the herbicide can cause non-Hodgkin’s lymphoma, and convincing evidence that glyphosate can cause other forms of cancer in rats and mice. Glyphosate has been found in the blood and urine of agricultural workers, showing the chemical is absorbed by the body. Glyphosate was classified as a probable carcinogen, in the same category of cancer risk as things like anabolic steroids, working as a hairdresser and shift work.

Glyphosate, though the formulation is no longer under patent, is the most popular herbicide in use today in the United States, and increasingly throughout the world. Today, Americans spray an estimated 180-185 million pounds of the weed killer on their yards and farms every year. All the acute toxicity tests have indicated glyphosate is nearly nontoxic to mammals, so it has been assumed that any residues of glyphosate ingested from food sources or by farm workers are safe. As a consequence, measurement of its presence in food is practically nonexistent. Glyphosate and its metabolite aminomethylphosphonic acid (AMPA) have not been covered in the reports from the Centers for Disease Control and Prevention on Human Exposure to Environmental Chemicals, so human exposure has not been measured.
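To put 180-185 million pounds in perspective, a rough per-person figure can be worked out; the roughly 320 million U.S. population used here is my assumption for the period, not a figure from the post:

```python
# Rough per-person scale of annual U.S. glyphosate use
pounds_per_year = 182_500_000    # midpoint of the 180-185 million pound estimate
us_population = 320_000_000      # assumed 2015-era U.S. population

print(f"about {pounds_per_year / us_population:.2f} pounds per person per year")
# about 0.57 pounds per person per year
```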

Monsanto and industry experts have submitted a review of their studies and believe that glyphosate has been proved safe to humans and the environment. Nonetheless, there have for some time been a minority of scientists and experts in the United States who believe that glyphosate may instead be much more toxic than Monsanto claims. In 2013 a report published in the online journal Entropy presented a scientific argument, based on a systematic search of the literature and possible pathways of impact, that many of the health problems that appear to be associated with a Western diet could be explained by biological disruptions attributed to glyphosate. These include digestive issues, obesity, autism, Alzheimer’s disease, depression, Parkinson’s disease, liver diseases, and cancer.

In humans, only small amounts (~2%) of ingested glyphosate are metabolized to AMPA; the rest enters the blood stream and is eliminated through the urine. The philosophy that tiny amounts of chemicals are of no health consequence has been the cornerstone of toxicology and regulation, but that has recently come into question with our increased ability to measure trace amounts of chemicals. For many environmental chemicals, more research is needed to determine whether exposure at extremely low levels is a cause for health concern. Since 2000 there has been widespread adoption in the U.S. of Roundup Ready® (RR) crops for the production of soy, beet sugar, and corn, increasing the use of glyphosate, and confined animal feeding operations (CAFOs) use these crops to feed the animals that produce meat. Glyphosate has become ubiquitous in our industrial food supply and warrants more testing, rather than just a re-examination of older studies that were somewhat ambiguous in their results.