Thursday, August 29, 2013

The Vanishing Bee and our Food Supply

from USDA
Pollination, the delivery of the male gamete of a plant to the female stigma of the same species of plant, is essential for fertilization and for the plant to produce seeds and fruit. Without pollination there would be no fruits, no vegetables and no seeds. Though grasses, conifers, and many deciduous trees are wind-pollinated, most flowering plants need birds and insects for pollination. Birds and bats can pollinate a limited number of plants, but the vast majority of plants are pollinated by insects. Some wasps, flies, beetles, ants, butterflies and moths pollinate various flowers, but bees are responsible for the vast majority of pollination. Commercial agriculture uses honey bees raised specifically to pollinate its crops. A Cornell University study estimated that the value of honey bee pollination in the United States is more than $14.6 billion annually.

It is variously reported that honey bees pollinate between 100 and 130 crops that span the entire cultivated crop spectrum: fruits and vegetables, nuts, herbs and spices, livestock forage, and oil crops. That is everything from apples and avocados to turnips and watermelon, almonds to macadamia nuts, allspice to oregano, alfalfa to sweet clover, canola to sunflower. The extent of pollination dictates the maximum number of fruits a tree or plant will bear, and honey bees are responsible for more than 80% of the pollination in cultivated crops.

The honey bee, an immigrant from Europe, is an essential element of our monoculture form of agriculture. This is not really surprising, since most of our crops and many of our garden plants evolved in areas where honey bees were native, and both the crops and the insects were brought to the United States with the colonists to become essential parts of our agricultural system. With modern agriculture’s vast fields and groves of a single kind of plant all flowering at the same time, farmers can’t depend on feral honey bees that happen to nest near crop fields; a single flowering will not support a large bee population. Instead, farmers contract with migratory beekeepers, who move millions of bee hives to fields each year just as crops flower. Pollinating California’s estimated 420,000 acres of almond trees alone takes more than 1 million honey bee colonies that are trucked to the groves.
from USDA


In the 1990s two species of parasitic mites were accidentally introduced from Asia. The tracheal mite and the varroa mite caused severe declines in honey bee populations within a few years. These parasitic mites were finally controlled in honey bee hives by using chemical pesticides, substantially increasing the costs of large beekeeping operations. Until then many beekeepers had avoided pesticides. Unfortunately, the feral populations of honey bees that once thrived in the wild and brought genetic diversity to the honey bee population were essentially killed off by the parasitic mites, and they no longer contribute substantial genetic variability to the managed bee populations.

Then during the winter of 2006-2007, a large number of bee colonies died out; losses were reported to be between 30% and 90% in the impacted beekeeping operations. While many of the colonies lost during this period exhibited the symptoms of parasitic mites, many were lost from unknown causes. The next winter, the number of impacted honey bee operations spread across the country, and honey bee colonies died out at even higher rates. The phenomenon was termed Colony Collapse Disorder, or CCD. No disease or cause was identified; the adult honey bees seemed to simply disappear, with very few dead bees found near the impacted colonies. The impacted colonies had low levels of parasitic mites and minimal evidence of wax moth or small hive beetle damage. Other active bee colonies did not steal the food reserves; they avoided the impacted hives. Often there was still a laying queen and a small cluster of newly emerged attendants present.

There appears to be no end in sight, and CCD is spreading around the world. This mysterious disappearance of honey bees is troubling; the bee population is falling, and we may end up with insufficient numbers of pollinators to fulfill the demands of our agricultural industry. Last winter, 31% of the U.S. honey bee colonies were wiped out. The year before, the reported loss was 21% of colonies. These losses, if they continue, could have a catastrophic impact on agriculture, and we do not know what is causing the problem, let alone how to fix it. One third of all food eaten in the United States requires honey bee pollination.

Current theories about the cause(s) of CCD include increased losses due to the invasive varroa mite; new or emerging diseases, especially mortality from a new Nosema species, a microsporidian gut parasite similar to giardia; and pesticide poisoning (through exposure to pesticides applied for crop pest control or for in-hive insect or mite control). Neonicotinoids, a relatively new class of chemical insecticides, are highly toxic to bees and can cause behavioral changes in bees at sub-lethal doses. These chemicals could be a potential factor in CCD. In response to the crisis, the European Commission (EC) announced this past spring that it intends to impose a two-year ban on the entire class of pesticides known as neonicotinoids.

In addition to these suspects, the U.S. Department of Agriculture believes a likely cause of CCD is a potential immune-suppressing stress on bees, caused by one or a combination of several factors. Stresses may include poor nutrition, drought, and migratory stress brought about by the increased need to move bees long distances to provide pollination services to farmers. It is believed that by confining bees during transport and increasing contact among colonies in different hives, the industry has increased the transmission of pathogens and spread CCD. Some researchers suspect that stress could be compromising the immune system of bees, making colonies more susceptible to disease.

CCD may not be an entirely new and distinct phenomenon. Large-scale honey bee die-offs and disappearances have reportedly happened in the past. Historic literature describes spring dwindle disease, fall dwindle, or autumn collapse in beekeeping. In 1975, honey bees experienced Disappearing Disease, which affected a large number of bee colonies in the U.S. It may never be determined whether these historic situations share a common cause with the current crisis, but unfortunately, with the loss of feral populations to the Asian mites and competition from cheap imported honey, the beekeeping industry was already at a low point when CCD began to strike.

Monday, August 26, 2013

Rising Sea Levels and Shrinking Arctic Ice

from NASA
In mid-August a draft copy of the Intergovernmental Panel on Climate Change (IPCC) five-year report, due to be peer reviewed this coming fall and then officially released to the public, was leaked to Reuters and the New York Times. The media (both print and Internet) was abuzz, despite the fact that the IPCC does not do any original research but merely summarizes the published scientific literature on climate change. Six years ago the IPCC report predicted a maximum of 23 inches of sea-level rise by the end of this century, but according to the New York Times, the new report lays out several possibilities, from as little as 10 inches to a maximum of 21 inches by the end of the century.

These estimates of sea level rise are based on the amount of warming experienced by the planet in response to the increase in greenhouse gases that has already occurred. Even if the entire planet stopped burning fossil fuels tomorrow, the planet would continue to warm. Whatever is going to happen will happen; it is too late to change that. Warming increases sea level in two ways: thermal expansion and the melting of glaciers and ice caps. When water warms it expands in volume, and that expansion is expected to contribute a portion of the sea level increase. More than 97% of the Earth’s water is already within the oceans. The remaining 2.8% is the water within the land masses, and 2.15% of the Earth’s water is contained in icecaps and glaciers. The melting of the ice, which has so far been most pronounced in mountain glaciers, might increase surface infiltration of water, but the remainder will flow to the oceans, increasing sea volume. The big concern is for the future of the ice sheets in the Arctic and Antarctica and what impact the volume of water contained there will have on sea levels.
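To get a feel for the thermal expansion term by itself, a rough back-of-the-envelope calculation can be sketched. The expansion coefficient, warmed-layer depth and amount of warming used below are illustrative assumptions of mine, not figures from the IPCC draft; the point is only that a small warming of a deep layer of water adds up to a measurable rise.

```python
# Back-of-the-envelope sketch of the thermal expansion term only (the melt
# term is ignored). All values are illustrative assumptions, not IPCC numbers.
ALPHA = 2.0e-4        # assumed volumetric thermal expansion of seawater, per deg C
WARMED_LAYER_M = 700  # assumed depth of the ocean layer that warms, in meters
DELTA_T_C = 0.5       # assumed average warming of that layer, in deg C

rise_m = ALPHA * DELTA_T_C * WARMED_LAYER_M
print(f"Thermal expansion alone: ~{rise_m * 100:.0f} cm "
      f"(~{rise_m * 39.37:.1f} inches) of sea level rise")
```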

Last Friday the sea ice minimum analysis produced at NASA’s Goddard Space Flight Center using satellite data was released. The melting of sea ice in the Arctic reaches its annual minimum, when the ice cap covers less of the Arctic Ocean than at any other period during the year, in the middle of September each year. The good news is that NASA is predicting it is unlikely that this year’s summer low will set a new record. Still, this year’s melt rates are in line with the long-term decline of the Arctic ice cover observed by NASA and other satellites over the last several decades. The data record, which began in November 1978, shows the summer Arctic sea ice minimum shrinking by about 14% per decade since 1978.
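A 14% per decade decline compounds quickly, which a short sketch makes clear. The starting extent used here is a placeholder assumption of mine, not a NASA figure; only the rate comes from the satellite record described above.

```python
# Illustrative projection of a ~14% per decade decline in the September
# Arctic sea ice minimum. The 1978 starting extent is an assumed placeholder.
START_EXTENT_SQ_MI = 2.9e6    # assumed late-1970s September minimum, square miles
DECLINE_PER_DECADE = 0.14

for decades in range(5):
    extent = START_EXTENT_SQ_MI * (1 - DECLINE_PER_DECADE) ** decades
    print(f"{1978 + decades * 10}: ~{extent / 1e6:.2f} million square miles")
```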

The ice cover in the Arctic Ocean was measured at 2.25 million square miles on Aug. 21. For comparison, the smallest Arctic sea ice extent on record for this date, recorded in 2012, was 1.67 million square miles, and the largest recorded for this date was in 1996, when ice covered 3.16 million square miles of the Arctic Ocean. Though early summer saw a fast retreat of the sea ice, melting slowed as clouds over the central Arctic kept temperatures cooler than average, and the Arctic experienced mild summer storms. According to Walt Meier, a glaciologist with NASA’s Goddard Space Flight Center in Greenbelt, Maryland: “Last year’s storm went across an area of open water and mixed the smaller pieces of ice with the relatively warm water, so it melted very rapidly. This year, the storms hit in an area of more consolidated ice. The storms this year were more typical summer storms; last year’s was the unusual one.”

In Antarctica, the story is different. Right now, Antarctic sea ice is in the midst of its yearly growing cycle and is heading towards the largest extent on record. Antarctic sea ice reached 7.45 million square miles on Aug. 21. In 2012, the extent of Antarctic sea ice for the same date was 7.08 million square miles. The phenomenon appears counter-intuitive and is currently the subject of many research studies. Still, the rate at which the Arctic is losing sea ice surpasses the speed at which Antarctic sea ice is expanding, so sea level rise is of great concern to mankind, who throughout history has settled along the coastlines.

Thursday, August 22, 2013

Merck Stops Selling Zilmax While Investigating Problems

Last Friday Merck & Company suspended sales of Zilmax, a beta-agonist cattle feed additive, to perform a new study of the drug’s effect on cattle health. The suspension came after Tyson Foods, Inc. announced earlier this month that it would no longer purchase cattle fed Zilmax, due to concern for animal health. Tyson Foods is the largest meat processor in the United States by sales.

Beta-agonists were approved by the Food and Drug Administration, FDA, for use in swine in 1999 and in cattle in 2004. These compounds are "repartitioning agents," shifting nutrients away from fat deposition towards lean muscle growth, and they are used to finish cattle and swine, adding roughly 2% in muscle weight shortly before slaughter. That 2% weight gain translates into 24-34 pounds of meat per animal, an average near 30 pounds. About 24 million head of cattle, each weighing around 1,500 pounds, are harvested each year in the United States, so that is potentially a lot of beef. (It is to be noted that grass-fed, organic cattle weigh in around 950 pounds.) Until Merck stopped sales of Zilmax, two beta-agonists were available to cattle feeders: ractopamine hydrochloride (Optaflexx; Elanco Animal Health) and zilpaterol hydrochloride (Zilmax; Intervet).
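The rough arithmetic behind "a lot of beef" can be laid out from the numbers above. Treating every harvested animal as beta-agonist fed is my simplifying assumption and makes this an upper bound; in practice only a portion of feedlot cattle receive these additives.

```python
# Upper-bound estimate of the extra beef from a 2% gain in muscle weight,
# using the head count and live weight cited in the post.
LIVE_WEIGHT_LB = 1_500        # typical finished feedlot animal, from the text
GAIN_FRACTION = 0.02          # ~2% added muscle weight
HEAD_PER_YEAR = 24_000_000    # U.S. cattle harvested annually, from the text

extra_lb_per_head = LIVE_WEIGHT_LB * GAIN_FRACTION         # ~30 pounds per head
total_extra_lb = extra_lb_per_head * HEAD_PER_YEAR         # upper bound
print(f"~{extra_lb_per_head:.0f} lb per head, up to "
      f"~{total_extra_lb / 1e6:.0f} million lb of additional beef per year")
```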

Optaflexx was approved by the FDA for feeding during the last 28 to 42 days of the finishing period. When Optaflexx is fed at the recommended dose (200 mg/day), data show that body weight and carcass weight increase with no increase in feed intake; the cattle get heavier without the need for additional food. There is no withdrawal period for Optaflexx, and the animals are slaughtered at the end of the dosing.

Zilmax, Merck’s product, was approved by the FDA for feeding to feedlot cattle during the last 20 to 40 days on feed. Zilmax’s demonstrated benefits include increased live weight gains, increased speed of weight gain, increased carcass weight and improved feed efficiency. However, dosing with Zilmax must be stopped three days before slaughter to reduce the presence of the drug in the meat.

Some jurisdictions, such as the European Union and China, have restrictions on beta-agonists as a class in response to illegal use of beta-agonists such as clenbuterol, which has caused human illness because of high residues in muscle meats. The World Health Organization and the Food and Agriculture Organization, however, voted in 2012 to create a maximum residue limit, MRL, in meat for ractopamine hydrochloride (Optaflexx; Elanco Animal Health), the beta-agonist sold by Eli Lilly’s Elanco unit.
from BCRC video
Beta-agonists are just the most recent addition to the arsenal of feed additives and growth-promoting strategies that have been used to increase what the cattle industry calls feed efficiency: carcass weight gained per pound of feed. Over the past 30 years, the weight-to-feed ratio has improved by 30%. These improvements were due to changes in diet and management of the diet, grain processing, and drugs and hormones used to promote growth.

Cattle grow fatter on grains. It was discovered that the digestibility of grains like corn, barley and oats can be improved by processing. Cracking the outer shell of the grain allows the rumen microbes to better utilize grain starch and minerals. Processing also allows grain to be mixed with supplements, and it can increase palatability to cattle that do not naturally eat these grains.

The other growth-promoting drugs and hormones are ionophores and growth implants. Ionophores are antimicrobials mixed in with cattle feed that improve nutrient availability by killing off certain rumen microbes. Most rumen microbes convert the complex fiber and starch in forage and grain into simple molecules that can be absorbed into the bloodstream to provide energy and protein to the animal, but the rumen microbes known as methanogens convert dietary fiber and starch into methane gas. Methane contains energy, but it cannot be absorbed by the animal, so it is belched out and wasted. Ionophores improve feed efficiency and weight gain by killing only the methanogens, allowing the other rumen microbes to make more feed energy available to the animal.

Growth implants are pellets injected under the skin in the animal’s ear. These implants are reproductive hormones that occur naturally in the animal. In steers, implants replace some of the hormones that were lost when the animal was castrated as part of the feedlot operations. The hormones encourage protein deposition and discourage fat deposition despite the high-grain diet. This improves feed efficiency because muscle tissue contains around 70% water, while fat contains less than 25% water. Fat requires more feed than muscle.
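A minimal sketch of that water-content argument, using only the percentages cited above, is below. It ignores the different energy cost of synthesizing a pound of dry fat versus dry protein, so it is a simplification rather than a feed model.

```python
# Why a pound of muscle gain is "cheaper" than a pound of fat gain, based
# only on the water contents cited in the post.
MUSCLE_WATER = 0.70   # muscle tissue is ~70% water
FAT_WATER = 0.25      # fat tissue is less than 25% water (upper bound used)

dry_per_lb_muscle = 1 - MUSCLE_WATER   # dry tissue that must be synthesized
dry_per_lb_fat = 1 - FAT_WATER
print(f"Dry tissue per pound of gain: muscle {dry_per_lb_muscle:.2f} lb, "
      f"fat {dry_per_lb_fat:.2f} lb "
      f"({dry_per_lb_fat / dry_per_lb_muscle:.1f}x more for fat)")
```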

This strategy of utilizing growth promoters is what has reduced the relative price of beef over the last 30 years. The FDA, the US Department of Agriculture, the World Health Organization and the Food and Agriculture Organization of the United Nations have all reviewed these growth-promoting strategies and have concluded that hormones and growth promoters can be used safely in beef production. These organizations found the residual levels of these growth promoters in food products, such as beef, to be too low to pose a risk to human health.

We have only recently developed the ability to test for trace exposures and have discovered ubiquitous exposures to many substances. The detection of a chemical or hormone in food or a person’s blood or urine does not mean that it will cause health effects or disease. The guiding principle of toxicology is that there is a relationship between a toxic reaction (the response) and the amount of poison received (the dose). An important assumption in this relationship has been that there is always a dose below which no response occurs. So if the concentration of the contaminant were low enough there would be no toxic reaction, but that principle is being tested by endocrine disruption and advances in analysis. Meanwhile, I buy only organic, grass-fed meat.

Monday, August 19, 2013

Wells, Geology and Contamination

from USGS
A well is simply a hole dug or drilled into the ground from which water can be removed. The hole is called the borehole and is lined with a well casing, which is typically a plastic or metal pipe. The well casing prevents the side walls of the borehole from collapsing into the well and closing the hole. The casing is sealed into place with grout, usually neat cement or bentonite, pumped into the annular space between the well casing and the borehole. This grouting seals the well and prevents water from flowing into the well directly from the land surface down the side of the well casing pipe, or from the shallowest part of the aquifer where the water quality may be less desirable.

Depending on geology, the casing will be open at the bottom or perforated at a specific depth with a screen, allowing water to flow into the well where it can be pumped to the surface. In sandy soils well screens are common. In fractured rock systems and bedrock, screens are not necessary on low-volume domestic wells. In clay or loam, coarse sand or gravel can be placed around the well screen to help improve the flow of water into the well. These sand or gravel packs create larger pore spaces for water to accumulate. Gravel or sand packs are rarely used for domestic wells.

In the United States almost half of all drinking water is supplied by wells. About a third of the population obtains its drinking water from public supply wells, which they never think about, and about 15% of the population obtains household water from private domestic wells. Domestic well owners need to think about their wells and the groundwater that supplies them. Domestic wells have pumps that can deliver 10-15 gallons a minute into the pressure tank when needed for household use. These pumps draw groundwater from the area immediately surrounding the well. Depending on the depth of the well and the local geology, groundwater drawn into a private domestic drinking water well is typically young water; it could be weeks, months or several years old.

Typically, rainwater and snowmelt percolate into the ground, and the deeper the well, the farther away the water originated and the older the water. The groundwater age is a function of the depth of the well, the geology of the area, the precipitation, the recharge of the aquifer and the pumping rates of the aquifer, which control the rate of flow of water to a well. The age of the water in an aquifer provides insight into the likelihood of contamination from both anthropogenic and natural sources. Very young groundwater that has recently infiltrated into the aquifer is more vulnerable to contamination from human activities near the land surface than older, deeper groundwater that has had more time to be filtered by soils. Old groundwater, however, is not necessarily free of contaminants. Older groundwater can contain naturally occurring chemical elements and contamination from years past. The land surface through which groundwater is recharged must remain open and uncontaminated to maintain the quality and quantity of groundwater.

Though the most common sources of pollution to groundwater supplies come from two categories, naturally occurring sources and human activities, groundwater and domestic well water vulnerability to contamination depends on three factors:
  1. The presence of man-made or natural contaminant sources. For example, a failing septic system, chemicals poured down the drain, or the underlying sediments can be sources of contaminants entering groundwater. Radionuclides and heavy metals can enter the groundwater from the underlying rocks, and there are areas with naturally occurring arsenic, cadmium, chromium, lead, selenium and fluoride. 
  2. The natural processes in the subsurface that can filter or cleanse the groundwater. For example, microorganisms can break down some chemical contaminants in groundwater, like nitrate; contaminants can attach to soil particles; and unsorted sediments can disperse contaminants as they move through an aquifer. 
  3. The ease with which water and contaminants can travel to and through an aquifer. For example, microorganisms in the soil and from wildlife can travel into groundwater supplies through cracks, fissures, other pathways of opportunity, or even through sedimentary and basaltic rocks that are highly fractured and overlain by a thin cover of overburden, while a dense clay layer can reduce groundwater vulnerability by acting as a barrier to the movement of water and contaminants.

The vulnerability and water quality of a well can be vastly different from the quality and productivity of nearby wells. The most common sources of pollution to groundwater supplies come from two categories: naturally occurring sources and those caused by human activities. Naturally occurring contaminants are those produced by the underlying soil and rock geology and by wildlife. Once within an aquifer, contaminants that dissolve in water will travel with the flowing groundwater. What happens next is dependent on the chemical properties of a contaminant, the geology of the area and the flow rate of the groundwater. Natural processes such as sorption/desorption, dissolution/precipitation, ion exchange, or biodegradation can reduce contaminant concentrations and effectively clean the groundwater. However, many contaminants in shallow groundwater remain in solution because of the presence of oxygen, and the short travel times between the water table and domestic wells allow contaminants to easily reach the well. In addition, a well might have one or more pathways of opportunity. One of the most common contaminant pathways is the failure of the grouting on the well casing, allowing rainwater and snowmelt (carrying dirt and other contaminants) to enter the well directly.

Nitrate concentrations are often elevated in shallow groundwater because of agricultural and suburban development. Bacteria and nitrate contamination of groundwater can be caused by human and animal waste. In our own neighborhoods, septic systems, horses and backyard poultry can cause these problems by percolating into the ground or finding an opportunistic pathway through a fissure or other geological entry. On a regional level, small lots with a dense population of septic systems, or large animal or fertilized farm operations, can cause problems. Heavy local use of pesticides for ornamental gardens and leaks from underground fuel tanks can also be sources of contamination. Households can introduce solvents, motor oil, paint, paint thinner, water treatment chemicals and other substances by not maintaining their septic systems or by pouring chemicals into the ground or down the drain. Groundwater-quality protection depends on the entire community; what my up-gradient neighbor does could impact my water quality. If residents and businesses take steps to reduce the input of anthropogenic contaminants to the groundwater, water quality can improve because of the short travel times between the water table and the well. The opposite is also true. 
from USGS




Thursday, August 15, 2013

Antibiotic Resistant Bacteria and Industrial Farms

from USDA
During a recent training session on local, sustainable agriculture we engaged in a discussion about CAFOs, concentrated (or confined, if you are sitting at Virginia Tech) animal feeding operations. The discussion led to thoughts on antibiotic use on industrial farms and antibiotic use in general.

According to the Centers for Disease Control and Prevention, CDC, antibiotic resistance is one of the world's most pressing public health problems. Practically every known type of bacteria has become less responsive to antibiotic treatment. These antibiotic-resistant bacterial strains can quickly spread within a hospital and the greater community. People infected with antibiotic-resistant bacteria tend to shed the bacteria from the nose, feces, and skin; the bacteria can therefore be spread by direct contact with a person, through surface contact and ventilation, and in municipal wastewater streams after being washed down the drain or flushed down the toilet, reaching well beyond direct contact. For this reason, antibiotic resistance is among CDC's top concerns.

Usually, antibiotics kill or inhibit the growth of susceptible bacteria. Antibiotic resistance, the acquired ability of a pathogen to withstand an antibiotic, arises from random mutations in existing genes and allows the bacteria to survive exposure to antibiotics and other antimicrobial products, whether in the human body, in animals, or in the environment. In a favorable environment the resistant bacterium can multiply and replace all the bacteria that were killed off. Chronic exposure to antibiotics can provide the selective pressure that makes the surviving bacteria more likely to be antibiotic resistant. In addition, bacteria that were at one time susceptible to an antibiotic can acquire resistance by acquiring genetic material, pieces of DNA, that carry the resistance properties from other bacteria. The DNA that carries the code for resistance can be grouped in a single, easily transferable package in bacteria. This means that bacteria can become resistant to many antibiotics and antimicrobial agents through the transfer of one piece of DNA.

Though there is consensus among scientists that overuse of antibiotics has caused the increase in resistance, the actual source of that resistance has been under debate. It is important to note that for every antibiotic, there are sensitive strains, which are killed or inhibited by the drug, as well as naturally resistant strains and resistance resulting from mutations. When a sensitive strain gains the ability to withstand an antibiotic, it becomes antibiotic resistant. The rate at which antibiotic resistance emerges is related to the total amount of antibiotics used and the environment the resistant bacteria are in. The CDC believes that a major factor behind the spread of resistance in hospitals may actually be a lack of adequate hygiene and sanitation, which enables rapid proliferation and spread of pathogens. There are three general categories of antibiotic and antimicrobial use that have been blamed for the increase in antibiotic-resistant bacteria: misuse and overuse of antibiotics in treating human populations, household use of antibacterials in soaps and other products, and the addition of antibiotics to livestock feed.

Until recently the exact source of antibiotic-resistant bacteria was just speculation and inference. Now, using genetic sequencing of the bacteria, scientists can trace antibiotic-resistant bacteria to their source and hopefully to their cause. This methodology was used at the National Institutes of Health (NIH) hospital in Bethesda to identify the chain of transmission in a deadly outbreak of antibiotic-resistant Klebsiella pneumoniae bacteria that infected 18 patients, seven of whom died. The genetic sequencing was used to identify the chain of transmission (the body contact and equipment that spread the bacteria) and was able to guide a series of small steps, including fumigation and confirmed sterilization of equipment, improved hand washing, isolation, and earlier detection, to end it. The outbreak at the NIH hospital was ultimately stopped, but our vulnerability to these types of antibiotic-resistant bacteria is sobering. There are nearly 100,000 deaths a year in the U.S. attributed to hospital-acquired antibiotic-resistant infections, and these types of infections are appearing out in the greater community. This use of genetic sequencing could transform the way hospital-acquired infections are identified and halted, saving tens of thousands of lives, but it could also be used to tie antibiotic-resistant bacterial infections in the community to their cause.

Research underway at The George Washington University Department of Environmental and Occupational Health, led by Professor Lance B. Price, is investigating the connection between antibiotic-resistant bacteria present in our food supply and antibiotic-resistant infections appearing in the general public. The NARMS retail meat surveillance program, a collaboration among the U.S. Food and Drug Administration/Center for Veterinary Medicine (FDA/CVM), the CDC, and state public health laboratories in 11 states, sampled and tested meat purchased in grocery stores from January to December 2011. Each of the 11 laboratories purchased approximately 40 food samples per month, 10 samples each of ground turkey, pork chops, ground beef and chicken. They found that in 2011, among meat from supermarkets, 81% of ground turkey, 69% of pork chops, 55% of ground beef and 39% of chicken samples contained antibiotic-resistant bacteria.
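The scale of that sampling effort can be roughed out from the figures above. The assumption that every laboratory bought the full 40 samples in every month of 2011 is mine, so the counts below are approximate.

```python
# Approximate sample counts implied by the NARMS retail meat surveillance
# figures cited above, assuming full sampling every month of 2011.
LABS = 11
SAMPLES_PER_LAB_PER_MONTH = 40
MONTHS = 12
resistant_share = {"ground turkey": 0.81, "pork chops": 0.69,
                   "ground beef": 0.55, "chicken": 0.39}

total_samples = LABS * SAMPLES_PER_LAB_PER_MONTH * MONTHS   # ~5,280 samples
per_meat = total_samples // len(resistant_share)            # ~1,320 per meat type
for meat, share in resistant_share.items():
    print(f"{meat}: roughly {share * per_meat:.0f} of ~{per_meat} samples "
          f"carried antibiotic-resistant bacteria")
```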

The George Washington University researchers, working with the Translational Genomics Research Institute in Phoenix (where the testing and sampling are taking place), are trying to quantify how extensively antibiotic-resistant bacteria from the meat purchased in grocery stores are infecting people. Specifically, Dr. Price and his team are comparing the genetic sequences of antibiotic-resistant E. coli bacteria from grocery store meat samples to the genetic sequences of the E. coli bacteria that have caused urinary infections in women. Though antibiotic-resistant bacteria in meat are believed by Dr. Price to cause only a fraction of urinary infections, that could still amount to several hundred thousand infections each year.

The Food and Drug Administration reported that in 2011, the last year for which they have data, almost 30 million pounds of antibiotics were sold for use in food animal production; however, according to Dr. Price, more detail is needed on the use of antibiotics. At a congressional subcommittee hearing this spring Dr. Price said “we need to know why antibiotics are being used—that is, how often they are sold for non-therapeutic production purposes like growth promotion and disease prevention or for therapeutic purposes like disease control and treatment. These practices pose a particular threat to human health because they involve low-dose antibiotics, which can do more harm than therapeutic doses. They create an environment for bacteria that is just hostile enough to prompt them to develop resistance but not so harsh that they are killed off.”

Dr. Price's results, when published this fall, may provide the best reason to buy organic, sustainably raised meat.

Monday, August 12, 2013

Natural Gas Leaks: Death and Climate Change

Yellow spikes are methane leaks measured in Boston. From Jackson et al. 

Two recent studies have documented thousands of gas leaks in Boston and Washington D.C. Last year two scientists, Robert B. Jackson, Professor of Global Environmental Change at Duke University, and Nathan Phillips, associate professor in the Boston University Department of Earth and Environment, collaborated with Robert Ackley of Gas Safety Inc. and Eric Crosson of Picarro Inc. to perform a study of gas leaks in Boston. They mapped the gas leaks under the city using a new, high-precision methane analyzer provided by Picarro and installed in a GPS-equipped car. Driving all 785 road miles within city limits, the researchers discovered 3,356 leaks. The leaks were found to be associated with old cast-iron underground pipes rather than neighborhood socioeconomic indicators. Levels of methane in the surface air on Boston’s streets exceeded 15 times the normal atmospheric background value. Cast iron pipe is often the oldest and leakiest, especially at the joints, although other pipeline materials can also develop leaks.

This past spring, the team replicated the study on the streets of Washington, D.C. The results for the Washington D.C. study have not been published, but preliminary reports indicate that D.C., too, has thousands of leaks in its natural gas distribution system. According to a report in Scientific American, Dr. Jackson stated that the number of leaks per road mile is similar to that of Boston, but Washington D.C. has almost twice as many miles of road.

For some time our infrastructure systems have failed to keep pace with current and expanding needs, and investment in infrastructure has faltered as an unseen way to cut costs. Every four years the American Society of Civil Engineers, ASCE, grades the infrastructure in the United States, from water mains, sewer systems and plants, the electrical grid, the neighborhood streets and the national highway system, to dams, railroads and airports. Infrastructure is the foundation of our economy, connecting businesses, communities, and people, making us a first world country.

In 2013 the grade for energy remained at a D+ despite the boom in gas and oil, due to weakness in the distribution systems. Though the recent booms in oil and gas production could supply the energy demand, we have failed to maintain and upgrade the oil and gas distribution systems. Gas distribution companies are well aware of the leaks in the system. The companies calculate the difference between the gas pumped into the distribution system and what is metered at the end user. This is referred to as "lost and unaccounted-for" gas and is often recovered as a surcharge on customer bills. These leaks are wasteful, dangerous and a significant source of greenhouse gas released into the environment.

Distribution companies prioritize finding and fixing leaks likely to be explosion hazards, where gas is collecting and concentrating, and ignore the small losses from deteriorating iron pipe. Sometimes they do not even do that well enough. Natural gas distribution leaks and explosions cause an average of 17 fatalities, 68 injuries, and $133 million in property damage each year, according to the U.S. Pipeline and Hazardous Materials Safety Administration. The transportation and distribution systems run into homes and businesses. In 2010 a natural gas pipeline exploded in San Bruno, CA, just south of San Francisco. There was no warning; eight people were killed, 58 were injured and 38 homes, an entire section of a neighborhood, were destroyed. The deaths in San Bruno did not change the way we maintain our infrastructure, though the California Public Utilities Commission has proposed a $2.25 billion penalty, which includes a $300 million fine.

According to Drs. Jackson and Phillips, detecting and reducing gas leaks is critical for reducing greenhouse gas emissions, improving air quality and consumer safety, and saving consumers money. In addition to the explosion hazard, natural gas also poses a major environmental threat: methane, the primary ingredient of natural gas, is a powerful greenhouse gas that also degrades air quality. Leaks in the United States are reported to account for $3 billion of lost and unaccounted-for natural gas each year. Included in the details of the White House climate plan, originally introduced in a speech at Georgetown University in June, is an “interagency methane strategy” that examines the scope of leaks from gas wells, pipelines and compressor plants to determine their contribution to global warming.

The White House climate plan was released immediately after the International Energy Agency (IEA) released a series of recommendations for measures that might curtail the rapid growth in carbon dioxide emissions from fuel combustion that has taken place over the past few decades despite treaties, meetings and conferences. Global greenhouse gas emissions are increasing rapidly and, in May 2013, carbon dioxide (CO2) levels in the atmosphere exceeded 400 parts per million for the first time in several hundred millennia.

Though mankind has blown through the CO2 level that just a decade ago was referred to as the point of no return, the IEA is making policy recommendations that might hold the global temperature increase to 2 to 4°C by cutting global CO2 emissions growth so that emissions from fossil fuels do not exceed 38.75 billion metric tonnes in 2020. These recommendations really fall into two categories, efficiency and maintenance:
  • Installing energy efficiency measures in buildings, and requiring increased efficiency in industry and transportation
  • Preventing the construction of, and limiting the use of, the least-efficient and dirtiest coal-fired power plants, while increasing the share of power generation from renewable sources (including nuclear) and from natural gas
  • Reducing methane released from the processing and distribution of oil and gas by replacing aging infrastructure and improving technology implementation.



Thursday, August 8, 2013

Renewable Fuel Standard Hits the Wall

from USDA
The Environmental Protection Agency, EPA, has finalized the Renewable Fuel Standard, RFS, for 2013 and indicated that it will propose to cut the RFS for 2014. This is a response to the fixed volumes of ethanol that the RFS requires even as gasoline use in the United States has fallen, and to the hearings that the U.S. House Committee on Energy and Commerce Subcommittee on Energy and Power has been holding to examine the RFS. Annual U.S. gasoline use has declined from its 142-billion-gallon peak in 2007 to about 133 billion gallons in 2012, and ethanol now represents 9.74% of gasoline. There appears to be a practical limit of 10% ethanol in gasoline. The so-called "blend wall," the point at which the U.S. gasoline infrastructure can no longer absorb additional ethanol, is the result of the pipelines, pumps and service station infrastructure not being able to handle gasoline with higher amounts of ethanol. There is also some question about the ability of a large number of automobiles to handle 15% ethanol in gasoline.
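The blend wall arithmetic follows directly from the volumes cited above; the 36-billion-gallon figure used below is the 2022 total renewable fuel mandate discussed later in this post.

```python
# A minimal sketch of the E10 "blend wall" using the gasoline volumes in the post.
BLEND_WALL = 0.10
ETHANOL_SHARE_2012 = 0.0974
GASOLINE_BG = {2007: 142.0, 2012: 133.0}   # billion gallons of gasoline sold
RFS_2022_TARGET_BG = 36.0                  # total renewable fuel mandate for 2022

for year, gallons in GASOLINE_BG.items():
    print(f"{year}: E10 wall is ~{gallons * BLEND_WALL:.1f} billion gallons of ethanol")

ethanol_2012 = GASOLINE_BG[2012] * ETHANOL_SHARE_2012
headroom = GASOLINE_BG[2012] * BLEND_WALL - ethanol_2012
print(f"Ethanol actually blended in 2012: ~{ethanol_2012:.1f} billion gallons, "
      f"leaving ~{headroom:.1f} billion gallons of headroom while the RFS "
      f"schedule climbs toward {RFS_2022_TARGET_BG:.0f} billion in 2022")
```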

The EPA was first petitioned in 2009 to allow the sale of 15% ethanol gasoline, E15, and has subsequently approved the use of the fuel in about half of the cars on the road; automakers, however, have approved less than 5% of cars on the road to use E15. It was thought that the percentage of ethanol in gasoline could simply increase to 15% to meet the RFS. The Renewable Fuels Association (RFA) “E15 Retailer Handbook” outlines potential issues with the fuel. The handbook advises gasoline retailers that “some Underground Storage Tank systems and related underground equipment may not be compatible with E15 blends” and cites the Underwriters Laboratories’ warning that “some equipment, both new and used… demonstrated limited ability to safely accommodate exposure to fuels such as E15.”

The U.S. House Committee on Energy and Commerce Subcommittee on Energy and Power has been holding hearings examining the merits and shortcomings of the RFS. AAA, in its role of representing consumer interests, urged the Committee to consider whether adjustments to the target volumes of ethanol need to be made to avoid putting consumers and their automobiles at risk. In his testimony before the Committee, Robert L. Darbelnet, President and CEO of AAA, stated “If the only way to meet the RFS requirement is to introduce E15 before agreement has been reached on which vehicles can safely use it and the consumer has been adequately educated, then the RFS requirement should be modified.”

The Renewable Fuel Standard (RFS) program was created in 2005 and established the first renewable fuel volume mandate in the United States. The original RFS program required 7.5 billion gallons of renewable fuel (primarily ethanol) to be blended into gasoline by 2012. Under the Energy Independence and Security Act (EISA) of 2007, the RFS program was expanded to include diesel in addition to gasoline; to increase the volume of renewable fuel required to be blended from 9 billion gallons in 2008 to 36 billion gallons by 2022; and to establish new categories of renewable fuel, with separate volume requirements for each one. In addition, the blended fuel was required to emit fewer greenhouse gases than the petroleum fuel it replaces.

Compliance with the RFS is implemented through the use of tradable credits called Renewable Identification Numbers (RINs), each of which corresponds to a gallon of renewable fuel produced in or imported into the United States each year. This program was developed to encourage the production of renewable fuel and lessen the nation’s dependence on foreign oil. Things change: in the past few years there has been a domestic boom in oil production, the growth in fuel used for transportation has not met projections, and drought has significantly impacted the corn crop in the United States. Last year subsidized ethanol production took about half the corn crop.

EPA has continued to enforce the increases in the RFS, though it has received many requests for waivers as well as objections. EPA has determined for the most part that the objections raised did not warrant a reconsideration of the RFS requirements; however, a January 2013 ruling by the U.S. Court of Appeals required the agency to reevaluate its projections for biofuel to reflect market conditions. In addition, it became clear that the mandate for renewable fuel was going to exceed 10% of all fuel sold. Gasoline (and other fuels) would hit the “E10 blend wall” in 2014.

Most gasoline sold in the U.S. today is E10; it contains 10% ethanol. The “E10 blend wall” refers to the difficulty of incorporating ethanol into the fuel supply at volumes exceeding 10%. Because the demand for gasoline has not grown as the EPA expected, due to reduced driving and increased mileage in the cars on the road (see mileage emission standards), the industry cannot practically incorporate the mandated volumes of ethanol into gasoline. So, on Tuesday the EPA announced that it will propose to use “flexibilities” in the RFS statute to reduce both the advanced biofuel and total renewable volume requirements for 2014. The EPA is going to adjust the mandate to not exceed the 10% practical limit on fuel blends.

Monday, August 5, 2013

Energy East Pipeline instead of Keystone XL: Canadian Crude Will Reach Market

from TransCanada 
TransCanada’s second application for a Presidential Permit to build the northernmost section of the Keystone XL pipeline (Phase IV), from the Canadian border in Saskatchewan into Nebraska, appears to be stalled despite a recommendation from Nebraska Governor Dave Heineman. Based on comments from President Barack Obama that the pipeline, which would carry 830,000 barrels of crude oil from the Canadian oil sands and the Bakken oil basin in Montana and North Dakota, would not create a significant number of permanent jobs, and that the net effect on the climate would be critical to the decision, the outlook for approval is not bright.

Meanwhile, the proposed pipeline to Canada's West Coast, the Northern Gateway, which would carry crude oil from Alberta to the Pacific port of Kitimat for export to Asia, is facing strong opposition in British Columbia from First Nations groups and environmentalists. So now, TransCanada is moving forward with an east-west pipeline entirely in Canada and outside of British Columbia. Russ Girling, TransCanada's president and chief executive officer, announced at a news conference last week that TransCanada is moving forward with the 1.1 million barrel per day Energy East Pipeline project. The Energy East Pipeline project would convert a redundant 1,864-mile portion of TransCanada's Canadian Mainline natural gas distribution pipeline to a crude oil pipeline and build an additional 870 miles of new pipeline to reach the port in Saint John, New Brunswick.

The project is expected to cost approximately $12 billion to upgrade the existing pipeline and extend its run to the coast. The pipeline would transport crude oil from the oil sands in Alberta and Saskatchewan to Montréal, the Québec City region and Saint John, New Brunswick, greatly increasing the oil companies' access to Eastern Canadian and international markets. The pipeline could replace imported oil refined in Montreal and Quebec with Canadian oil. In addition, the pipeline will terminate at Canaport in Saint John, New Brunswick, where TransCanada and Irving Oil have formed a joint venture to build, own and operate a new deep water marine terminal able to supply United States East Coast refineries and other nations.

The Energy East Pipeline, which still needs regulatory approval in Canada, will have a capacity of approximately 1.1 million barrels a day and is expected to be in service by late 2017 for deliveries in Québec and 2018 for deliveries to New Brunswick. After the July train crash in Lac-Megantic, Quebec, that killed at least 15 people, and multiple derailments in recent months as petroleum products have increasingly been transported by railroad while pipeline projects have languished, there is a growing recognition that pipeline transport of oil is safer. Prime Minister Stephen Harper stated that while the pipeline will have a thorough review, pipelines are the safest way to transport oil.

Customers have already pledged to use at least 900,000 barrels a day of the line's capacity, as Canadian producers need a route to export their oil and Canadian refiners need oil, while regulatory hurdles delay the proposed pipelines through Western Canada and to the United States. The Canadian Association of Petroleum Producers has projected that Canadian oil output will more than double by 2030 to 6.7 million barrels per day, with most of the increase anticipated to come from the Alberta oil sands. There is demand in the world for oil, and delay or denial of Presidential Permits for border-crossing pipelines and delays in crossing First Nations lands will not keep the oil in the ground or reduce world demand for oil and fossil fuels.

There is currently a TransCanada Keystone pipeline that runs east from Hardisty, Alberta to Manitoba and then south through the Dakotas to Steele City, Nebraska. It is a lower-volume pipeline than the proposed Keystone XL (Phase IV). The existing Keystone Pipeline is known as Phase I and runs from Hardisty to Steele City, Nebraska, near the Kansas-Nebraska border. Keystone Phase II runs from Steele City to Cushing, Oklahoma, where it still terminates, leaving the Canadian crude oil in Oklahoma along with U.S. domestic production from North Dakota that has been using the pipeline to reach the Oklahoma storage facilities.

In 2012 TransCanada began building the Cushing, Oklahoma to Nederland, Texas portion of the Keystone XL pipeline, known as Keystone Phase III, a 435-mile extension of the existing Keystone pipeline to the Port Arthur and Houston areas. This section of the pipeline did not require a Presidential Permit because it crosses no international borders, and it received state approval. The Keystone Phase III Project (Oklahoma to Texas) plans to begin operations this year.

In response to the glut of oil in Cushing, Enbridge Inc. and Enterprise Products Partners, owners of the Seaway pipeline that runs from the Gulf Coast area to Cushing, Oklahoma, reversed the flow in their pipeline to move crude from Cushing to the Gulf Coast refineries. The reversal required pump station additions and modifications and was up and running in mid-2012; the capacity of the reversed Seaway Pipeline is up to 150,000 barrels of oil per day.

Enbridge has also applied for a Presidential Permit to increase the capacity of its existing 36-inch diameter Line 67 pipeline, which runs 670 miles from Hardisty, Alberta, to Superior, Wisconsin. With these improvements, the pipeline will be able to carry up to 570,000 barrels of oil per day, up from the current 450,000 barrels a day. The project, which crosses Minnesota, is part of a larger plan by Enbridge to upgrade pipelines in the United States and Canada to ship more Canadian oil from the Alberta oil sands to the Midwest and beyond.

Thursday, August 1, 2013

Slowing the Erosion from Rising Sea Level and Storms

Breakwater at Westmoreland State Park
At the quarterly meeting of the Potomac Watershed Roundtable, Scott Hardaway from the Virginia Institute of Marine Science at William and Mary and Mike Vanlandingham, the last standing shoreline engineer at the Virginia Department of Environmental Quality (DEQ), spoke about shoreline erosion and stabilization in general and along the Potomac River in particular, describing the problem and potential solutions for slowing the natural forces that are eroding our shorelines. Since 1980 the DEQ has provided the Shoreline Erosion Advisory Service, which offers technical assistance in the form of advisory reports and plan reviews to landowners, state-owned lands, localities, and federal agencies experiencing tidal erosion along the 5,000 miles of tidal shoreline in Virginia.

Approximately 15,000 years ago the ocean coast was about 60 miles east of its present location, and sea level was about 300 feet lower. At that time there was no Chesapeake Bay; instead there was a river that meandered out to sea. It is that ancient river that created the deep channel within the Bay and its estuary waters. Sea level continues to rise in the Chesapeake Bay; Scott Hardaway estimated it to be rising at about a foot per century, and others have estimated that this rise will accelerate in the future. The rising sea level is one of two primary causes of shoreline erosion; the other is wave action. Storm events can cause powerful waves and change the shape of the shoreline as they erode and transport soil and sand from one part of the shore to another.

The factors that influence the way a shoreline will erode are coastal geology, the amount of open water, existing shore conditions, storm surges, and rising sea level. While the erosion can be managed, it cannot be stopped; rising sea level and storm waves are relentless forces. The erosion of shoreline in Virginia has been complicated by the rapid and extensive development in these areas in the past 25 years. Development changes the nature of the shore and creates difficulties in trying to implement an area-wide strategy with multiple property owners who cannot or will not take the decades-long view. There are basically three strategies that can be implemented for fighting shoreline erosion: soft, hard and combination. There is little that can be done to permanently hold back the rise in sea level; however, shoreline management strategies can be used to blunt the destruction of storm-related wave action.

A soft strategy utilizes wide fringing marshes, beaches and dunes to absorb the energy of waves and reduce the effects that storms will have on adjacent upland banks. With an adequate marsh fringe, beach or dune for protection, upland banks may be impacted only by the most severe events, at least for a while. Nonetheless, over time marshes and beaches are themselves eroded and can no longer protect the shore; according to Mr. Vanlandingham, there are areas where a massive storm can erode 30 feet of shoreline in a single year, though the shoreline overall averages a loss of 1 foot per year. As rising sea level and erosion narrow beaches and marshes over time, the upland banks become impacted by storm surge, which causes bank instability. Continual erosion can result in the sudden collapse of an upland bank, taking yards, decks, homes and roads.

Hard strategies for shoreline protection are riprap revetments, retaining walls with anchor systems, and bulkheads. Combination strategies utilize groins together with bulkheads and breakwaters. Bulkheads, revetments, and groins are the most common protection strategies currently employed to protect shorelines from erosion. Bulkhead and seawall are often used to describe the same thing, but there really is a difference: bulkheads are generally smaller and less expensive than seawalls. Bulkheads are usually made of wood. They are designed to retain upland soils and often provide minimal protection from severe storms. Seawalls are generally made of poured concrete and are designed to withstand the full force of waves.

In recent years, rock or riprap revetments have become more widely used to protect shorelines. A properly designed and constructed rock revetment can last fifty years or more because it can be maintained by the addition of more stones. Revetments have sloped and rough stone faces that decrease wave reflection and bottom scour. Revetments need to be built high enough to withstand waves during extreme storms or they will not work. In addition, the banks need to be graded to create a stable slope.

Between the 1950s and 1980s, groins were a popular way to trap sand and build a modest beach area, and they are widely seen in beach communities. A groin is a wood structure perpendicular to the shoreline designed to “catch” sand and prevent erosion of the beach. On a relatively wide sand beach the sand will accumulate on the up-drift side of a groin. If enough sand were available, the shoreline banks would gain some degree of protection from erosion. However, the sand captured by the groin is prevented from reaching down-drift areas, increasing erosion there, which can create difficulties and lawsuits among property owners. Breakwaters can work as a better strategy if used along a long span of shoreline. Breakwaters are built offshore to control shoreline erosion by maintaining a wide, protective beach.


The breakwater, sitting offshore, “breaks” the force of the waves and dissipates their energy so the waves do not erode the beach or upland banks. Unlike groins, which merely capture sand, breakwater systems are designed to create stable beaches and allow various species of marsh grasses to become established at the site.
from Hardaway
In the past decade or so, coastal engineers have used combinations of hard and soft structures in storm damage reduction designs. A rock seawall buried within a dune was constructed in 2000 in Virginia Beach, Virginia by the Army Corps of Engineers to protect critical naval infrastructure. Such approaches have been adopted because they proved to be both cost-effective and environmentally friendly alternatives to more classical coastal structure design. Yet, because of the rarity of extreme flood and wave events, these multi-level designs had never been demonstrated to be effective as clearly as they were in New Jersey during Hurricane Sandy in October 2012, when the fate of two adjacent communities demonstrated the effectiveness of a rock seawall buried within a dune.

Hurricane Sandy devastated the Jersey shoreline, destroying many coastal communities, causing widespread erosion of the sand dunes and breaching the barrier island in some locations. Along the hardest-hit stretch of the New Jersey shore are two adjacent coastal communities: the Boroughs of Bay Head and Mantoloking. Before Hurricane Sandy, these adjacent boroughs featured similar topography and residential development. Yet, while similar surges and large waves arrived at their shores, the communities experienced vastly different levels of destruction. A team of scientists led by Jennifer L. Irish, associate professor of civil and environmental engineering in the College of Engineering at Virginia Tech, investigated the shoreline immediately after the storm and recently published their findings.
From J.L. Irish Buried Seawall


The cause of the difference in damage between the two communities turned out to be a long-forgotten seawall, originally built in 1882, that formed the core of the Bay Head dune, very much like the structure installed at Virginia Beach. The stone seawall had been covered over with fine dune sand by aeolian transport and beach nourishment during the twentieth century and forgotten. While similar surges and large waves arrived at both towns, the amount of damage and erosion was vastly different between the two. In Mantoloking the dune structure was entirely sand, and the entire sand dune was destroyed by the storm. Water washed over the barrier spit and opened three breaches hundreds of feet wide, and the sand was swept away by the waves. In Bay Head, only the portion of the dune located seaward of the seawall was eroded, and the section of dune behind the seawall received only minor local scouring. The dune remained in place and the sand remained on the beach. In addition, in Bay Head only one oceanfront home was destroyed. In Mantoloking, more than half of the oceanfront homes were classified as damaged or destroyed.

The discovery of the relic seawall came as a surprise to many of the residents; generations of families do not stay in communities, and there is little realistic long-term planning for future storms and rising sea levels. The relic seawall and the deposited dune sand combined to form a combination soft and hard structure that is now in use to protect the shoreline. This design proved to be the effective protector of the Bay Head shoreline and was demonstrated to work during the “Superstorm.”

Shoreline protection strategies continue to evolve. In many locations, elevated shoreline stabilization structures are combined with beach nourishment for shoreline protection. Nontraditional technologies (beach drains, geotextile bags, artificial breakwater structures, wetlands, etc.) are also being investigated in field experiments. Nonetheless, man cannot hold off the rising seas forever. First we protect the shore with engineered barriers (of all types), then we rebuild the beaches by adding sand and marshes. Ultimately we will have to accommodate the rising sea level by raising structures and retreating from the shore.