The "American Clean Energy and Security Act" (HR 2454), also known as the Waxman-Markey energy bill or simply "ACES," was passed by the House on Friday (219-212). The bill includes a cap-and-trade global warming reduction plan designed to reduce carbon dioxide emissions in the U.S. The current goal is a 17% reduction by 2020, to be accomplished by requiring "polluters" to buy permits to emit a certain amount of carbon dioxide. The bill sets an overall cap on such permits but allows them to be traded. The cap grows tighter over time, reducing what can be emitted in total and, hopefully, pushing up emissions prices and prodding industry to release less carbon dioxide by utilizing cleaner energy sources or increasing the efficiency of existing ones. Other provisions include new renewable energy requirements for utilities, studies and incentives for carbon capture technologies, energy efficiency incentives and penalties for homes and buildings, and grants for green jobs.
A cap-and-trade system will cost the American consumer more for power, transportation and many goods. There will be profits to be made in a cap-and-trade system; who will reap the profits and who will bear the costs remains to be seen. It is anticipated that the program will have a net cost despite the creation of some green jobs and the creation of wealth for market makers. What are we willing to give up in terms of comforts, services, possessions and other goals to accommodate a targeted reduction in carbon dioxide release? There is a price to reduce our emissions of greenhouse gases. Hopefully, there will be benefits and the unintended consequences will not overwhelm the goals of the bill.
On Friday, June 26, 2009, in an opinion piece in the Wall Street Journal, Kimberley A. Strassel reported that Australia failed to pass its "cap and trade" bill and that the number of skeptics is swelling. It is unclear whether they are skeptics about whether the earth is warming, about the modeling of the earth's climate and environment, or about the postulated impacts that global warming might have. As the previous review of global warming research showed, some research suggests that climate change may have some anthropogenic (human) causes, but other research does not support that theory. Certainly, anthropogenic activity has contributed about 4% of the 386 parts per million (0.039%) of carbon dioxide in the atmosphere. However, the consequences of climate change that have been cited as reasons for government action are not borne out by the facts.
Monday, June 29, 2009
Thursday, June 25, 2009
Termites and Integrated Pest Management
In the United States there are four groups of termites of concern: subterranean (including the Formosan termite), drywood, dampwood and powderpost. Subterranean termites and drywood termites are the two general types. Subterranean termites "nest" in the soil and from there can attack structures by building shelter tubes from the soil to the wood in structures. Subterranean termites cause more of the damage to homes and structures than drywood termites, so they will be my primary focus here. Termites will attack any material with cellulose, including wood, paper-coated wallboard, and paper (as in that treasured book collection that occupies the lower level of my home). Wood that is at least 30 percent water saturated provides enough moisture. Additionally, termites will find free-standing water such as condensation, rain or plumbing leaks and use this moisture as their main source for survival. Termites have been a part of the ecosystem for millions of years and aid in the decomposition of wood, freeing the nutrients in the decaying material for reuse by other organisms. Termites rely on eating the cellulose found in wooden structures, furniture, stored food and paper. It is virtually impossible to reside anywhere in the United States without confronting termite damage at one point or another.
No technique, from the traditional "blast it with chemicals" approach to the alternative strategies, is 100% effective all the time. The most popular professionally applied conventional chemical treatments on the market, Premise (imidacloprid), Termidor (fipronil), and Phantom (chlorfenapyr), range from slightly toxic to very toxic and vary in their solubility and affinity for soil. They are less environmentally persistent and more rapidly biodegradable than previous generations of chemicals; in short, they break down faster and do not last as long. In addition, Premise and Termidor kill termites more slowly than the "older" chemicals after the insects come in contact with the treated soil. Studies indicate that "contaminated" termites may pass some of these non-repellent chemicals to other members of their colony, which increases the overall impact of the termiticide. There are steps a homeowner should take to make a house less ideal for termite invasions and to minimize the use of chemicals. Ridding an infested house of termites, or preventing termite infestation, using the least toxic method requires an integrated pest management approach: a series of methods for preventing or managing pest populations based on an ecological understanding of the problem. The US EPA has developed its Pesticide Environmental Stewardship Program (PESP) to encourage these methods. The integrated pest management approach for termite control involves the following steps:
Inspection, detection and monitoring. Thorough inspections and regular periodic monitoring will help determine the location and extent of any termite damage, as well as signs of previous and current infestation. Research from the Entomology Department of the University of Florida found that properly trained termite dog teams are the most effective termite inspectors. This is not a joke, but it is really funny to have your house inspected by a beagle. Termite dogs can smell termites through drywall, concrete, paneling and all other building materials.
Identification. The next step is to correctly identify the species found on-site. Ants and termites are often confused.
Determine treatment plan. There is a difference between finding termites and finding "conditions conducive to termite infestations". Both situations need to be addressed, but usually in different ways. Eliminating conditions favorable to wood-destroying pests, mostly moisture-related problems, usually means repairs or alterations to the crawlspace and elimination of water pooling locations and leaks in basements, crawlspaces or other parts of the house or the area around it. If inspection did not find any current infestation, a monitoring program combined with these physical changes to the structure will discourage infestation in the long run. If signs of infestation were found, it is possible to spot treat infestation areas using the least toxic chemical. (This is probably borates for wood protection, plus replacement of damaged wood. Any replacement wood elements should be treated with borates, and the infested wood elements removed from the site.)
Treatment. The treatment options are baiting with spot treatment, and traditional chemical barriers. Termite baits use small amounts of insecticide to knock out populations of termites foraging in and around the structure. Some baits may even eradicate entire termite colonies. Termite baits consist of paper, cardboard, or other termite food, combined with a slow-acting substance lethal to termites. Regardless of which bait is used, the process is lengthy and four or five times as expensive as chemical treatment. Baits offer termites an easily accessed location to feed on wooden stakes, cardboard, or some other cellulose-based material. The toxicant-laced bait can either be installed initially, or substituted after termites have been detected in an untreated monitoring device. The more baits installed, the better the chances of locating termites. Planning, patience, and persistence are requisites for successfully using termite baits. This is a long-term commitment.
Monitoring. After treatment it is essential to continue monitoring for termites. The success of the treatment method needs to be assessed, and once elimination of the infestation has been accomplished, continued monitoring will detect any further infestations as early as possible.
With termites it is important to eliminate all infestation before too much structural damage occurs. Chemical treatment with termiticides is the least expensive and fastest method of treatment for termites, and the chemicals in use today are far less toxic than previous generations. If you are strongly opposed to the use of pesticides around your home, you should go with a baiting system. Although conventional liquid termiticides reportedly pose no "significant" hazard to humans, pets or the environment when applied according to label directions, they are still toxic chemicals. With baits, the total amount of pesticide applied is small in comparison to the gallons upon gallons needed to achieve a thorough and effective soil barrier treatment. In addition, drilling through driveways, garage floors, and slabs may be required to create a complete and unbroken chemical barrier surrounding a house. Regulations prevent the application of most if not all termiticides within 50 feet of a water well. Termite baiting requires fewer disruptions within the home and is protective of ground water and drainage systems compared to conventional chemical treatment. Installation and subsequent monitoring of bait stations generally do not even require the technician to come indoors. Noise, drill dust, and similar disruptions associated with conventional treatment are avoided, but the baiting process can take months, possibly a full year, to eliminate infestations. Traps were quietly and neatly installed yesterday and will be scanned and entered into the system for regular inspections tomorrow. The beagle is scheduled to inspect the home at that time. Updates on my adventures in the IPM process will follow.
Monday, June 22, 2009
Chesapeake Bay Watershed: How Is It Doing?
Recent articles in the Washington Post have discussed the failure to meet effluent goals for the Chesapeake Bay watershed, an area spanning six states, a 64,000 square-mile watershed, and 180,000 miles of tributaries and coastline. This raised the question of what the release numbers actually look like. The numbers above were supplied by the EPA's Chesapeake Bay Program Office in Annapolis. The contaminants of concern are nitrogen, phosphorus, and sediments.
The Chesapeake Bay Commission was created in 1980 to coordinate Bay-related policy across state lines and to develop shared solutions. The catalyst for its creation was the Environmental Protection Agency's (EPA) seven-year study (1976-1983) on the decline of the Chesapeake Bay. The Commission was established by Maryland and Virginia to assist the states in cooperatively managing the Chesapeake Bay; the Commonwealth of Pennsylvania became a member in 1985. The legislatively mandated goals of the Commission were to:
- assist the legislatures in evaluating and responding to mutual Bay concerns
- promote intergovernmental cooperation and coordination for resource planning
- promote uniformity of legislation where appropriate
- enhance the functions and powers of existing offices and agencies, and
- recommend improvements in the management of Bay resources.
The Federal Clean Water Act gives regulatory authority to the states to restrict pollutants discharged into the waters of the Bay from point sources, such as wastewater treatment plants. In contrast, that authority does not extend to non-point sources, such as farms and septic systems. Will the Federal mandate provide the necessary teeth for the states to commit the funds and other state resources? The states need to address these non-point sources using other regulatory schemes. Reductions in the discharge of contaminants can be achieved through the implementation of "agricultural best management practices," and mandated implementation of these practices will have to be accomplished under state regulations. With state budget constraints, where will the funding for these programs come from? What programs will be reduced to pay for achieving Federally mandated Total Maximum Daily Load (TMDL) goals? Will this directly impact the cost of food, or will that cost be buried in state taxes? Saving the Chesapeake Bay is important, but it has been easy in the past to avoid tough choices of spending and resource reductions in other areas by pushing those decisions into the future with each change in the effluent goal timeline.
In addition, Federal action and funding are necessary for reduction of the largest point source of nitrogen remaining in the watershed, the Blue Plains Wastewater Treatment Plant in Washington, DC. Congressional action is needed to fund enhanced nutrient removal technology. This investment at the Blue Plains Advanced Wastewater Treatment Plant would significantly reduce nitrogen flows from the largest single source of nutrient pollution in the watershed, removing almost four million pounds of nitrogen, or 7.7% of the point source total, each year.
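If four million pounds is 7.7% of the point-source nitrogen total, the implied watershed-wide point-source load can be backed out directly. This is my own back-of-the-envelope arithmetic from the two figures above, not an EPA number:

```python
# Back out the total point-source nitrogen load implied by the Blue Plains figures.
blue_plains_lbs = 4_000_000   # annual nitrogen removal cited above
share_of_total = 0.077        # 7.7% of the point-source total

implied_total_lbs = blue_plains_lbs / share_of_total
print(round(implied_total_lbs / 1e6, 1))  # ~51.9 million lbs of point-source nitrogen per year
```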
Thursday, June 18, 2009
Carbon Footprint, Carbon Savings and Carbon Offsets
All resources are finite. As humans our resources consist of money, time, passion and energy. In the end, where, how and when we deploy these resources will determine our comfort and happiness with our lives. While there are some basic truths, the optimal allocation of your resources is based on your values and goals. We all should be thoughtful in our living, smarter about the ways in which we use the earth's resources and our personal resources.
According to McKinsey and Co., it costs an additional $30-$40 above normal energy production costs to eliminate one ton of CO2 emissions by replacing traditional energy production with solar or wind power (the presumed life of the equipment was not reported). However, when a ton of CO2 was saved using LED light bulbs or energy-efficient appliances, money was also saved ($108-$159 less was spent on energy for every ton of CO2 saved). The costs associated with generating power without CO2 emissions are higher than current costs. If the money is spent to reduce CO2 by replacing generating capacity, there will be less money to spend on other things that matter to you or are necessary for your life; but if you reduce your use of energy, less money is spent on energy and more money is available for other goals.
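To make the comparison concrete, here is a small sketch that scales the two McKinsey per-ton figures quoted above to an arbitrary tonnage. The ten-ton example is my own illustration; only the per-ton dollar ranges come from the article:

```python
# Rough comparison of the two CO2 abatement routes cited above.
# Positive values are net cost per ton avoided; negative values are net savings.
solar_wind_cost_per_ton = (30, 40)         # $ extra spent per ton of CO2 avoided
efficiency_savings_per_ton = (-159, -108)  # $ saved per ton (LEDs, efficient appliances)

def net_cost(tons, per_ton_range):
    """Net dollars for avoiding `tons` of CO2, as a (low, high) range."""
    return tuple(tons * c for c in per_ton_range)

# Avoiding 10 tons by building solar/wind costs roughly $300-$400 extra...
print(net_cost(10, solar_wind_cost_per_ton))     # (300, 400)
# ...while avoiding the same 10 tons through efficiency SAVES $1,080-$1,590.
print(net_cost(10, efficiency_savings_per_ton))  # (-1590, -1080)
```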
When you use less energy, by insulating, changing to lower-energy light bulbs, controlling passive solar heat, or using Energy Star appliances, less energy is used, less CO2 is released and money is saved. Reducing your energy consumption is a far better utilization of resources. While solar panels and wind turbines are sexy, and renewable sources of energy sound wonderful, these technologies are still in their infancy. Geothermal heating and cooling and nuclear generation of power have failed to catch on in the United States, but have advanced significantly in the past few decades overseas. Conservation and energy efficiency are well developed, effective and relatively cheap technologies. Use less, so that we can all live within the productive capacity of the existing infrastructure; then expand generating capacity only in ways that do not release CO2 and do not burden the earth.
Adding insulation and sealing existing homes and commercial buildings is by far the low-hanging fruit and a good source of "green economy" jobs. The Wall Street Journal reports that heating and cooling buildings account for about half of the CO2 emissions in the U.S. My home was built in 2004 and is heated and cooled with a dual system: the upstairs with an air heat pump and the lower levels with a gas furnace and air conditioner. Replacing the heating and cooling systems with geothermal systems would only make sense when the existing systems reach the end of their functional life. After eliminating incandescent light bulbs, upgrading all appliances to Energy Star, installing reflective films on the windows and installing drapery, I found that adding insulation was a good way to further reduce the energy consumption of the house. Following the recommendations of the Building Envelope Research program at Oak Ridge National Laboratory, the attic, crawl spaces, eaves, duct work, and the underside of a large portion of the main level floor were insulated with cellulose. The pipes, end caps, knee wall, sump pumps and all identified areas were sealed, the garage was insulated and an insulated garage door was installed. After six months, electricity usage (as measured in kilowatt-hours for the same six months the previous year) had been reduced by over 6% (despite relocating our workspace to the home with all its attendant equipment), and the winter liquid propane usage (as measured in volume used December through March both years) was reduced by 25%. Also, the overall comfort in the bedroom over the garage and the master bedroom has been vastly improved. I was very surprised (and pleased) at the energy savings for what was an already well-insulated home.
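The savings percentages above come from a simple year-over-year comparison of utility usage. A sketch of the calculation follows; the meter readings here are hypothetical placeholders chosen to match the percentages, not my actual bills:

```python
def percent_reduction(before, after):
    """Percent reduction from a baseline `before` reading to an `after` reading."""
    return 100.0 * (before - after) / before

# Hypothetical electricity: 10,000 kWh in the baseline six months, 9,350 kWh
# after insulating and sealing -> 6.5%, in line with the "over 6%" figure above.
elec = percent_reduction(10_000, 9_350)
# Hypothetical propane: 800 gallons Dec-Mar before, 600 gallons after -> 25%.
propane = percent_reduction(800, 600)
print(elec, propane)  # 6.5 25.0
```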
Though I do not need to commute to a job, I still drive my gas hybrid car almost 4,000 miles a year. The hybrid does not make economic sense, especially because I drive so little. However, it does make me happy to drive, so to me it was worth the extra money I paid for it. In searching for the carbon emitted per vehicle mile, I could only find the 1993 data from the Nowak study, which lists 0.88-1.06 lbs of CO2 per mile. This is probably high for my hybrid, which was not available at the time of the study. The same article states that each person in the US generates 2.3 tons of CO2 each year, which appears to conflict with the automobile numbers until you realize that babies and children do not have cars, and that city dwellers' automobile ownership and use are much lower than suburban use. During the eight years I lived and owned a car in the city, I drove less than 1,000 miles a year. After reducing the energy use in my home, eliminating commuting from our lives, and reducing frivolous travel, I still wanted to do more.
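Plugging my mileage into the Nowak per-mile range gives the rough annual footprint of my driving, in short tons of 2,000 lbs. The per-mile figures are the 1993 values quoted above; as noted, they are probably high for a hybrid:

```python
LBS_PER_SHORT_TON = 2000

def annual_car_co2_tons(miles_per_year, lbs_co2_per_mile):
    """Annual CO2 from driving, in short tons."""
    return miles_per_year * lbs_co2_per_mile / LBS_PER_SHORT_TON

low = annual_car_co2_tons(4000, 0.88)   # about 1.76 tons/year
high = annual_car_co2_tons(4000, 1.06)  # about 2.12 tons/year
print(low, high)
```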
I found the following fact: "A single mature tree can absorb carbon dioxide at a rate of 48 lbs/year and release enough oxygen back into the atmosphere to support 2 human beings." TreeFolks are the source of the above information, and are willing to sell carbon offsets in the form of trees. I tend to think of carbon offsets as being for people who want to vacation in Bora Bora or have the wedding or Oscar party of the century, but in truth they are probably for people like me who use various technologies to make their lives richer and happier. My large house comes with a big piece of land. Admittedly, most of the land is wooded, undisturbed land and part of the Chesapeake Bay watershed, but I do have about 3 acres of mostly open land around the house. We planted 43 trees of moderate maturity (over six feet each). Using the TreeFolks data, forty-two trees absorb a ton of carbon dioxide a year, and the last tree replaces a diseased tree we cut down. Beyond watering the trees in the first three weeks after they were planted, they have thrived on benign neglect. I am already drawing up plans, researching native trees, and saving my nickels for another 3.6 tons of annual carbon offsets, otherwise known as another 150 trees. I may have to make that 152 trees because there are two more existing trees that are not doing well.
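The tree counts above fall straight out of the TreeFolks absorption rate, 48 lbs of CO2 per mature tree per year, using 2,000-lb short tons. Rounding up to a whole tree is my own convention:

```python
import math

LBS_CO2_PER_TREE_PER_YEAR = 48  # TreeFolks figure quoted above
LBS_PER_SHORT_TON = 2000

def trees_needed(tons_co2_per_year):
    """Whole number of mature trees needed to absorb the given annual tonnage of CO2."""
    return math.ceil(tons_co2_per_year * LBS_PER_SHORT_TON / LBS_CO2_PER_TREE_PER_YEAR)

print(trees_needed(1.0))  # 42 trees to absorb one ton per year
print(trees_needed(3.6))  # 150 trees for the next 3.6 tons
```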
Trees can also reduce air conditioning and heating needs by providing shade in summer and a windbreak in winter. Trees also act as natural pollution filters. Their canopies, trunks, roots, and the associated soil and other natural elements of the landscape filter particulate pollution out of the flow toward the watershed and take up nitrogen, phosphorus and potassium, which are contributing factors in the decay of the Chesapeake Bay watershed. Trees are pretty.
Monday, June 15, 2009
Climate Change Research
In the June issue of Chemical Engineering Progress was a most interesting article by Michael J. Economides, PhD, and Xina Xie, PhD, both professors of engineering. The article, titled “Climate Change - What Does the Research Mean?”, is a brief review of the scientific literature and research on some of the postulated impacts of global warming: hurricane frequency and intensity, the shrinking ice field of Mount Kilimanjaro, melting polar ice caps, and rising sea levels. Both sides of each argument appear to be documented and supported by specific scientific measurements. Such contradictory conclusions indicate that the modeling of the earth’s climate and environment needs to be significantly reexamined and tested before significant amounts of the world’s finite resources are expended on any specific plan of amelioration. Some research suggests that climate change may have some anthropogenic (human) causes. However, the consequences of climate change that have been cited as reasons for government action are not borne out by the facts. As the authors point out, “The impacts of these contradictory claims about the effects of climate change are not trivial, and the implications are enormous.” In the real world of finite resources, careful consideration must be given to which major expenditure programs to support. What are we willing to give up in terms of comforts, services, possessions, and other goals to accommodate a Copenhagen accord? There is a price to reduce our emissions of greenhouse gases. Make sure we understand the costs and benefits before we act.
NASA satellite data documenting the 30-year history of temperature above the surface of Mt. Kilimanjaro’s peak show a slight decline in temperature. However, the summit glacier and ice cap on Mt. Kilimanjaro, featured in the movie “An Inconvenient Truth,” are shrinking. Though the movie uses this fact as evidence of global warming, the ice cap is in fact shrinking while the summit temperature has fallen slightly. Since, according to the University of Innsbruck, the mean annual temperature at the summit is below zero degrees Celsius, it seems unlikely that there would be melting due to air temperature. Another proposed cause of the glacier’s loss of mass is the deforestation of the mountain’s foothills. Without the humidity from the forests, there was inadequate moisture to replenish the surface ice lost to solar-radiation sublimation from the solid to the vapor phase, and the dry air carried off the moisture released from the glacier’s surface. Deforestation of the earth’s surface reduces the moisture in winds and increases the CO2 content of the atmosphere. Changes in global temperature do not appear to be closely connected to changes in glaciers and ice caps; the cause-and-effect relationships are neither simple nor direct. Most of the actions being considered to address these issues are slow in their implementation and impact, which would give the scientific community the ability to evaluate the impact of any action, possibly avoiding unintended consequences.
In today’s Wall Street Journal article “It’s Time to Cool the Planet,” the author, Mr. Cascio, a futurist at the Institute for Ethics and Emerging Technologies, argues for fast-acting geo-engineering, such as releasing sulfate in huge quantities into the atmosphere using jet-aircraft exhaust. (This, by the way, has always been my husband’s favorite method of fighting global warming. At least my husband knows he’s kidding.) This particular action would be the equivalent of several volcanic eruptions each year, and while it would probably lower the temperature, what else would it do year after year? It is known that volcanic eruptions damage the ozone layer (remember that thing we spent our childhoods trying to protect) and plant life. We do not have the knowledge to understand the consequences of direct and quick changes to the planet’s temperature or carbon content. Sudden changes could be far more catastrophic than any consequence imagined by the long-term projections of the global warming establishment. The changes we should consider are not massive interruptions in the natural cycles of nature, but conservation, replanting, control of non-point source contamination, and point source contamination reduction in emerging economies. Maybe if we can restore a watershed, or reforest the jungles of the Amazon or the foothills of Mt. Kilimanjaro, we could then consider bolder action.
Thursday, June 11, 2009
Preserving the Chesapeake Bay and the Bay Act
In 1988, Virginia's General Assembly enacted the Chesapeake Bay Preservation Act (Bay Act) to improve the water quality of the Chesapeake Bay and its tributary streams. The Bay Act created a cooperative program between the Commonwealth of Virginia and Tidewater local governments to protect and enhance water quality through environmentally responsible land use management. Each local government in Tidewater Virginia (generally, those localities east of Interstate 95) helps to protect the water quality of the Chesapeake Bay through local land use requirements that seek to minimize non-point sources of pollution into the Bay. The original pollution reduction goals of the Act have not been met; the Act did not include any quantitative measurements or goals.
The Chesapeake Bay is the largest and most productive estuary in the United States, supporting over 2,500 species of animals and plants. The Bay has played an important role in the history of the region by providing valuable economic, environmental, and recreational resources. However, pollution has caused the Bay's water quality to decline over the last several decades, impacting crabbing and fishing, the recharge of the estuaries, and recreational use of the water. Despite the Chesapeake Bay Preservation Act, the Tidewater communities in Virginia and the Chesapeake Bay communities in Maryland, Pennsylvania, Delaware, and DC failed to meet the target of the original multistate agreement: to reduce two key pollutants, nitrogen and phosphorus, by 40 percent by 2000. When these goals were not met, the governors and the EPA set definitive new goals in 2000 to be met by 2010. It now seems unlikely that these goals will be met.
The Chesapeake Bay Preservation Act in Virginia was amended in 2001 to expand the Resource Protection Areas of the Act to all tidal wetlands, tidal shores, perennially flowing bodies of water, non-tidal wetlands connected and contiguous to tidal wetlands, and buffer lands within 100 feet of any of those features. All other areas of the Tidewater were designated Resource Management Areas. Education and regulation are necessary to effectively improve the condition of Resource Protection Areas. Sensible and rational septic, sewage, and agricultural regulations need, in many instances, to be fully developed and then enforced. Public outreach and education are essential.
The natural beauty, limited areas of environmental progress and scars of the Bay were spelled out in the Sunday Washington Post article by David Fahrenthold and illustrated by the photographs of Cameron Davidson. The health of the Bay began to decline in the 1950s, when underwater grasses started to disappear, and fish and shellfish populations decreased. The deteriorating water quality of the Bay is caused by pollution, which can be divided into two categories: point source pollution and non-point source pollution. Point source pollution results from discharge at a specific point or pipe into surface water, and includes such sources as sewage treatment plants and industrial discharges. (Think municipal waste plants, Sparrows Point steel mill, etc.)
During the past several decades, point source pollution into the Bay has been greatly reduced, due to enforcement of the Clean Water Act. Underwater grasses in the Bay have started to make a comeback, and several species, such as the striped bass, have recovered enough to be commercially viable. However, nonpoint source pollution is a major problem facing the Bay. Oyster and blue crab catches have continued to shrink, and some shellfish populations have declined. The non-point source pollution is difficult to regulate and enforce because it comes down to individuals managing their properties and natural resources for the greater good. However, it can be done.
All localities in the Bay watershed have identified and mapped Chesapeake Bay Preservation Areas (CBPAs) as part of their local Bay Act programs. CBPAs are defined as lands that, if improperly developed, may result in substantial damage to the water quality of the Bay and its tributaries. The zoning maps of each locality show the general boundaries of the CBPAs. Whenever land inside a CBPA is developed or redeveloped, certain standards, or requirements, apply to the development in order to prevent a net increase in non-point source pollution. These standards are known as Bay Act performance criteria and are specified in each locality's zoning ordinances. CBPAs consist of two categories: Resource Protection Areas and Resource Management Areas. As of the most recent revision of the Act, all areas within the Tidewater region are CBPAs. Management of existing non-point source pollution may be far more powerful than development restrictions, yet actions toward managing and reducing existing non-point source pollution during a period of growth have been limited.
Current regulations prevent further development of RPA lands beyond minor additions to existing residences and structures and impose broad standards for septic systems. A Water Quality Impact Assessment (WQIA) is required for any development or redevelopment proposed within an RPA, or for modification (clearing, grading, etc.) of any portion of the 100-foot RPA buffer. The Bay Act also requires that all septic systems within a CBPA be pumped out at least once every five years. This applies to all existing homes and businesses as well as new development, though this limited requirement may be inadequate to properly maintain a septic system. In addition, a reserve septic drain field is required for all new development. Requirements for the maintenance of existing septic systems are necessary to protect ground water quality, and in turn the water quality of the Bay. Though portions of the CBPA lie within the Piedmont and overlie clay, throughout the Eastern Shore water moves quickly through the sandy soils, reaches the ground water table, and moves into creeks and then into the Bay. The high water table and sandy soils in shore areas result in a considerable amount of ground water inflow into surface waters. Consequently, ground water contamination from failing septic systems can threaten the water quality of the Bay.
The first steps to protect the Bay were the regulation and reduction of point source pollution; the next steps were to develop a plan to better manage growth and reduce future non-point source pollution. We have now arrived at the point where existing uses of the area must be controlled and improved to reduce non-point source pollution. Decentralized waste treatment options are cost-effective and can work well if properly understood and maintained. The time to develop and track a septic best-practices program for the communities of the Chesapeake Bay watershed has arrived. This is management of homeowners and outreach to HOAs, work that is best addressed at the local level rather than the federal level.
Monday, June 8, 2009
Cows, Methane Gas and Climate Change
The consensus among scientists is that human activities are changing the composition of the atmosphere, and that increasing the concentration of greenhouse gases will change the planet's climate. However, they are not sure by how much it will change, at what rate, or what the exact effects will be. Carbon dioxide is currently thought to be the critical greenhouse gas. According to NOAA, the National Oceanic and Atmospheric Administration, CH4, which absorbs roughly 25 times as much heat as CO2, is present in the atmosphere at 1.8 ppm, less than 1/200 the level of CO2, which NOAA puts at 386 ppm. Methane levels in the atmosphere have risen for the first time since 1998; this increase was attributed to changes in the permafrost stores of methane.
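A back-of-the-envelope comparison shows why methane matters despite its tiny concentration. This sketch simply multiplies the NOAA concentrations by the per-unit factor of 25 cited above; it is a rough illustration, not a radiative-forcing calculation:

```python
# Rough comparison of CH4's warming contribution relative to CO2,
# using 1.8 ppm CH4, 386 ppm CO2, and a per-unit heat-absorption
# factor of 25 for CH4.
CH4_PPM = 1.8
CO2_PPM = 386
CH4_FACTOR = 25

ch4_effect = CH4_PPM * CH4_FACTOR  # CO2-equivalent ppm
relative = ch4_effect / CO2_PPM

print(f"CH4 warming effect ~ {ch4_effect} ppm CO2-equivalent")
print(f"About {relative:.0%} of CO2's contribution")
```

By this crude measure, methane at 1.8 ppm still contributes on the order of a tenth of CO2's warming effect, which is why cattle and permafrost emissions draw so much attention.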
Lately, environmental groups and nutrition activists seem to be focusing on meat consumption as the cause of the increase in methane and greenhouse gases. The Center for Science in the Public Interest's Nutrition Action newsletter points out that the consumption of animal products contributes to global warming through a variety of mechanisms, and argues for the vegetarian life. CH4 is produced as part of normal digestive processes in animals. During digestion, microbes present in an animal’s digestive system ferment the food the animal consumes. This microbial fermentation process, referred to as enteric fermentation, produces CH4 as a byproduct, which can be exhaled or eructated by the animal. The amount of CH4 produced and emitted by an individual animal depends primarily upon the animal's digestive system and the amount and type of feed it consumes. Ruminant animals, including cows, are the major emitters of CH4 because of their unique digestive system. Ruminants possess a rumen, or large "fore-stomach," in which microbial fermentation breaks down the feed they consume into products that can be absorbed and metabolized. The microbial fermentation that occurs in the rumen enables them to digest coarse plant material that non-ruminant animals cannot. Ruminant animals, consequently, have the highest CH4 emissions among all animal types. In addition to the type of digestive system, an animal’s feed quality and feed intake also affect CH4 emissions.
On Friday, the New York Times ran an article by Leslie Kaufman, “Greening the Herds: A New Diet to Cap Gas.” For the past five months, cows at 15 farms across Vermont have had their grain feed adjusted to include more plants like alfalfa and flaxseed and less corn. This feed is closer to the natural grasses the cows evolved eating. The methane output of the Vermont cows dropped 18 percent while milk production remained stable. In addition to producing less methane, the cows were observed to be healthier. This study grew out of research performed by the makers of Danone yogurt in France. Scientists working with Groupe Danone had been studying why their cows were healthier and produced more milk in the spring. The answer, the scientists determined, was that spring grasses are high in Omega-3 fatty acids, which may help the cows’ digestive tracts operate smoothly.
Corn and soy, the feed that became dominant in the agro-industrial dairy industry, have a completely different fatty acid structure. The French study found a reduction in methane release of about 30% at 600 farms. The difference from the Vermont experience was attributed to the fact that the Vermont animals were pastured and received some of their food from grasses. As Michael Pollan carefully chronicled in his book “The Omnivore’s Dilemma,” during the past 40 years our agricultural economy, as orchestrated by the Department of Agriculture, has created a system of fattening cows on an unnatural feed of corn and soy. Cows are healthier and belch less methane if they are fed a diet similar to the one they evolved to eat. This should not be surprising, and it is a small example of the unintended consequences of man trying to bend the earth to our will. Our tools to effect change remain far more powerful than our wisdom to know the right course of action to take with them. We would be far better off if we could restrain ourselves from sweeping actions and dip our toes in first to see the results. We should try to develop wisdom before we try to manage the natural cycles of the earth, and instead follow the earth’s lead.
Friday, June 5, 2009
My Water Test Results and Hard Water
Water is the fluid of life. The water I drink comes from a private domestic well drawing from the ground water beneath my land. Ground water is the world's largest source of fresh water; scientists estimate the amount of ground water is 400 times greater than all the fresh water in lakes, reservoirs, streams, and rivers. All water, on the earth's surface and beneath it, moves through the hydrologic cycle: precipitation falls on land; some water evaporates and returns to the atmosphere, some flows to streams and rivers, and some seeps into the soil. Water not used by plants and their root systems moves deeper into the ground, downward through cracks into empty spaces, or pores, in the soil, sand, and rock layers, until the water reaches an impermeable layer of rock. The water then fills the voids above the rock layer. The top of the water in the soil, sand, or rock is called the water table, and the water that fills the empty spaces is called ground water.
I take seriously my duty as caretaker of a portion of the watershed. Part of that duty is the care and monitoring of my private domestic well, the source of my own water supply. One of the precautionary steps I take is to test my water annually, though in the Commonwealth of Virginia this is not required. The annual water analysis just came back for my well. I had my water tested for total Coliform bacteria at my local laboratory and had the WaterCheck with Pesticides analysis performed at National Testing Laboratories, Ltd.
Private wells are usually tested only for total Coliform bacteria and fecal Coliform bacteria. Coliform bacteria live in the intestines of warm-blooded animals and serve as an indication of other bacterial problems. Testing for Coliform bacteria is easier and less expensive than testing for specific, disease-causing microorganisms. Coliform bacteria are themselves rather harmless, but they are indicators that the water supply is contaminated and that disease-causing bacteria may be present. Coliform bacteria can indicate contaminated surface water entering the well or water delivery system, or a faulty septic system. Fecal Coliform bacteria indicate contamination by human or animal waste; it is unacceptable for fecal Coliform bacteria to be present in any concentration.
There are treatments for contamination, but I prefer my water pure and unadulterated. So, I was pleased to receive the report of:
ABSENT for Total Coliform Bacteria
ABSENT for Fecal Coliform Bacteria
If you do have a bacterial problem, fix it. There are four types of water treatment that can be easily and inexpensively used to remove bacteria: chlorination, ozonation, ultraviolet light, and heat. Chlorination is the most commonly used means of disinfection in private water systems. High chlorine concentrations can have objectionable tastes and odors, and even low chlorine concentrations react with some organic compounds to produce strong, unpleasant tastes and odors. To eliminate the excess chlorine, the water is then dechlorinated. Activated carbon filters are the most common devices used to dechlorinate water, remove objectionable chlorine tastes, and reduce corrosion of plumbing systems. In addition to removing taste and odor problems, granular activated carbon adsorption is a good method for removing other impurities, including some pesticide residues and radon.
In addition to bacteria that may exist in domestic water supplies, other contaminants may be present including minerals, chemicals or metals that occur naturally in the soil or enter ground water as a result of human activities. While many natural contaminants such as iron, sulfate, and manganese are not considered serious health hazards, they can give drinking water an unpleasant taste, odor, or color.
The WaterCheck with Pesticides is an informational test packages targeted to be an affordable option for consumers. The WaterCheck with Pesticide covers 15 heavy metals, 5 inorganic chemicals, 5 physical factors, 4 trihalo methanes, 43 volatile organic chemicals (solvents), and 20 pesticides, herbicides and PCB’s. The Minimum Detection Levels, which are the lowest levels at which the laboratory detects that contaminant are below the levels established by the Safe Drinking Water Act so this affordable (relatively) test will serve as a broad screen of drinking water. The WaterCheck with Pesticides test results showed only detectable levels of calcium, magnesium, silica, sodium zinc. My water is slightly more than moderately hard, meaning, in my case, that calcium carbonate is present at 170 mg/l. All other substance tested for were non-detect.
Hard water contains minerals, such as calcium, magnesium, and iron. Water containing approximately 125 milligrams of calcium, magnesium and iron per liter of water can reduce the cleaning action of soaps and detergents and can form a scale (limescale) in cookware, hot water pipes, and water heaters. There are a number of simple things you can do to reduce the effects of hard water in your home, without having to resort to treating your water, so called softening. My water has elevated levels of calcium and magnesium. My iron content is very low. High iron content can begin to stain your teeth at 0.3 parts per million (ppm), You may also notice brown/orange stains on tubs, inside dishwashers, sinks and laundry. The simple things to do to address hard water are:
Choose a detergent based laundry product. Some laundry detergents/soaps do not produce as many suds in hard water, these are likely to be soap-based products and do not work as well in hard-water as detergent based products. These days, there are laundering powders and liquids available for a wide range of water hardness. Also, manufacturers often recommend using slightly more detergent to compensate for the hard water. Check the package.
Reduce the temperature of your hot water heater. When water temperature increases, more mineral deposits will appear in your dishwasher, hot water tank and pipes. By reducing the temperature, you will save money and will reduce the amount of mineral build-up in your pipes and tank. Use rinse agents to remove mineral deposits. There are low pH (acidic) products available to remove mineral deposits from pots and pans and dishwasher. Alternatively, you can use plain white vinegar by using the dishwasher dispenser or placing a cup of vinegar on the dishwasher rack. Boil some white vinegar in your kettle to remove hard water deposits. Drain and rinse your hot water heater annually.
In days past, at the first sign of hard water, domestic water supplies were commonly softened by using a tank containing an ion-exchange material, which takes up the calcium, magnesium and small amounts of dissolved iron from water in exchange for sodium. Conditioning the home water supply with sodium is pleasing to some. The amount of sodium in water conditioning systems is a real problem. Personally, I do not care to add all that sodium to my diet while removing calcium carbonate and magnesium (something that is also sold in pill form for stronger bones). Household water treatment services are very profitable because of the monthly bills. Conditioning the water supply may include water softening, iron removal, neutralization of acid water, reverse osmosis, turbidity control, removal of objectionable tastes and odors, and aeration. Water softening and filtering are the most common methods of conditioning well water.
Rather than start playing around with my drinking water, adding chlorine, filtering, adding sodium, I prefer to drink clean natural water. I purchased a home with water I found acceptable in its natural state. I spent the money up front to test (and taste) the water. Annually, I spend the money and test my water to ensure that the water we drink is still beautifully clean ground water fresh from my well.
I take seriously my duty as caretaker of a portion of the watershed. Part of that duty is the care and monitoring of my private domestic well, the source of my own water supply. One precaution I take is to test my water annually, though the Commonwealth of Virginia does not require it. The annual water analysis for my well just came back. I had the water tested for total Coliform bacteria at my local laboratory and had the WaterCheck with Pesticides analysis performed by National Testing Laboratories, Ltd.
Private wells are usually tested only for total Coliform bacteria and fecal Coliform bacteria. Coliform bacteria live in the intestines of warm-blooded animals, and testing for them is easier and less expensive than testing for specific, disease-causing microorganisms. Coliform bacteria themselves are relatively harmless, but their presence indicates that the water supply is contaminated and that disease-causing bacteria may be present. Coliform bacteria can indicate contaminated surface water entering the well or water delivery system, or a faulty septic system. Fecal Coliform bacteria indicate contamination by human or animal waste; their presence in any concentration is unacceptable.
There are treatments for contamination, but I prefer my water pure and unadulterated. So, I was pleased to receive the report of:
ABSENT for Total Coliform Bacteria
ABSENT for Fecal Coliform Bacteria
If you do have a bacterial problem, fix it. Four types of water treatment can easily and inexpensively remove bacteria: chlorination, ozonation, ultraviolet light, and heat. Chlorination is the most commonly used means of disinfection in private water systems. High chlorine concentrations can have objectionable tastes and odors, and even low concentrations react with some organic compounds to produce strong, unpleasant tastes and odors. To eliminate the excess chlorine, the water is then dechlorinated. Activated carbon filters are the most common devices used to dechlorinate water, remove objectionable chlorine tastes, and reduce corrosion of plumbing systems. In addition to removing taste and odor problems, granular activated carbon adsorption is a good method to remove other impurities, including some pesticide residues and radon.
In addition to bacteria, domestic water supplies may contain other contaminants, including minerals, chemicals, or metals that occur naturally in the soil or enter ground water as a result of human activities. While many natural contaminants such as iron, sulfate, and manganese are not considered serious health hazards, they can give drinking water an unpleasant taste, odor, or color.
The WaterCheck with Pesticides is an informational test package intended to be an affordable option for consumers. It covers 15 heavy metals, 5 inorganic chemicals, 5 physical factors, 4 trihalomethanes, 43 volatile organic chemicals (solvents), and 20 pesticides, herbicides, and PCBs. The Minimum Detection Levels, the lowest levels at which the laboratory can detect each contaminant, are below the levels established by the Safe Drinking Water Act, so this (relatively) affordable test serves as a broad screen of drinking water. My WaterCheck with Pesticides results showed detectable levels of only calcium, magnesium, silica, sodium, and zinc. My water is slightly more than moderately hard, meaning, in my case, that calcium carbonate is present at 170 mg/L. All other substances tested for were non-detect.
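For context on that 170 mg/L figure, hardness reported as calcium carbonate can be bucketed against the commonly cited USGS guideline bands. A minimal Python sketch (the thresholds are the USGS bands, not values from my lab report):

```python
def classify_hardness(mg_per_l):
    """Classify water hardness (mg/L as CaCO3) using the USGS guideline bands."""
    if mg_per_l <= 60:
        return "soft"
    elif mg_per_l <= 120:
        return "moderately hard"
    elif mg_per_l <= 180:
        return "hard"
    else:
        return "very hard"

print(classify_hardness(170))  # my well's 170 mg/L falls in the "hard" band
```

By these bands my water sits near the top of the "hard" range, consistent with calling it somewhat more than moderately hard.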
Hard water contains minerals such as calcium, magnesium, and iron. Water containing approximately 125 milligrams of calcium, magnesium, and iron per liter can reduce the cleaning action of soaps and detergents and can form a scale (limescale) in cookware, hot water pipes, and water heaters. There are a number of simple things you can do to reduce the effects of hard water in your home without resorting to treating, or "softening," your water. My water has elevated levels of calcium and magnesium; my iron content is very low. High iron content can begin to stain teeth at 0.3 parts per million (ppm), and you may also notice brown or orange stains on tubs, sinks, laundry, and inside dishwashers. The simple things to do to address hard water are:
- Choose a detergent-based laundry product. Some laundry products do not produce as many suds in hard water; these are likely to be soap-based and do not work as well in hard water as detergent-based products. These days, laundry powders and liquids are available for a wide range of water hardness, and manufacturers often recommend using slightly more detergent to compensate for hard water. Check the package.
- Reduce the temperature of your hot water heater. As water temperature increases, more mineral deposits form in your dishwasher, hot water tank, and pipes. Reducing the temperature saves money and reduces the amount of mineral build-up in your pipes and tank.
- Use rinse agents to remove mineral deposits. Low-pH (acidic) products are available to remove mineral deposits from pots, pans, and dishwashers. Alternatively, use plain white vinegar in the dishwasher dispenser or place a cup of vinegar on the dishwasher rack. Boil some white vinegar in your kettle to remove hard water deposits.
- Drain and rinse your hot water heater annually.
In days past, at the first sign of hard water, domestic water supplies were commonly softened using a tank containing an ion-exchange material, which takes up calcium, magnesium, and small amounts of dissolved iron from the water in exchange for sodium. Conditioning the home water supply this way pleases some people, but the amount of sodium these systems add is a real concern. Personally, I do not care to add all that sodium to my diet while removing calcium carbonate and magnesium (minerals that are also sold in pill form for stronger bones). Household water treatment services are very profitable because of the monthly bills. Conditioning the water supply may include water softening, iron removal, neutralization of acid water, reverse osmosis, turbidity control, removal of objectionable tastes and odors, and aeration. Water softening and filtering are the most common methods of conditioning well water.
Rather than playing around with my drinking water by adding chlorine, filtering, or adding sodium, I prefer to drink clean natural water. I purchased a home with water I found acceptable in its natural state. I spent the money up front to test (and taste) the water, and each year I spend the money to test it again, to ensure that the water we drink is still beautifully clean ground water fresh from my well.
Wednesday, June 3, 2009
Keeping Moisture and Pests out of your House
Too much moisture in a home can lead to mold, mildew, and other biological growth. These molds can cause a variety of health problems, including allergies (my problem), asthma, and more serious respiratory problems. Beyond health, excess moisture can lead to rot, structural damage, and paint failure, and it creates a hospitable environment for pests. Termites, for example, require moisture to survive. Although ridding an infested house of termites requires an integrated pest management approach that includes a conventional chemical treatment or the use of baits, there are steps a homeowner should take to make a house less inviting to termites and so minimize the use of chemicals. There are two general types: subterranean termites and drywood termites. Subterranean termites nest in the soil and from there attack structures by building shelter tubes from the soil to the wood. Because subterranean termites cause more damage to homes and structures than drywood termites, they will be my primary focus here. Termites will attack any material containing cellulose, including wood, paper-coated wallboard, and paper (as in that treasured book collection). Wood that is at least 30 percent water-saturated provides enough moisture, and termites will also find free-standing water from condensation, rain, or plumbing leaks and use it as their main source for survival.
Correcting and preventing moisture problems is a first defense against termites and should be considered before any prophylactic application of a barrier termiticide after construction. Note that many states (especially in the Southeast) require application of a termiticide chemical barrier before construction as part of site preparation; these preconstruction treatments are required to be effective for five years. The newer, biodegradable chemicals are reported to be effective for 5 to 11 years; however, some termiticides break down easily in water and should not be applied within 50 feet of a water well or the water table, while others are more persistent and cling to the soil. Other techniques should be used first and as part of any termite management strategy. According to BIRC (the Bio-Integral Resource Center in Berkeley, CA), prevention and early detection are the most effective ways to reduce the use of toxic chemicals in termite control. As a side note, BIRC has an "ask the expert" section on its web site where they will help you identify the least toxic solution to your pest problem. They are really nice, so ask away.
Rain water should not run up against the house (nor should plant irrigation water). The soil against the exterior walls should slope away from the building. Even where the natural topography runs toward the house, an artificial slope should be created in a shallow "V" to prevent water from pooling around the foundation. The ground around the home's foundation should be graded to slope down and away from the house at a rate of 1/2" to 1" per linear foot to drain surface water away. If need be, a French drain should be dug an adequate distance from the house. Most houses are built with the soil sloped away from the building, but landscaping and time can undermine this. Plants should not be placed tightly against a structure, to avoid "watering the building" instead of the plants.
Water from downspouts should be directed away from the house, discharging at least a few feet from the foundation. If water naturally pools in any location, it may be advisable to direct all downspouts to a dry well system or pipe them to a more natural drainage location. Test any underground drains with a hose to make sure they are working properly; underground drains often become blocked with debris, or break, and allow water to drain against the building. Drains that are not working should be repaired or replaced. Be sure that driveways, sidewalks, and patios slope down and away from foundation walls at 1/4" per linear foot.
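Those grading rates translate into a concrete elevation target: total drop is just rate times run. A quick sketch in Python (the 6 ft and 10 ft runs are example distances I chose, not from any code or standard):

```python
def drop_inches(rate_in_per_ft, run_ft):
    """Total elevation drop (inches) for a given slope rate over a run."""
    return rate_in_per_ft * run_ft

# Foundation grading at 1/2" to 1" per linear foot over a 6 ft run:
print(drop_inches(0.5, 6), "to", drop_inches(1.0, 6), "inches")  # 3.0 to 6.0
# Driveway/patio at 1/4" per linear foot over a 10 ft run:
print(drop_inches(0.25, 10), "inches")  # 2.5
```

So grading six feet out from the wall means the soil line should end up roughly three to six inches lower than it is at the foundation.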
If these solutions do not eliminate a moisture problem, the next steps must be taken. In extreme cases, you may have to dig out around the foundation and replace the fill with exterior drain tile and a good draining material such as clean gravel. Because this can be very expensive in existing homes, all the previous solutions should be rigorously tried first. In some areas, there may not be enough room outside the dwelling to provide proper drainage; in these cases, it is often recommended that interior drain tile and a sump pump be installed to remove water from basements and crawlspaces. This is very expensive and messy, but extremely effective.
Monday, June 1, 2009
Sump Pumps
Too much moisture in a home can lead to mold, mildew, and other biological growth. This in turn can lead to a variety of health effects, but it is mold and mildew I fear. Mold and moisture are the enemies of book collections, and beneath my house, in what most people would call a basement, is the Library. It is home to my husband's collection of books, which he has spent 40 years assembling from every used book sale he has ever hunted down or that we traveled to on vacation. While there are no first editions of note, there are things like the $2 used book he found in an old part of St. Louis that is worth many times that.
An unfinished, dry, partial daylight basement was a selection criterion for this house. There are steps that can be taken to minimize or eliminate water intrusion into a basement, and they will be discussed in more depth in later posts, but since it was my intention to finish the basement as a library, we selected a house with a dry basement. The house, only a few years old, had been vacant for about seven months with the power shut off, so I had the opportunity to see how the basement did through the spring without the sump pumps operating. The ground slope was less than optimal but good, and the water table was below the basement level.
The house has two sump pumps, one in the northeast corner of the basement and the other in the southeast corner. The natural slope of the site is from the northwest to the southeast. Like most newer houses in the northeastern United States, the basement slab is poured over a bed of about a foot of gravel into which are buried drain tiles that are (hopefully) pitched toward the sump. The drain tiles may extend under the entire basement, just along the footings, or only in the area of the sump pumps, depending on local codes and site considerations. These days drain tile is usually perforated plastic pipe run around the interior and exterior perimeter of the house; at one time clay tile was used, and the name is retained from those days. The sump is a two-foot diameter hole that should hold around 15-25 gallons of water.
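That capacity figure is easy to sanity-check with cylinder geometry: a two-foot diameter sump holds about 23.5 gallons per foot of water depth, so 15-25 gallons corresponds to roughly 8 to 13 inches of water. A minimal sketch:

```python
import math

GAL_PER_CUBIC_FT = 7.48  # U.S. gallons per cubic foot

def sump_gallons(diameter_ft, depth_ft):
    """Water volume (U.S. gallons) of a cylindrical sump at a given depth."""
    radius = diameter_ft / 2
    return math.pi * radius**2 * depth_ft * GAL_PER_CUBIC_FT

# One foot of water in a 2 ft diameter sump:
print(round(sump_gallons(2.0, 1.0), 1))  # ≈ 23.5 gallons
```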
A sump pump lifts the water up and out of the sump through a pipe that extends outside the house at ground level. There are two basic types of sump pumps, the pedestal and the submersible. The pedestal pump has its motor on top of a pedestal, visible above the floor surface; the motor is turned on and off by a ball float, and the on/off switch is visible so that it can be checked. Submersible pumps are designed to sit at the bottom of the sump and are activated by a ball float or a sealed mercury switch. Both types of pumps have a check valve on the water outlet pipe to prevent backflow when the pump shuts off.
Most sump pumps turn on automatically when the water level in the sump rises; why have a sump pump if you wait until flooding has occurred to turn it on? There are three types of switches: a float switch, a pressure switch, and a mercury-activated float switch. Mercury switches are being phased out. Since 2004, many states have passed legislation restricting the sale of mercury-added switches and relays, including float switches and tilt switches, and as more of these laws take effect, mercury use in switches will likely decline. Connecticut, Louisiana, and Rhode Island currently restrict the sale and/or distribution of mercury-containing pumps; in addition, California, Illinois, Maine, Massachusetts, Minnesota, New Hampshire, New York, and Vermont restrict the sale and/or distribution of pumps that contain a mercury switch as a component. All pedestal sump pumps are activated by a float activator (a ball valve, just like in a toilet, rises and turns the motor on when the water rises). Submersible sump pumps are activated by either a pressure sensor (water exerts more pressure than air, so when the water rises to cover the switch the motor turns on) or a floating switch.
Both types of sump pumps are centrifugal pumps. They use impellers to force the water to the sides of the pipe, creating a vacuum at the center that draws in the water; you have seen this in the functioning of an immersion blender. The typical sump pump is sized at either 1/3 or 1/2 horsepower; a larger motor is needed to lift water to a higher level.
Soil conditions, groundwater level, rain, and the siting of the property determine how long a sump pump will run and how many pumps are necessary to keep a house dry. Many houses with limited issues are built with one sump pump to handle occasional or seasonal problems. Sump pumps are powered by electricity, and because water and electricity come together, it is often recommended that a ground-fault circuit interrupter (GFCI) outlet be used. There are two problems with that. First, the pump needs to be checked regularly to make sure the GFCI has not tripped at the wall; without checking, only a flooded basement would reveal that it had. Second is power interruption: sump pumps are most needed during and after rain storms, exactly when the power often fails, and without power the typical sump pump will not operate. A battery backup system is the most common type of backup and should be seriously considered for any installation where the dryness of the basement matters. Running a sump pump on a backup generator is another possibility, and if your property runs on "city water," there are sump pumps driven by municipal water supply pressure. These should only be used as backup pumps because they waste drinking water.
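When sizing a battery backup, a rough runtime estimate is battery capacity divided by pump draw, scaled by how often the pump actually runs. A minimal sketch, with entirely hypothetical battery and pump numbers (check your own pump's nameplate draw; this also ignores inverter losses and battery aging):

```python
def backup_runtime_hours(battery_ah, pump_amps, duty_cycle=1.0):
    """Rough backup runtime: amp-hour capacity / pump draw, scaled by the
    fraction of time the pump actually runs (duty cycle)."""
    return battery_ah / (pump_amps * duty_cycle)

# Hypothetical: a 100 Ah deep-cycle battery, a pump drawing 10 A,
# cycling on about 30% of the time during a storm:
print(round(backup_runtime_hours(100, 10, 0.3), 1))  # ≈ 33.3 hours
```

Even crude numbers like these help decide whether one battery will ride out a multi-day outage or whether a generator is the better investment.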
Sump pumps and backup systems need to be tested or checked regularly so that an equipment failure is noticed before the system fails and the basement floods. At least annually, check that the system functions:
- Check the plug to ensure that there is power to the pump motor.
- Remove the lid from the sump and inspect the interior with a flashlight; make sure that the pump is upright.
- Slowly pour a five-gallon bucket of water into the sump. The pump should start automatically and the sump should drain quickly.
- If your sump pump is a submersible, periodically clean the grate on the bottom of any gravel or debris that might have been pulled into the pump.
- Know where your discharge pipes are and ensure that they are unblocked. They can become blocked with garden debris.