
1 Introduction

Groundwater is present in most parts of the world, but its quality may be sufficiently poor to preclude or limit use. Contaminants affect groundwater use through their implications for human health, aquatic health, economic costs, or even societal perception. In this way, water-quality drivers might be considered distinct from the factors of integrated groundwater management (IGM) covered in Chap. 1 and other chapters. For example, in their commentary on defining water quality, Chapelle et al. (2009) suggest the term “water quality” is inherently based on human judgments as to how water of a given composition fits perceived needs, where the needs can be those of an individual, a group, or an ecosystem. At the same time, human judgments of water quality are dynamic. In the twentieth century water became cheap, safe, and widely available – something that had never happened before in human history (Fishman 2011). Such dynamic views become drivers that inform current opinion and perceptions of water quality in the twenty-first century. In addition, constantly improving technology for water-quality characterization identifies more contaminants at ever lower detection limits, including whole new classes of contaminants (e.g., Focazio et al. 2008), which further contributes to the dynamic perception of water quality. How such issues are handled in a management framework can influence the subjective idea of water quality. IGM thus forms an important intersection of environmental characterization (e.g., water chemical analyses), engineering (e.g., water treatment and sanitation), societal needs (e.g., food supply), and human perception of water quality. This intersection of disparate drivers can, in turn, act as a key driver for societal cost-benefit analyses and other decision making.

How do we judge if water quality is limiting availability? For some contaminants and uses, objective water-quality criteria are available. For example, risk-based regulatory limits set thresholds such as the “Maximum Contaminant Level” (MCL) and the less stringent “Preventive Action Limit” (PAL) used in the United States, with similar thresholds in other countries (Table 14.1). Yet subjective judgments can also affect perceptions of water quality, making acceptable water quality a dynamic interpretation.

Table 14.1 Comparison of drinking water-quality standards and guidelines for the World Health Organization, European Union, Australia, United States, and Canada. All standards and guidelines are in mg/L (modified from Boyd (2006), with updates for the United States as of 2013: http://water.epa.gov/action/advisories/drinking/upload/dwstandards2012.pdf)

This chapter presents three examples that demonstrate how water quality factors can influence groundwater use and related management options. The examples are intended to present: (1) an overview of the mechanisms by which water quality affects IGM; (2) a short listing of classes of contaminants that have affected groundwater use; and (3) a description of issues and the associated IGM responses that have been used to address classes of water quality issues. Because the range of potentially societally relevant water quality issues is large, we focus here on transferable elements contained within the examples. Using the dimensions of integrated groundwater management outlined in Chap. 1, water quality can be seen as an integration of both natural and human systems across multiple scales of space and time. Moreover, a definition of adequate water quality is highly dependent on stakeholders, as well as on new methods of identifying and quantifying contaminants. Note that some water quality topics are covered separately and in more detail elsewhere in this book, including salinity (Chap. 15).

2 Contaminants that Affect Acceptable Water Quality Determinations

For convenience, contaminants are grouped into two broad categories that affect groundwater use: naturally occurring contaminants and human-introduced contaminants. Such a distinction cannot hold universally – for example, human activities such as high-capacity pumping change the aquifer geochemical environment, which in turn can mobilize contaminants or transform them into different forms. Likewise, salinity is naturally occurring, but it can also become a water quality concern in areas where it would not naturally occur, as a result of human activities such as applying salt to prevent road icing. The distinction is more robust, however, when considering the primary sources of contaminants and how they propagate into issues of water quality; our discussion here follows this overarching criterion.

Table 14.2 lists a number of naturally occurring and human-introduced contaminants that can potentially influence groundwater management. Potential management actions to address water quality may include, but are not limited to, strategies involving:

  • Source removal (e.g., centralized waste digesters, integrated pest management plans, organic farming)

  • Tiered water quality designations that allow reuse of “grey water” or use of waters naturally having lesser quality (e.g., brackish groundwater)

  • Blending of water supplies from different sources to meet regulatory limits (see the blending sketch at the end of this section)

  • Modifying well open intervals or pumping regimes to minimize poor water quality

  • Artificial aquifer recharge or aquifer storage and recovery systems

  • Source minimization (e.g., landuse restrictions in wellfield capture areas, voluntary conservation)

  • Water treatment at wellhead or point-of-use

  • Wastewater treatment

Table 14.2 Common contaminants listed as a source of poor water quality

These actions are often used in combination, and they span a range of initial capital costs as well as ongoing costs of operation and maintenance. As might be expected given the range of costs and the range of potential concerns shown in Table 14.2, there is no single or universally recommended approach for addressing water quality issues in an integrated groundwater management framework. Therefore, examples of groundwater management are used below to illustrate applications where one or more of the actions described above were considered.
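
As a simple illustration of the blending action listed above, the sketch below computes the minimum fraction of a low-concentration source needed for a two-source blend to meet a regulatory limit. It is a minimal sketch in Python; the well concentrations are hypothetical, and only the 10 μg/L arsenic MCL is taken from Table 14.1.

    # Minimal sketch: two-source blending to meet a regulatory limit.
    # Concentrations are hypothetical; the mass balance itself is standard.

    def blend_fraction(c_high, c_low, limit):
        """Minimum fraction of the low-concentration source so that the
        flow-weighted blend meets `limit` (same units throughout):
        f * c_low + (1 - f) * c_high <= limit."""
        if c_high <= limit:
            return 0.0  # the high source already complies; no blending needed
        if c_low >= limit:
            raise ValueError("low source cannot bring the blend below the limit")
        return (c_high - limit) / (c_high - c_low)

    # Hypothetical wells at 18 and 2 ug/L arsenic, blended to the 10 ug/L MCL:
    print(blend_fraction(c_high=18.0, c_low=2.0, limit=10.0))  # 0.5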

3 Three Examples of Water Quality Issues and Integrated Groundwater Management

In this section, case studies of one naturally occurring and two human-introduced contaminants illustrate the intersection of water quality and integrated groundwater management. Each case study discusses the contaminant sources; the health, aquatic, and economic implications; the factors affecting contaminant transport and transformation; and the management solutions investigated.

3.1 Naturally Occurring Contaminant: Arsenic

Arsenic (As) is a contaminant that is commonly derived from natural sources and has affected the availability or use of groundwater. This case study illustrates the importance of integrating water quality into groundwater management. People and policy makers in many parts of the world – but especially in South Asia and the North China Plain – are aware of the dangers of drinking poor quality groundwater high in arsenic (Mukherjee et al. 2006; Sharma et al. 2006; Singh et al. 2014). Studies predicting the occurrence of arsenic worldwide suggest that arsenic concentrations of human-health concern can be expected over large regions (Fig. 14.1) (Welch et al. 2000; Smedley et al. 2002; Amini et al. 2008; Winkel et al. 2008; Van Halem et al. 2009). Integrated groundwater management for arsenic is a function of: (1) understanding the spatial and vertical extent of the problem through monitoring; and (2) managing human activities, such as pumping or locating landfills, that can change the geochemical conditions of the aquifer and mobilize arsenic.

Fig. 14.1 Arsenic affected countries (red) of the world (From Van Halem et al. 2009)

Health effects from exposure to arsenic in drinking water include increased risk for bladder, skin, kidney, and lung cancers, and increased risk for diabetes and heart disease (National Research Council 2001). Research on the health effects of low-to-moderate concentrations of arsenic caused the U.S. Environmental Protection Agency (USEPA) to lower the MCL from 50 to 10 μg/L in 2006, illustrating how new research and information can change the perception of acceptable water quality. Many countries have similar drinking water-quality guidelines for arsenic and other contaminants (Table 14.1). The United States, European Union, and World Health Organization all consider 10 μg/L of arsenic acceptable for drinking water (Boyd 2006).

Integrated groundwater management can require appreciable resources for monitoring and characterizing the extent of, and changes in, arsenic concentrations. For example, in the United States, testing for arsenic in publicly supplied drinking water is required under the Safe Drinking Water Act, so public supplies are monitored regularly. Yet over 43 million people in the United States get their drinking water from privately owned household wells (DeSimone 2009). The quality and safety of these privately owned water supplies are not regulated under Federal, or in most cases state, law. Individual homeowners are responsible for maintaining their water supply systems and for any routine water-quality monitoring. The U.S. Geological Survey National Water Quality Assessment Program (NAWQA) sampled more than 2100 privately owned wells in the United States (DeSimone 2009) and found that about 7 % contained arsenic at concentrations greater than 10 μg/L. In some areas, such as the methanogenic parts of the glacial aquifer system, up to 50 % of the privately owned wells had arsenic concentrations greater than 10 μg/L (Thomas 2007). Publicly supplied drinking water is managed because routine monitoring identifies the high arsenic concentrations that need to be addressed; voluntary self-monitoring of privately owned wells, by contrast, is not routine. Identification of the problem is a first step for IGM.

Monitoring over time to assess seasonal changes in water quality shows that there is no one-size-fits-all solution to water-quality management over the course of a year. A study in Albuquerque, New Mexico, shows that arsenic concentrations vary spatially and temporally in water from public-supply wells partly because groundwater with different arsenic concentrations migrates between different parts of the basin-fill aquifer within the wellbores of idle supply wells (Eberts et al. 2013). During times when the wells are not pumping, high-arsenic groundwater from deep within the aquifer moves up and out into the shallow parts of the aquifer in areas where hydraulic gradients are upward. When pumping resumes, arsenic-laden water enters these wells from both shallow and deep parts of the aquifer. Concentrations in the produced water remain elevated until the high-arsenic water is purged from the shallow parts of the aquifer. Public-supply wells in this area are pumped less frequently in the winter than in the summer, so arsenic concentrations are highest in winter water samples from the deepest wells in the parts of the aquifer having upward hydraulic gradients. Well construction (depth), well operation (duration of pumping), and position within the groundwater-flow system (location with respect to vertical hydraulic gradients) all affect arsenic concentrations in water from public-supply wells. Monitoring changes in pumping and arsenic concentrations over time will enable resource managers to better manage concentrations in the produced water by pumping existing wells for longer periods during the winter and by installing new supply wells at shallower depths in certain areas (Laura Bexfield, U.S. Geological Survey, written commun., 2012).

Naturally occurring contaminants like arsenic are ubiquitous in many aquifer systems, and identification of the processes that control their mobilization and transport can help water managers meet compliance standards (e.g., Gotkowitz et al. 2004). Solid-phase chemistry data are useful in understanding arsenic sources, but do not always correspond to the relative concentrations in groundwater (Brown et al. 2007). The transport of arsenic to drinking water wells is controlled by physical and geochemical processes.

Physical processes associated with preferential flow paths, both human-induced and natural, can result in faster travel times and higher concentrations of arsenic in public-supply wells. Brown et al. (2007) identified preferential flow paths that include zones of high permeability in sand and gravel aquifers, conduit flow in karst aquifers, downward wellbore flow in a public-supply well during periods of low or no pumping, and short-circuit pathways through wells and boreholes open to multiple aquifer layers. Methods using geophysical techniques, depth-dependent sampling, and sampling of monitoring wells adjacent to public supplies improve the understanding of preferential flow paths and of other factors, such as redox chemistry and competing ions, that affect the movement of arsenic to public-supply wells.

Groundwater age information is a tool that adds to our understanding of the processes resulting in elevated arsenic. For example, in the glacial aquifer system, arsenic concentrations above the drinking water standard (10 μg/L) were most often associated with groundwater that recharged the aquifer system before the 1950s. Similarly, Eberts et al. (2013) found that arsenic concentrations in water from public-supply wells in study areas in California, Connecticut, Ohio, Nebraska, Nevada, and Utah increased with increasing travel times to the wells (increasing groundwater age). The groundwater-age mixture for a well characterizes the complete range of times that contaminants released to the groundwater might take to reach the well, and an estimate of this mixture is a useful measure of the potential for elevated arsenic in water from the well. In addition, public-supply well construction and operation (screen placement, pumping rates and schedules) can lead to differences in the age mixture of the groundwater pumped from different wells, including wells within the same aquifer. Many of the public supplies sampled as part of the NAWQA study showed a mixture of groundwater ages. This indicates that groundwater management practices need to consider natural and human-induced changes in aquifer geochemistry over time.
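
The idea of a groundwater-age mixture can be made concrete with a lumped-parameter sketch. The example below assumes an exponential residence-time distribution, a common idealization for pumped unconfined systems, and an illustrative mean age; neither value comes from the studies cited above.

    import math

    def fraction_older_than(age_years, mean_age_years):
        """Fraction of pumped water older than `age_years` under an
        exponential residence-time distribution: P(age > a) = exp(-a/mean)."""
        return math.exp(-age_years / mean_age_years)

    # Illustrative only: a well with a 40-year mean age, sampled in 2013,
    # asking what fraction of the water recharged before 1950.
    print(round(fraction_older_than(2013 - 1950, 40.0), 2))  # ~0.21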

Mixing of groundwater from different parts of an aquifer system can change the chemistry of the groundwater and the potential for elevated arsenic. Ayotte et al. (2011) showed that pumping-induced changes in hydraulic gradients and the artificial connection of aquifers by well screens can mix chemically distinct groundwater. Chemical reactions between the mixed groundwater and solid aquifer materials can mobilize arsenic, with subsequent transport to water-supply wells. For example, near Tampa, Florida, much of the downward movement of groundwater is along flow pathways that follow natural conduits in the limestone bedrock (Jagucki et al. 2009). High-volume pumping from the wells in this study pulled shallow, oxic, low-pH water, which is capable of dissolving arsenic-bearing minerals, into deeper, anoxic, high-pH parts of the aquifer system where arsenic can remain in solution. This accelerated mixing of dissimilar waters both mobilizes arsenic from the rocks and allows it to remain dissolved in the newly mixed water.

In many areas, dissolved oxygen is an easily measured indicator of the likelihood of elevated arsenic in the water. In the glacial aquifer system of the United States, geochemical conditions identified by the presence or absence of dissolved oxygen (greater than or less than 0.5 mg/L) are a good indicator of the likelihood of detecting (or not detecting) arsenic concentrations greater than the drinking-water standard of 10 μg/L (Warner and Ayotte 2014). Human activities can alter recharge or change groundwater flow in ways that change the geochemical conditions of the aquifer (Eberts et al. 2013). These changes drive chemical reactions between the groundwater and the solid aquifer material that release naturally occurring arsenic into the groundwater; as a result, concentrations of arsenic in water from wells increase. Similarly, Gotkowitz et al. (2004) found that drawdowns resulting from pumping created conditions that quickly mobilized naturally occurring mineralized arsenic in drinking water wells that historically had not been characterized as having arsenic contamination.
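
A minimal screening rule built on the 0.5 mg/L dissolved-oxygen threshold discussed above might look like the sketch below; it is a deliberate simplification, not the full redox classification of Warner and Ayotte (2014).

    def arsenic_screen(do_mg_per_l, threshold=0.5):
        """Flag the likelihood of elevated arsenic from dissolved oxygen,
        using the 0.5 mg/L threshold cited in the text."""
        if do_mg_per_l >= threshold:
            return "oxic: elevated arsenic less likely"
        return "anoxic: elevated arsenic more likely; test the well"

    for do in (4.2, 0.3):  # hypothetical field measurements, mg/L
        print(do, "->", arsenic_screen(do))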

Other human activities can cause local- and regional-scale changes in aquifer geochemical conditions and indirectly increase arsenic concentrations in groundwater and in water from public-supply wells. For example, groundwater in the vicinity of a landfill can have elevated concentrations of arsenic even though the source of the arsenic is not the contents of the landfill (Warner and Ayotte 2014). Rather, the source is geologic – part of the solid aquifer material (Delemos et al. 2006). This situation occurs because microorganisms degrade large amounts of organic carbon derived from the waste within the landfill, creating anoxic conditions in the groundwater. Arsenic is then released from the solid aquifer material under the newly anoxic conditions, increasing arsenic concentrations in groundwater downgradient from the landfill.

Water managers who understand how redox conditions are distributed within an aquifer system are in a position to anticipate which chemical constituents in the groundwater (for example, nitrate, arsenic, iron, manganese, and certain VOCs or pesticides) would (or would not) be expected to occur in water from a particular well. In addition, knowledge about redox conditions in an aquifer system can help water managers select the most suitable water-treatment methods for water from their wells. Redox conditions of groundwater also are important because the oxidation state of some elements affects their toxicity. For example, the oxidized form of chromium (hexavalent chromium, Cr6+) is more toxic than the reduced form (trivalent chromium, Cr3+) (Mills and Cobb 2015). Another way that human activities can affect concentrations of natural contaminants in groundwater is by altering groundwater flow so that waters with different chemical characteristics mix.

Human-induced alteration of groundwater flow patterns can affect concentrations of naturally occurring trace elements like arsenic. Adverse water-quality impacts attributed to human activities are commonly assumed to be related solely to the release of anthropogenic contaminants at the land surface; yet human activities, including various land uses, well drilling, and pumping rates and volumes, can indirectly degrade the quality of water in supply wells where naturally occurring trace elements are present in aquifer materials (Ayotte et al. 2011). This occurs because such activities cause subtle but significant geochemical changes associated with trace-element mobilization, as well as enhanced advective transport.

Sources of natural contaminants like arsenic are widely distributed and usually cannot be mitigated with source remediation. Treating for arsenic imposes an economic cost on large public-water utilities, but the human-health cost of not treating elevated arsenic in drinking water can be substantial. For example, public water suppliers using the glacial aquifer system in the United States were estimated to have spent 29 million dollars in 1999 to treat groundwater for a single issue of concern: elevated arsenic concentrations (Warner and Ayotte 2014). When the United States drinking water standard was lowered to 10 μg/L in 2006, the Illinois Environmental Protection Agency estimated that the initial cost of reducing arsenic concentrations below the MCL for 50 of the community water supplies with elevated arsenic concentrations in Illinois (Fig. 14.2) could reach a total of $40 million, with the highest costs falling on small community supplies (Warner and Ayotte 2014; Warner et al. 2003; Warner 2001). On a national or worldwide scale, this is a large water-quality cost to consider. Understanding the processes that mobilize arsenic in groundwater leads to more informed and integrated water-management decisions in areas where arsenic is a concern, which in turn can provide cost savings.

Fig. 14.2 Cost of treating drinking water for elevated arsenic (From Warner and Ayotte 2014)

3.2 Human-Introduced Contaminant (Abiotic): Agricultural Inputs

The pervasive use of organic and inorganic chemicals in agricultural areas has led to deterioration of groundwater and surface-water quality, which has become a concern for human consumption over recent decades. Water-quality deterioration by pesticides, for example, is well recognized for surface or drained water (Schiavon and Jacquin 1973; White et al. 1967) and for groundwater (Muir and Baker 1978). Since the concern was first identified, degradation of water quality by pesticides has become widespread in Europe (Capriel et al. 1985; Heydel et al. 1999; Réal et al. 2001, 2004; European Commission 2002, 2010). Many recent studies have reported pesticide concentrations above the European regulatory limits of 0.1 μg/L and 0.5 μg/L for surface water and groundwater, respectively. In one survey, the total concentration of pesticides was over 0.5 μg/L in 18 % of surface water samples and 3.8 % of groundwater samples analyzed (SOeS 2010).

With the expected conflicting goals of crop production and preservation of surface and groundwater quality, an integrated water resources management approach is needed. Integrated groundwater management, specifically, must embrace spatial and temporal uncertainty both in the source (due to changing human application rates and chemical properties) and in the aquifers through which that source is heterogeneously transported. Even defining the groundwater system of interest can be problematic because: (1) groundwatersheds can be difficult to delineate accurately and often do not align with the easily delineated overlying surface watershed (e.g., Hunt et al. 1998; Winter et al. 2003); (2) the amount of characterization effort is unlikely to be equal in space and time across an area of interest; and (3) the land surface encompasses different political boundaries, which may change the regulatory agency charged with managing the water resource. Integrated groundwater management must also address the fact that a groundwater system is buffered by an unsaturated zone separating the land surface, where pesticides are applied, from the aquifer being used. This buffering can affect the timing and amount of recharge to the water table – effects that change as the unsaturated zone thickness changes (e.g., Hunt et al. 2008). Delays and lags between an activity at the land surface, or a change in activity such as Best Management Practices, and its appearance in the groundwater resource can confound the simple cause-and-effect relations that underpin decision making.

For agricultural contaminants, integrated groundwater management is a function of: (1) changes in the protective areas specified at the land surface, which can determine and influence the contaminant source; and (2) the lags and delays between driving forces at the land surface and the response of the groundwater resource.

3.2.1 Changing Protective Areas at the Land Surface

Here we use two French groundwater systems as examples, the Vittel and Lons-le-Saunier catchments located near the French-Swiss border. The Vittel watershed is managed through voluntary agreements between diverse stakeholders and the private enterprise Nestlé Water (Benoît et al. 1997). The Vittel catchment has been the focus of a delineation process since 1925 (Barbier and Chia 2001). The catchment outline defined during negotiations with farmers and other stakeholders that began in 1987 was 4200 ha; in 1994, new hydrological work increased the catchment to 4500 ha. The Lons-le-Saunier catchment is managed by the municipality and belongs to a group of priority catchments called the “Grenelle Catchments” because they were designated through the Grenelle Initiative – a series of political meetings held in the fall of 2007 to make long-term decisions on sustainable development. The Lons-le-Saunier catchment also will likely have multiple delineations (Hellec et al. 2013; Barataud et al. 2014a).

Areas identified for protection within the delineated groundwater resource have also evolved over time as a result of increasing awareness of contamination, negotiations with farmers, and the evolution of the driving regulatory context (from Public Health Laws to the patrimonial management of water in the recent Environment Code). Today the managed area is divided into four zones (Fig. 14.3). The water wells zone (zone I), about 7 ha without any agricultural activity, was bought by the municipality in 1961 when use of the wells began. A proximity management zone with two subdivisions was then defined: contracts between the municipality and the farmers were first established for zone IIa (63 ha) in 1985, when nitrate and atrazine were detected in the wells; zone IIa was extended to zone IIb (220 ha) in 1989, and the contracts were renegotiated in 2006 when a new French regulatory requirement imposed a more formalized definition of protection perimeters. In 2006 the zone was again extended to include an additional 1500 ha (zone III). Currently, the protection zones comprise slightly less than 1800 ha, corresponding to about 30 % of the total catchment area. The total catchment was designated in 2009 as zone IV, defined using the hydrological report that resulted from the 2009 Grenelle Initiative.

Fig. 14.3 Example of successive delimitation of the protection perimeters

Concurrently, a 1992 French law of the Public Health Code required a mandatory “Declaration of Public Utility” for water resources, which included the delineation of water protection areas in which conservation easements can restrict agricultural practices. In practice, the declaration of public utility is commonly delayed. A recent study showed that only two-thirds of the French Grenelle priority catchments were in conformity with the required delineation of water protection areas (Barataud et al. 2014b), even though the 1992 law set a deadline of 5 years. Local stakeholders noted a high level of inter-stakeholder conflict caused by these regulatory requirements. Among the catchments that are in conformity with the 1992 law, there is a wide range of management unit sizes (Fig. 14.4).

Fig. 14.4 Variability of the protection area size within the priority Grenelle catchments

The size of the management unit can affect the execution of protective measures. Perhaps most obviously, developing mutually agreeable solutions with agricultural producers and other stakeholders in large catchment areas is more difficult because there are more entities to include, and the process is often hindered by simple organizational challenges such as scheduling meetings and agreeing on discussion frameworks. In large catchments, accounting for the wide-ranging interests and viewpoints often requires designating intermediaries who represent the whole stakeholder group to facilitate discussion. In small catchment areas, protective practices may be identified but often involve improved agricultural practices over only small parts of the catchment rather than major reforms of farming practice. Several studies have questioned the effectiveness of such partial measures for protecting and restoring target groundwater resources (Kunkel et al. 2010; Thieu et al. 2010; Lam et al. 2011; Posen et al. 2011).

3.2.2 Temporal Characteristics of Groundwater Management

Clearly the spatial area included in or excluded from a protective action will influence the associated groundwater quality. Temporal aspects can also affect integrated groundwater management. The temporal aspects covered here include the timing of human implementation of protective measures at the land surface and the time lags that result from the natural groundwater system itself.

An example of the human dimension is seen in the 2000 European Water Framework Directive (WFD), which introduced three new elements: preservation of water bodies as a whole (taking into account non-point pollution and not just point-source pollution), an imposed schedule for adoption, and objectives defining quantified results for ecological restoration of the environment. This Directive is complex and ambitious, but is considered a cornerstone of the European Union’s environmental policy (Bouleau and Richard 2009). France partially conformed to the directive 6 years after it was signed through its Law on Water and Aquatic Environments (Loi sur l’Eau et les Milieux Aquatiques [LEMA], 2006), in which the definition of non-point pollution appeared for the first time in French law. However, it was not until the Grenelle Initiative in 2009 and the designation of the Grenelle priority catchments that the notions of schedules, deadlines, and quantifiable results were written into French law. In the example of Lons-le-Saunier, 9 years were necessary to partially translate the WFD into application in one area of France, and the process was considered difficult by nearly all involved.

The human dimension also can result in unintended parallel protective actions. Faced with insufficient regulatory frameworks, many local water managers (municipalities, water utilities, private entities) outside the Grenelle priority catchments have set up, or are currently setting up, their own coordination with farmers to promote protective practices that enhance local water resource quality. Each protective practice imposes its own time frame for adoption, many of them extending far into the future, as can be seen by comparing the timelines for the Lons-le-Saunier and Vittel catchments mentioned above with two other European catchments (Fig. 14.5: La Plaine du Saulce near Auxerre, France, and one near Munich, Germany). The Munich catchment is notable as an early example of water-quality protection developed internally after the adoption of organic farming practices at nearly the catchment scale. The time from identification of the problem and subsequent negotiations to formal protective measures can range between 5 and 20 years. Clearly, lags in the adoption of protective measures will result in lags in obtaining the improved water quality that drove the adoption in the first place.

Fig. 14.5 Timelines of protection activities of catchments in four areas

Given the competing interests of the multiple stakeholders, problem-scoping activities and protective-action negotiations often require many months of discussion. For example, mobilizing stakeholders, identifying needs and priorities, negotiating between stakeholders with conflicting interests, defining a consensus, and constructing adequate institutional forms are all necessary stages that require different amounts of time and effort to execute. Even after protective measures are adopted, delays of several years are not uncommon while individual practices are coordinated and modified.

This temporal and spatial complexity of adopted protective measures must then filter through the natural system to where the groundwater resource of interest is assessed. Nitrate pollution management in the Plaine du Saulce catchment, discussed below, exemplifies how the natural system can delay the positive responses in the groundwater resource expected from management intervention to reduce contamination. The water catchment area (86 km2) is situated 10 km south of the city of Auxerre, in a rural agricultural landscape consisting of 45 farms (4026 ha). In the early 1990s, high nitrate concentrations were recorded in the Auxerre groundwater wells, which supply one third of the water requirements of the 60,000 inhabitants. In 1994, peaks reaching 70 mg NO3/L (exceeding the European drinking-water standard of 50 mg NO3/L) precipitated a lively debate on management strategies to deal with this nitrate contamination. Various managing entities were brought to bear over the following two decades, with the first contract with farmers signed in 2002, 8 years after the first sign of severe degradation. The management strategy, initially operating on a voluntary basis, did not result in a significant decrease in nitrate concentrations. As a result, regulation focusing on integrated agriculture was proposed in 2011, with adoption to become mandatory after a period of 3 years. The proposed regulatory framework caused major tensions between stakeholders, made worse by a lack of understanding regarding the absence of improved water quality after many years of joint protective actions.

During 2012, a scientific committee met twice to update management strategies to account for the natural delay between changes in agricultural practices at the land surface and measurable improvements in water quality. One primary conclusion was that groundwater travel times in the Sequanian limestone aquifer tapped by the wells are long relative to the human timeframes considered in management actions. Water-dating analysis using the anthropogenic tracers CFCs and SF6 estimated an aquifer residence time of about 25 years (±3 years) at the pumping wells (Anglade et al. 2013). As a result, nitrate levels observed at the wells reflect agricultural practices that occurred over two decades earlier. Analysis of agricultural nitrogen use also supported this assessment. Nitrogen inputs sharply increased in the 1960s before stabilizing in the 1990s (Fig. 14.6); point-to-point comparison between nitrogen surplus and measured nitrate concentration also suggested an approximately 25-year lag in response at the wells.

Fig. 14.6 (a) and (b) Evolution of harvested nitrogen and total nitrogen inputs (synthetic and organic fertilizers, atmospheric dry and wet deposition, biological nitrogen fixation) on arable land since 1950. (c) Calculated N surplus (total N inputs – harvested N). (d) Resulting nitrate concentration (infiltration flux of 240 mm/year) compared with recorded nitrate levels in the wells (red points)

This example underlines that when planning and implementing management actions, expected time lags need to be communicated to stakeholders and funding agencies in order to reduce short-term expectations that may impair long-term political and financial support. At this point in the Plaine du Saulce catchment, such knowledge has opened up new possibilities for organic farming, with recognition that changes are needed beyond the catchment borders.

Human-introduced pesticides also present challenges to integrated groundwater management. They can affect the quality of drinking water, especially groundwater close to the land surface (e.g., Schreiber et al. 1993). Many pesticides can persist for long periods in the environment – organochlorine insecticides, for example, were still detectable in surface waters 20 years after their use had been banned (Larson et al. 1997). Other studies have documented measurable pesticide concentrations years after the last application on the land surface (Baran et al. 2008; Buhler et al. 1993; Jarczyk 1987; Novak et al. 1998; Reiml et al. 1989). In France, atrazine was banned in 2003; yet analysis of the Grenelle priority catchments suggests that half of the protected catchments still had measurable atrazine or its degradation product (metabolite) in 2011 (Barataud et al. 2014b).

Site-scale studies have been used to help explain the persistence of pesticides in groundwater (Perrin-Ganier et al. 1996). In one case study by Schrack et al. (2009, 2012) from the Lorraine region of France, agricultural practices had been recorded annually for 40 years, including pesticide use during conventional crop management (date, product, application rate). Since September 2004, no pesticides have been used on the study fields as a result of conversion to organic farming practices. During the 30-year period prior to conversion, many pesticides were applied to crops, including the herbicides atrazine and 2,4-D (2,4-dichlorophenoxyacetic acid). Similar to the observations of Barataud et al. (2014b), measurable atrazine was documented more than 10 years after atrazine application ceased. 2,4-D concentrations were higher than the regulatory limits in two water samples from drain tiles (Fig. 14.7), despite a low detection frequency in point samples at the site. Thus it appears that even though the soil zone can attenuate and transform pesticides applied to crops, it can also act as a diffuse source of groundwater contamination that persists after application ceases. That is, no pesticides have been applied since conversion to organic farming in 2004, yet more than 5 years later pesticide concentrations could still exceed the regulatory limit (e.g., 2,4-D in drain water in Fig. 14.7).
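
A first-order decay calculation helps frame why such detections persist a decade or more after application ceases. The half-life below is an assumed, illustrative value; effective subsurface half-lives for atrazine vary over orders of magnitude and are not reported in the studies cited here.

    def remaining_fraction(years_since_application, half_life_years):
        """Fraction of an initial pesticide mass remaining under simple
        first-order decay."""
        return 0.5 ** (years_since_application / half_life_years)

    # Illustrative: with an assumed effective subsurface half-life of 3 years,
    # about 10 % of a stored residue would remain a decade after application.
    print(round(remaining_fraction(10, 3.0), 2))  # ~0.1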

Fig. 14.7 Pesticide concentrations in the experimental field after applications ceased (time since last application: 2,4-D, 17 years; Ioxynil, 13 years; Mecoprop, 21 years; Dinoseb, 15 years; Atrazine, 23 years; DEA, 16 years; AMPA-glyphosate, 17 years)

3.3 Human-Introduced Contaminant (Biological): Human Enteric Viruses

As shown in Tables 14.1 and 14.2, many types of human-source contaminants can influence groundwater management and make an otherwise acceptable groundwater supply unsuitable for an intended use. Agricultural contaminants, presented in Sect. 14.3.2, are a widely recognized example. Here we discuss a lesser-known human contaminant – human enteric viruses, a subset of the biological agents, called pathogens, that can affect drinking-water suitability. Although the importance of viruses as groundwater contaminants is primarily restricted to human drinking water, this example illustrates how recent advances in detection and quantification methodologies provide insights into the vulnerability of groundwater supplies not provided by the traditional understanding of water-quality contaminants. The material in this section is drawn from Borchardt et al. (2004, 2012) and Hunt et al. (2005, 2010, 2014); the interested reader is directed there for additional information.

Viruses are infectious particles of nucleic acid wrapped in protein, and sometimes an outer lipid layer, that replicate only within the cells of living hosts. In the environment they are metabolically inert. Virus spread is facilitated by concentrated sources and the very low exposure needed for infection: a gram of feces from an infected host can contain trillions of infectious viruses, yet only 1–10 viruses are required to infect a new host. The human-health implications of waterborne virus contamination are multi-fold. Recent studies have demonstrated the occurrence of human enteric viruses in domestic and municipal wells in the United States (Abbaszadegan et al. 2003; Borchardt et al. 2003; Fout et al. 2003; USEPA 2006). Of the 248 recorded drinking water outbreaks caused by untreated groundwater in the United States between 1971 and 2008, 32 (12.9 %) had a viral etiology. Moreover, in 135 outbreaks (54.4 %) the etiology was unidentified (Wallender et al. 2013) but believed to be largely viral, because in the early years of outbreak surveillance the technology to detect waterborne viruses was less widely available than it is today. Outbreaks related to virus-contaminated groundwater have also been documented in other parts of the world (Gallay et al. 2006; Beller et al. 1997), suggesting that hydrologic conditions suitable for virus survival and transport are widespread.

Viruses are much smaller (27–75 nm) than bacterial and protozoan pathogens and thus are more easily transported through pores that physically filter larger pathogens. Virus adsorption onto sediment grains is a primary removal mechanism, although the strength of the adsorptive forces depends on sediment and water chemistries (Borchardt 2006). These factors notwithstanding, viruses may still be transported appreciable distances, even into confined aquifers, at travel rates relevant for human-health concern (e.g., Borchardt et al. 2007; Bradbury et al. 2013). As a result, the United States Environmental Protection Agency has listed several viruses on the third drinking water Contaminant Candidate List, emphasizing that waterborne viruses are a research priority (http://www.epa.gov/ogwdw000/ccl/ccl3.html).

There is also significant public and regulatory interest in understanding the vulnerability of water-supply wells to contamination by human enteric viruses (e.g., http://www.epa.gov/safewater/ccl/index.html; Unregulated Contaminant Monitoring Rule 3 – USEPA 2011). However, assessing well vulnerability to infectious pathogens is different because it requires knowledge of very fast (<3-year) times of travel – a timeframe not characterized by common groundwater age-dating methods (Hunt et al. 2005, 2014). Therefore, a different conceptualization is needed to assess well vulnerability to pathogens.

Plume center-of-mass approaches to contaminant transport typically define risk from non-pathogen contaminants such as those listed in Tables 14.1 and 14.2; such approaches reflect the bulk properties of the aquifer, which control transport to a drinking well, and risk is calculated using the long-term exposure relevant for slowly moving plumes. Pathogen transport to groundwater-supply wells is different because adverse health effects can occur only while a pathogen is still infectious; viruses are reported to remain infectious in groundwater for less than 3 years (Seitz et al. 2011). Moreover, unlike dissolved contaminants, pathogens are particles and tend to follow fast preferential flow pathways with minimal matrix diffusion (McKay et al. 1993; DeBorde et al. 1999). Thus, rather than basing well vulnerability assessment on decade-scale water movement, it is the fast-pathway properties of the aquifer that are most important for understanding vulnerability to pathogens and the risk for disease transmission.

For many groundwater systems, a 1–3 year travel time might be considered of little importance because the distances traveled in many unstressed groundwater systems in even 3 years are short. But this is not true for all groundwater systems; large distances can be traveled in short timeframes in karst and fractured-rock aquifers (e.g., Borchardt et al. 2011). Even in porous-media aquifers, high-capacity water-supply wells significantly depressurize local groundwater systems and create large hydraulic gradients. These gradients, in turn, result in faster local groundwater velocities than occur in natural flow systems. This could explain, in part, why virus contamination frequency tends to be greater in high-capacity wells than in private domestic wells (Borchardt et al. 2003). More surprising, in the confined aquifer supplying drinking water to Madison, Wisconsin, USA, there are pathways sufficiently fast that virus transport to deep supply wells cased through the aquitard can occur within several weeks (Bradbury et al. 2013).
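
The effect of pumping-steepened gradients on travel time can be illustrated with a simple advective calculation using the average linear velocity v = Ki/n_e; all parameter values below are assumptions chosen for illustration, not data from the cited studies.

    def seepage_velocity(k_m_per_day, gradient, effective_porosity):
        """Average linear groundwater velocity, v = K * i / n_e (m/day)."""
        return k_m_per_day * gradient / effective_porosity

    # Hypothetical sand-and-gravel aquifer: K = 30 m/day, n_e = 0.25.
    for label, i in (("natural gradient", 0.001), ("near pumping well", 0.02)):
        v = seepage_velocity(30.0, i, 0.25)
        print(f"{label}: {v:.2f} m/day, {100.0 / v:.0f} days per 100 m")

Under the steepened gradient, 100 m is traversed in weeks rather than years, comfortably within the roughly 3-year window during which viruses can remain infectious.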

Viruses can only be a contaminant of concern, however, if there is an infectious human fecal source. One common source is leaking sanitary sewers (Hunt et al. 2010). Reported estimates of sanitary sewer leakage, or “exfiltration”, range from 1 % to 56 % of the dry weather flow (Rutsch et al. 2008). In the United States, exfiltration has been estimated at 30 % of system flow as a result of infrastructure deterioration, and in local areas sanitary sewer leakage as high as 50 % of system flow has been reported (USEPA 1989). The exfiltration rate for a European sanitary sewer has been reported to be on the order of 1 l/m of sewer line per day (Lerner and Halliday 1994). Exfiltrated volumes for large municipalities are thought to reach tens of thousands of cubic meters per day (millions of gallons per day), exceeding the capacity of the sediments to filter, absorb, and immobilize the contaminants carried therein (Amick and Burgess 2000). Even though more research is needed to make general system predictions (Rutsch et al. 2008; Tafuri and Selvakumar 2002), local sanitary sewers have been related to drinking-water associated outbreaks of gastroenteritis (e.g., see Amick and Burgess 2000; Bishop et al. 1998). Older, poorly maintained systems are thought to be more susceptible to exfiltration, as are systems that include pressurized sewage lift stations (Decker 1994a, b). For example, of the wells sampled by Borchardt et al. (2004), the highest number of positive virus samples was obtained from a drinking water well near a pressurized lift station. When the water table is below the utility infrastructure, exfiltrated sewage is often concentrated and transported in the trenches surrounding sanitary sewers, especially during conditions of rainfall-induced infiltration, such that it can threaten drinking-water supplies (Tafuri and Selvakumar 2002). Sanitary sewer infrastructure is often located near municipal wellheads and carries a high viral load during periods of infection in a community (e.g., Sedmak et al. 2003; Bradbury et al. 2013). From an IGM perspective, this presents management options: a groundwater-supplied municipality could work to minimize sewer contamination of its urban aquifer by integrating its wastewater and drinking-water management teams, making sure each team is aware of the other’s activities that might affect the aquifer.
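
The reported per-length leakage rate gives a quick bound on exfiltration volume for a hypothetical network; the sewer length below is an assumption, not a figure from the cited studies.

    def exfiltration_m3_per_day(sewer_length_km, litres_per_m_day=1.0):
        """Daily exfiltration volume, using the ~1 L per metre of sewer per
        day reported by Lerner and Halliday (1994)."""
        return sewer_length_km * 1000.0 * litres_per_m_day / 1000.0  # L -> m3

    # Hypothetical mid-sized city with 1,500 km of sanitary sewer:
    print(exfiltration_m3_per_day(1500.0))  # 1500.0 m3/day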

From a contaminant monitoring perspective, total coliform bacteria and E. coli – standard microbiological indicators of water sanitary quality – are rarely correlated with viruses (Wu et al. 2011), likely because of differences in their transport, filtering, and survival characteristics in an aquifer. Even with direct analysis, virus occurrence at the wellhead is commonly temporally sporadic. Therefore, assessing drinking-well vulnerability can involve multiple samplings, perhaps more than might be used for traditional contaminant vulnerability assessments. Fortunately, water samples for viruses can now be collected inexpensively and routinely (Lambertini et al. 2008; Gibbons et al. 2010; Mull and Hill 2012), which makes collecting larger sample numbers affordable. In the early 2000s, results from viral analysis by conventional polymerase chain reaction (PCR) usually included only virus identification and presence/absence; virus quantification could only be accomplished by culture methods, which are laborious, expensive, and restricted to only a few virus groups. Now, with the advancement of real-time quantitative PCR (qPCR), the quantities of many virus types can be measured reliably with high throughput, low cost, and less labor. Detailed genetic information on virus subtypes can also be obtained with the high-throughput sequencers now widely available. Therefore, from a practical standpoint, this newly developed technology has created a capability to assess well vulnerability that was not available to groundwater managers even 15 years ago.

These available technologies have also allowed the advent of a new concept in groundwater management: using viruses as tracers of young groundwater (Hunt et al. 2014). Because the maximum survival time of viruses in groundwater is approximately 3 years, a positive virus signal in mixed-age groundwater, in effect, zeros out the contribution of older water and indicates that young water must be present. Moreover, because different virus types infect and then disappear from the host population over time as the numbers of susceptible and resistant hosts change, a time-varying signal is created that can be tracked in the environment. When fecal waste from an infected population is released to the environment, whether from people, livestock, or wildlife, the combination of virus identities and quantities in the waste becomes a “virus snapshot” for a specific point in time. Measuring this snapshot at suspected virus sources and waiting for it to appear at downstream receptors, such as a supply well, can be used to infer time-of-travel to the well; wells with very young water are typically considered more susceptible to all water-quality contaminants. Unlike traditional well vulnerability assessments, which are relevant for contaminants carried to the well by “high-yield slow pathways” in the aquifer, viruses as tracers give information on the less-studied leading edge and early arrival of a pathogen contaminant, which is driven by preferential flowpaths that provide “low-yield fast pathways” to the well (Hunt et al. 2010).

In areas where groundwater supplies for drinking water are not disinfected, the economic cost of virus contamination can be considerable. In an epidemiological study of 14 groundwater-supplied communities in Wisconsin that did not practice disinfection, Borchardt et al. (2012) determined that 6–22 % of the acute gastrointestinal illnesses (AGI) in these communities resulted from their virus-contaminated drinking water. The economic cost of these groundwater-borne illnesses can be roughly estimated from US data on healthcare utilization and costs for AGI in young children (Cortes et al. 2009), extending the assumption that these data apply to the rest of the population. Such an assumption is likely justified for American adults 18–54 years old because in this age group the prevalence and severity of gastrointestinal illness are not much lower than those for young children (Jones et al. 2006). From Cortes et al. (2009), for children less than 5 years old the national hospitalization rate for AGI is 0.5 %, the emergency room visit rate is 1.8 %, and the outpatient visit rate is 13.3 %; the United States median payments for AGI treatment by hospitalization, emergency room visit, and outpatient visit are $3135, $332, and $90 (2009 USD), respectively. The number of people drinking non-disinfected municipal groundwater in Wisconsin is about 100,000. If the baseline AGI rate in Wisconsin is 1 episode/person-year, about the national average, and using the midpoint of 14 % of AGI attributable to virus-contaminated groundwater, the healthcare costs in Wisconsin are approximately $500,000 per year. This estimate includes only direct payments to healthcare providers; it does not include costs to the economy from work lost by the ill person or their caregiver, nor does it include the cost of death. It also does not consider the most disease-vulnerable populations, the immunocompromised and the elderly. Moreover, the estimate can be considered conservatively low because it does not account for the legal, social, and economic costs if virus-contaminated groundwater resulted in a disease outbreak; the AGI reported by Borchardt et al. (2012) measured only sporadic, non-outbreak illnesses.
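
The healthcare-cost estimate above can be reproduced directly from the figures quoted in this paragraph; the sketch below combines the Cortes et al. (2009) utilization rates and median payments with the stated population and attributable fraction, and returns roughly $470,000 per year, consistent with the approximately $500,000 cited.

    population = 100_000        # people on non-disinfected municipal groundwater
    agi_per_person_year = 1.0   # baseline AGI episodes per person-year
    attributable = 0.14         # midpoint of the 6-22 % attributable range

    # Utilization rates and median payments (2009 USD) from Cortes et al. (2009):
    care = {
        "hospitalization": (0.005, 3135.0),
        "emergency room":  (0.018,  332.0),
        "outpatient":      (0.133,   90.0),
    }

    episodes = population * agi_per_person_year * attributable
    cost = episodes * sum(rate * payment for rate, payment in care.values())
    print(f"{episodes:,.0f} episodes -> ${cost:,.0f} per year")  # ~$470,000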

Studies by Borchardt et al. (2012) and Lambertini et al. (2011, 2012) were part of a large United States government funded epidemiological study (the Wisconsin Water And Health Trial for Enteric Risks, or WAHTER Study) designed to measure the level of illness in communities that rely on non-disinfected groundwater as their source for drinking water. Concurrent with the study, the Wisconsin Department of Natural Resources (DNR), the state agency with delegated authority for regulating drinking water quality, was preparing to implement the United States Federal Groundwater Rule. As it became clear that the 14 Wisconsin communities enrolled in the WAHTER Study had significant virus contamination of their groundwater supplies, the DNR decided to incorporate into its statewide implementation plan a change to the State drinking water code requiring disinfection for all groundwater-source municipal drinking water systems in the state. The code change was approved by the DNR oversight board. However, after a statewide election in 2010, the State legislature reversed the DNR’s decision and passed a bill prohibiting the DNR from requiring drinking water disinfection (http://docs.legis.wisconsin.gov/2011/proposals/ab23, accessed August 12, 2014). The bill was signed into law in 2011 (http://docs.legis.wisconsin.gov/2011/related/acts/19, accessed August 12, 2014). This statewide action was taken despite expert testimony describing the WAHTER Study results and the associated estimated costs to the state’s citizens.

In an IGM context, several factors associated with human enteric viruses may have influenced the decision-making process. A new contaminant (viruses) and a new technology (qPCR) were unfamiliar to many drinking water utilities and policymakers. The traditional pathogen indicators, the total coliform and E. coli tests, were viewed as the “gold standard” for sanitary quality; if these indicators were negative, the water was considered acceptable. Positive tests for traditional indicators, when they occurred, were interpreted as a distribution-system problem rather than a quality problem with the groundwater source itself. Such assumptions were deemed reasonable because non-disinfecting communities were not required by State code to collect microbiological samples from their drinking-water production wells, and a common perception is that groundwater must be clean because it is filtered by soil and aquifer material and thus can be considered microbiologically pure. Waterborne disease may also have been viewed as occurring only in the disease outbreaks reported in news headlines; the concept of low-level but measurable sporadic disease transmission was unfamiliar. Lastly, the actions were consistent with a public view that State government should not supersede local control of drinking-water regulation. A second, independent study has since corroborated the WAHTER Study findings, showing that heavy precipitation events result in more children seeking medical treatment for AGI in groundwater-supplied Wisconsin communities that do not practice disinfection than in communities that do (Uejio et al. 2014). This study prompted a bill to reverse the disinfection prohibition, but it did not move forward (Wisconsin Assembly Bill 545, https://docs.legis.wisconsin.gov/2013/proposals/ab545, accessed August 12, 2014).

4 Implications for IGM

Groundwater is under increasing threat from over-development, over-extraction, and pollution, driven by increasing population pressure, rising living standards, industrialization, and a lack of management matched to demands and use patterns (see Chap. 2 for more detail). This is a global trend, although there are regional differences. The availability of groundwater of adequate quality to meet ecological and human-health needs is often in direct and immediate conflict with livelihood strategies. Competing demands for groundwater quantity and quality can result in fragmented management policies. These competing needs make it difficult for researchers and managers to communicate the complexity of groundwater-quality changes under changing demands and uses. There is a strong need to close the gap between perceptions of groundwater quality and scientific understanding.

The latest technologies and approaches in groundwater modeling, laboratory analytical methods, engineering design, and economic modeling can all inform decision-making in an IGM framework, but subjective societal perceptions of water quality and societal behavior can be equally important in some circumstances. In the context of an IGM framework, water quality issues can require regulators to devote appreciable resources to managing societal perceptions and behavior – resources beyond those needed for the more easily recognized components of IGM such as monitoring, engineering, and risk assessment. Moreover, additional dimensions of acceptable water quality can appear as new technology becomes available, which in turn can become important forcing functions on IGM activities. In addition to changing technology, increasing the sampling frequency used in traditional groundwater monitoring assessments can influence IGM activities. For example, infrequent sampling (often once a year) and long-term exposure risk-assessment approaches may not adequately represent the dynamism of groundwater quality – for either pathogen or non-pathogen concerns (e.g., Hunt et al. 2010). New advances in continuous water-quality monitoring, such as of specific conductance and other parameters, show that changes can occur within hours or days of a precipitation event, depending on the system. On the other hand, the time lag between actions at the land surface and their expression in the groundwater system must also be accounted for. Clearly identifying and characterizing potential water-quality drivers is the first step toward a successful IGM framework. From such an understanding, associated risks can be estimated, which in turn can form the basis of the societal discussion of costs and benefits that underpins all IGM activities that follow.