Abstract
The state of the groundwater inverse problem is synthesized. Emphasis is placed on aquifer characterization, where modelers have to deal with conceptual model uncertainty (notably spatial and temporal variability), scale dependence, many types of unknown parameters (transmissivity, recharge, boundary conditions, etc.), nonlinearity, and often low sensitivity of state variables (typically heads and concentrations) to aquifer properties. Because of these difficulties, calibration cannot be separated from the modeling process, as is sometimes done in other fields. Instead, it should be viewed as one step in the process of understanding aquifer behavior. In fact, it is shown that current parameter estimation methods do not differ from each other in essence, though they may differ in computational details. It is argued that there is ample room for improvement in groundwater inversion: development of user-friendly codes, accommodation of variability through geostatistics, incorporation of geological information and different types of data (temperature, occurrence and concentration of isotopes, age, etc.), proper accounting of uncertainty, etc. Even so, with existing codes, automatic calibration greatly facilitates the task of modeling. Therefore, it is contended that its use should become standard practice.
Keywords
Inverse problem · Aquifer · Groundwater · Modeling · Parameter estimation
Introduction
In broad terms, inverse modeling refers to the process of gathering information about the model from measurements of what is being modeled. This includes two related concepts: model identification and parameter estimation. The latter is used here as synonymous with calibration. Model identification refers to methods for finding the nature (features) of the model, such as the governing equations, boundary conditions, time regime, or heterogeneity patterns. Parameter estimation, instead, is restricted to assigning values to the properties characterizing those features.

Cost. Groundwater models are relatively expensive to run. They require building large systems of equations (needed for an accurate and realistic picture of the system) that must be solved for each model run. In addition, advances in computing tend to be invested in improving model quality rather than in reducing execution times.

Time dependence. State variables such as heads and concentrations are time dependent. Many flow problems are essentially at steady state. Yet, even in those cases, information may be contained in the temporal fluctuations of heads, which warrants transient modeling.

Heterogeneity. Values of hydraulic conductivity, K, often the dominant hydraulic property, may vary over several orders of magnitude. The same can be said of transmissivity, which is essentially equivalent to K in 2D models (hereinafter, transmissivity and hydraulic conductivity are used interchangeably).

Different types of parameters. While efforts are often concentrated on transmissivity, other parameters (recharge, boundary fluxes, etc.) may be equally uncertain and relevant.

Scale dependence. Parameters measured in the field often represent only a small portion of the aquifer. As a result, they are qualitatively and quantitatively different from what is needed in the model.

Model uncertainty. Geometry of the aquifer and heterogeneity patterns are controlled by the geology, which is never known accurately.

Low sensitivity. Depending on the problem, state variables may display low sensitivity to model parameters (i.e. their information content is low). In particular, heads (the most frequent and sometimes unique type of measurement) sometimes contain little information about hydraulic conductivity.
Because of the above features, aquifer model predictions are highly uncertain. Moreover, parameter estimation cannot be formulated as clearly as in other fields. That is possible for pumping test interpretation, where one can indeed take heads straight from the test, enter them into a code and derive parameter values after a moderately careful qualitative model analysis. In aquifers, one is forced to cast inversion as one step of the modeling process. Many of these features are shared by groundwater’s sister science, surface water hydrology, where many of the same issues have been addressed (Beven 1993; Gupta et al. 1999).
Another consequence of the singularities of groundwater inversion is its relative isolation. Many inversion methods have been developed independently from those in other fields. The earliest methods were based on simply substituting heads, assumed to be known, into the flow equation, which leads to a first order partial differential equation in transmissivity (Stallman 1956). This method, termed “direct” by Neuman (1973), is relatively simple to understand and was widely used following Nelson (1960, 1961). In fact, it allowed deriving transmissivities from flow nets based on head measurements (Bennet and Meyer 1952). Unfortunately, this approach has several drawbacks. First, it requires knowing heads (and recharge, storage coefficient and boundary conditions) over the whole domain in space and time. This can only be achieved through interpolation, which introduces smoothing and errors, so that the estimated transmissivity values become somewhat artificial. Second, it is unstable (small errors in heads cause large errors in transmissivity). To overcome the first problem, most recent inversion methods use what Neuman (1973) termed the indirect approach, which consists of acknowledging that measurements contain errors and finding the hydraulic properties that minimize these errors. That is, parameters are found by minimizing an objective function, which may become a huge computational task. To overcome instabilities, a number of approaches can be taken: adding a regularization term to the objective function to dampen unwarranted oscillations, considering additional types of data, reducing the number of parameters to be estimated without losing the ability to reproduce spatial variability, etc. These issues (computation, stabilization, different types of data, spatial variability) have remained the focus of much research. Reviews of them are presented by Yeh (1986), Carrera (1987), Kool et al. (1987), McLaughlin and Townley (1996) and de Marsily et al. (1999).
Therefore, a thorough review of the state of the art of aquifer inversion would be superfluous here.
The conceptual model: knowledge and data
Knowledge and data are the basis of the conceptual model, which represents what is actually going to be modeled. Knowledge and data have been defined in many different ways. A frequently encountered view is that “knowledge” consists of beliefs about reality. In science and philosophy, the concern is whether beliefs are justifiable and true; in this view, only beliefs that are justifiable and true are deemed knowledge. The term “data” is used here to mean all pieces of information about the aquifer, not only hard numbers from field measurements.
As shown in Fig. 1, groundwater models are based on generic scientific knowledge of the behavior of groundwater and on site-specific data. This leads to a site-specific understanding, which can be expressed in terms of mathematical equations. These equations are often manipulated to obtain a set of discretized equations that can be solved numerically.
Classification of the types of data commonly used in hydrogeology
Type                       Example                                    Use
--------------------------------------------------------------------------------------------
Knowledge                  Darcy’s law validity, dimensionality,      Governing equations
                           boundary conditions, etc.
Soft data
  Qualitative              Geology (heterogeneity)                    Parameterization
  Quantitative (indirect)  Geophysics, remote sensing,                Parameterization
                           grain-size analyses
                           Measurement error distribution             Calibration
Hard data
  Quantitative             Pumping tests, tracer tests                Prior information
                           Heads, concentrations of chemical          Calibration, validation
                           constituents, other measurements
Quantitative data are acquired through measurements, which contain errors. Therefore, a probability density function (hereinafter referred to as pdf) should be used instead of single values to define measurements. A Gaussian distribution is often used, both because it has been proven to suitably describe many variables (e.g. log-transmissivity, hereinafter referred to as logT) and because it can be fully described by the mean and the standard deviation. Given the ease of use of this distribution, data that do not follow a Gaussian pdf are often transformed so that the resulting pdf is close to Gaussian (e.g. transmissivity). This is a frequent source of errors, because the fact that point values of logT follow a Gaussian distribution does not imply that the spatial distribution is multi-Gaussian (Gómez-Hernández and Wen 1998). In fact, the opposite can often be argued (Meier et al. 1999). Yet, the multi-Gaussian character of logT is the most frequent assumption in hydrogeology. Geostatistical software may be used to generate alternative spatial distributions, for example using Markov-chain methods (Weissmann et al. 1999).
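As a small illustration of why the log transform is applied to transmissivity (a sketch with invented numbers; note it only concerns the marginal distribution of point values, not the multi-Gaussianity of the spatial field discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point measurements of transmissivity (m2/day), spanning
# several orders of magnitude, as is typical of aquifer data.
T = rng.lognormal(mean=-2.0, sigma=1.5, size=500)

logT = np.log10(T)  # work with log-transmissivity (logT)

def skewness(x):
    # Sample skewness: zero for a symmetric (e.g. Gaussian) distribution.
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

# Raw T is strongly skewed; logT is approximately symmetric, which is why
# the Gaussian model is applied to logT rather than to T itself.
print(f"skewness of T:    {skewness(T):.2f}")
print(f"skewness of logT: {skewness(logT):.2f}")
```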
Transformations of raw measurements of state variables may also be appropriate to increase the sensitivity of data to parameters. This topic has received much attention in surface water hydrology (Meixner et al. 1999). A groundwater example of this trend is to use the total observed mass at an outflow point, instead of individual concentration values, when modeling a breakthrough curve. Peak concentration and peak time can also be used (e.g. Woodbury and Rubin 2000). The same can be achieved by using different types of data or by employing an adequate weighting scheme (see, e.g., Wagner and Gorelick 1987; Anderman and Hill 1999). Concerning the use of different data types, the information content of data is problem dependent, so it is difficult to give general rules. Including flow rate data in the calibration improves sensitivity to transmissivity (Larocque et al. 1999). Temperature data can be informative about vertical fluxes (Woodbury et al. 1987). Environmental isotopes and age data may be informative about regional flow trends (Varni and Carrera 1998). Chen et al. (2003) used subsidence rates to help in the estimation of the permeability of aquitards. Streamflow gains and losses were used by Hill (1992) to help in model calibration. In short, a wide range of state variables can be used.

Virtually all types of data on both state variables and model parameters can be accommodated, except data that are strictly qualitative (such as geological descriptions).

Statistical descriptions of these data are needed to properly weigh them in the inversion.
The meaning of p in Eqs. (1), (2) and (3) has been purposely left undefined. This is the subject of the next section.
What is to be estimated?
The coefficients in the governing equations (see Fig. 1) are the hydraulic properties, whose value is to be estimated. Regardless of their scalar or tensorial character, all of them vary in space and some of them may vary in time as well. To obtain the value of the hydraulic properties at every point of a continuous model domain is impossible. Therefore, a discrete representation is required. The process of expressing hydraulic properties in terms of a hopefully small number of model parameters (unknowns to be found during the inversion process) is termed parameterization.
Zonation
Parameterization is accomplished by partitioning the domain into a set of subdomains (zones). Typically, each component of the vector of model parameters, p_{ i }, is associated with one subdomain. Within each zone, properties q(x) are assumed constant or prescribed to vary in a predefined manner, and the value of the interpolation function in Eq. (4) is zero if point x falls outside the zone being considered.
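Eq. (4) itself did not survive in this excerpt. Based on the surrounding text (a prescribed term q_0, interpolation functions α_i, and model parameters p_i), a plausible reconstruction is the linear expansion

```latex
\[
q(\mathbf{x},t) \;=\; q_0(\mathbf{x},t) \;+\; \sum_{i=1}^{m} \alpha_i(\mathbf{x},t)\, p_i
\]
```

where, under zonation, α_i equals 1 inside zone i and 0 outside it. This is a reconstruction from context, not the paper’s exact equation.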
The main advantage of zonation is its generality and flexibility to accommodate the geological information (e.g., zones may represent geological units or portions of them). It should be stressed, however, that zonation does not preclude the use of geostatistics. In fact, Clifton and Neuman (1982) used zonation coupled to kriging.
While the zones need not be large, the original spirit of zonation is to reduce the dimension of p, while ensuring geological consistency (Stallman 1956). In fact, Carrera et al. (1993b) argue that, when available, geological information about parameter variability is so compelling (in the sense that it can be included deterministically) that it overcomes the advantages of conventional geostatistics. Zonation is sometimes criticized as rigid. Hence, it is not surprising that efforts have been made to optimize the geometry of zones. A particularly appealing one is “geomorphing” where the geometry of zones is derived during the calibration process (Roggero and Hu 1998).
Point estimation
Point estimation can be viewed as the limiting case of zonation, as the size of the zones tends to zero (actually, to the element or cell size). The formalism of Eq. (4) can still be used (e.g. Meier et al. 2001). However, the dimension of the parameter space becomes so large that it may be more appropriate to seek alternative formulations (Kitanidis and Vomvoris 1983; Dagan 1985; McLaughlin and Townley 1996). These will be outlined in the next section.
Heuristic interpolation functions
The interpolation functions α_{ i } in Eq. (4) can be chosen arbitrarily. Different types have been chosen, including finite elements (Yeh and Yoon 1981), Ridge functions (Mantoglou 2003), or others. These approaches offer significant flexibility, but it is not clear how to define prior information on model parameters.
Pilot points
Conditional simulation
Methods described so far are implicitly based on seeking some sort of optimal estimation. As will be seen in subsequent sections, it is sometimes preferable to seek equally likely simulations of q(x, t). Neuman and Wierenga (2003) present a comprehensive strategy in the context of simulation. Alternatives that have been used include the self-calibration approach of Sahuquillo et al. (1992), Gómez-Hernández et al. (1997) and Capilla et al. (1998), in which q_{0}(x) represents a simulation of the logT field conditioned by all (soft and hard) available information and the terms α_{ i }(x, t)p_{ i } represent perturbations imposed by a set of master points. Another possibility is to express q(x) as a linear combination of random functions (Roggero and Hu 1998; Hu 2002), where the α_{ i }’s represent conditional simulations and the model parameters are simply weights of those simulations.
In summary, first, a representation of the variability of the hydraulic properties is necessary and, second, the most common parameterization schemes can be written using Eq. (4), in spite of their quite different appearance. The question now is how to find p.
How to estimate the model parameters, p
The estimation problem deals with the concept of the “best” set of model parameters. A fair question is what “best” exactly means. There is no perfect answer to this question. In fact, there may not be a single set of model parameters leading to a “good” representation of reality, which motivates conditional simulation methods (Gómez-Hernández et al. 1997) that aim at finding a collection of equally probable parameter sets. What follows is a description of the most widely used methods.
Optimization methods
This equation was originally based on matching and stability arguments. The term F_{ p } can be viewed as a regularization term in the sense of Tikhonov (1963). In hydrology, Emselem and de Marsily (1971) used it to dampen oscillations. Eq. (9) can also be derived by statistical means, in which case C_{ h } is viewed as the covariance matrix of measurement errors. Gavalas et al. (1976) derived it by maximizing the posterior pdf of the model parameters (maximum a posteriori, MAP); the resulting objective function equals Eq. (9) with λ equal to 1. Carrera and Neuman (1986a) derived Eq. (9) by maximizing the likelihood of the parameters given the data (maximum likelihood estimation, MLE). The advantage of formulating Eq. (9) in a statistical framework is that it yields ways to estimate not only the parameters controlling aquifer properties but also those controlling their uncertainty (variances, variograms and the like). As it turns out, the latter are no less important than the former (Zimmerman et al. 1998).
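Eq. (9) is not reproduced in this excerpt. From the description (an error term F_h plus a regularization term F_p weighted by λ, with C_h the covariance matrix of measurement errors), its conventional form is

```latex
\[
F \;=\; F_h + \lambda F_p
 \;=\; (\mathbf{h}-\mathbf{h}^{*})^{T}\,\mathbf{C}_h^{-1}\,(\mathbf{h}-\mathbf{h}^{*})
 \;+\; \lambda\,(\mathbf{p}-\mathbf{p}^{*})^{T}\,\mathbf{C}_p^{-1}\,(\mathbf{p}-\mathbf{p}^{*})
\]
```

where h* denotes measured heads, p* prior parameter estimates, and C_p the covariance matrix of prior parameter errors. This is a reconstruction from the surrounding description, not necessarily the paper’s exact notation.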
The minimization of the objective function is an arduous task because the relation between state variables and parameters is usually nonlinear. This is why formulations of the inverse problem can be classified as linear and nonlinear.
Linear methods
By extending matrices Q_{ph} and \(\mathbf{Q}_{\text{hh}}^{-1}\) in a way similar to what was done in Eq. (2), Eq. (14) can also be viewed as cokriging (Kitanidis and Vomvoris 1983; Hoeksema and Kitanidis 1984), which results from minimizing the variance of estimated parameters. Possibly, the most important point is that Eq. (14) and its kriging variants do not rely explicitly on any geometrical parameterization scheme. Once Q_{hh} has been found, one can theoretically estimate p at any point. In fact, for Kitanidis and Vomvoris (1983), the only parameters to be estimated are the ones characterizing the statistical properties of the logT field. In general, the covariance of measurement errors is also needed.
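Eq. (14) itself is likewise missing from this excerpt. The linear (cokriging-type) estimator described in the text conventionally takes the form

```latex
\[
\hat{\mathbf{p}} \;=\; E[\mathbf{p}] \;+\; \mathbf{Q}_{ph}\,\mathbf{Q}_{hh}^{-1}\left(\mathbf{h}_{\mathrm{obs}} - E[\mathbf{h}]\right)
\]
```

where Q_ph is the cross-covariance between parameters and heads and Q_hh is the covariance of heads (augmented with the covariance of measurement errors). This is a reconstruction from the surrounding description, not the paper’s exact equation.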
Nonlinear methods
Summary of estimation methods. Notice that they are all quite similar despite their apparent differences. Linear methods look different, but Carrera and Glorioso (1991) showed that they can be viewed as the first iteration of nonlinear methods. All methods are explicitly stated in the section, “How to estimate the model parameters, p”
Method                               Type                 Estimator            Algorithm (Eq. no.)  Reference
-------------------------------------------------------------------------------------------------------------
Least squares                        Nonlinear            F_h
Maximum likelihood estimation (MLE)  Nonlinear            −2 ln P(p|h)                              Carrera and Neuman (1986a)
Conditional expectation              Linear or nonlinear  E(p|h)
Cokriging                            Linear or nonlinear  Estimation variance                       Kitanidis and Vomvoris (1983), Carrera et al. (1993a, b)
Maximum a posteriori (MAP)           Nonlinear            P(p|h)               (9) with λ=1         Gavalas et al. (1976)
Differences among methods are restricted to the characterization of statistical parameters and to the evaluation of uncertainty. Without discussing how different methods deal with these issues, the next section is devoted to arguing that indeed they are important.
How codes work
1. Initialization: read input data, set iteration counter i=0, initialize parameters, p^{0}.
2. Solve the simulation problem, h(p^{ i }); compute the objective function F^{ i } and possibly its gradient (assuming it is continuously differentiable), g^{ i }, and the Jacobian matrix, J_{hp}.
3. Compute an updating vector, d, possibly using information from previous iterations, as well as g^{ i } and J_{hp}.
4. Update parameters, \(\mathbf{p}^{i + 1} = \mathbf{p}^i + \mathbf{d}\).
5. If convergence has been reached, stop. Otherwise, set i=i+1 and return to step 2.
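The steps above can be sketched in a few lines of code. This is a toy illustration only: the “forward model”, its two parameters, and the synthetic data are invented, and a simple step-halving safeguard stands in for the line searches real codes use.

```python
import numpy as np

x_obs = np.array([1.0, 2.0, 3.0, 4.0])    # observation locations (invented)

def h(p):
    # Stand-in for the simulation problem h(p), with two hypothetical parameters.
    a, k = p
    return a * np.exp(-k * x_obs)

h_meas = h(np.array([2.0, 0.5]))           # synthetic, error-free "measurements"

def F(p):
    # Least-squares objective function.
    r = h(p) - h_meas
    return r @ r

def jacobian(p, eps=1e-6):
    # J_hp by central finite differences (see "Sensitivity equations").
    J = np.empty((x_obs.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (h(p + dp) - h(p - dp)) / (2 * eps)
    return J

p = np.array([1.0, 1.0])                   # step 1: initial parameters p^0
for i in range(100):
    res = h(p) - h_meas                    # step 2: solve simulation, residuals
    J = jacobian(p)
    d = np.linalg.solve(J.T @ J, -J.T @ res)   # step 3: Gauss-Newton direction
    while F(p + d) > F(p) and np.linalg.norm(d) > 1e-12:
        d = d / 2                          # halve the step until F improves
    p = p + d                              # step 4: update parameters
    if np.linalg.norm(d) < 1e-9:           # step 5: convergence test
        break

print(p)   # approaches the "true" parameters [2.0, 0.5]
```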
These steps are rather straightforward, except for the definition of the updating direction, d, and the computation of g and J_{hp}. The most frequent methods for these are outlined below.
Computing the updating step, d
Comparison of minimization methods used in groundwater inverse modeling
Most codes use these methods, which are generically termed descent methods, because they seek an improvement of the objective function at each iteration. As a result they tend to get stuck at local minima. Overcoming local minima is the motivation of many methods based on different variations of random searches such as simulated annealing (Rao et al. 2003) or genetic algorithms (Tsai et al. 2003). One method that has proven highly effective in surface water hydrology is the “Shuffled Complex Evolution” (Duan et al. 1992), which has not yet been tested in groundwater. These methods have not been widely used in practice both because their performance decreases when the number of parameters is large (say, larger than 20), which is frequent in aquifer modeling, and because they demand many evaluations of the objective function, which is rather expensive in groundwater models.
The complexity of the forward problem is relevant when comparing optimization methods. For example, if the direct problem is nonlinear, the cost of evaluating the objective function grows much more than the cost of computing the gradient or the Jacobian matrix, because the latter computations are non-iterative. This tends to favor second-order (in convergence) methods over zeroth- or first-order methods for nonlinear problems.
Sensitivity equations
Sensitivities are the derivatives of state variables with respect to the model parameters. They are useful for two reasons. On the one hand, they can be employed in some of the above optimization methods. On the other hand, they yield useful information about the reliability of both the model and its estimated parameters. In addition, as they identify which data are most sensitive to which parameters, they can be used in the design of optimal networks (see section entitled, “How good is the model?”). In essence, sensitivities can be obtained in three ways: direct derivation, adjoint state and finite differences. A comparison of these methods can be found in Carrera et al. (1990a) and Carrera and Medina (1994).
Note that Eq. (15) yields the derivatives of nodal heads with respect to the parameters; derivatives at any point in space can then be obtained using the finite element method (FEM) interpolation functions. The derivatives of any variable that depends on heads, such as flows, can also be computed.
Adjoint state equations can also be used to compute the Jacobian matrix; they are advantageous when the number of observation points is smaller than the number of parameters, or for the exact computation of the Hessian matrix (the second-order derivatives of the objective function with respect to model parameters; see computational details in Carrera and Medina 1994; Medina and Carrera 2003).
Finite differences can also be used to compute the Jacobian matrix without added programming cost. Mehl and Hill (2003) compare the ways of performing this calculation.
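The trade-off discussed by Mehl and Hill (2003) can be seen with a one-parameter stand-in for a model run (the function and numbers below are invented for illustration): forward differences need one extra run per parameter, central differences two, but central differences are markedly more accurate.

```python
import numpy as np

def f(p):
    # Stand-in for a model run: "head" as a function of one parameter.
    return np.exp(-2.0 * p)

p0, eps = 0.5, 1e-3
exact = -2.0 * np.exp(-2.0 * p0)                 # analytical derivative

fwd = (f(p0 + eps) - f(p0)) / eps                # 1 extra run, O(eps) error
cen = (f(p0 + eps) - f(p0 - eps)) / (2 * eps)    # 2 extra runs, O(eps^2) error

print(abs(fwd - exact), abs(cen - exact))  # central is far more accurate
```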
Synthetic comparison of methods to compute derivatives. Cost per Gauss-Newton iteration is estimated in terms of simulation runs for a hypothetical problem with 100 parameters and 20 observation points. For nonlinear direct problems (i.e., nonlinear governing equations, e.g. unsaturated flow), it is assumed that 10 iterations are needed for the direct problem
Method              Advantages       Disadvantages    Cost per iteration
                                                      Linear problem  Nonlinear problem
---------------------------------------------------------------------------------------
Direct derivation   Exact            Hard to program  101             110
Adjoint state       Exact            Hard to program  21              n/a
Finite differences  Easy to program  Not exact        101             1010
How good is the model?
Following the modeling procedure outlined in Fig. 1, once the parameters have been estimated, it is necessary to assess how good are both the model concept and estimated parameters. Model quality is affected by three factors (Beck 1987): (1) uncertainty about the model structure, (2) uncertainty about the values of the parameters appearing in the model structure and (3) uncertainty associated with predictions of the future behavior of the system. The term uncertainty is used here to mean not only random fluctuations in errors, but also biases (see, e.g., Barth et al. 2001).
Uncertainty about the model structure
Conceptual models have many uncertain features because data are never exhaustive and contain inconsistencies. Modelers are forced to make simplifying assumptions. Errors are introduced in the parameterization, in the discretization, in the selection of boundary conditions, etc. Addressing this uncertainty often requires posing several conceptual models. Different models can be compared in terms of model fit (Eq. 8), residual distribution, parameter correlation, and confidence intervals for parameters and predictions. Ideally, a good model should lead to a good match with observations, uncorrelated residuals, and reasonable parameter values. Still, several models may satisfy all these criteria and one may need to select one among them. Several model selection criteria have been defined in the field of time-series analysis and applied to groundwater (Akaike 1974, 1977; Rissanen 1978; Schwarz 1978; Hannan 1980; Kashyap 1982). Carrera and Neuman (1986c) applied four of these criteria to a synthetic test case and concluded that the Kashyap criterion was the best. Similar results were obtained by Carrera et al. (1990b) and by Medina and Carrera (1996).
Still, one might question the wisdom of selecting only one model. This implies rejecting the others, which may not be logical if they are consistent with available knowledge and data. This line of argument leads to accepting a large set of models and using them all to characterize uncertainty in model predictions (Beven and Binley 1992; Beven and Freer 2001).
Difficulties associated with the calibration process
Instability and large uncertainty are different concepts, but both are often associated with elongated confidence regions, which can be characterized by the eigenvalues and eigenvectors of the posterior covariance matrix. The eigenvectors of this matrix form a set of orthogonal vectors, each of which is associated with an eigenvalue. The vector associated with the largest eigenvalue represents the linear combination of parameters that has the largest uncertainty, whereas the vector with the smallest eigenvalue defines the direction with the least uncertainty. If the eigenvalues are dramatically different, then one should expect instabilities to occur. One of the effects is parameter correlation. The parameters in Fig. 2 suffer from correlation: a shift in the estimate of one parameter (say, P1) causes a shift in the optimal value of the other (P2). In other words, the estimated parameters are not independent. This dependence causes the confidence intervals of the parameters to be larger than they would be if the parameters were independent.
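A minimal numerical illustration of this eigenvalue analysis (the covariance values are invented):

```python
import numpy as np

# Hypothetical 2x2 posterior covariance of parameters P1 and P2 with
# strong correlation, as in the elongated confidence regions above.
Sigma = np.array([[1.0, 0.9],
                  [0.9, 1.0]])

eigval, eigvec = np.linalg.eigh(Sigma)   # ascending eigenvalues: 0.1 and 1.9

# Widely separated eigenvalues signal an elongated confidence region:
# the direction eigvec[:, -1] (largest eigenvalue) is poorly constrained.
print(eigval)

# The correlation coefficient shows P1 and P2 are not independent, which
# widens their marginal confidence intervals.
corr = Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1])
print(corr)
```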

Regularization: This is what motivated adding F_{p} to the objective function in (9). Weiss and Smith (1998) comment on which prior information will most effectively reduce uncertainty.

Reducing the number of parameters: This is what motivates the parameterization schemes discussed in the section entitled, “What is to be estimated?”.

Increasing the number and types of data, which was discussed in the section, “The conceptual model: Knowledge and data”.

Optimizing the observation scheme: Observation networks and experiments can be designed to minimize model uncertainty and/or to increase the ability of data to discriminate among alternative models (Knopman and Voss 1989; Usunoff et al. 1992).
Despite this, one may have to acknowledge that it is not possible to find a unique solution to the problem. This motivates some researchers to use stochastic simulation of parameter fields conditioned on data, rather than estimation (e.g. Gómez-Hernández et al. 1997). These techniques generate a large number of, e.g., transmissivity fields that honor the available head and transmissivity data. In this way, one ends up with a number of models, rather than one. Uncertainty is then associated with the ensemble of all simulations, rather than with summary statistical measures of uncertainty. As another alternative, Yapo et al. (1998) study instability using the concept of the Pareto optimum, which denotes the set of parameter vectors for which improving one component of the objective function causes a deterioration in another component.
Difficulties associated with predictions of the future behavior of the system
All the above difficulties cause the model parameters to have some uncertainty, which is inherited by the model predictions. The possibility of multiple conceptual models causes further uncertainty in the model predictions. Therefore, evaluating uncertainty in prediction requires analyzing the effect of both parameter and model uncertainty. The latter is usually analyzed by simulating with the models available and evaluating the range of predictions (e.g., Medina and Carrera 1996). While systematization is needed, the fact that conceptual model building is not systematic makes this objective hard to meet. Hence, what follows concentrates on evaluating the effect of parameter uncertainty.
1. Sensitivity of predictions to model parameters and initial conditions. Obviously, a parameter is a source of concern only if predictions are sensitive to it, as measured by ∂f/∂p. This is why sensitivity analyses are sometimes performed instead of a formal error analysis.
2. Uncertainty in model parameters (and/or the initial state). This is measured by the covariance matrix \( {\mathbf{\Sigma }}_{p} \).
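Combining these two ingredients, the conventional first-order (linearized) propagation of parameter uncertainty into a prediction f is (a standard formula, sketched here because the corresponding equation is not reproduced in this excerpt):

```latex
\[
\boldsymbol{\Sigma}_f \;\approx\; \mathbf{J}_f\,\boldsymbol{\Sigma}_p\,\mathbf{J}_f^{T},
\qquad \mathbf{J}_f = \frac{\partial f}{\partial \mathbf{p}}
\]
```

That is, the prediction covariance is driven jointly by the sensitivity of f to the parameters and by the parameter covariance.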
Nonlinear methods are more difficult to apply. Vecchia and Cooley (1987) present a way to compute nonlinear confidence intervals. They conclude that (i) corresponding linear and nonlinear confidence intervals are often offset (shifted) with respect to each other, and the nonlinear ones are often larger; (ii) the variability in the sizes of nonlinear confidence intervals is usually larger than the corresponding variability in linear confidence interval sizes; (iii) the difference between the sizes of linear and nonlinear confidence intervals increases as the sizes of the intervals increase; and (iv) prior information can alter the size of the confidence intervals. Christensen and Cooley (1999) present a measure to quantify model nonlinearity. The prediction analyzer of PEST currently includes nonlinear confidence intervals.
The Monte Carlo method is the most computationally intensive. It is based on many forward-problem evaluations with different sets of parameters. Its main advantages are that it is easy to understand, it yields a full probability density function, and it does not require restrictive assumptions. Its main problem is that it is hard to ascertain the required number of simulations. An example of this approach is the GLUE methodology (Generalized Likelihood Uncertainty Estimation; Beven and Freer 2001).
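A minimal Monte Carlo sketch: draw parameter sets from an assumed distribution, run the forward model for each, and read prediction bounds off the empirical distribution. The forward model and parameter statistics below are hypothetical:

```python
import numpy as np

# Monte Carlo propagation of parameter uncertainty: sample parameters, run the
# forward model per sample, take empirical percentiles. Numbers hypothetical.

rng = np.random.default_rng(0)

def prediction(p):
    logT, recharge = p
    return 10.0 + recharge / np.exp(logT)   # toy head-like response

p_hat = np.array([1.0, 0.5])
Sigma_p = np.array([[0.04, 0.01],
                    [0.01, 0.02]])

samples = rng.multivariate_normal(p_hat, Sigma_p, size=5000)
f = np.array([prediction(p) for p in samples])

lo, hi = np.percentile(f, [2.5, 97.5])      # 95% Monte Carlo interval
print(f"95% interval: [{lo:.3f}, {hi:.3f}]")
```

The required number of samples is exactly the open issue mentioned above: the percentile estimates stabilize slowly, and no fixed rule guarantees convergence for an arbitrary forward model.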
What is actually done?
Application trends
The literature in scientific journals tends to concentrate on the development of new methods and new interpretations. Therefore, it is not well suited for identifying application trends. Applications are most often found in internal reports or special sessions of congresses, none of which are easy to track exhaustively. Therefore, this section is biased by what can effectively be found and by the authors’ personal views. Despite the above, a search on the “Web of Science” (http://go5.isiknowledge.com/portal.cgi), which keeps track of all papers published in major journals, was performed. The results suggest that the number of papers about inverse modeling remains more or less steady (around 5% of those about groundwater modeling), while the number of papers using inverse modeling has slowly but steadily increased in the last 13 years. These papers cover a broad range of topics. Many have been cited in previous sections and some will be cited in the remaining sections. It is difficult, however, to identify trends. The only ones that emerge are that geostatistical inversion tends to be used in relatively small-scale problems, such as the interpretation of hydraulic tests, while large-scale models tend to be based on zonation. Since most groundwater applications can be classified in one of these two categories, they deserve further attention and are discussed below.
Geostatistical inversion
The use of geostatistics is motivated by the need to address the variability of hydraulic properties (specifically transmissivity) when modeling aquifers. In practice, however, this need can be understood in two different ways: (1) to constrain model parameters and (2) because it is deemed necessary for model predictions. While these two views are not exclusive, they rarely go together.
Methodologically, geostatistical inversion follows the steps originally proposed by Clifton and Neuman (1982). That is, one starts by proposing a stochastic model (i.e., whether logT is stationary, what its variogram and mean are, etc.). Second, available data are used to produce a prior, best estimate of logT and its covariance, using Eqs. (2) and (3). Finally, these are used to obtain an estimate of model parameters, either by minimizing Eq. (9) or using Eq. (14). Kitanidis and Vomvoris (1983) and Carrera and Neuman (1986a) modified this concept by emphasizing the need for optimal estimation of statistical parameters (variances of errors, correlation distance and the like). The importance of these parameters has been recognized by many. In fact, this is one of the conclusions of Zimmerman et al. (1998), after comparing different geostatistical inversion techniques.
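The prior-estimation step can be sketched with simple kriging under an assumed exponential covariance. The code below plays the role of Eqs. (2) and (3) only schematically; locations, logT data, mean and covariance parameters are hypothetical:

```python
import numpy as np

# Prior best estimate of logT and its estimation variance by simple kriging
# with an assumed stationary mean and exponential covariance. All numbers
# below are hypothetical.

def cov(d, sill=1.0, a=100.0):
    """Exponential covariance C(d) = sill * exp(-d / a)."""
    return sill * np.exp(-d / a)

x_data = np.array([0.0, 50.0, 120.0])        # measurement locations (m)
z_data = np.array([-3.2, -2.8, -3.6])        # logT measurements
mean = -3.0                                  # assumed stationary mean of logT

x_grid = np.linspace(0.0, 150.0, 4)          # estimation points

C_dd = cov(np.abs(x_data[:, None] - x_data[None, :]))   # data-data covariance
C_gd = cov(np.abs(x_grid[:, None] - x_data[None, :]))   # grid-data covariance

w = C_gd @ np.linalg.inv(C_dd)               # simple-kriging weights
z_est = mean + w @ (z_data - mean)           # prior best estimate of logT
C_est = cov(0.0) - np.einsum('ij,ij->i', w, C_gd)  # estimation variance

print(z_est)
print(C_est)
```

At a data location the estimate reproduces the measurement exactly and its variance drops to zero; away from the data it relaxes toward the assumed mean, which is where the choice of variogram and statistical parameters matters.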
Successful applications to real field data are restricted to relatively small-scale problems. These include hydraulic test interpretation (Yeh and Liu 2000; Meier et al. 2001; Vesselinov et al. 2001), well capture zone delineation (Vassolo et al. 1998; Kunstmann et al. 2002; Harrar et al. 2003), and others (Barlebo et al. 2004). The fact that applications to large-scale aquifers are scarce reflects the difficulty in modeling them, but also points out the two limitations of geostatistics as it is most frequently used: it fails to include geological information and it fails to reproduce actual variability.
Geological information is normally expressed in terms that are difficult to account for during inversion (depositional patterns, orientation of conductive features, and the like). This type of information can become precise at the large scale but rarely at the test scale. Here, the only thing that can be said is that permeability is variable, which is properly acknowledged by geostatistics. Still, when discrete features are identified, inversion gains significantly by explicitly including them as deterministic features. This was acknowledged by Meier et al. (2001), who used information about the stress state of a shear zone to define the anisotropy direction of fracture transmissivity.
Ironically, the main problem with geostatistical estimation is precisely that it fails to reproduce actual variability. As discussed in the section “How to estimate the model parameters, p”, Eqs. (9) and (14) yield the conditional expectation of logT given the available data. As such, the estimate says little about actual deviations, because the expected value filters them out. When data are abundant, which is frequent in detailed hydraulic tests, variability patterns can be delineated with some certainty (e.g., Meier et al. 2001). Otherwise, only the estimation covariance provides some information about variability and about the continuity of high-conductivity zones (paths for solute migration) or low-conductivity barriers. Another source of hope for large-scale geostatistical inversion is the use of geophysical data (Hubbard and Rubin 2000).
Gómez-Hernández and coworkers (Gómez-Hernández et al. 1997; Capilla et al. 1998) address the above issue by rejecting optimal estimation altogether and, instead, performing conditional simulations. Ideally, the average of all these simulations should equal the conditional estimate, but each of them reproduces the assumed variability. As such, when used for predicting processes that are sensitive to variability (e.g., contaminant transport), simulations are much more appropriate.
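The contrast can be illustrated schematically for a 1-D Gaussian logT field (all numbers hypothetical): each realization drawn below honors the data and the assumed covariance, while the ensemble average converges to the smooth kriged estimate.

```python
import numpy as np

# Conditional simulation vs conditional estimation: draw realizations from the
# Gaussian conditional distribution N(z_krig, C_cond). Numbers hypothetical.

rng = np.random.default_rng(1)

def cov(d, sill=1.0, a=100.0):
    return sill * np.exp(-d / a)

x_data = np.array([0.0, 50.0, 120.0])
z_data = np.array([-3.2, -2.8, -3.6])
mean = -3.0
x_grid = np.linspace(0.0, 150.0, 31)

C_dd = cov(np.abs(x_data[:, None] - x_data[None, :]))
C_gd = cov(np.abs(x_grid[:, None] - x_data[None, :]))
C_gg = cov(np.abs(x_grid[:, None] - x_grid[None, :]))

w = C_gd @ np.linalg.inv(C_dd)
z_krig = mean + w @ (z_data - mean)          # conditional estimation (smooth)
C_cond = C_gg - w @ C_gd.T                   # conditional covariance

# Conditional simulation via Cholesky (small jitter for numerical stability)
L = np.linalg.cholesky(C_cond + 1e-8 * np.eye(len(x_grid)))
sims = z_krig + (L @ rng.standard_normal((len(x_grid), 500))).T

print(np.abs(sims.mean(axis=0) - z_krig).max())   # ensemble mean ~ kriging
```

A transport prediction run on each realization samples the effect of variability that the single smooth estimate would filter out.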
Geologically based inversion
The main difference between aquifer-scale models and test-scale models is the degree of reliance on geology when defining variability, and the multiplicity of parameter types. Regarding variability, geological data are rarely precise but cannot be ignored. Different formations, or different units within a formation, may have different properties. Thus, when these units can be outlined, this information is too valuable to discard. Unfortunately, boundaries between units are rarely known accurately. Hence, much work may be needed to test different geometries whenever the effect of geometry is found to be important. The process involves defining a geometry and testing it against available data, and is repeated until a satisfactory fit is found. The procedure is tedious, involves interactions between modelers and geologists, and is not systematic. In fact, it rarely gets properly documented, so that if the model is revised years later, one is not sure about the reasons behind the selected model structure. Things are made worse by the fact that not only transmissivity but also other types of parameters need to be specified. In the experience of the authors of this paper, ambiguities frequently exist about recharge (both average amount and time variability), boundary fluxes, pumping rates (the main finding of Castro et al. 1999 was that official pumping rates were badly underestimated) and the nature of river-aquifer interaction. Several lessons emerge from this experience:
– Point values of hydraulic conductivity and transmissivity are prone to error. Moreover, they may be of little use when modeling at scales much larger than that of the pumping tests on which they are based. These measurements need to be placed in their geological context.
– Dominant features (i.e., conductive fractured zones, paleochannels, or the like) must be included in the model even if they are not known accurately.
– Much information about aquifer behavior is contained in discrete events (floods, big rainfalls). Taking full advantage of these requires transient simulations.
– Model calibration is rarely unique (i.e., different model structures may fit hard data satisfactorily). This uncertainty ought to be acknowledged when performing model predictions. Reducing it often requires the use of complementary data, as discussed in the section, “The conceptual model: Knowledge and data”.
This kind of approach displays several drawbacks. On the one hand, it does not properly account for uncertainty. The resulting covariance matrix of model parameters is conditioned not only on hard data, but also on the many subjective decisions the modeler has made. While these can be taken into account by making predictions with several conceptual models, this is rarely done. On the other hand, one is never sure about the validity of the model beyond calibration conditions. In summary, the procedure needs to be systematized. This is best done in a geostatistical framework, hence the need to seek geostatistical descriptions that take advantage of qualitative geological data.
What comes next?
The discussion in the previous sections makes it clear that the time is ripe for standard use of inverse modeling in groundwater studies aimed at aquifer characterization and management, a view shared with Poeter and Hill (1997).
In the 1970s, the U.S. Geological Survey promoted the use of numerical modeling for aquifer studies. This led to a significant improvement in many hydrologists’ understanding of groundwater flow. Numerical modeling forced them to be quantitatively consistent when integrating different data types. Since this is never straightforward, hydrologists were forced to test different parameter values, to perform sensitivity analyses, and to guess at the causes behind observed data. In the end, they might not be fully successful, but they gained understanding, which is what counts for proper decision making.
The situation is now changing on several accounts. For one, hydrologists are increasingly expected to make hard decisions for which qualitative understanding is not sufficient. Instead, accurate quantitative models are needed. Second, the volume of data is also increasing. Long data sets of heads, pumping history, and hydrogeochemical measurements are becoming available. While they contain valuable qualitative information, it is clear that much more can be gained by using them quantitatively, i.e., by building models that can match those data. Third, modeling exercises such as INTRAVAL (Larsson 1992) have made it clear that the most important issue when modeling an aquifer is the conceptual model. Since manual calibration is very tedious, modelers have not been able to concentrate sufficiently on conceptual issues. Automatic calibration should change that. Still, several issues need attention before routine use becomes a reality:

– Difficulty in running inversion codes. Since one has to introduce data on parameterization, observations, their reliability, etc., general-purpose inversion codes are cumbersome and hard to run. The emergence of codes such as PEST or UCODE has changed that trend significantly, but places an additional burden on the modeler by requiring him or her to know programming and the internals of the codes. This could be alleviated by developing ‘utility programs’ to interface the inversion engine with the forward model.

– Incorporation of geological information. As discussed earlier, zonation is the most convenient and widely used approach for realistic incorporation of geological data available in the form of maps. Yet it is too rigid and incomplete. Geological information is usually “soft”, in the sense that boundaries between formations underground are rarely known accurately. Moreover, important features (paleochannels, water-conducting faults, etc.) may have gone unmapped. Furthermore, even within a formation there may be a lot of qualitative information (depositional patterns, continuity of water-conducting features, gradation of materials and the like) that is difficult to incorporate in a zonation framework. For solute transport problems, these sources of small-scale variability may be just as important as large-scale trends. Presumably, they are best handled in a geostatistical framework. While some approaches are available for incorporating them (e.g., treating each zone geostatistically, or using geological maps as soft information), existing codes lack sufficient robustness and flexibility to be of general use.

– Incorporation of age, environmental isotope data, temperature and other sources of information. As discussed, using different types of data dramatically improves the stability of the inversion and the robustness of the model. In fact, a number of authors have shown that incorporating these data does improve the reliability of the model. The problem, again, lies in the availability of easy-to-run, flexible codes.

– Representation of uncertainties. It has also been discussed that dealing with uncertainty is an integral part of modeling. This is true both at the characterization stage, where data contain errors and model structure is never accurately known, and at the prediction stage. Well-informed decision-making cannot be based on single model predictions; uncertainties in both model concept and parameters must be acknowledged. As discussed, linear estimates of uncertainty are very poor (they may underestimate actual errors by orders of magnitude; Carrera and Glorioso 1991). Alternative, nonlinear estimates of error are difficult to use for hydrologists who are not familiar with statistics, even though the tools are becoming increasingly user-friendly. Therefore, Monte Carlo simulation remains the most appealing option. One advantage is the relative ease with which it accommodates conceptual model uncertainty. Still, at present, Monte Carlo methods are extremely expensive and, again, not easy to use.

– Coupling to GIS. One of the drawbacks of inverse modeling is the need to incorporate all causes of variability: geology, soil properties and use, pumping, etc. If any of them is missed (e.g., if the effects of a pumping well are not incorporated), the algorithm will react by modifying the parameters to circumvent the effect of the error so as to best fit the measurements (e.g., reduce recharge, increase transmissivity, etc.). The resulting model is thus erroneous not only because of the missing factor, but also because of those modifications. Conventional modeling is less sensitive to this kind of error (one may be aware that model calculations may be erroneous in areas affected by unknown factors, but that does not affect the rest of the model). Therefore, inverse modeling requires careful accounting. As the history of aquifers becomes increasingly complex (growing number of pumping wells, evolving soil uses, improved knowledge of aquifer geometry), so does the difficulty of incorporating all factors. The tendency to store all territorial data in GIS offers some hope of incorporating all these data in a model. In fact, there have been a number of efforts to link GIS to traditional models (Gogu et al. 2001; Chen et al. 2002). It is clear, however, that this need is more pressing for inverse models. Efforts along the lines of graphical user interfaces are starting.
All the above issues might suggest that inverse modeling is not yet mature. It should be stressed, however, that all the above-mentioned difficulties are just that, difficulties. They can be, and indeed are, overcome in practice through laborious work. Moreover, these difficulties are not specific to inverse modeling; they are mostly shared by conventional modeling (although they are often ignored there). Finally, while it is true that further improvement of the programs is needed, it is not clear that applying them is any more difficult than applying the early codes was when the U.S. Geological Survey made modeling routine for aquifer studies. Routine application of inverse modeling is the future. The sooner it starts, the better prepared hydrologists will be to face the challenges ahead.
Acknowledgments
The final manuscript benefited from comments by Mary Hill, Matt Tonkin and Johan Valstar.
References
Akaike H (1974) A new look at statistical model identification. IEEE Trans Automat Contr AC-19:716–722
Akaike H (1977) On entropy maximization principle. In: Krishnaiah PR (ed) Applications of statistics. North Holland, Amsterdam, pp 27–41
Anderman ER, Hill MC (1999) A new multistage groundwater transport inverse method: presentation, evaluation, and implications. Water Resour Res 35(4):1053–1063
Barlebo HC, Hill MC, Rosbjerg D (2004) Identification of groundwater parameters at Columbus, Mississippi, using a three-dimensional inverse flow and transport model. Water Resour Res 40(4):W04211
Barth GR, Hill MC, Illangasekare TH, Rajaram H (2001) Predictive modeling of flow and transport in a two-dimensional intermediate-scale, heterogeneous porous media. Water Resour Res 37(10):2503–2512
Beck MB (1987) Water quality modelling: a review of the analysis of uncertainty. Water Resour Res 23(8):1393–1442
Bennet RR, Meyer RR (1952) Geology and groundwater resources of the Baltimore area. Mines and Water Resour Bull 4, Maryland Dept of Geology
Beven K (1993) Prophecy, reality and uncertainty in distributed hydrological modeling. Adv Water Resour 16(1):41–51
Beven KJ, Binley AM (1992) The future of distributed models: model calibration and uncertainty prediction. Hydrol Process 6(3):279–298
Beven KJ, Freer J (2001) Equifinality, data assimilation, and uncertainty estimation in mechanistic modelling of complex environmental systems using the GLUE methodology. J Hydrol 249:11–29
Bredehoeft J (2004) Modeling: the conceptualization problem–surprise. Hydrogeol J (this issue)
Capilla JE, Gómez-Hernández JJ, Sahuquillo A (1998) Stochastic simulation of transmissivity fields conditional to both transmissivity and piezometric head data, 3. Application to the Culebra formation at the Waste Isolation Pilot Plant (WIPP), New Mexico, USA. J Hydrol 207(3–4):254–269
Carrera J (1987) State of the art of the inverse problem applied to the flow and solute transport problems. In: Groundwater flow and quality modeling. NATO ASI Ser, pp 549–585
Carrera J, Neuman SP (1986a) Estimation of aquifer parameters under transient and steady-state conditions, 1. Maximum likelihood method incorporating prior information. Water Resour Res 22(2):199–210
Carrera J, Neuman SP (1986b) Estimation of aquifer parameters under transient and steady-state conditions, 2. Uniqueness, stability and solution algorithms. Water Resour Res 22(2):211–227
Carrera J, Neuman SP (1986c) Estimation of aquifer parameters under transient and steady-state conditions, 3. Application to synthetic and field data. Water Resour Res 22(2):228–242
Carrera J, Navarrina F, Vives L, Heredia J, Medina A (1990a) Computational aspects of the inverse problem. In: Proc VIII international conference on computational methods in water resources. CMP, pp 513–523
Carrera J, Heredia J, Vomvoris S, Hufschmied P (1990b) Fracture flow modelling: application of automatic calibration techniques to a small fractured monzonitic gneiss block. In: Neuman N (ed) Proc hydrogeology of low permeability environments, IAHPV, Hydrogeology, selected papers, vol 2, pp 115–167
Carrera J, Glorioso L (1991) On geostatistical formulations of the groundwater flow inverse problem. Adv Water Resour 14(5):273–283
Carrera J, Medina A, Galarza G (1993a) Groundwater inverse problem: discussion on geostatistical formulations and validation. Hydrogéologie (4):313–324
Carrera J, Mousavi SF, Usunoff E, Sanchez-Vila X, Galarza G (1993b) A discussion on validation of hydrogeological models. Reliab Eng Syst Saf 42:201–216
Carrera J, Medina A (1994) An improved form of adjoint-state equations for transient problems. In: Peters, Wittum, Herrling, Meissner, Brebbia, Grau, Pinder (eds) Proc X international conference on methods in water resources, pp 199–206
Castro A, Vazquez-Suñe E, Carrera J, Jaen M, Salvany JM (1999) Calibración del modelo regional de flujo subterráneo en la zona de Aznalcóllar, España: ajuste de las extracciones [Calibration of the regional groundwater flow model in the Aznalcóllar area, Spain: fit of the extractions]. In: Tineo A (ed) Hidrología Subterránea, II, 13. Congreso Argentino de Hidrogeología y IV Seminario Hispano Argentino sobre temas actuales de la hidrogeología
Chen CX, Pei SP, Jiao JJ (2003) Land subsidence caused by groundwater exploitation in Suzhou City, China. Hydrogeol J 11(2):275–287
Chen Z, Huang GH, Chakma A, Li J (2002) Application of a GIS-based modeling system for effective management of petroleum-contaminated sites. Environ Eng Sci 9(5):291–303
Christensen S, Cooley RL (1999) Evaluation of confidence intervals for a steady-state leaky aquifer model. Adv Water Resour 22(8):807–817
Clifton PM, Neuman SP (1982) Effects of kriging and inverse modeling on conditional simulation of the Avra Valley aquifer in southern Arizona. Water Resour Res 18(4):1215–1234
Cooley RL (1977) A method of estimating parameters and assessing reliability for models of steady state groundwater flow, 1. Theory and numerical properties. Water Resour Res 13(2):318–324
Cooley RL (1985) A comparison of several methods of solving nonlinear-regression groundwater-flow problems. Water Resour Res 21(10):1525–1538
Cooley RL, Konikow LF, Naff RL (1986) Nonlinear-regression groundwater-flow modeling of a deep regional aquifer system. Water Resour Res 22(13):1759–1778
Dagan G (1985) Stochastic modeling of groundwater flow by unconditional and conditional probabilities: the inverse problem. Water Resour Res 21(1):65–72
de Marsily G, Lavedan G, Boucher M, Fasanino G (1984) Interpretation of interference tests in a well field using geostatistical techniques to fit the permeability distribution in a reservoir model. In: Verly G et al (eds) Proc geostatistics for natural resources characterization, part 2. D Reidel, pp 831–849
de Marsily G, Delhomme JP, Delay F, Buoro A (1999) 40 years of inverse problems in hydrogeology. C R Acad Sci Ser IIA Earth Planet Sci 329(2):73–87
Doherty J, Brebber L, Whyte P (2002) PEST: model-independent parameter estimation. Watermark Computing, Corinda, Australia
Doherty J (2003) Groundwater model calibration using pilot points and regularization. Ground Water 41(2):170–177
Duan QY, Sorooshian S, Gupta V (1992) Effective and efficient global optimization for conceptual rainfall-runoff models. Water Resour Res 28(4):1015–1031
Emselem Y, de Marsily G (1971) An automatic solution for the inverse problem. Water Resour Res 7(5):1264–1283
Gavalas GR, Shaw PC, Seinfeld JH (1976) Reservoir history matching by Bayesian estimation. Soc Petrol Eng J 261:337–350
Gogu RC, Carabin G, Hallet V, Peters V, Dassargues A (2001) GIS-based hydrogeological databases and groundwater modeling. Hydrogeol J 9(6):555–569
Gómez-Hernández JJ, Sahuquillo A, Capilla JE (1997) Stochastic simulation of transmissivity fields conditional to both transmissivity and piezometric data, 1. Theory. J Hydrol 204(1–4):162–174
Gómez-Hernández JJ, Wen XH (1998) To be or not to be multi-Gaussian? A reflection on stochastic hydrogeology. Adv Water Resour 21(1):47–61
Gupta HV, Bastidas LA, Sorooshian S, Shuttleworth WJ, Yang ZL (1999) Parameter estimation of a land surface scheme using multicriteria methods. J Geophys Res Atmos 104(D16):19491–19503
Hadamard J (1902) Sur les problèmes aux dérivées partielles et leur signification physique [On problems in partial derivatives and their physical significance]. Bull Univ Princeton 13:49–52
Hannan ES (1980) The estimation of the order of an ARMA process. Ann Stat 8:1071–1081
Harrar WG, Sonnenborg TO, Henriksen HJ (2003) Capture zone, travel time, and solute-transport predictions using inverse modeling and different geological models. Hydrogeol J 11(5):536–548
Hernandez AF, Neuman SP, Guadagnini A, Carrera J (2003) Conditioning mean steady state flow on hydraulic head and conductivity through geostatistical inversion. Stoch Environ Res Risk Assess 17(5):329–338
Hill MC (1990) Relative efficiency of four parameter-estimation methods in steady-state and transient groundwater flow models. In: Gambolati, Rinaldo, Brebbia, Gray, Pinder (eds) Proc computational methods in subsurface hydrology, international conference on computational methods in water resources, pp 103–108
Hill MC (1992) A computer program (MODFLOWP) for estimating parameters of a transient, three-dimensional, groundwater flow model using nonlinear regression. US Geological Survey
Hill MC (1998) Methods and guidelines for effective model calibration. US Geological Survey Water-Resour Investig Rep 98-4005, 91 pp, Colorado
Hill MC, Cooley RL, Pollock DW (1998) A controlled experiment in ground water flow model calibration. Ground Water 36(3):520–535
Hoeksema RJ, Kitanidis PK (1984) Comparison of Gaussian conditional mean and kriging estimation in the geostatistical solution to the inverse problem. Water Resour Res 21(6):337–350
Hollenbeck KJ, Jensen KH (1998) Maximum-likelihood estimation of unsaturated hydraulic parameters. J Hydrol 210(1–4):192–205
Hu LY (2002) Combination of dependent realizations within the gradual deformation method. Math Geol 34(8):953–963
Hubbard S, Rubin Y (2000) A review of selected estimation techniques using geophysical data. J Contam Hydrol 45:3–34
Kashyap RL (1982) Optimal choice of AR and MA parts in autoregressive moving average models. IEEE Trans Pattern Anal Mach Intell PAMI-4(2):99–104
Kitanidis PK, Vomvoris EG (1983) A geostatistical approach to the inverse problem in groundwater modelling (steady state) and one-dimensional simulations. Water Resour Res 19(3):677–690
Kitanidis PK (1997) Introduction to geostatistics: applications to hydrogeology. Cambridge University Press, Cambridge, NY
Knopman DS, Voss CI (1989) Multiobjective sampling design for parameter estimation and model discrimination in groundwater solute transport. Water Resour Res 25(10):2245–2258
Kool JB, Parker JC, Van Genuchten MT (1987) Parameter estimation for unsaturated flow and transport models: a review. J Hydrol 91:255–293
Kool JB, Parker JC (1988) Analysis of the inverse problem for transient unsaturated flow. Water Resour Res 24(6):817–830
Kowalsky MB, Finsterle S, Rubin Y (2004) Estimating flow parameter distributions using ground-penetrating radar and hydrological measurements during transient flow in the vadose zone. Adv Water Resour 27:583–599
Kunstmann H, Kinzelbach W, Siegfried T (2002) Conditional first-order second-moment method and its application to the quantification of uncertainty in groundwater modeling. Water Resour Res 38(4):1035
Larocque M, Banton O, Ackerer P, Razack M (1999) Determining karst transmissivities with inverse modeling and an equivalent porous media. Ground Water 37(6):897–903
Larsson A (1992) The international projects INTRACOIN, HYDROCOIN and INTRAVAL. Adv Water Resour 15(1):85–87
Mantoglou A (2003) Estimation of heterogeneous aquifer parameters from piezometric head data using ridge functions and neural networks. Stoch Environ Res Risk Assess 17:339–352
Marquardt DW (1963) An algorithm for least-squares estimation of nonlinear parameters. J Soc Ind Appl Math 11(2)
McLaughlin D, Townley LR (1996) A reassessment of the groundwater inverse problem. Water Resour Res 32(5):1131–1161
Medina A, Carrera J (1996) Coupled estimation of flow and solute transport parameters. Water Resour Res 32(10):3063–3076
Medina A, Carrera J (2003) Geostatistical inversion of coupled problems: dealing with computational burden and different types of data. J Hydrol 281:251–264
Mehl SW, Hill MC (2003) Locally refined block-centered finite-difference groundwater models: evaluation of parameter sensitivity and the consequences for inverse modelling and predictions. In: Kovar K, Zbynek H (eds) IAHS Publication 277, pp 227–232
Meier P, Carrera J, Sanchez-Vila X (1999) A numerical study on the relation between transmissivity and specific capacity in heterogeneous aquifers. Ground Water 37(4):611–617
Meier P, Medina A, Carrera J (2001) Geostatistical inversion of cross-hole pumping tests for identifying preferential flow channels within a shear zone. Ground Water 39(1):10–17
Meixner T, Gupta HV, Bastidas LA, Bales RC (1999) Sensitivity analysis using mass flux and concentration. Hydrol Process 13(14–15):2233–2244
Nelson RW (1960) In place measurement of permeability in heterogeneous media, 1. Theory of a proposed method. J Geophys Res 65(6):1753–1760
Nelson RW (1961) In place measurement of permeability in heterogeneous media, 2. Experimental and computational considerations. J Geophys Res 66:2469–2478
Neuman SP (1973) Calibration of distributed parameter groundwater flow models viewed as a multiple-objective decision process under uncertainty. Water Resour Res 9(4):1006–1021
Neuman SP, Wierenga PJ (2003) A comprehensive strategy of hydrogeologic modeling and uncertainty analysis for nuclear facilities and sites. NUREG/CR-6805, US Nuclear Regulatory Commission, Washington, DC
Poeter EP, Hill MC (1997) Inverse models: a necessary next step in groundwater modeling. Ground Water 35(2):250–260
Poeter EP, Hill MC (1998) Documentation of UCODE: a computer code for universal inverse modeling. US Geological Survey Water-Resources Investigations Report 98-4080, 116 pp
Ramarao BS, Lavenue AM, de Marsily G, Marietta MG (1995) Pilot point methodology for automated calibration of an ensemble of conditionally simulated transmissivity fields, 1. Theory and computational experiments. Water Resour Res 31(3):475–493
Rao SVN, Thandaveswara BS, Bhallamudi SM (2003) Optimal groundwater management in deltaic regions using simulated annealing and neural networks. Water Resour Manag 17(6):409–428
Rissanen J (1978) Modeling by shortest data description. Automatica 14:465–471
Roggero F, Hu LY (1998) Gradual deformation of continuous geostatistical models for history matching. In: Annual technical conference, SPE 49004
Rubin Y (2003) Applied stochastic hydrogeology. Oxford University Press, New York, 391 pp
Rubin Y, Dagan G (1987) Stochastic identification of transmissivity and effective recharge in steady groundwater flow, 1. Theory. Water Resour Res 23(7):1185–1192
 Sahuquillo A, Capilla J, GómezHernández JJ, Andreu J (1992) Conditional simulation of transmissivity fields honouring piezomètrica data. In: Blain WR, Cabrera E Fluid Flow Modeling, Comput. Mech., Billerica, Mass, pp 201–212Google Scholar
 Sambridge M, Mosegaard K (2002) Monte Carlo methods in geophysical inverse problems. Rev Geophys 40(3), Art. No. 1009
 Schwarz G (1978) Estimating the dimension of a model. Ann Stat 6(2):461–464
 Stallman RW (1956) Numerical analysis of regional water levels to define aquifer hydrology. Am Geophys Union Trans 37(4):451–460
 Tikhonov AN (1963) Regularization of incorrectly posed problems. Sov Math Dokl 4:1624–1627
 Tsai FTC, Sun NZ, Yeh WWG (2003) Global-local optimization methods for the identification of three-dimensional parameter structure in groundwater modeling. Water Resour Res 39(2), Art. No. 1043
 Usunoff E, Carrera J, Mousavi SF (1992) An approach to the design of experiments for discriminating among alternative conceptual models. Adv Water Resour 15(3):199–214
 Varni M, Carrera J (1998) Simulation of groundwater age distribution. Water Resour Res 34(12):3271–3281
 Vassolo S, Kinzelbach W, Schafer W (1998) Determination of a well head protection zone by stochastic inverse modelling. J Hydrol 206(3–4):268–280
 Vecchia AV, Cooley RL (1987) Simultaneous confidence and prediction intervals for nonlinear regression models with application to a groundwater flow model. Water Resour Res 23(7):1237–1250
 Vesselinov VV, Neuman SP, Illman WA (2001) Three-dimensional numerical inversion of pneumatic cross-hole tests in unsaturated fractured tuff, 2. Equivalent parameters, high-resolution stochastic imaging and scale effects. Water Resour Res 37(12):3019–3041
 Wagner BJ, Gorelick SM (1987) Optimal groundwater quality management under parameter uncertainty. Water Resour Res 23(7):1162–1174
 Weiss R, Smith L (1998) Efficient and responsible use of prior information in inverse methods. Ground Water 36(1):151–163
 Weissmann GS, Carle SA, Fogg GE (1999) Three-dimensional hydrofacies modeling based on soil survey analysis and transition probability geostatistics. Water Resour Res 35(6):1761–1770
 Woodbury AD, Rubin Y (2000) A full-Bayesian approach to parameter inference from tracer travel time moments and investigation of scale effects at the Cape Cod experimental site. Water Resour Res 36(1):159–171
 Woodbury AD, Smith JL, Dunbar WS (1987) Simultaneous inversion of temperature and hydraulic data, 1. Theory and application using hydraulic head data. Water Resour Res 23(8):1586–1606
 Xiang Y, Sykes JF, Thomson NR (1992) A composite L1 parameter estimator for model fitting in groundwater flow and solute transport simulation. Water Resour Res 29(6):1661–1673
 Yapo PO, Gupta HV, Sorooshian S (1998) Multiobjective global optimization method for hydrological models. J Hydrol 204:83–87
 Yeh TCJ, Liu SY (2000) Hydraulic tomography: development of a new aquifer test method. Water Resour Res 36(8):2095–2105
 Yeh WWG, Yoon YS (1981) Aquifer parameter identification with optimum dimension in parameterization. Water Resour Res 17(3):664–672
 Yeh WWG (1986) Review of parameter estimation procedures in groundwater hydrology: the inverse problem. Water Resour Res 22:95–108
 Zimmerman DA, de Marsily G, Gotway CA, Marietta MG, Axness CL, Beauheim RL, Bras RL, Carrera J, Dagan G, Davies PB, Gallegos DP, Galli A, Gómez-Hernández JJ, Grindrod P, Gutjahr AL, Kitanidis PK, LaVenue AM, McLaughlin D, Neuman SP, RamaRao BS, Ravenne C, Rubin Y (1998) A comparison of seven geostatistically based inverse approaches to estimate transmissivities for modeling advective transport by groundwater flow. Water Resour Res 34(6):1373–1413