Exploiting Sparsity in Solving PDE-Constrained Inverse Problems: Application to Subsurface Flow Model Calibration

  • Azarang Golmohammadi
  • M-Reza M. Khaninezhad
  • Behnam Jafarpour
Part of The IMA Volumes in Mathematics and its Applications book series (IMA, volume 163)


Inverse problems are frequently encountered in many areas of science and engineering where observations are used to estimate the parameters of a system. In many practical applications, the dynamic processes that take place in a physical system are described by a set of partial differential equations (PDEs), which are typically nonlinear and coupled. The inverse problems that arise in such systems must be constrained to honour the governing PDEs. In this chapter, we consider high-dimensional PDE-constrained inverse problems in which, because of spatial patterns and correlations in the distribution of physical properties of a system, the underlying parameters tend to reside in (usually unknown) low-dimensional manifolds and thus have sparse (low-rank) representations. The sparsity of the parameters lends itself to an effective and flexible form of regularization that can be exploited to improve the solution of such inverse problems. In applications where prior training data are available, sparse manifold learning methods can be adopted to tailor parameter representations to the specific characteristics of the prior data. However, a major risk in employing prior training data is the significant uncertainty about the underlying conceptual models and assumptions used to develop the prior. A group-sparsity formulation is discussed for addressing the uncertainty in the prior training data when multiple distinct, but plausible, prior scenarios are encountered. Examples from geoscience applications are presented in which images of rock material properties are reconstructed from limited nonlinear fluid flow measurements.


Keywords: Inverse problem formulation · Subsurface flow model · Partial differential equations (PDEs) · Prior training data · Hybrid dictionary



The content of this chapter is based on research partially funded by the US Department of Energy, Foundation CMG, and American Chemical Society.

Appendix 1: k-SVD Dictionary Learning

The k-SVD algorithm is used to construct learned sparse dictionaries from a training dataset. The algorithm is similar to the k-means clustering method and is designed to find a dictionary \(\boldsymbol {\Phi } \in \mathbb {R}^{n\times k}\) containing k elements that sparsely represent each of the training samples in \(\mathbf{U}_{n\times L}=[{\mathbf{u}}_1 \ldots {\mathbf{u}}_i \ldots {\mathbf{u}}_L]\). To achieve this goal, the algorithm attempts to solve the following minimization problem:
$$\displaystyle \begin{aligned} \hat{\mathbf{V}},\hat{\boldsymbol{\Phi}}={\text{argmin}}_{\mathbf{V},\boldsymbol{\Phi}}\quad {\sum_{i=1}^{L}{\lVert {\mathbf{u}}_i- \boldsymbol{\Phi}{\mathbf{v}}_i \rVert}_2^2}\quad \quad \text{s.t.,}\quad \quad {\lVert {\mathbf{v}}_i \rVert}_0\leq S \quad \text{for} \quad i\in1:L\end{aligned} $$
where \(\mathbf{V}_{k\times L}=[{\mathbf{v}}_1 \ldots {\mathbf{v}}_i \ldots {\mathbf{v}}_L]\) are the expansion coefficients corresponding to the training data. Given the NP-hard nature of the problem, the k-SVD algorithm uses a heuristic greedy solution technique by dividing the above optimization problem into two subproblems: (i) sparse coding and (ii) dictionary update. In the sparse coding step, for the current dictionary, a basis pursuit algorithm is used to find the sparse representation for each member of the training dataset. In the dictionary update step, the sparse representation obtained in the first step is fixed and the dictionary elements are updated to reduce the sparse approximation error. These two steps are repeated until convergence. Table 2 summarizes the k-SVD algorithm. Further details about the k-SVD algorithm may be found in [2]. We note that for high-dimensional training data the k-SVD dictionary learning can be computationally expensive. The computational complexity of each iteration of k-SVD is \(O(L(2nk + S^2k + 7Sk + S^3 + 4Sn) + 5nk^2)\), where S is the sparsity level. One strategy to improve the computational efficiency of the algorithm is to use segmentation or approximate low-rank representations of the training data (to reduce n).
Table 2

k-SVD algorithm

Initialization: Initialize dictionary with \(\boldsymbol {\Phi }^{(0)} \in \mathbb {R}^ {n \times k}\). Set j = 1.

REPEAT until stopping criterion is met

a. Sparse Coding Step:

-Using a pursuit algorithm (e.g. OMP) compute \({\mathbf {V}}_{k \times L}^{(j)}=[{\mathbf {v}}_1 {\mathbf {v}}_2\ldots {\mathbf {v}}_L]\) as the solution of

\({\mathbf {V}}^{(j)}={\text{argmin}}_{{\mathbf {v}}_i} \quad {\lVert {\mathbf {u}}_i- \boldsymbol {\Phi }^{(j-1)}{\mathbf {v}}_i \rVert }_2^2\quad \quad \text{s.t.,}\quad \quad {\lVert {\mathbf {v}}_i \rVert }_0\leq S \quad \text{for} \quad i\in 1:L\)

b. Dictionary Update Step:

For each column c = 1, 2, …, k in Φ(j−1)

-Define the group of prior model instances that use this element

ωc = {i|1 ≤ i ≤ L, V(j)(c, i) ≠ 0}

-Compute the residual matrix \(\mathbf {{E}}_c=\mathbf {{U}}-\sum _{i\neq c}^{}\boldsymbol {\phi }_i{{\mathbf {v}}_i}^{T}\), where \({{\mathbf {v}}_i}^{T}\) is the ith row of V(j)

-Restrict Ec by choosing columns corresponding to ωc , i.e. find \(\mathbf {{E}}_c^{\omega }\)

-Apply a rank-1 SVD approximation \(\mathbf {{E}}_c^{\omega }=\mathbf {{A}}\boldsymbol {\Delta }\mathbf {{B}}^{T}\)

-Update the dictionary element ϕc = a1 and the corresponding sparse coefficients by \({\mathbf {v}}_c^{\omega }=\boldsymbol {\Delta }(1,1)\,{\mathbf {b}}_1\)
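The two alternating steps in Table 2 can be sketched in a few dozen lines of NumPy. This is a minimal illustration under our own assumptions, not the authors' implementation: the OMP routine, the random-sample initialization, and all function and variable names (`omp`, `ksvd`, etc.) are illustrative choices.

```python
import numpy as np

def omp(Phi, u, S):
    """Orthogonal Matching Pursuit (sparse coding step): greedily select
    up to S atoms of Phi and least-squares fit on the selected support."""
    residual = u.copy()
    support = []
    v = np.zeros(Phi.shape[1])
    for _ in range(S):
        # pick the atom most correlated with the current residual
        c = int(np.argmax(np.abs(Phi.T @ residual)))
        if c not in support:
            support.append(c)
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], u, rcond=None)
        residual = u - Phi[:, support] @ coeffs
    v[support] = coeffs
    return v

def ksvd(U, k, S, n_iter=10, seed=0):
    """k-SVD: alternate sparse coding (OMP) with per-atom rank-1 updates."""
    n, L = U.shape
    rng = np.random.default_rng(seed)
    # initialize the dictionary with random training samples, unit-normalized
    Phi = U[:, rng.choice(L, k, replace=False)].astype(float)
    Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)
    for _ in range(n_iter):
        V = np.column_stack([omp(Phi, U[:, i], S) for i in range(L)])
        for c in range(k):
            omega = np.nonzero(V[c, :])[0]   # training samples using atom c
            if omega.size == 0:
                continue
            # residual without atom c, restricted to the columns in omega
            E = U[:, omega] - Phi @ V[:, omega] + np.outer(Phi[:, c], V[c, omega])
            A, s, Bt = np.linalg.svd(E, full_matrices=False)
            Phi[:, c] = A[:, 0]              # rank-1 update of the atom
            V[c, omega] = s[0] * Bt[0, :]    # and of its coefficients
    return Phi, V
```

Note that updating the atom and its coefficients jointly from the rank-1 SVD keeps the supports found in the sparse coding step fixed, exactly as in the dictionary update step of Table 2.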


Appendix 2: IRLS Algorithm

We use the IRLS algorithm [14] to solve the ℓ1-norm regularized least-squares minimization problem, that is:
$$\displaystyle \begin{aligned} \underset{\mathbf{v}}{\text{min}} \quad J(\mathbf{v})={\lVert \mathbf{v} \rVert}_1 + \lambda^2{\lVert \mathbf{d}- \mathbf{g}(\boldsymbol{\Phi}\mathbf{v}) \rVert}_2^2\end{aligned} $$
At iteration n of the IRLS algorithm, the ℓ1-norm is approximated by a weighted ℓ2-norm as follows:
$$\displaystyle \begin{aligned} \underset{{\mathbf{v}}^{(n)}}{\text{min}} \quad J({\mathbf{v}}^{(n)})=\sum_{i}^{}w_i^{(n)}{v_i^{(n)}}^2+ \lambda^2{\lVert \mathbf{d}- \mathbf{g}(\boldsymbol{\Phi}{\mathbf{v}}^{(n)}) \rVert}_2^2\end{aligned} $$
where \(w_i^{(n)}=\frac {1}{({v_i^{(n-1)}}^2+\epsilon ^{(n)})^{0.5}}\), the superscript (n) denotes iteration n, and 𝜖(n) is a sequence of small positive numbers that converges to zero as n increases. Using this approximation of the objective function, together with a first-order Taylor expansion of g( Φv(n)), the objective function in (33) takes the form:
$$\displaystyle \begin{aligned} \underset{{\mathbf{v}}^{(n)}}{\text{min}} \quad J({\mathbf{v}}^{(n)})=\sum_{i}^{}w_i^{(n)}{v_i^{(n)}}^2+ \lambda^2{\lVert \mathbf{d}- \mathbf{g}(\boldsymbol{\Phi}{\mathbf{v}}^{(n-1)})- {{\mathbf{G}}_{\mathbf{v}}}^{(n)}({\mathbf{v}}^{(n)}-{\mathbf{v}}^{(n-1)}) \rVert}_2^2 \end{aligned} $$
Here, Gv(n) is the Jacobian matrix of g(.) with respect to v at v = v(n−1). The updated solution at iteration n can be easily found by taking the derivative of the above convex function w.r.t. v(n) and setting it to zero.
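For intuition, here is a minimal NumPy sketch of this IRLS loop for the special case of a *linear* forward model g(v) = Gv, so that the Jacobian is constant and no Taylor relinearization is needed; the function name, the 𝜖 decay schedule, and the parameter defaults are our own illustrative assumptions, not the chapter's implementation.

```python
import numpy as np

def irls_l1(G, d, lam=1.0, n_iter=50, eps0=1.0):
    """IRLS for  min ||v||_1 + lam^2 ||d - G v||_2^2  with linear g(v) = G v.
    Each iteration solves the normal equations of the weighted-l2 surrogate:
        (W + lam^2 G^T G) v = lam^2 G^T d,   W = diag(1/sqrt(v_prev^2 + eps))."""
    n = G.shape[1]
    v = np.zeros(n)
    eps = eps0
    GtG = G.T @ G
    Gtd = G.T @ d
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(v**2 + eps)        # weights from previous iterate
        v = np.linalg.solve(np.diag(w) + lam**2 * GtG, lam**2 * Gtd)
        eps = max(eps * 0.7, 1e-12)          # decreasing sequence, eps -> 0
    return v
```

With a nonlinear g(.), the same loop applies after replacing G by the Jacobian Gv(n) evaluated at v(n−1) and d by the linearized data misfit, as in the equation above.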

Appendix 3: Group-Sparsity Inversion

The objective function for group-sparsity regularization can be expressed as:
$$\displaystyle \begin{aligned} \underset{\mathbf{v}}{\text{min}} \quad J(\mathbf{v})=\sum_{i=1}^{p}{\lVert {\mathbf{v}}_i \rVert}_2 + \lambda^2{\lVert \mathbf{d}- \mathbf{g}(\boldsymbol{\Phi}\mathbf{v}) \rVert}_2^2\end{aligned} $$
where the notations are discussed in the text. At iteration n, using the Gauss-Newton method and the first-order Taylor series for g( Φv), the linearized version of the above function takes the form:
$$\displaystyle \begin{aligned} \underset{{\mathbf{v}}^{(n)}}{\text{min}} \quad J({\mathbf{v}}^{(n)})=\sum_{i=1}^{p}(\sum_{j=1}^{s_i}({v_i^{j}}^{(n)})^{2})^{\frac{1}{2}} + \lambda^2{\lVert \mathbf{d}- \mathbf{g}(\boldsymbol{\Phi}{\mathbf{v}}^{(n-1)})- {{\mathbf{G}}_{\mathbf{v}}}^{(n)}({\mathbf{v}}^{(n)}-{\mathbf{v}}^{(n-1)}) \rVert}_2^2\end{aligned} $$
where Gv(n) is the Jacobian matrix of g(v), and \({v_i^{j}}\) is the coefficient of the jth basis element in the ith group. Denoting Δd(n) = d −g( Φv(n−1)) + Gv(n)v(n−1), (36) can be simplified to:
$$\displaystyle \begin{aligned} \underset{{\mathbf{v}}^{(n)}}{\text{min}} \quad J({\mathbf{v}}^{(n)})=\sum_{i=1}^{p}(\sum_{j=1}^{s_i}({v_i^{j}}^{(n)})^{2})^{\frac{1}{2}} + \lambda^2{\lVert \boldsymbol{\Delta}{\mathbf{d}}^{(n)}-{{\mathbf{G}}_{\mathbf{v}}}^{(n)}{\mathbf{v}}^{(n)} \rVert}_2^2\end{aligned} $$
The derivative of the regularization term with respect to \({v_i^{j}}^{(n)}\) can be approximated as:
$$\displaystyle \begin{aligned} \frac{{v_i^{j}}^{(n)}}{(\sum_{k=1}^{s_i}({v_i^{k}}^{(n)})^{2})^{\frac{1}{2}}}\approx \frac{{v_i^{j}}^{(n)}}{(\sum_{k=1}^{s_i}({v_i^{k}}^{(n-1)})^{2}+{\epsilon_i}^{(n)})^{\frac{1}{2}}}\end{aligned} $$
where 𝜖i(n) is a small positive number that is used to avoid zero denominators. Note that \({v_i^{k}}^{(n)}\) in the denominator is approximated as \({v_i^{k}}^{(n-1)}\). Choosing 𝜖 such that 0 < 𝜖i(n) < 𝜖i(n−1) and \( \underset {n\rightarrow \infty }{\text{lim}}{\epsilon _i}^{(n)}=0\), it can be shown that this approximation does not change the solution of the original minimization problem. The iterative solution of (37) can now be derived as:
$$\displaystyle \begin{aligned} ( \boldsymbol{\Lambda}^{(n)}+\alpha { {{\mathbf{G}}_{\mathbf{v}}}^{(n)}}^{T} {{\mathbf{G}}_{\mathbf{v}}}^{(n)}) {\mathbf{v}}^{(n)} = \alpha { {{\mathbf{G}}_{\mathbf{v}}}^{(n)}}^{T}\boldsymbol{\Delta}{\mathbf{d}}^{(n)}\end{aligned} $$
where α = 2λ2, and Λ(n) is a diagonal matrix with diagonal entries \(\frac {1}{(\sum _{k=1}^{s_i}({v_i^{k}}^{(n-1)})^{2}+{\epsilon _i}^{(n)})^{\frac {1}{2}}}\).
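The iteration in the last equation can likewise be sketched for a linear forward model g(v) = Gv (so Δd(n) = d and the Jacobian is constant); the function name, the group encoding as index arrays, and the 𝜖 schedule are our own illustrative assumptions.

```python
import numpy as np

def group_irls(G, d, groups, lam=1.0, n_iter=50, eps0=1.0):
    """Iteratively reweighted solver for
        min  sum_i ||v_i||_2 + lam^2 ||d - G v||_2^2   (linear g(v) = G v),
    using the update  (Lambda + alpha G^T G) v = alpha G^T d,  alpha = 2 lam^2.
    `groups` is a list of index arrays; each array selects one group of v."""
    n = G.shape[1]
    v = np.zeros(n)
    eps = eps0
    alpha = 2 * lam**2
    GtG = G.T @ G
    Gtd = G.T @ d
    for _ in range(n_iter):
        diag = np.empty(n)
        for g in groups:
            # one shared weight per group: 1 / sqrt(||v_i||_2^2 + eps)
            diag[g] = 1.0 / np.sqrt(np.sum(v[g]**2) + eps)
        v = np.linalg.solve(np.diag(diag) + alpha * GtG, alpha * Gtd)
        eps = max(eps * 0.7, 1e-12)          # decreasing sequence, eps -> 0
    return v
```

Because all coefficients in a group share one weight, entire groups are driven toward zero together, which is exactly the behavior that distinguishes the group-sparsity penalty from the elementwise ℓ1-norm of Appendix 2.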


References

  1. Aanonsen SI, Nævdal G, Oliver DS, Reynolds AC, Vallès B, et al (2009) The ensemble Kalman filter in reservoir engineering – a review. SPE Journal 14(03):393–412
  2. Aharon M, Elad M, Bruckstein A (2006) K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing 54(11):4311–4322
  3. Ahmed N, Natarajan T, Rao KR (1974) Discrete cosine transform. IEEE Transactions on Computers C-23(1):90–93
  4. Baraniuk RG (2007) Compressive sensing [lecture notes]. IEEE Signal Processing Magazine 24(4):118–121
  5. Berinde R, Gilbert AC, Indyk P, Karloff H, Strauss MJ (2008) Combining geometry and combinatorics: a unified approach to sparse signal recovery. In: 46th Annual Allerton Conference on Communication, Control, and Computing, IEEE, pp 798–805
  6. Bhark EW, Jafarpour B, Datta-Gupta A (2011) A generalized grid connectivity-based parameterization for subsurface flow model calibration. Water Resources Research 47(6)
  7. Blumensath T, Davies ME (2009) Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis 27(3):265–274
  8. Boyd S, Vandenberghe L (2004) Convex Optimization. Cambridge University Press
  9. Bracewell RN (1986) The Fourier Transform and Its Applications. McGraw-Hill, New York
  10. Candès EJ (2008) The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique 346(9–10):589–592
  11. Candès EJ, Wakin MB (2008) An introduction to compressive sampling. IEEE Signal Processing Magazine 25(2):21–30
  12. Carrera J, Neuman SP (1986) Estimation of aquifer parameters under transient and steady state conditions: 1. Maximum likelihood method incorporating prior information. Water Resources Research 22(2):199–210
  13. Chandrasekaran V, Recht B, Parrilo PA, Willsky AS (2012) The convex geometry of linear inverse problems. Foundations of Computational Mathematics 12(6):805–849
  14. Chartrand R, Yin W (2008) Iteratively reweighted algorithms for compressive sensing. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2008), pp 3869–3872
  15. Chen GH, Tang J, Leng S (2008) Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets. Medical Physics 35(2):660–663
  16. Chen S, Doolen GD (1998) Lattice Boltzmann method for fluid flows. Annual Review of Fluid Mechanics 30(1):329–364
  17. Chen SS, Donoho DL, Saunders MA (2001) Atomic decomposition by basis pursuit. SIAM Review 43(1):129–159
  18. Chen Y, Oliver DS (2012) Multiscale parameterization with adaptive regularization for improved assimilation of nonlocal observation. Water Resources Research 48(4)
  19. Chorin AJ (1968) Numerical solution of the Navier-Stokes equations. Mathematics of Computation 22(104):745–762
  20. Constantin P, Foias C (1988) Navier-Stokes Equations. University of Chicago Press
  21. Donoho DL (2006) Compressed sensing. IEEE Transactions on Information Theory 52(4):1289–1306
  22. Efendiev Y, Durlofsky L, Lee S (2000) Modeling of subgrid effects in coarse-scale simulations of transport in heterogeneous porous media. Water Resources Research 36(8):2031–2041
  23. Eldar YC, Kuppinger P, Bölcskei H (2010) Block-sparse signals: uncertainty relations and efficient recovery. IEEE Transactions on Signal Processing 58(6):3042–3054
  24. Engl HW, Hanke M, Neubauer A (1996) Regularization of Inverse Problems, vol 375. Springer Science & Business Media
  25. Feyen L, Caers J (2006) Quantifying geological uncertainty for flow and transport modeling in multi-modal heterogeneous formations. Advances in Water Resources 29(6):912–929
  26. Gavalas G, Shah P, Seinfeld JH, et al (1976) Reservoir history matching by Bayesian estimation. Society of Petroleum Engineers Journal 16(06):337–350
  27. Gholami A (2015) Nonlinear multichannel impedance inversion by total-variation regularization. Geophysics 80(5):R217–R224
  28. Golmohammadi A, Jafarpour B (2016) Simultaneous geologic scenario identification and flow model calibration with group-sparsity formulations. Advances in Water Resources 92:208–227
  29. Golmohammadi A, Khaninezhad MRM, Jafarpour B (2015) Group-sparsity regularization for ill-posed subsurface flow inverse problems. Water Resources Research 51(10):8607–8626
  30. Golub G, Kahan W (1965) Calculating the singular values and pseudo-inverse of a matrix. Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis 2(2):205–224
  31. Golub GH, Heath M, Wahba G (1979) Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 21(2):215–223
  32. Gómez-Hernández JJ, Sahuquillo A, Capilla J (1997) Stochastic simulation of transmissivity fields conditional to both transmissivity and piezometric data – I. Theory. Journal of Hydrology 203(1–4):162–174
  33. Grimstad AA, Mannseth T, Nævdal G, Urkedal H (2003) Adaptive multiscale permeability estimation. Computational Geosciences 7(1):1–25
  34. Hansen PC (1992) Analysis of discrete ill-posed problems by means of the L-curve. SIAM Review 34(4):561–580
  35. Hansen PC (1998) Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. SIAM
  36. Hill MC, Tiedeman CR (2006) Effective Groundwater Model Calibration: With Analysis of Data, Sensitivities, Predictions, and Uncertainty. John Wiley & Sons
  37. Jacquard P, et al (1965) Permeability distribution from field pressure data. Society of Petroleum Engineers Journal 5(04):281–294
  38. Jafarpour B, Tarrahi M (2011) Assessing the performance of the ensemble Kalman filter for subsurface flow data integration under variogram uncertainty. Water Resources Research 47(5)
  39. Jafarpour B, McLaughlin DB, et al (2009) Reservoir characterization with the discrete cosine transform. SPE Journal 14(01):182–201
  40. Jenatton R, Obozinski G, Bach F (2010) Structured sparse principal component analysis. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp 366–373
  41. Jolliffe IT (1986) Principal component analysis and factor analysis. In: Principal Component Analysis, Springer, pp 115–128
  42. Kandel ER, Schwartz JH, Jessell TM, Siegelbaum SA, Hudspeth AJ, et al (2000) Principles of Neural Science, vol 4. McGraw-Hill, New York
  43. Khaninezhad MM, Jafarpour B (2014) Prior model identification during subsurface flow data integration with adaptive sparse representation techniques. Computational Geosciences 18(1):3–16
  44. Khaninezhad MM, Jafarpour B, Li L (2012) Sparse geologic dictionaries for subsurface flow model calibration: Part I. Inversion formulation. Advances in Water Resources 39:106–121
  45. Khaninezhad MM, Jafarpour B, Li L (2012) Sparse geologic dictionaries for subsurface flow model calibration: Part II. Robustness to uncertainty. Advances in Water Resources 39:122–136
  46. Khodabakhshi M, Jafarpour B (2013) A Bayesian mixture-modeling approach for flow-conditioned multiple-point statistical facies simulation from uncertain training images. Water Resources Research 49(1):328–342
  47. Kitanidis PK (1997) Introduction to Geostatistics: Applications in Hydrogeology. Cambridge University Press
  48. Klema V, Laub A (1980) The singular value decomposition: its computation and some applications. IEEE Transactions on Automatic Control 25(2):164–176
  49. Landis EM (1934) Capillary pressure and capillary permeability. Physiological Reviews 14(3):404–481
  50. Lee J, Kitanidis P (2013) Bayesian inversion with total variation prior for discrete geologic structure identification. Water Resources Research 49(11):7658–7669
  51. Li L, Jafarpour B (2010) A sparse Bayesian framework for conditioning uncertain geologic models to nonlinear flow measurements. Advances in Water Resources 33(9):1024–1042
  52. Liu X, Kitanidis P (2011) Large-scale inverse modeling with an application in hydraulic tomography. Water Resources Research 47(2)
  53. Lochbühler T, Vrugt JA, Sadegh M, Linde N (2015) Summary statistics from training images as prior information in probabilistic inversion. Geophysical Journal International 201(1):157–171
  54. Luo J, Wang W, Qi H (2013) Group sparsity and geometry constrained dictionary learning for action recognition from depth maps. In: Proceedings of the IEEE International Conference on Computer Vision, pp 1809–1816
  55. Mallat SG (1989) A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 11(7):674–693
  56. Marvasti F, Azghani M, Imani P, Pakrouh P, Heydari SJ, Golmohammadi A, Kazerouni A, Khalili M (2012) Sparse signal processing using iterative method with adaptive thresholding (IMAT). In: 19th International Conference on Telecommunications (ICT), IEEE, pp 1–6
  57. Miller K (1970) Least squares methods for ill-posed problems with a prescribed bound. SIAM Journal on Mathematical Analysis 1(1):52–74
  58. Mohimani H, Babaie-Zadeh M, Jutten C (2009) A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm. IEEE Transactions on Signal Processing 57(1):289–301
  59. Mueller JL, Siltanen S (2012) Linear and Nonlinear Inverse Problems with Practical Applications. SIAM
  60. Murray CD, Dermott SF (1999) Solar System Dynamics. Cambridge University Press
  61. Needell D, Tropp JA (2009) CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis 26(3):301–321
  62. Oliver DS, Chen Y (2011) Recent progress on reservoir history matching: a review. Computational Geosciences 15(1):185–221
  63. Oliver DS, Reynolds AC, Liu N (2008) Inverse Theory for Petroleum Reservoir Characterization and History Matching. Cambridge University Press
  64. Patankar S (1980) Numerical Heat Transfer and Fluid Flow. CRC Press
  65. Peterson AF, Ray SL, Mittra R (1998) Computational Methods for Electromagnetics. IEEE Press, New York
  66. Resmerita E (2005) Regularization of ill-posed problems in Banach spaces: convergence rates. Inverse Problems 21(4):1303
  67. Riva M, Panzeri M, Guadagnini A, Neuman SP (2011) Role of model selection criteria in geostatistical inverse estimation of statistical data- and model-parameters. Water Resources Research 47(7)
  68. Rousset M, Durlofsky L (2014) Optimization-based framework for geological scenario determination using parameterized training images. In: ECMOR XIV – 14th European Conference on the Mathematics of Oil Recovery
  69. Rudin LI, Osher S, Fatemi E (1992) Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60(1–4):259–268
  70. Sarma P, Durlofsky LJ, Aziz K (2008) Kernel principal component analysis for efficient, differentiable parameterization of multipoint geostatistics. Mathematical Geosciences 40(1):3–32
  71. Shawe-Taylor J, Cristianini N (2004) Kernel Methods for Pattern Analysis. Cambridge University Press
  72. Shirangi MG (2014) History matching production data and uncertainty assessment with an efficient TSVD parameterization algorithm. Journal of Petroleum Science and Engineering 113:54–71
  73. Shirangi MG, Durlofsky LJ (2016) A general method to select representative models for decision making and optimization under uncertainty. Computers & Geosciences 96:109–123
  74. Snieder R (1998) The role of nonlinearity in inverse problems. Inverse Problems 14(3):387
  75. Strebelle S (2002) Conditional simulation of complex geological structures using multiple-point statistics. Mathematical Geology 34(1):1–21
  76. Suzuki S, Caers JK, et al (2006) History matching with an uncertain geological scenario. In: SPE Annual Technical Conference and Exhibition, Society of Petroleum Engineers
  77. Talukder KH, Harada K (2010) Haar wavelet based approach for image compression and quality assessment of compressed image. arXiv preprint arXiv:1010.4084
  78. Tarantola A (2005) Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM
  79. Tarantola A, Valette B (1982) Generalized nonlinear inverse problems solved using the least squares criterion. Reviews of Geophysics 20(2):219–232
  80. Taubman D, Marcellin M (2012) JPEG2000 Image Compression Fundamentals, Standards and Practice, vol 642. Springer Science & Business Media
  81. Tikhonov A, Arsenin VY (1979) Methods of solving incorrect problems
  82. Tosic I, Frossard P (2011) Dictionary learning. IEEE Signal Processing Magazine 28(2):27–38
  83. Tropp JA, Gilbert AC (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory 53(12):4655–4666
  84. Vo HX, Durlofsky LJ (2014) A new differentiable parameterization based on principal component analysis for the low-dimensional representation of complex geological models. Mathematical Geosciences 46(7):775–813
  85. Vogel CR (2002) Computational Methods for Inverse Problems. SIAM
  86. Vrugt JA, Stauffer PH, Wöhling T, Robinson BA, Vesselinov VV (2008) Inverse modeling of subsurface flow and transport properties: a review with new developments. Vadose Zone Journal 7(2):843–864
  87. Yeh WWG (1986) Review of parameter identification procedures in groundwater hydrology: the inverse problem. Water Resources Research 22(2):95–108
  88. Zhou H, Gómez-Hernández JJ, Li L (2012) A pattern-search-based inverse method. Water Resources Research 48(3)
  89. Zhou H, Gómez-Hernández JJ, Li L (2014) Inverse methods in hydrogeology: evolution and recent trends. Advances in Water Resources 63:22–37
  90. Zimmerman D, de Marsily G, Gotway CA, Marietta MG, Axness CL, Beauheim RL, Bras RL, Carrera J, Dagan G, Davies PB, et al (1998) A comparison of seven geostatistically based inverse approaches to estimate transmissivities for modeling advective transport by groundwater flow. Water Resources Research 34(6):1373–1413

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Azarang Golmohammadi (1)
  • M-Reza M. Khaninezhad (1)
  • Behnam Jafarpour (1, 2)

  1. Ming Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, USA
  2. Mork Family Department of Chemical Engineering and Material Science, University of Southern California, Los Angeles, USA
