Variational Models for Color Image Correction Inspired by Visual Perception and Neuroscience

Journal of Mathematical Imaging and Vision

Abstract

Reproducing the perception of a real-world scene on a display device is a challenging task that requires an understanding of the camera processing pipeline, the display process, and the way the human visual system processes the light it captures. Mathematical models based on psychophysical and physiological laws of color vision, known as Retinex, provide efficient tools to handle degradations produced during the camera processing pipeline, such as the reduction of contrast. In particular, Batard and Bertalmío (in J Math Imaging Vis 60(6):849–881, 2018) described some psychophysical laws of brightness perception as covariant derivatives, included them in a variational model, and observed that the quality of the color image correction is correlated with the accuracy of the vision model it includes. Based on this observation, we postulate that this model can be improved by including more accurate data on vision, with special attention paid here to visual neuroscience. Then, inspired by the presence in area V1 of the visual cortex of neurons responding to different visual attributes, such as orientation, color, or movement, and by the horizontal connections modeling the interactions between those neurons, we construct two variational models to process both local (edges, textures) and global (contrast) features. This is an improvement over the model of Batard and Bertalmío, as the latter cannot process local and global features independently and simultaneously. Finally, we conduct experiments on color images which corroborate the improvement provided by the new models.

[Figures 1–6 omitted.]


References

  1. Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program. 137(1–2, Ser. A), 91–129 (2013)

  2. Banert, S., Bot, R.I.: A general double-proximal gradient algorithm for DC programming. Math. Program. 178(1–2, Ser. A), 301–326 (2019)

  3. Bot, R.I., Csetnek, E.R., Nguyen, D.-K.: A proximal minimization algorithm for structured nonconvex and nonsmooth problems. SIAM J. Optim. 29(2), 1300–1328 (2019)

  4. Batard, T., Sochen, N.: A class of generalized Laplacians devoted to multi-channel image processing. J. Math. Imaging Vis. 48(3), 517–543 (2014)

  5. Batard, T., Bertalmío, M.: A class of nonlocal variational problems on a vector bundle for color image local contrast reduction/enhancement. Geom. Imaging Comput. 2(3), 187–236 (2015)

  6. Batard, T., Bertalmío, M.: A geometric model of brightness perception and its application to color images correction. J. Math. Imaging Vis. 60(6), 849–881 (2018)

  7. Batard, T., Ramon, E., Steidl, G., Bertalmío, M.: A connection between image processing and artificial neural networks layers through a geometric model of visual perception. In: Lellmann, J., Burger, M., Modersitzki, J. (eds.) Scale Space and Variational Methods in Computer Vision. Lecture Notes in Computer Science, vol. 11603, pp. 459–471. Springer, Berlin (2019)

  8. Bertalmío, M., Caselles, V., Provenzi, E., Rizzi, A.: Perceptual color correction through variational techniques. IEEE Trans. Image Process. 16(4), 1058–1072 (2007)

  9. Bertalmío, M., Cowan, J.D.: Implementing the Retinex algorithm with Wilson–Cowan equations. J. Physiol.-Paris 103(1–2), 69–72 (2009)

  10. Bertalmío, M., Caselles, V., Provenzi, E.: Issues about Retinex theory and contrast enhancement. Int. J. Comput. Vis. 83(1), 101–119 (2009)

  11. Bertalmío, M.: Image Processing for Cinema. Chapman & Hall/CRC Press, London (2014)

  12. Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 146(1–2, Ser. A), 459–494 (2014)

  13. Bressloff, P.C., Cowan, J.D.: An amplitude equation approach to contextual effects in visual cortex. Neural Comput. 14(3), 493–525 (2002)

  14. Chambolle, A., Pock, T.: An introduction to continuous optimization for imaging. Acta Numer. 25, 161–319 (2016)

  15. Chossat, P., Faugeras, O.: Hyperbolic planforms in relation to visual edges and textures perception. PLoS Comput. Biol. 5(12), 1–16 (2009)

  16. Cowan, J.D., Bressloff, P.C.: Visual cortex and the Retinex algorithm. In: Proceedings of SPIE, vol. 4662, Human Vision and Electronic Imaging VII (2002)

  17. Cyriac, P., Batard, T., Bertalmío, M.: A nonlocal variational formulation for the improvement of tone mapped images. SIAM J. Imaging Sci. 7(4), 2340–2363 (2014)

  18. Fairchild, M.D., Pirrotta, E.: Predicting the lightness of chromatic object colors using CIELAB. Color Res. Appl. 16(6), 385–393 (1991)

  19. Ferradans, S., Bertalmío, M., Provenzi, E., Caselles, V.: An analysis of visual adaptation and contrast perception for tone mapping. IEEE Trans. Pattern Anal. Mach. Intell. 33(10), 2002–2012 (2011)

  20. Förstner, W., Gülch, E.: A fast operator for detection and precise location of distinct points, corners and centres of circular features. In: Proceedings of the ISPRS Intercommission Conference on Fast Processing of Photogrammetric Data, pp. 281–305 (1987)

  21. Georgiev, T.: Relighting, Retinex theory, and perceived gradients. In: Proceedings of Mirage (2005)

  22. Getreuer, P.: Automatic color enhancement (ACE) and its fast implementation. IPOL J. Image Process. On Line 2, 266–277 (2012)

  23. Hubel, D.H., Wiesel, T.N.: Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. 160(1), 106–154 (1962)

  24. Hubel, D.H.: Eye, Brain and Vision. Scientific American Library. W.H. Freeman & Co., New York (1988)

  25. Hurvich, L.M., Jameson, D.: An opponent-process theory of color vision. Psychol. Rev. 64(6), 384–404 (1957)

  26. Johnson, E.N., Hawken, M.J., Shapley, R.: The orientation selectivity of color-responsive neurons in macaque V1. J. Neurosci. 28(32), 8096–8106 (2008)

  27. Land, E., McCann, J.J.: Lightness and Retinex theory. J. Opt. Soc. Am. 61(1), 1–11 (1971)

  28. Land, E.: The Retinex theory of color vision. Sci. Am. 237, 108–128 (1977)

  29. Nikolova, M., Steidl, G.: Fast hue and range preserving histogram specification: theory and new algorithms for color image enhancement. IEEE Trans. Image Process. 23(9), 4087–4100 (2014)

  30. Pierre, F., Aujol, J.-F., Bugeau, A., Ta, V.-T.: Luminance-hue specification in the RGB space. In: Aujol, J.-F., Nikolova, M., Papadakis, N. (eds.) Scale Space and Variational Methods in Computer Vision. Lecture Notes in Computer Science, vol. 9087, pp. 413–424. Springer, Berlin (2015)

  31. Pierre, F., Aujol, J.-F., Bugeau, A., Steidl, G., Ta, V.-T.: Variational contrast enhancement of gray-scale and RGB images. J. Math. Imaging Vis. 57(1), 99–116 (2017)

  32. Provenzi, E., De Carli, L., Rizzi, A., Marini, D.: Mathematical definition and analysis of the Retinex algorithm. J. Opt. Soc. Am. A 22(12), 2613–2621 (2005)

  33. Reinhard, E., Ward, G., Pattanaik, S.N., Debevec, P.E., Heidrich, W., Myszkowski, K.: High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann, San Francisco (2010)

  34. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Grundlehren der Mathematischen Wissenschaften, vol. 317. Springer, Berlin (1998)

  35. Pock, T., Sabach, S.: Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems. SIAM J. Imaging Sci. 9(4), 1756–1787 (2016)

  36. Song, A., Faugeras, O., Veltz, R.: A neural field model for color perception unifying assimilation and contrast. PLoS Comput. Biol. 15(6), e1007050 (2019)

  37. Tao, P.D., An, L.T.H.: Convex analysis approach to d.c. programming: theory, algorithms, and applications. Acta Math. Vietnam. 22(1), 289–355 (1997)

  38. Toland, J.F.: A duality principle for non-convex optimisation and the calculus of variations. Arch. Ration. Mech. Anal. 71(1), 41–61 (1979)

  39. Wilson, H.R., Cowan, J.D.: Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12(1), 1–24 (1972)

  40. Wilson, H.R., Cowan, J.D.: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biol. Cybern. 13(2), 55–80 (1973)

  41. Yeonan-Kim, J., Bertalmío, M.: Analysis of retinal and cortical components of Retinex algorithms. J. Electron. Imaging 26(3), 031208 (2017)


Acknowledgements

The authors thank the anonymous reviewers for helpful remarks and suggestions.

Author information

Corresponding author

Correspondence to Thomas Batard.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A. Covariant Derivatives and Visual Perception

1.1 A.1. Interpretation of the Kernel-Based Retinex Model

The original Retinex algorithm [28] and the Kernel-based Retinex [10] can be formulated as follows: the perceived color at a given pixel of the image results from a weighted average of the perceptual differences between that pixel and the other pixels of the image domain. This suggests that the key object in color perception is the perceived gradient, and that the accuracy of the estimation of the perceived colors depends on the accuracy of the estimation of the perceived gradient.

More precisely, given an RGB color image \(a=(a^1,a^2,a^3) :\Omega \subset \mathbb {R}^2 \longrightarrow \mathbb {R}^3\), the perceived image \(L=(L^1,L^2,L^3)\) is, according to Kernel-based Retinex [10], given for \(k=1,2,3\) by

$$\begin{aligned} L^k(x)&= \int _{y:a^k(y) \ge a^k(x)} w(x,y) \left[ A \log \left( \dfrac{a^k(x)}{a^k(y)} \right) +1 \right] \mathrm{d}y \\&+ \int _{y: a^k(y)<a^k(x)} w(x,y) \, \mathrm{d}y, \end{aligned}$$

where w is a Gaussian kernel and A is a constant. This expression can be rewritten as

$$\begin{aligned} L^k(x)&= \int _{y \in \Omega } w(x,y) \, \zeta \, ( \log [a^k(x)] - \log [a^k(y)] ) \, \mathrm{d}y \\&= \int _{y \in \Omega } w(x,y) \, \zeta \left( \int _{\gamma _{y,x}} d_{\gamma _{y,x}'(t)} \log (a^k)(\gamma _{y,x}(t)) \mathrm{d}t \right) \! \mathrm{d}y \end{aligned}$$

for some nonlinear function \(\zeta \), and for any path \(\gamma _{y,x}\) joining y and x.
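For illustration, the first displayed formula can be implemented directly. The following sketch is ours and makes several assumptions not fixed by the text: the channel a takes values in (0, 1], the kernel w is a normalized Gaussian, and the \(O(N^2)\) double loop is kept naive for readability.

```python
import numpy as np

def kernel_retinex_channel(a, sigma=20.0, A=1.0):
    """Naive O(N^2) sketch of the Kernel-based Retinex formula for one
    channel a : Omega -> (0, 1].  For each pixel x, pixels y brighter
    than x contribute A*log(a(x)/a(y)) + 1, darker ones contribute 1,
    all weighted by a (normalized) Gaussian kernel w(x, y)."""
    h, w = a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    vals = a.ravel()
    L = np.empty_like(vals)
    for i in range(vals.size):
        d2 = ((coords - coords[i]) ** 2).sum(axis=1)
        wk = np.exp(-d2 / (2.0 * sigma ** 2))
        wk /= wk.sum()                    # normalization is our choice
        brighter = vals >= vals[i]        # y with a(y) >= a(x)
        contrib = np.where(brighter, A * np.log(vals[i] / vals) + 1.0, 1.0)
        L[i] = (wk * contrib).sum()
    return L.reshape(h, w)
```

On a constant image every pixel is "brighter or equal", the log term vanishes, and L is identically 1, which is consistent with the formula.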

Then, it turns out that the quantity \(\mathrm{d} \log (a^k) = \mathrm{d}a^k/a^k\) can be interpreted as the perceived gradient of the image, according to Weber's law in vision, and the quantity

$$\begin{aligned} \int _{\gamma _{y,x}} \mathrm{d}_{\gamma _{y,x}'(t)} \log (a^k)(\gamma _{y,x}(t)) \, \mathrm{d}t \end{aligned}$$
(34)

as the induced perceptual difference between x and y.

Indeed, given a uniform background of intensity \(\mathcal {I}\), Weber’s law states that the following equality holds

$$\begin{aligned} \dfrac{\delta \, \mathcal {I}}{\mathcal {I}}=c, \end{aligned}$$

where \(\delta \mathcal {I}\) is the minimum increment of \(\mathcal {I}\) at which the human visual system can distinguish \(\mathcal {I}\) from \(\mathcal {I}+\delta \mathcal {I}\), and c is a constant. Hence, Weber’s law shows that human sensitivity to an intensity increment depends on the intensity of the background. In particular, human perception is more sensitive to intensity changes on dark backgrounds than on bright ones.
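The law above can be illustrated numerically. This is a minimal sketch of ours; the Weber fraction c = 0.02 is an illustrative value, not a figure taken from the paper.

```python
# Weber's law: the just-noticeable increment delta_I scales with the
# background intensity I, i.e. delta_I / I = c for a constant c.
# c = 0.02 below is an illustrative Weber fraction (our assumption).
c = 0.02

def jnd(I, weber_fraction=c):
    """Minimum increment delta_I such that I and I + delta_I are
    distinguishable, according to Weber's law."""
    return weber_fraction * I

# On a dark background (I = 10) a much smaller absolute change is
# already visible than on a bright one (I = 100).
dark, bright = jnd(10.0), jnd(100.0)
```

The ratio delta_I / I stays constant while the absolute threshold delta_I grows with I, which is exactly the asymmetry between dark and bright backgrounds described above.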

1.2 A.2. Equivariance Property of the Perceived Gradient

Based on the assumption that the color constancy property of the HVS comes from an invariance property of the perceived gradient with respect to lighting changes, and identifying the latter with moving frame changes on a vector bundle, Georgiev [21] suggested that a covariant derivative is a good candidate to describe the perceived gradient, due to the invariance of this differential operator with respect to moving frame changes. More precisely, given a Lie group G, a G-associated vector bundle E, and a covariant derivative \(\nabla :=d+\omega \) on E, we have \(\nabla (\mathcal {G} a )= \mathcal {G} \, \nabla (a)\) for any G-valued moving frame \(\mathcal {G}\) and section a of E.

The perceptual difference formula (34) can then be modified by means of a covariant derivative, which gives

$$\begin{aligned} \int _{\gamma _{y,x}} \nabla _{\gamma _{y,x}'(t)} a(\gamma _{y,x}(t)) \, \mathrm{d}t = \tau _{y,x,\gamma _{y,x}} a(y) -a(x). \end{aligned}$$
(35)

Note that the quantity (35) is independent of the path \(\gamma _{y,x}\) provided that \(\nabla \) is flat.

1.3 A.3. A Perceptual Law Derived from a Covariant Derivative Compatible with the Metric: Helmholtz–Kohlrausch Effect

The Helmholtz–Kohlrausch effect is a color appearance phenomenon: the brightness of a color depends not only on its achromatic component but also on its chromatic component. More precisely, chromatic colors appear brighter than achromatic colors of the same luminance, and some hues appear brighter than others.

Given a color image \(a=(a^1,a^2,a^3)\), let us consider the SO(h)-associated vector bundle \(\Omega \times \mathbb {R}^3 \longrightarrow \Omega \), for some metric h, and the connection 1-form \(\omega _a\) given by

$$\begin{aligned} \left( \begin{array}{ccc} 0 &{} \dfrac{a^1 d a^2 - a^2 d a^1}{\alpha + \Vert a\Vert _h^2} &{} \dfrac{a^1 d a^3 - a^3 d a^1}{\alpha + \Vert a \Vert _h^2} \\ - \dfrac{a^1 d a^2 - a^2 d a^1}{\alpha + \Vert a\Vert _h^2} &{} 0 &{} \dfrac{a^2 d a^3 - a^3 d a^2}{\alpha + \Vert a \Vert _h^2} \\ - \dfrac{a^1 d a^3 - a^3 d a^1}{\alpha + \Vert a\Vert _h^2} &{} - \dfrac{a^2 d a^3 - a^3 d a^2}{\alpha + \Vert a\Vert _h^2} &{} 0 \end{array} \right) , \end{aligned}$$
(36)

for \(\alpha \ge 0\). Having \(\omega _a \in \varGamma (T^{*} \Omega \otimes \mathfrak {so}(h))\) makes the corresponding covariant derivative \(\nabla \) compatible with h.
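For illustration, the matrix of the connection 1-form (36) evaluated on a tangent vector can be assembled as follows. This sketch is ours and assumes the Euclidean metric h, so that \(\Vert a\Vert _h^2 = \sum _i (a^i)^2\) and \(\mathfrak {so}(h) = \mathfrak {so}(3)\); in practice da would come from finite differences of the image.

```python
import numpy as np

def omega_a(a, da, alpha=1.0):
    """Matrix of the connection 1-form (36) evaluated on a tangent
    vector, for the Euclidean metric h.  `a` is the color at a pixel,
    `da` the (finite-difference) derivative of a in some direction.
    Entry (i, j) is (a^i da^j - a^j da^i) / (alpha + ||a||^2)."""
    a, da = np.asarray(a, float), np.asarray(da, float)
    denom = alpha + a @ a
    return (np.outer(a, da) - np.outer(da, a)) / denom
```

The matrix is antisymmetric by construction, i.e., it indeed takes values in \(\mathfrak {so}(3)\), as required for compatibility with the (Euclidean) metric.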

In the limit case \(\alpha =0\), it has been shown in [6] that the corresponding covariant derivative \(\nabla \) is flat, from which follows the existence of a moving frame P in which \(\omega _a\) vanishes. More precisely, denoting by \((r \sin \theta \cos \varphi , r \sin \theta \sin \varphi , r \cos \theta )\) the spherical coordinates of a, P is of the form (37).

[The expression (37) of the moving frame P is omitted here.]

Assume now that the coordinates \((a^1,a^2,a^3)\) correspond to the CIE \(L^{*}a^{*}b^{*}\) components of a, and that the metric h is given by

$$\begin{aligned} \left( \begin{array}{lll} 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad \xi ^2 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \xi ^2 \end{array} \right) \end{aligned}$$

in this frame, where

$$\begin{aligned} \xi = (2.5-0.0025L^{*}) \left( 0.116 \left| \, \sin \left( \dfrac{H^{*}-90}{2} \right) \right| + 0.085 \right) , \end{aligned}$$

with \(H^{*}\) denoting the hue component. Then, the quantity

$$\begin{aligned} r = \Vert a \Vert _h = \sqrt{(L^{*}(a))^2 + \xi ^2 \left( {(a^{*}(a)})^2 + ({b^{*}(a)})^2 \right) } \end{aligned}$$
(38)

can be identified with the brightness defined by Fairchild and Pirrotta [18], which takes into account the Helmholtz–Kohlrausch effect.
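The weight \(\xi \) and the brightness (38) transcribe directly into code. The sketch below is ours; in particular, computing the hue \(H^{*}\) as the angle of \((a^{*}, b^{*})\) in degrees is our assumption, as is the function name.

```python
import math

def hk_brightness(L_star, a_star, b_star):
    """Brightness (38) of a CIELAB color, including the
    Helmholtz-Kohlrausch effect through the hue-dependent weight xi."""
    # Hue angle H* in degrees, measured in the (a*, b*) plane
    # (our convention for recovering H* from the coordinates).
    H = math.degrees(math.atan2(b_star, a_star)) % 360.0
    # xi as given in the text; the sine argument is in degrees.
    xi = (2.5 - 0.0025 * L_star) * (
        0.116 * abs(math.sin(math.radians((H - 90.0) / 2.0))) + 0.085)
    # r = ||a||_h, Eq. (38).
    return math.sqrt(L_star ** 2 + xi ** 2 * (a_star ** 2 + b_star ** 2))
```

For an achromatic color \((a^{*}, b^{*}) = (0,0)\) the brightness reduces to \(L^{*}\), while any chromatic component strictly increases it, which is the Helmholtz–Kohlrausch effect in this model.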

As a consequence, the perceptual difference (35), which is given here by

$$\begin{aligned} P^{-1}(y) \, a(y) - P^{-1}(x) \, a(x) \end{aligned}$$

in the frame P, can be identified with the brightness difference between a(y) and a(x), since \(P^{-1}a = (r,0,0)\).

Appendix B. Proof of Theorem 1

Semi-algebraic functions are functions whose graph is a semi-algebraic set, i.e., a finite union of finite intersections of sets of the form \(P(x) =0\) and \(Q(x) <0\), where P and Q are polynomials. Recall that semi-algebraic functions fulfill the Kurdyka–Łojasiewicz (KŁ) property, see [12]. In particular, the function \(\varPhi \) in (31) satisfies the KŁ property. Let us assume that, after discretization, \(\varPhi : {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2} \rightarrow {\mathbb {R}} \cup \{ +\infty \}\).

The following theorem can be found in [1, Theorem 2.9].

Theorem 2

Let \(f:{\mathbb {R}}^d \rightarrow {\mathbb {R}} \cup \{\infty \}\) fulfill the KŁ  property. Let \(\{ x^{n}\}_{n \in \mathbb N}\) be a sequence which fulfills the following conditions:

  1. (i)

    There exists \(K_1>0\) such that \(f(x^{n+1}) - f(x^{n}) \le - K_1 \Vert x^{n+1} - x^{n} \Vert ^2\) for every \(n \in \mathbb N\).

  2. (ii)

    There exists \(K_2>0\) such that for every \(n\in \mathbb N\) there exists \(w_{n+1} \in \partial _L f(x^{n+1})\) with \(\Vert w_{n+1} \Vert \le K_2 \Vert x^{n+1} - x^{n} \Vert \), where \(\partial _L f\) denotes the Fréchet limiting subdifferential of f.

  3. (iii)

    There exists a convergent subsequence \(\{ x^{n_j} \}_{j \in \mathbb N}\) with limit \(\hat{x}\) and \(f(x^{n_j}) \rightarrow f(\hat{x})\).

Then, the whole sequence \(\{ x^{n} \}_{n \in \mathbb N}\) converges to a point \(\hat{x}\) which fulfills \(0 \in \partial _L f(\hat{x})\).

The following proof partially relies on arguments from [2].

Proof of Theorem 1

We show that the function \(\varPhi : {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2} \rightarrow {\mathbb {R}} \cup \{\infty \}\) in (31) and the sequence \(\{(a^n,\eta ^n)\}_{n \in \mathbb N}\) generated by Algorithm 1 fulfill the properties (i)–(iii) of Theorem 2.

(i) By the variational inequality of the proximal operator, it holds

$$\begin{aligned} G(a^{n+1})-G(a)&\le \frac{1}{\tau }\langle a^n + \tau K^*\eta ^n-a^{n+1},a^{n+1}-a\rangle \\&=\frac{1}{\tau }\langle a^n-a^{n+1},a^{n+1}-a\rangle \\&\quad +\langle K^*\eta ^n,a^{n+1}-a\rangle ,\\ F^{*}(\eta ^{n+1})-F^{*}(\eta )&\le \frac{1}{\sigma }\langle \eta ^n+(1+\theta )\sigma Ka^{n+1}-\theta \sigma Ka^n \\&\quad -\eta ^{n+1},\eta ^{n+1}-\eta \rangle \\&=\frac{1}{\sigma }\langle \eta ^n-\eta ^{n+1},\eta ^{n+1}-\eta \rangle \\&\quad +\langle (1+\theta ) Ka^{n+1}-\theta Ka^n,\eta ^{n+1}-\eta \rangle . \end{aligned}$$

Choosing \(a=a^n\) and \(\eta =\eta ^n\), this yields

$$\begin{aligned} \varPhi (a^{n+1},\eta ^n)-\varPhi (a^n,\eta ^n)&=G(a^{n+1})-G(a^n) \\&\quad +\langle K^*\eta ^n,a^n-a^{n+1}\rangle \\&\le -\frac{1}{\tau }\Vert a^n-a^{n+1}\Vert _2^2. \end{aligned}$$

Since \(2uv\le \alpha u^2 +\frac{1}{\alpha }v^2\) for an arbitrary \(\alpha > 0\), we get

$$\begin{aligned}&\varPhi (a^{n+1},\eta ^{n+1})-\varPhi (a^{n+1},\eta ^n)\\&=F^{*}(\eta ^{n+1})-F^{*}(\eta ^n)+\langle \eta ^n-\eta ^{n+1},Ka^{n+1}\rangle \\&\le -\frac{1}{\sigma }\Vert \eta ^n-\eta ^{n+1}\Vert _2^2+\theta \langle Ka^{n+1}-Ka^n,\eta ^{n+1}-\eta ^n\rangle \\&\le -\frac{1}{\sigma }\Vert \eta ^n-\eta ^{n+1}\Vert _2^2+\theta \Vert K\Vert _2 \Vert a^n-a^{n+1}\Vert _2 \Vert \eta ^n-\eta ^{n+1}\Vert _2\\&\le \Big ( -\frac{1}{\sigma }+\frac{\theta \Vert K\Vert _2}{2\alpha }\Big )\Vert \eta ^n-\eta ^{n+1}\Vert _2^2 + \frac{\theta \Vert K\Vert _2\alpha }{2} \Vert a^n-a^{n+1}\Vert _2^2. \end{aligned}$$

Adding the two inequalities gives

$$\begin{aligned}&\varPhi (a^{n+1},\eta ^{n+1})-\varPhi (a^n,\eta ^n)\nonumber \\&\le \underbrace{\Big (-\frac{1}{\sigma }+\frac{\theta \Vert K\Vert _2}{2\alpha }\Big )}_{c_1}\Vert \eta ^n-\eta ^{n+1}\Vert _2^2\nonumber \\&\quad +\underbrace{\Big (-\frac{1}{\tau }+\frac{\theta \Vert K\Vert _2\alpha }{2}\Big )}_{c_2} \Vert a^n-a^{n+1}\Vert _2^2. \end{aligned}$$
(39)

Setting \(\alpha := \sqrt{\sigma /\tau }\), we see by \(\sigma \tau < 4/(\theta \Vert K\Vert _2)^2\) that both \(c_1 < 0\) and \(c_2 < 0\). Thus, (i) in Theorem 2 is fulfilled with \(K_1 :=\min \left\{ -c_1,-c_2 \right\} \). In particular, we obtain

$$\begin{aligned} \varPhi (a^{n+1},\eta ^{n+1}) \le \varPhi (a^n,\eta ^n). \end{aligned}$$
(40)

Further, summing up (39) for \(n=0,\ldots ,N-1\) we get

$$\begin{aligned} \varPhi (a^N,\eta ^N)-\varPhi (a^0,\eta ^0) \le&c_1 \sum _{n=0}^{N-1} \Vert \eta ^{n+1}-\eta ^n\Vert _2^2 \\&+ c_2 \sum _{n=0}^{N-1} \Vert a^{n+1}-a^n\Vert _2^2 . \end{aligned}$$

Now, the fact that the left-hand side is bounded from below by \(\inf _{a,\eta }\varPhi (a,\eta ) - \varPhi (a^0,\eta ^0)\), together with \(c_j<0\) for \(j=1,2\), yields, letting \(N \rightarrow \infty \),

$$\begin{aligned} \sum _{n=0}^{\infty } \Vert a^{n+1}-a^n\Vert _2^2< \infty \quad \text {and} \quad \sum _{n=0}^{\infty } \Vert \eta ^{n+1}-\eta ^n\Vert _2^2 < \infty . \end{aligned}$$
(41)

(ii) By construction, the iterates of the algorithm fulfill

$$\begin{aligned} \frac{a^{n-1}-a^n}{\tau }+K^*\eta ^{n-1}&\in \partial G(a^n),\\ \frac{\eta ^{n-1}-\eta ^n}{\sigma }+(1+\theta )K a^n-\theta K a^{n-1}&\in \partial F^{*}(\eta ^n). \end{aligned}$$

Consider the function \({\tilde{\varPhi }} :{\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2} \rightarrow {\mathbb {R}} \cup \{+\infty \}\), \({\tilde{\varPhi }}(a,\eta )= G(a)+F^{*}(\eta )\). By the calculus of the convex subdifferential and [34, Proposition 8.12], we get

$$\begin{aligned} \partial _L{\tilde{\varPhi }}(a^n,\eta ^n)=\partial G(a^n)\times \partial F^{*}(\eta ^n). \end{aligned}$$

By [34, Exercise 8.8] it holds

$$\begin{aligned} \partial _L\varPhi (a^n,\eta ^n)&=\partial _L{\tilde{\varPhi }}(a^n,\eta ^n)-(K^*\eta ^n,Ka^n)\nonumber \\&=(\partial G(a^n) -K^* \eta ^n)\times (\partial F^{*}(\eta ^n)-K a^n). \end{aligned}$$
(42)

Thus, we have

$$\begin{aligned} \left( \begin{array}{c} w_1^n\\ w_2^n\end{array}\right) :=\left( \begin{array}{c} \frac{a^{n-1}-a^n}{\tau }+K^*(\eta ^{n-1}-\eta ^n)\\ \frac{\eta ^{n-1}-\eta ^n}{\sigma }+\theta K(a^n-a^{n-1}) \end{array} \right) \in \partial _L\varPhi (a^n,\eta ^n). \end{aligned}$$

With \((u+v)^2 \le 2(u^2 + v^2)\) and \(w^n = (w_1^n,w_2^n)^\mathrm {T}\) we obtain

$$\begin{aligned} \Vert w^n\Vert ^2 \le&\left( \frac{2}{\tau ^2} + 2\theta ^2 \Vert K\Vert _2^2\right) \Vert a^n - a^{n-1}\Vert _2^2 \\&+ \left( \frac{2}{\sigma ^2} + 2\Vert K\Vert _2^2\right) \Vert \eta ^n - \eta ^{n-1}\Vert _2^2. \end{aligned}$$

Hence, property (ii) in Theorem 2 is fulfilled with \( K_2 := \left( \max \{ \frac{2}{\tau ^2} + 2\theta ^2 \Vert K\Vert _2^2, \right. \)\( \left. \frac{2}{\sigma ^2} + 2\Vert K\Vert _2^2\} \right) ^\frac{1}{2} \).

(iii) Since \(F^*\) is the indicator function of a compact set \({\mathcal {H}}_1\), G is coercive and strictly convex, and K is a linear operator, the level set of \(\varPhi \) at \((a^0,\eta ^0)\), given by \(\{(a,\eta ) \in {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}: \varPhi (a,\eta ) \le \varPhi (a^0,\eta ^0)\}\), is bounded. By (40), every sequence \(\{(a^n,\eta ^n)\}_{n \in \mathbb N}\) generated by the algorithm is bounded, so that there exists a convergent subsequence \(\{(a^{n_j},\eta ^{n_j})\}_{j \in \mathbb N}\) with limit \((\hat{a}, \hat{\eta })\). By Lemma 1, we know that \((\hat{a}, \hat{\eta })\) is a critical point of \(\varPhi \). By (42) and (32), a point \((\hat{a},\hat{\eta })\) is a critical point of \(\varPhi \) if and only if \(0 \in \partial _L\varPhi (\hat{a},\hat{\eta })\). This finishes the proof.

Lemma 1

Let \(\{(a^n,\eta ^n)\}_{n \in \mathbb N}\) be the sequence generated by Algorithm 1. Then, we have the following:

  1. (i)

    Any cluster point of \(\{(a^n,\eta ^n)\}_{n \in \mathbb N}\) is a critical point of \(\varPhi \).

  2. (ii)

\((a^n,\eta ^n)\) is a critical point of \(\varPhi \) if and only if \((a^n,\eta ^n) = (a^{n+1},\eta ^{n+1})\), which in turn holds if and only if \(\varPhi (a^n,\eta ^n) = \varPhi (a^{n+1},\eta ^{n+1})\).

Proof

(i) Let \((\hat{a}, \hat{\eta })\) be a cluster point and \(\{(a^{n_j},\eta ^{n_j})\}_{j \in \mathbb N}\) a subsequence converging to \((\hat{a}, \hat{\eta })\). By the iteration scheme we have

$$\begin{aligned}&\frac{a^{n_j} - a^{n_j+1}}{\tau } + K^* \eta ^{n_j}&\in \partial G(a^{n_j+1}),\\&\frac{\eta ^{n_j} - \eta ^{n_j+1}}{\sigma } + K \left( (1+\theta ) a^{n_j+1} - \theta a^{n_j} \right)&\in \partial F^*(\eta ^{n_j+1}). \end{aligned}$$

By (41), the first summands in both expressions tend to zero as \(j \rightarrow \infty \). Using the closedness of the graphs of \(\partial G\) and \(\partial F^*\) and passing to the limit, we get \(K^* \hat{\eta }\in \partial G(\hat{a})\) and \(K \hat{a} \in \partial F^*(\hat{\eta })\). Hence, \((\hat{a},\hat{\eta })\) is a critical point of \(\varPhi \).

(ii) The first two statements in (ii) are equivalent for the following reason: Let \((a^n,\eta ^n)=(a^{n+1},\eta ^{n+1})\). Then, the iteration scheme implies

$$\begin{aligned} K^* \eta ^n \in \partial G(a^n) \end{aligned}$$

and

$$\begin{aligned} K((1+\theta ) a^n - \theta a^n) = K a^n \in \partial F^*(\eta ^n), \end{aligned}$$

so that \((a^n,\eta ^n)\) is a critical point of \(\varPhi \). Conversely, if \((a^n,\eta ^n)\) is a critical point of \(\varPhi \), then it fulfills the above relations, and together with the uniqueness of the proximum we conclude from the iteration scheme that \((a^n,\eta ^n)=(a^{n+1},\eta ^{n+1})\).

The last two statements in (ii) are equivalent by (39). This proves the assertion.
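The iteration scheme and the monotone decrease (40) can be checked numerically on a toy instance. The choices below are ours, not the paper's functionals: \(G(a) = \tfrac{1}{2}\Vert a-f\Vert ^2\) (with an explicit prox), \(F^{*}\) the indicator of a Euclidean ball of radius \(\lambda \) (whose prox is the projection), and K a random matrix, with step sizes satisfying \(\sigma \tau < 4/(\theta \Vert K\Vert _2)^2\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (our choices): G(a) = 0.5*||a - f||^2, F* the indicator
# of the ball {||eta|| <= lam}, K a random linear operator.
d1, d2, lam, theta = 8, 6, 0.5, 1.0
K = rng.standard_normal((d2, d1))
f = rng.standard_normal(d1)
normK = np.linalg.norm(K, 2)            # spectral norm ||K||_2
tau = sigma = 0.9 / (theta * normK)     # sigma*tau < 4/(theta*||K||_2)^2

def prox_G(z):
    # prox of tau*G for G(a) = 0.5*||a - f||^2
    return (z + tau * f) / (1.0 + tau)

def proj_ball(z):
    # prox of sigma*F* = projection onto {||eta|| <= lam}
    n = np.linalg.norm(z)
    return z if n <= lam else lam * z / n

def Phi(a, eta):
    # Phi(a, eta) = G(a) + F*(eta) - <K a, eta>; F* = 0 on the ball
    return 0.5 * np.sum((a - f) ** 2) - (K @ a) @ eta

# The iteration scheme from the proof, written with prox operators:
#   a^n   = prox_{tau G}(a^{n-1} + tau K^T eta^{n-1})
#   eta^n = prox_{sigma F*}(eta^{n-1} + sigma K((1+theta) a^n - theta a^{n-1}))
a, eta = np.zeros(d1), np.zeros(d2)
values = [Phi(a, eta)]
for _ in range(200):
    a_new = prox_G(a + tau * K.T @ eta)
    eta = proj_ball(eta + sigma * K @ ((1 + theta) * a_new - theta * a))
    a = a_new
    values.append(Phi(a, eta))

# Phi decreases monotonically along the iterates, as guaranteed by (40).
assert all(v1 <= v0 + 1e-10 for v0, v1 in zip(values, values[1:]))
```

The step-size choice mirrors the proof of (i): with \(\tau = \sigma = 0.9/(\theta \Vert K\Vert _2)\) one has \(\sigma \tau = 0.81/(\theta \Vert K\Vert _2)^2\), so both constants \(c_1, c_2\) in (39) are strictly negative.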


Cite this article

Batard, T., Hertrich, J. & Steidl, G. Variational Models for Color Image Correction Inspired by Visual Perception and Neuroscience. J Math Imaging Vis 62, 1173–1194 (2020). https://doi.org/10.1007/s10851-020-00978-1
