Contrast Invariant SNR and Isotonic Regressions

Abstract

We design an image quality measure independent of contrast changes, which are defined as a set of transformations preserving an order between the level lines of an image. This problem can be expressed as an isotonic regression problem. Depending on the definition of a level line, the partial order between adjacent regions can be defined through chains, polytrees or directed acyclic graphs. We provide a few analytic properties of the minimizers and design original optimization procedures together with a full complexity analysis. The methods' worst-case complexities range from O(n) for chains, to \(O(n\log n )\) for polytrees and \(O(\frac{n^2}{\sqrt{\epsilon }})\) for directed acyclic graphs, where n is the number of pixels and \(\epsilon \) is a relative precision. The proposed algorithms have potential applications in change detection, stereo-vision, image registration, color image processing or image fusion. A C++ implementation with Matlab headers is available at https://github.com/pierre-weiss/contrast_invariant_snr.

Notes

  1. The saturation of a set S is constructed by filling the holes of S. A hole is defined as a connected component of the complement of S which is in the interior of S.

  2. One should use 4-connectivity for the upper-level sets and 8-connectivity for the lower-level sets (or the reverse) to satisfy a discrete version of Jordan's theorem (Monasse and Guichard 2000).

  3. This is a slight abuse of notation since a level-line defined this way can have a nonempty interior.

  4. This result can be strengthened slightly; we refer the interested reader to Example 3.1 in Chambolle and Pock (2016) for more details.

  5. As far as we could judge, there seems to be an inaccuracy in the complexity analysis, which is based on the exact resolution of linear programs using interior point methods (which are inexact by nature). In practice, the implementation is based on a simplex-type algorithm, which is exact but has an uncontrolled complexity.

References

  • MOSEK ApS. (2017). The MOSEK optimization toolbox for MATLAB manual. Version 8.1. http://docs.mosek.com/8.1/toolbox/index.html

  • Ballester, C., Cubero-Castan, E., Gonzalez, M., & Morel, J. (2000). Contrast invariant image intersection. In Advanced mathematical methods in measurement and instrumentation (pp. 41–55).

  • Ballester, C., Caselles, V., Igual, L., Verdera, J., & Rougé, B. (2006). A variational model for P+XS image fusion. International Journal of Computer Vision, 69(1), 43–58.

  • Barlow, R., & Brunk, H. (1972). The isotonic regression problem and its dual. Journal of the American Statistical Association, 67(337), 140–147.

  • Best, M. J., & Chakravarti, N. (1990). Active set algorithms for isotonic regression; a unifying framework. Mathematical Programming, 47(1–3), 425–439.

  • Bovik, A. C. (2010). Handbook of image and video processing. Cambridge: Academic Press.

  • Boyer, C., Weiss, P., & Bigot, J. (2014). An algorithm for variable density sampling with block-constrained acquisition. SIAM Journal on Imaging Sciences, 7(2), 1080–1107.

  • Brunk, H. D. (1955). Maximum likelihood estimates of monotone parameters. The Annals of Mathematical Statistics, 26(4), 607–616.

  • Caselles, V., & Monasse, P. (2009). Geometric description of images as topographic maps. Berlin: Springer.

  • Caselles, V., Coll, B., & Morel, J.-M. (1999). Topographic maps and local contrast changes in natural images. International Journal of Computer Vision, 33(1), 5–27.

  • Caselles, V., Coll, B., & Morel, J.-M. (2002). Geometry and color in natural images. Journal of Mathematical Imaging and Vision, 16(2), 89–105.

  • Chambolle, A., & Pock, T. (2011). A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1), 120–145.

  • Chambolle, A., & Pock, T. (2016). An introduction to continuous optimization for imaging. Acta Numerica, 25, 161–319.

  • Combettes, P. L., & Pesquet, J.-C. (2011). Proximal splitting methods in signal processing. In H. Bauschke, R. Burachik, P. Combettes, V. Elser, D. Luke, & H. Wolkowicz (Eds.), Fixed-point algorithms for inverse problems in science and engineering (pp. 185–212). New York: Springer.

  • Deledalle, C.-A., Papadakis, N., Salmon, J., & Vaiter, S. (2017). Clear: Covariant least-square refitting with applications to image restoration. SIAM Journal on Imaging Sciences, 10(1), 243–284.

  • Delon, J. (2004). Midway image equalization. Journal of Mathematical Imaging and Vision, 21(2), 119–134.

  • Droske, M., & Rumpf, M. (2004). A variational approach to nonrigid morphological image registration. SIAM Journal on Applied Mathematics, 64(2), 668–687.

  • Dykstra, R. L., Robertson, T., et al. (1982). An algorithm for isotonic regression for two or more independent variables. The Annals of Statistics, 10(3), 708–716.

  • Ehrhardt, M. J., & Arridge, S. R. (2014). Vector-valued image processing by parallel level sets. IEEE Transactions on Image Processing, 23(1), 9–18.

  • Felzenszwalb, P. F., & Zabih, R. (2011). Dynamic programming and graph algorithms in computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(4), 721–740.

  • Géraud, T., Carlinet, E., Crozet, S., & Najman, L. (2013). A quasi-linear algorithm to compute the tree of shapes of ND images. In International symposium on mathematical morphology and its applications to signal and image processing (pp. 98–110). Springer.

  • Horowitz, E., et al. (2006). Fundamentals of data structures in C++. New Delhi: Galgotia Publications.

  • Kolmogorov, V., Pock, T., & Rolinek, M. (2016). Total variation on a tree. SIAM Journal on Imaging Sciences, 9(2), 605–636.

  • Kronrod, A. S. (1950). On functions of two variables. Uspekhi Matematicheskikh Nauk, 5(1), 24–134.

  • Kyng, R., Rao, A., & Sachdeva, S. (2015). Fast, provable algorithms for isotonic regression in all \(L_p\)-norms. In Advances in neural information processing systems (pp. 2701–2709).

  • Luss, R., Rosset, S., & Shahar, M. (2010). Decomposing isotonic regression for efficiently solving large problems. In Advances in neural information processing systems (pp. 1513–1521).

  • Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., & Suetens, P. (1997). Multimodality image registration by maximization of mutual information. IEEE Transactions on Medical Imaging, 16(2), 187–198.

  • Matheron, G. (1975). Random sets and integral geometry. Hoboken: Wiley.

  • Moisan, L. (2012). Modeling and image processing. Lecture notes, ENS Cachan.

  • Monasse, P., & Guichard, F. (2000). Fast computation of a contrast-invariant image representation. IEEE Transactions on Image Processing, 9(5), 860–872.

  • Nesterov, Y. (1983). A method of solving a convex programming problem with convergence rate \(O(1/k^2)\). Soviet Mathematics Doklady, 27(2), 372–376.

  • Nesterov, Y. (2013). Introductory lectures on convex optimization: A basic course (Vol. 87). New York: Springer.

  • Nesterov, Y., & Nemirovskii, A. (1994). Interior-point polynomial algorithms in convex programming (Vol. 13). Philadelphia: SIAM.

  • Pardalos, P. M., & Xue, G. (1999). Algorithms for a class of isotonic regression problems. Algorithmica, 23(3), 211–222.

  • Rudin, L. I., Osher, S., & Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1–4), 259–268.

  • Serra, J. (1982). Image analysis and mathematical morphology. Cambridge: Academic Press.

  • Spielman, D. A., & Teng, S.-H. (2004). Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In Proceedings of the thirty-sixth annual ACM symposium on theory of computing (pp. 81–90). ACM.

  • Stout, Q. F. (2014). Fastest isotonic regression algorithms. Retrieved 2014 from http://web.eecs.umich.edu/~qstout/IsoRegAlg_140812.pdf

  • Stout, Q. F. (2013). Isotonic regression via partitioning. Algorithmica, 66(1), 93–112.

  • Van Eeden, C. (1957). Maximum likelihood estimation of partially or completely ordered parameters. I. Indagationes Mathematicae, 19, 128–136.

  • Viola, P., & Wells III, W. M. (1997). Alignment by maximization of mutual information. International Journal of Computer Vision, 24(2), 137–154.

  • Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.

  • Weiss, P., Blanc-Féraud, L., & Aubert, G. (2009). Efficient schemes for total variation minimization under constraints in image processing. SIAM Journal on Scientific Computing, 31(3), 2047–2080.

  • Weiss, P., Fournier, A., Blanc-Féraud, L., & Aubert, G. (2011). On the illumination invariance of the level lines under directed light: Application to change detection. SIAM Journal on Imaging Sciences, 4(1), 448–471.

Acknowledgements

The authors wish to thank the anonymous reviewers for their excellent reviews, which helped them improve the paper. They thank Pascal Monasse for encouraging them to explore this question and for providing the source code of the FLST (Fast Level Set Transform). P. Weiss warmly thanks Jonas Kahn for helping him check that dynamic programming could be applied to our problem and for helping find a hard problem instance. He also thanks his daughter Anouk for lending her toys to generate the pictures. P. Escande was supported by the PRES of Toulouse University and the Midi-Pyrénées region.

Author information

Correspondence to Pierre Weiss.

Appendices

Proofs of Convergence of the First Order Algorithm

We first prove Proposition 1.

Proof

We only sketch the proof. The idea is to use Fenchel–Rockafellar duality for convex optimization:

$$\begin{aligned}&\min _{A\alpha \ge 0}\frac{1}{2}\langle W(\alpha -\beta ),\alpha -\beta \rangle \\&\quad = \min _{\alpha \in \mathbb {R}^m} \sup _{\lambda \le 0} \frac{1}{2}\langle W(\alpha -\beta ),\alpha -\beta \rangle + \langle A\alpha , \lambda \rangle \\&\quad = \sup _{\lambda \le 0} \min _{\alpha \in \mathbb {R}^m} \frac{1}{2}\langle W(\alpha -\beta ),\alpha -\beta \rangle + \langle A\alpha , \lambda \rangle . \end{aligned}$$

The primal-dual relationship \(\alpha (\lambda )\) is obtained by finding the minimizer of the inner problem in the last equation. The dual problem is found by substituting \(\alpha (\lambda )\) for \(\alpha \) in the inner problem.
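
Carrying out these computations explicitly (a direct calculation, included here for completeness), the inner minimization gives

$$\begin{aligned} W(\alpha -\beta ) + A^T\lambda = 0 \quad \Longleftrightarrow \quad \alpha (\lambda ) = \beta - W^{-1}A^T\lambda , \end{aligned}$$

and substituting \(\alpha (\lambda )\) back into the Lagrangian yields the dual function

$$\begin{aligned} D(\lambda ) = -\frac{1}{2}\langle A W^{-1}A^T \lambda , \lambda \rangle + \langle A\beta , \lambda \rangle , \end{aligned}$$

in agreement with the gradient \(\nabla D\) given below.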

The function D is quadratic, hence differentiable, with \(\nabla D(\lambda )= -A W^{-1}A^T\lambda + A\beta \). Therefore, for all \((\lambda _1,\lambda _2)\), we get:

$$\begin{aligned} \Vert \nabla D(\lambda _1) - \nabla D(\lambda _2)\Vert _2&= \Vert A W^{-1}A^T (\lambda _1-\lambda _2) \Vert _2 \\&\le \lambda _{\max }(A W^{-1}A^T) \Vert \lambda _1-\lambda _2\Vert _2. \end{aligned}$$

The inequality (30) is a direct consequence of a little-known result about the Fenchel–Rockafellar dual of problems involving a strongly convex function. We refer the reader to Lemma D.1 in Boyer et al. (2014) for more details, or to Chambolle and Pock (2016) for a slightly improved bound in the case of \(\ell ^2\) metrics. \(\square \)
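
For the reader's convenience, the result in question can be stated as follows (our paraphrase of Lemma D.1 in Boyer et al. 2014, not a verbatim quote): if the primal objective is \(\gamma \)-strongly convex, then for every \(\lambda \),

$$\begin{aligned} \frac{\gamma }{2}\Vert \alpha (\lambda ) - \alpha ^\star \Vert _2^2 \le D(\lambda ^\star ) - D(\lambda ). \end{aligned}$$

In our setting, the primal objective is 1-strongly convex in the metric induced by W, so the dual gap controls the distance of the primal iterate \(\alpha (\lambda )\) to the minimizer \(\alpha ^\star \).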

Now let us prove Proposition 2.

Proof

Notice that \(\lambda _{\max }(A W^{-1}A^T) = \sigma _{\max }^2(AW^{-1/2})\), where \(\sigma _{\max }\) stands for the largest singular value. Moreover

$$\begin{aligned} \Vert AW^{-1/2} \alpha \Vert _2^2&= \sum _{k=1}^m \left( \frac{\alpha _{\textsc {i}(k)}}{\sqrt{w_{\textsc {i}(k)}}} - \frac{\alpha _{\textsc {j}(k)}}{\sqrt{w_{\textsc {j}(k)}}} \right) ^2 \\&\le \sum _{k=1}^m 2 \left( \frac{\alpha _{\textsc {i}(k)}^2}{w_{\textsc {i}(k)}} + \frac{\alpha _{\textsc {j}(k)}^2}{w_{\textsc {j}(k)}} \right) \\&= 4 \sum _{k=1}^m \frac{\alpha _{\textsc {i}(k)}^2}{w_{\textsc {i}(k)}} \\&= 4 \sum _{i=1}^p n_i \frac{\alpha _{i}^2}{w_i}, \end{aligned}$$

where \(n_i\) denotes the number of edges starting from vertex i (the outdegree). To conclude, notice that each pixel in region \(\varDelta _j\) has at most \(c_{\max }\) neighbors. Therefore \(n_i\le w_i c_{\max }\) and we finally get:

$$\begin{aligned} \Vert AW^{-1/2} \alpha \Vert _2^2 \le 4 c_{\max } \sum _{i=1}^p \alpha _{i}^2 = 4c_{\max } \Vert \alpha \Vert _2^2. \end{aligned}$$
(40)

\(\square \)

Finally, we prove Proposition 3 below.

Proof

Standard convergence results (Nesterov 2013) state that:

$$\begin{aligned} D(\lambda ^{(k)}) - D(\lambda ^\star ) \le \frac{4c_{\max } \Vert \lambda ^{(0)} - \lambda ^{*}\Vert _2^2}{k^2}. \end{aligned}$$

Combining this result with inequality (30) directly yields (31).

To obtain the bound (32), first remark that each iteration of Algorithm 1 requires two matrix-vector products with A and \(A^T\), of complexity O(m). The final result is then a direct consequence of the bound (31) and of Proposition 2. \(\square \)
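
Algorithm 1 itself is stated in the body of the paper; as a concrete companion to the complexity count above, here is a minimal sketch of an accelerated projected gradient ascent on the dual in the case \(W=\mathrm {Id}\) (an illustrative reimplementation under these assumptions, not the authors' code from the GitHub repository cited in the abstract). It uses only the quantities introduced above: the edge list \((\textsc {i}(k),\textsc {j}(k))\) representing A with \((A\alpha )_k = \alpha _{\textsc {j}(k)} - \alpha _{\textsc {i}(k)}\), the primal-dual relationship \(\alpha (\lambda ) = \beta - A^T\lambda \), the gradient \(\nabla D(\lambda ) = A\alpha (\lambda )\), and the Lipschitz bound \(L = 4c_{\max }\) from Proposition 2. Each iteration performs the two O(m) matrix-vector products with A and \(A^T\) mentioned in the proof, followed by a projection onto \(\{\lambda \le 0\}\) and a Nesterov extrapolation step.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One directed edge (i, j) of the constraint graph: alpha_j >= alpha_i.
struct Edge { int i, j; };

// Primal-dual relationship alpha(lambda) = beta - A^T lambda  (W = Id).
std::vector<double> primalFromDual(const std::vector<Edge>& E,
                                   const std::vector<double>& beta,
                                   const std::vector<double>& lambda) {
  std::vector<double> alpha = beta;
  for (std::size_t k = 0; k < E.size(); ++k) {
    alpha[E[k].i] += lambda[k];  // column k of A has -1 at row i(k)
    alpha[E[k].j] -= lambda[k];  // and +1 at row j(k)
  }
  return alpha;
}

// Accelerated (Nesterov-type) projected gradient ascent on the dual:
//   lambda^{k+1} = proj_{lambda <= 0}( mu^k + (1/L) * A alpha(mu^k) ).
std::vector<double> isotonicDualAscent(const std::vector<Edge>& E,
                                       const std::vector<double>& beta,
                                       double L,  // e.g. 4 * c_max (Prop. 2)
                                       int iterations) {
  std::vector<double> lambda(E.size(), 0.0), mu = lambda, next(E.size());
  double t = 1.0;
  for (int it = 0; it < iterations; ++it) {
    const std::vector<double> alpha = primalFromDual(E, beta, mu);
    for (std::size_t k = 0; k < E.size(); ++k) {
      const double grad_k = alpha[E[k].j] - alpha[E[k].i];  // (A alpha)_k
      next[k] = std::min(0.0, mu[k] + grad_k / L);          // projection
    }
    const double t_next = 0.5 * (1.0 + std::sqrt(1.0 + 4.0 * t * t));
    for (std::size_t k = 0; k < E.size(); ++k)  // momentum extrapolation
      mu[k] = next[k] + ((t - 1.0) / t_next) * (next[k] - lambda[k]);
    lambda = next;
    t = t_next;
  }
  return lambda;
}
```

The step size \(1/L\) with \(L=4c_{\max }\) is licensed by Proposition 2; on specific graphs a sharper bound on \(\lambda _{\max }(AA^T)\) can be used instead.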

Proofs of the Complexity Results

In this section, we analyze the theoretical efficiency of Algorithm 1. We consider the special case \(W=\mathrm {Id}\) for ease of exposition. In practice, controlling the absolute error \(\Vert \alpha ^{(k)} - \alpha ^\star \Vert _2\) is probably less relevant than the relative error \(\frac{\Vert \alpha ^{(k)} - \alpha ^\star \Vert _2}{\Vert \alpha ^{(0)} - \alpha ^\star \Vert _2}\). This motivates setting \(\epsilon = \eta \Vert \alpha ^{(0)} - \alpha ^\star \Vert _2\) in Eq. (32), where \(\eta \in [0,1)\) is a parameter describing the relative precision of the solution. Setting \(\lambda ^{(0)}=0\) and noticing that:

$$\begin{aligned} \Vert \alpha ^{(0)} - \alpha ^\star \Vert _2&= \Vert \beta - \alpha ^\star \Vert _2 \\&= \Vert A^T\lambda ^\star \Vert _2, \end{aligned}$$

the complexity in terms of \(\eta \) becomes:

$$\begin{aligned} O\left( \frac{m}{\eta } \frac{\Vert \lambda ^\star \Vert _2}{\Vert A^T\lambda ^\star \Vert _2}\right) . \end{aligned}$$
(41)

1.1 Example of a hard problem

An example of a hard graph (a simple line graph) is provided in Fig. 8. For this graph, Algorithm 1 can be interpreted as a diffusion process, which is known to be extremely slow. In particular, Nesterov (2013, p. 59) shows that diffusion-like problems are worst cases for first-order methods (see Fig. 9).

Proposition 5

Consider a simple line graph as depicted in Fig. 8, with p even and \(W=\mathrm {Id}\). Set

$$\begin{aligned} \beta _i=\left\{ \begin{array}{ll} 1 &{} \text {if } i \le p/2, \\ -1 &{} \text {otherwise}. \end{array} \right. \end{aligned}$$
(42)

Then the primal-dual solution \((\alpha ^\star ,\lambda ^\star )\) of the isotonic regression problem (28) is given by \(\alpha ^\star =0\) and

$$\begin{aligned} \lambda ^\star _{k}= \left\{ \begin{array}{ll} -k &{} \text {if}\quad 1\le k \le p/2, \\ -p + k &{} \text {if} \quad p/2+1\le k \le p-1. \end{array} \right. \end{aligned}$$
(43)

This implies that

$$\begin{aligned} \frac{\Vert \lambda ^\star \Vert _2}{\Vert A^T\lambda ^\star \Vert _2} \sim m. \end{aligned}$$
(44)

Proof

For this simple graph, \(m=p-1\). To check that (43) is a solution, it suffices to verify the Karush–Kuhn–Tucker conditions:

$$\begin{aligned} A^T\lambda ^\star&= W(\beta - \alpha ^\star ), \\ A\alpha ^\star&\ge 0, \\ \lambda ^\star&\le 0, \\ \lambda ^\star _i&= 0 \quad \text {if } (A\alpha ^\star )_i >0. \end{aligned}$$

This is done by direct inspection, using the fact that for this graph:

$$\begin{aligned} (A^T\lambda )_i = \left\{ \begin{array}{ll} -\lambda _1 &{} \text {if} \quad i=1 \\ -\lambda _i + \lambda _{i-1} &{} \text {if} \quad 2\le i \le p-1 \\ \lambda _{p-1} &{} \text {if} \quad i=p. \end{array}\right. \end{aligned}$$
(45)

The relationship (44) follows from the sum of squares \(\sum _{k=1}^m k^2 = m(m+1)(2m+1)/6 \sim m^3\), so that \(\Vert \lambda ^\star \Vert _2^2\sim m^3\), while \(\Vert A^T\lambda ^\star \Vert _2^2=\Vert \beta \Vert _2^2 = p \sim m\). \(\square \)
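
For concreteness, take \(p=4\), so that \(m=3\), \(\beta =(1,1,-1,-1)\) and, by (43), \(\lambda ^\star =(-1,-2,-1)\). Formula (45) then gives

$$\begin{aligned} A^T\lambda ^\star = (-\lambda ^\star _1,\; -\lambda ^\star _2+\lambda ^\star _1,\; -\lambda ^\star _3+\lambda ^\star _2,\; \lambda ^\star _3) = (1,1,-1,-1) = \beta , \end{aligned}$$

which is the first Karush–Kuhn–Tucker condition with \(\alpha ^\star =0\); the remaining conditions hold trivially since \(\lambda ^\star \le 0\) and \(A\alpha ^\star =0\).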

Fig. 8 Worst case graph

Fig. 9 First 20,000 iterations of the primal-dual pair \((\alpha ^{(k)},\lambda ^{(k)})\). Top: \(\beta \) is displayed in red while \(\alpha ^{(k)}\) varies from green to blue with iterations. Bottom: \(\lambda ^{(k)}\) varies from green to blue with iterations. A new curve is displayed every 1000 iterations. As can be seen, the convergence is very slow (Color figure online)
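
As a usage illustration, the following hypothetical driver (built on the isotonicDualAscent sketch from the previous appendix, not on the released code) reproduces the slow-convergence experiment of Fig. 9 on the worst-case chain:

```cpp
#include <cstdio>
// Assumes the Edge / primalFromDual / isotonicDualAscent sketch above
// is part of the same translation unit.

int main() {
  const int p = 1000;  // number of vertices, p even
  std::vector<Edge> E(p - 1);
  std::vector<double> beta(p);
  for (int k = 0; k + 1 < p; ++k) E[k] = Edge{k, k + 1};  // chain 0 -> 1 -> ...
  for (int i = 0; i < p; ++i) beta[i] = (i < p / 2) ? 1.0 : -1.0;  // Eq. (42)

  // On a chain, c_max = 2, so L = 4 * c_max = 8 upper-bounds lambda_max(A A^T).
  const std::vector<double> lambda = isotonicDualAscent(E, beta, 8.0, 20000);
  const std::vector<double> alpha = primalFromDual(E, beta, lambda);

  // alpha^* = 0 (Proposition 5); the approach to it is very slow.
  std::printf("alpha[0] after 20000 iterations: %g\n", alpha[0]);
  return 0;
}
```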

1.2 Example of a nice problem

In order to rehabilitate our approach, let us show that the ratio \(\frac{\Vert \lambda ^\star \Vert _2}{\Vert A^T\lambda ^\star \Vert _2}\) can be bounded independently of m for “nice” graphs.

Proposition 6

For any \(\lambda \le 0\) and for the graph depicted in Fig. 10, we have:

$$\begin{aligned} \frac{1}{2}\le \frac{\Vert \lambda \Vert _2}{\Vert A^T\lambda \Vert _2} \le \frac{1}{\sqrt{2}}. \end{aligned}$$
(46)

Proof

For this graph, we get:

$$\begin{aligned} A^T\lambda = \begin{pmatrix} -\lambda _1 \\ \lambda _1+\lambda _2 \\ -\lambda _2-\lambda _3 \\ \vdots \\ \lambda _{n-2}+\lambda _{n-1} \\ -\lambda _{n-1} \end{pmatrix}. \end{aligned}$$
(47)

Therefore:

$$\begin{aligned} \Vert A^T\lambda \Vert _2^2&= \lambda _1^2 + \lambda _{n-1}^2 + \sum _{k=1}^{n-2} (\lambda _k+\lambda _{k+1})^2 \\&= 2\sum _{k=1}^{n-1}\lambda _k^2 + 2\sum _{k=1}^{n-2} \lambda _k\lambda _{k+1}, \end{aligned}$$

and, since \(\lambda \le 0\) implies \(\lambda _k \lambda _{k+1}\ge 0\) while \(2\lambda _k\lambda _{k+1}\le \lambda _k^2+\lambda _{k+1}^2\),

$$\begin{aligned} 2\Vert \lambda \Vert _2^2 \le \Vert A^T\lambda \Vert _2^2 \le 4 \Vert \lambda \Vert _2^2. \end{aligned}$$
(48)

Taking square roots and inverting the ratios yields (46).

\(\square \)

Fig. 10 A nice graph

Proof of Local Mean Preservation

We prove Theorem 2 below.

Proof

The Karush–Kuhn–Tucker optimality conditions read:

$$\begin{aligned} w_i (\alpha ^\star _i - \beta _i) + (A^T \lambda )_i&= 0&\forall i \end{aligned}$$
(49)
$$\begin{aligned} A\alpha&\ge 0 \end{aligned}$$
(50)
$$\begin{aligned} \lambda&\le 0 \end{aligned}$$
(51)
$$\begin{aligned} \lambda _{i,j} (A\alpha ^\star )_{i,j}&= 0&\forall (i,j)\in E, \end{aligned}$$
(52)

with \((A^T \lambda )_i = \sum _{j, (j,i)\in E} \lambda _{j,i} - \sum _{j, (i,j)\in E} \lambda _{i,j}\), consistently with the orientation used in (45). Hence we get:

$$\begin{aligned} \sum _{i \in B_k} (A^T \lambda )_i = \sum _{i \in B_k} \left( \sum _{j, (j,i)\in E} \lambda _{j,i} - \sum _{j, (i,j)\in E} \lambda _{i,j}\right) =0. \end{aligned}$$

To obtain the last equality, observe that the Lagrange multipliers \(\lambda _{i,j}\) can be separated into those attached to edges joining \(B_k\) to its exterior and those attached to edges linking two vertices within \(B_k\). The former vanish thanks to (52) and to the assumption that \((A\alpha ^\star )_{i,j}\ne 0\) on these edges. The latter cancel in pairs, since an interior edge \((i,j)\) contributes \(+\lambda _{i,j}\) to the term of vertex j and \(-\lambda _{i,j}\) to the term of vertex i. To conclude the proof, it suffices to sum Eq. (49) over \(B_k\). \(\square \)

Proof of the Models Inclusion

To prove Theorem 3, we first need the following preparatory lemma.

Lemma 2

(Directed paths in the tree of shapes) Let \(u_1\) denote an image and (V, E) the graph associated with its tree of shapes \((\omega _i)_{i\in I}\). Let x and y be adjacent points, \(x\overset{\mathcal {N}_4}{\sim } y\), and let \(\omega _i\) (resp. \(\omega _j\)) denote the smallest shape containing x (resp. y).

Then there exists a path \((i_0=i,i_1,\ldots , i_{l-1},i_l=j)\) in E linking \(\omega _i\) to \(\omega _j\). In addition \(\mathrm {sign}(s_{i_k})=\mathrm {sign}(u_1(y)-u_1(x))\) for \(1\le k \le l\).

Proof

We have \(x\in \partial \omega _i\) since \(\omega _i\) is the smallest shape containing x: otherwise, there would exist a descendant (which is smaller by definition of the tree) containing x. Similarly, \(y \in \partial \omega _j\).

Second, we have \(\omega _i \subset \omega _j\) or \(\omega _j\subset \omega _i\); the case \(\omega _i\cap \omega _j=\emptyset \) is impossible. If it held, \(\omega _i\) and \(\omega _j\) would be shapes on different branches of the tree, which is excluded since elements on different branches are disconnected.

Note that \(u_1(x) \ne u_1(y)\); otherwise x and y would be in the same shape. In what follows, we assume that \(\omega _i \subset \omega _j\) and that \(u_1(y)>u_1(x)\); the 3 other cases can be treated similarly. We let \((i_0=i,i_1,\ldots , i_{l-1},i_l=j)\) denote the path in E linking \(\omega _i\) to \(\omega _j\). We claim that along this path \((s_{i_k})_{1\le k\le l}\) is constant and equal to 1. At least one sign \(s_{i_k}\) equals 1, since otherwise this would contradict the hypothesis \(u_1(y)>u_1(x)\); hence the result holds when \(l=1\). When \(l>1\), assume that some sign equals \(-1\). Then there exist two consecutive indices, say \(i_{k_0}\) and \(i_{k_0+1}\) with \(1\le k_0 < l\), such that \(s_{i_{k_0}}=-1\) and \(s_{i_{k_0+1}}=1\) (or the reverse). This implies that \(\omega _{i_{k_0}}\) is a shape from the min-tree and that \(\omega _{i_{k_0+1}}\) is a shape from the max-tree (see Monasse and Guichard 2000 or the introduction of Géraud et al. 2013). Therefore, \(\omega _{i_{k_0+1}}\) is a cavity of \(\omega _{i_{k_0}}\), so that \(\omega _{j}\) is not adjacent to \(\omega _i\), contradicting \(x\overset{\mathcal {N}_4}{\sim } y\). \(\square \)

We are now ready to prove Theorem 3.

Proof

We first prove the inclusion \(\mathcal {U}_{glo}\subseteq \mathcal {U}_{loc1}\). Assume that \(u\in \mathcal {U}_{glo}\). Then, for all \((x,y)\in \varOmega ^2\), \((u(x)-u(y))\cdot (u_1(x) - u_1(y))\ge 0\). Therefore, the constraint \(A\alpha \ge 0\) is satisfied, since it describes differences of gray values between adjacent level lines. In addition, \(u_1(x)=u_1(y) \Rightarrow u(x)=u(y)\), hence the constant regions of \(u_1\) are preserved.

We now prove the inclusion \(\mathcal {U}_{loc1}\subseteq \mathcal {U}_{loc2}\). The property [\(x \overset{\mathcal {N}_4}{\sim } y\) and \(u_1(x)=u_1(y)\)] \(\Rightarrow \) [\(u(x)=u(y)\)] is clear, since it implies that x and y belong to the same level line \(\partial \omega _i\). Let \(u\in \mathcal {U}_{loc1}=\{R\alpha ,\, A\alpha \ge 0\} = \{L\gamma ,\, \gamma \ge 0\}\). By Lemma 2, \(x \overset{\mathcal {N}_4}{\sim } y\) implies that \((u(x)-u(y))(u_1(x)-u_1(y))\ge 0\). For instance, assume that \(u_1(y)>u_1(x)\). Then \(u(y) = u(x) + \sum _{1\le k \le l} \gamma _{i_{k}}\) with \(\gamma _{i_{k}}\ge 0\), so that \(u(y)-u(x)\ge 0\).

To finish, note that if \(u_1\) is a very simple image such as the one depicted in Fig. 11, then \(\mathcal {U}_{glo} = \mathcal {U}_{loc1} = \mathcal {U}_{loc2}\), which shows that the inclusions are not strict in general. \(\square \)

Fig. 11 An example of image \(u_1\) where the three models are equivalent

Cite this article

Weiss, P., Escande, P., Bathie, G. et al. Contrast Invariant SNR and Isotonic Regressions. Int J Comput Vis 127, 1144–1161 (2019). https://doi.org/10.1007/s11263-019-01161-9
