GRMA: Generalized Range Move Algorithms for the Efficient Optimization of MRFs


Abstract

Markov random fields (MRFs) have become an important tool for many vision applications, and the optimization of MRFs is a problem of fundamental importance. Recently, Veksler (2007) and Kumar et al. (2011) proposed range move algorithms, which are among the most successful optimizers. Instead of considering only two labels, as in previous move-making algorithms, they explore a large search space over a range of labels in each iteration and significantly outperform previous move-making algorithms. However, two problems have greatly limited the applicability of range move algorithms: (1) they can handle only a restricted class of energy functions (i.e., truncated convex functions); (2) they tend to be very slow compared to other move-making algorithms (e.g., \(\alpha \)-expansion and \(\alpha \beta \)-swap). In this paper, we propose two generalized range move algorithms (GRMA) for the efficient optimization of MRFs. To address the first problem, we extend range moves to more general energy functions by restricting the labels chosen in each move so that the energy function is submodular on the chosen subset. Furthermore, we provide a feasible sufficient condition for choosing these subsets of labels. To address the second problem, we dynamically obtain the iterative moves by solving set cover problems, which greatly reduces the number of moves during the optimization. We also propose a fast graph construction method for the GRMAs. Experiments show that the GRMAs offer a great speedup over previous range move algorithms while yielding competitive solutions.


Notes

  1. In this work, we consider the optimization of arbitrary semimetric energy functions. Here, “semimetric” means that the pairwise function should satisfy \(\theta (\alpha ,\beta )=0\Leftrightarrow \alpha =\beta \) and \(\theta (\alpha ,\beta )=\theta (\beta ,\alpha )\ge 0\).

  2. A function \(g(\cdot )\) is convex if it satisfies \(g(x+1)-2g(x)+g(x-1) \ge 0\) for any integer x. Note that convexity is a special case of submodularity.

  3. Here, the interval [a, b] denotes the set of integers \(\{x \mid a \le x \le b\}\).

  4. In \(\alpha \beta \)-swap, we call an iteration of moves that considers every pair of labels once a “cycle”. An \(\alpha \beta \)-swap algorithm usually takes several cycles to converge (Boykov et al. 2001).

  5. We chose this cost function for simplicity, but a better iterative process may be developed with other choices, for example \(c(S_i) = 1+|S_i|\), because a small number of repeated labels may lead to a better solution without a significant increase in run time. Another choice is to set \(c(S_i)\) to be an estimate of the improvement in energy, as in Batra and Kohli (2011). However, the experiments show that GRSA yields promising results with the simple choice of cost function made here (a minimal sketch of the greedy selection appears after these notes).

  6. If \({\hat{f}}\) is a local minimum, it means that \(E({\hat{f}})\) cannot be decreased by any of the moves \({\mathcal {L}}_{i}\).

  7. When \(\theta _{pq}\) is a truncated linear function, the graph construction in the GRSA is the same as previous methods.

  8. The edges \((p_i,p_{i+1})\) for all \(i \in [1,m)\) and \((p_m,t)\) are already included in the unary edges. We can add the capacities that represent the pairwise potentials to these edges.

  9. Note that when \(\theta _{pq}\) is a truncated linear function, the numbers of edges in the new construction and in the previous method are the same, because \(c(p_i,q_j)=0\) for \(i\ne j\) in this case.

  10. http://vision.middlebury.edu/stereo/eval/.

  11. http://cvn.ecp.fr/personnel/pawan/research/truncated-moves.html.

  12. Here, the interval [a, b] denotes the set of integers \(\{x \mid a \le x \le b\}\).
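
For concreteness, the following is a minimal sketch of the greedy selection for the set cover problem mentioned in Note 5 (our illustration with the unit cost \(c(S_i)=1\); the paper's exact solver may differ, and the names are ours):

```python
# Illustrative sketch (not the authors' code): greedy set cover used to
# select the iterative moves. `universe` is the set of label pairs that
# must be covered; `candidates` holds the feasible moves, each given as
# a frozenset of the label pairs it covers.
def greedy_set_cover(universe, candidates, cost=lambda S: 1.0):
    uncovered, chosen = set(universe), []
    while uncovered:
        # Pick the move covering the most uncovered pairs per unit cost.
        best = max(candidates, key=lambda S: len(uncovered & S) / cost(S))
        if not uncovered & best:
            raise ValueError("the remaining pairs cannot be covered")
        chosen.append(best)
        uncovered -= best
    return chosen
```

Substituting \(c(S_i)=1+|S_i|\), or an estimate of the energy improvement as in Batra and Kohli (2011), only changes the `cost` callable.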

References

  • Batra, D., & Kohli, P. (2011). Making the right moves: Guiding alpha-expansion using local primal-dual gaps. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 1865–1872).

  • Besag, J. (1986). On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, 48(3), 259–302.


  • Boykov, Y., & Jolly, M. P. (2001). Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images. In IEEE International Conference on Computer Vision (pp. 105–112).

  • Boykov, Y., & Jolly, M. P. (2000). Interactive organ segmentation using graph cuts. In Medical Image Computing and Computer-Assisted Intervention (pp. 276–286).

  • Boykov, Y., & Kolmogorov, V. (2003). Computing geodesics and minimal surfaces via graph cuts. In IEEE International Conference on Computer Vision (pp. 26–33).

  • Boykov, Y., Veksler, O., & Zabih, R. (2001). Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11), 1222–1239.


  • Chekuri, C., Khanna, S., Naor, J., & Zosin, L. (2004). A linear programming formulation and approximation algorithms for the metric labeling problem. SIAM Journal on Discrete Mathematics, 18(3), 608–625.


  • Feige, U. (1998). A threshold of ln n for approximating set cover. Journal of the ACM, 45(4), 634–652.


  • Gould, S., Amat, F., & Koller, D. (2009). Alphabet soup: A framework for approximate energy minimization. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 903–910).

  • Greig, D., Porteous, B., & Seheult, A. H. (1989). Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society, 51, 271–279.


  • Gridchyn, I., & Kolmogorov, V. (2013). Potts model, parametric maxflow and k-submodular functions. In IEEE International Conference on Computer Vision (pp. 2320–2327).

  • Ishikawa, H. (2003). Exact optimization for Markov random fields with convex priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(10), 1333–1336.

  • Kappes, J. H., Andres, B., Hamprecht, F. A., Schnörr, C., Nowozin, S., Batra, D., Kim, S., Kausler, B. X., Lellmann, J., Komodakis, N., & Rother, C. (2013). A comparative study of modern inference techniques for discrete energy minimization problems. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 1328–1335).

  • Kohli, P., Osokin, A., & Jegelka, S. (2013). A principled deep random field model for image segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 1971–1978).

  • Kolmogorov, V. (2006). Convergent tree-reweighted message passing for energy minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10), 1568–1583.


  • Kolmogorov, V., & Rother, C. (2007). Minimizing nonsubmodular functions with graph cuts: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(7), 1274–1279.

  • Kolmogorov, V., & Zabih, R. (2004). What energy functions can be minimized via graph cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2), 147–159.

  • Kumar, M. P. (2014). Rounding-based moves for metric labeling. In Advances in Neural Information Processing Systems (pp. 109–117).

  • Kumar, M. P., & Koller, D. (2009). MAP estimation of semi-metric MRFs via hierarchical graph cuts. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (pp. 313–320).

  • Kumar, M. P., & Torr, P. H. (2009). Improved moves for truncated convex models. In Advances in Neural Information Processing Systems (pp. 889–896).

  • Kumar, M. P., Veksler, O., & Torr, P. H. (2011). Improved moves for truncated convex models. The Journal of Machine Learning Research, 12, 31–67.

  • Lempitsky, V., Rother, C., & Blake, A. (2007). LogCut: Efficient graph cut optimization for Markov random fields. In IEEE International Conference on Computer Vision (pp. 1–8).

  • Lempitsky, V., Rother, C., Roth, S., & Blake, A. (2010). Fusion moves for Markov random field optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8), 1392–1405.

  • Liu, K., Zhang, J., Huang, K., & Tan, T. (2014). Deformable object matching via deformation decomposition based 2D label MRF. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 2321–2328).

  • Liu, K., Zhang, J., Yang, P., & Huang, K. (2015). GRSA: Generalized range swap algorithm for the efficient optimization of MRFs. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 1761–1769).

  • Martin, D., Fowlkes, C., Tal, D., & Malik, J. (2001). A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. IEEE International Conference on Computer Vision, 2, 416–423.


  • Nagarajan, R. (2003). Intensity-based segmentation of microarray images. IEEE Transactions on Medical Imaging, 22(7), 882–889.


  • Poggio, T., Torre, V., & Koch, C. (1989). Computational vision and regularization theory. Image Understanding, 3(1–18), 111.

  • Rother, C., Kolmogorov, V., Lempitsky, V., & Szummer, M. (2007). Optimizing binary MRFs via extended roof duality. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 1–8).

  • Scharstein, D., & Szeliski, R. (2002). A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47, 7–42.


  • Schlesinger, D., & Flach, B. (2006). Transforming an arbitrary min-sum problem into a binary one. Technical report, TU Dresden, Fak. Informatik.

  • Slavík, P. (1996). A tight analysis of the greedy algorithm for set cover. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing (pp. 435–441).

  • Szeliski, R., Zabih, R., Scharstein, D., Veksler, O., Kolmogorov, V., Agarwala, A., et al. (2008). A comparative study of energy minimization methods for MRFs with smoothness-based priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(6), 1068–1080.

  • Tappen, M. F., & Freeman, W. T. (2003). Comparison of graph cuts with belief propagation for stereo using identical MRF parameters. In IEEE International Conference on Computer Vision (pp. 900–906).

  • Veksler, O. (2007). Graph cut based optimization for MRFs with truncated convex priors. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 1–8).

  • Veksler, O. (2012). Multi-label moves for MRFs with truncated convex priors. International Journal of Computer Vision, 98(1), 1–14.


Acknowledgments

This work is funded by the National Basic Research Program of China (Grant No. 2012CB316302) and the National Natural Science Foundation of China (Grant Nos. 61322209, 61175007 and 61403387). The work is also supported by the International Partnership Program of the Chinese Academy of Sciences (Grant No. 173211KYSB2016008). We thank Olga Veksler for her great help with this work, and we thank Pushmeet Kohli for his valuable comments.

Author information


Corresponding author

Correspondence to Kaiqi Huang.

Additional information

Communicated by O. Veksler.

Appendices

Proof of Theorem 1

1.1 Related Lemmas and Definitions

Before proving Theorem 1, we first give the following lemmas and the definition of a submodular set.

Lemma 5

For \(b_1,b_2>0\), the following conclusion holds.

$$\begin{aligned} \frac{a_1}{b_1}\ge \frac{a_2}{b_2}\;\Leftrightarrow \;\frac{a_1}{b_1}\ge \frac{a_1+a_2}{b_1+b_2}\ge \frac{a_2}{b_2}. \end{aligned}$$
(16)

The proof is straightforward and we omit it.
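
For completeness, a one-line verification (this is the mediant inequality; the derivation below is ours): since \(b_1,b_2>0\),

$$\begin{aligned} \frac{a_1}{b_1}\ge \frac{a_1+a_2}{b_1+b_2}\;\Leftrightarrow \;a_1\left( b_1+b_2\right) \ge b_1\left( a_1+a_2\right) \;\Leftrightarrow \;a_1b_2\ge a_2b_1\;\Leftrightarrow \;\frac{a_1}{b_1}\ge \frac{a_2}{b_2}, \end{aligned}$$

and \(\frac{a_1+a_2}{b_1+b_2}\ge \frac{a_2}{b_2}\) follows by the symmetric argument.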

Lemma 6

Assume that the function g(x) is convex on [a, b] and that three points \(x_1,x,x_2\in [a,b]\) satisfy \(x_1>x>x_2\). Then

$$\begin{aligned} \frac{g\left( x_1\right) -g\left( x\right) }{x_1-x}\ge \frac{g\left( x_1\right) -g\left( x_2\right) }{x_1-x_2}\ge \frac{g\left( x\right) -g\left( x_2\right) }{x-x_2}. \end{aligned}$$
(17)

Proof

Since \(x_1>x>x_2\), there exists \(\lambda \in (0,1)\) satisfying \(x=(1-\lambda )x_1+\lambda x_2\). Then, by the definition of a convex function, \((1-\lambda )g(x_1)+\lambda g(x_2)\ge g(x)\), and thus

$$\begin{aligned} \left( 1-\lambda \right) \left( g\left( x_1\right) -g\left( x\right) \right) \ge \lambda \left( g\left( x\right) -g\left( x_2\right) \right) \end{aligned}$$
(18)

Considering that \(x_1>x_2\) and \(0<\lambda <1\), we can divide both sides of (18) by \(\lambda (1-\lambda )(x_1-x_2)\) and obtain

$$\begin{aligned} \frac{g\left( x_1\right) -g\left( x\right) }{\lambda \left( x_1-x_2\right) }\ge&\frac{g\left( x\right) -g\left( x_2\right) }{\left( 1-\lambda \right) \left( x_1-x_2\right) } \end{aligned}$$
(19)
$$\begin{aligned} \frac{g\left( x_1\right) -g\left( x\right) }{x_1-x}\ge&\frac{g\left( x\right) -g\left( x_2\right) }{x-x_2}. \end{aligned}$$
(20)

Finally, the conclusion (17) follows by applying Lemma 5 to (20). \(\square \)

Lemma 7

Assume that g(x) is convex on [a, b] and that four points \(x_1,x_2,x_3,x_4\in [a,b]\) satisfy \(x_1>x_3\ge x_4\) and \(x_1\ge x_2>x_4\). Then

$$\begin{aligned} \frac{g\left( x_1\right) -g\left( x_3\right) }{x_1-x_3}\ge \frac{g\left( x_2\right) -g\left( x_4\right) }{x_2-x_4}. \end{aligned}$$
(21)

Proof

Since g(x) is convex on [a, b] and \(x_1>x_3\ge x_4\), it is straightforward to obtain

$$\begin{aligned} \frac{g\left( x_1\right) -g\left( x_3\right) }{x_1-x_3}\ge \frac{g\left( x_1\right) -g\left( x_4\right) }{x_1-x_4} \end{aligned}$$
(22)

by Lemma 6 (the case when \(x_3=x_4\) is trivial).

In the same way, we have

$$\begin{aligned} \frac{g\left( x_1\right) -g\left( x_4\right) }{x_1-x_4}\ge \frac{g\left( x_2\right) -g\left( x_4\right) }{x_2-x_4}. \end{aligned}$$
(23)

Combining (22) and (23), we obtain the inequality (21) which completes the proof. \(\square \)

Lemma 8

Given a function g(x) \((x = |\alpha - \beta |)\) on domain \(X = [0,c]\), assume g(x) is locally convex on interval \(X_s = [a,b]\) \((0\le a < b \le c)\), and it satisfies \(a\{g(a + 1)-g(a)\}\ge g(a)-g(0)\). Then we have

$$\begin{aligned} \frac{g(x_1)-g(x_3)}{x_1-x_3}\ge \frac{g(x_2)-g(0)}{x_2} \end{aligned}$$
(24)

for any \(x_1\), \(x_2\), \(x_3 \in X_s\) with \(x_3<x_1\) and \(x_2<x_1\).

Proof

Since \(x_1>x_3\ge a\) and \(x_1\in {\mathbb {N}}\), we have \(x_1\ge a+1>a\). Then considering that \(x_1>x_3\ge a\), we can use Lemma 7 to obtain

$$\begin{aligned} \frac{g\left( x_1\right) -g\left( x_3\right) }{x_1-x_3}\ge \frac{g\left( a+1\right) -g\left( a\right) }{a+1-a}\ge \frac{g\left( a\right) -g\left( 0\right) }{a},\nonumber \\ \end{aligned}$$
(25)

where the second inequality comes from \(a\{g(a+1)-g(a)\}\ge g(a)-g(0)\).

If \(x_2=a\), the conclusion follows from (25). Otherwise, \(x_1>x_2>a\) and \(x_1>x_3\ge a\), and using Lemma 7 we obtain

$$\begin{aligned} \frac{g\left( x_1\right) -g\left( x_3\right) }{x_1-x_3}\ge \frac{g\left( x_2\right) -g\left( a\right) }{x_2-a}. \end{aligned}$$
(26)

Combining (25) and (26), we can obtain

$$\begin{aligned} \begin{aligned} \frac{g\left( x_1\right) -g\left( x_3\right) }{x_1-x_3}\ge&\max \left\{ \frac{g\left( x_2\right) -g\left( a\right) }{x_2-a},\frac{g\left( a\right) -g\left( 0\right) }{a}\right\} \\ \ge&\frac{g\left( x_2\right) -g\left( a\right) +g\left( a\right) -g\left( 0\right) }{x_2-a+a}\\ =&\frac{g\left( x_2\right) -g\left( 0\right) }{x_2}, \end{aligned} \end{aligned}$$

where the second inequality is due to Lemma 5 and this completes the proof. \(\square \)

Definition 1

Given a pairwise potential \(\theta (\alpha ,\beta )\), we call \({\mathcal {L}}_s\) a submodular set of labels, if it satisfies

$$\begin{aligned} \theta \left( l_{i+1},l_j\right) -\theta \left( l_{i+1},l_{j+1}\right) -\theta \left( l_i,l_j\right) +\theta \left( l_i,l_{j+1}\right) \ge 0\nonumber \\ \end{aligned}$$
(27)

for any pair of labels \(l_i, l_j \in {\mathcal {L}}_s\) \((1 \le i,j<m)\).
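
Definition 1 can be checked mechanically. Below is a minimal sketch (our illustration, not the paper's code) that tests inequality (27) for a sorted label subset and a given pairwise potential \(\theta \):

```python
# Illustrative check of Definition 1: does the sorted label subset
# L_s = {l_1, ..., l_m} satisfy inequality (27) for every pair of
# indices 1 <= i, j < m under the pairwise potential theta?
def is_submodular_set(labels, theta):
    m = len(labels)
    for i in range(m - 1):
        for j in range(m - 1):
            lhs = (theta(labels[i + 1], labels[j])
                   - theta(labels[i + 1], labels[j + 1])
                   - theta(labels[i], labels[j])
                   + theta(labels[i], labels[j + 1]))
            if lhs < 0:
                return False
    return True

# Truncated linear potential with T = 4: the full label range is not a
# submodular set, but a subset with small pairwise gaps is.
theta = lambda a, b: min(abs(a - b), 4)
print(is_submodular_set(list(range(9)), theta))  # False
print(is_submodular_set([0, 2, 4], theta))       # True
```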

1.2 Proof of Theorem 1

Theorem 1

Given a pairwise function \(\theta (\alpha ,\beta )=g(x)\) \((x=|\alpha -\beta |)\), assume there is an interval (see Footnote 12) \(X_s=[a,b]\) \((0\le a<b)\) satisfying: (i) g(x) is convex on [a, b], and (ii) \(a\cdot (g(a+1)-g(a))\ge g(a)-g(0)\ge 0\). Then \({\mathcal {L}}_s=\{l_1,\cdots ,l_m\}\) is a submodular subset if \(|l_i-l_j| \in [a,b]\) for any pair of labels \(l_i,l_j\) such that \(l_i\ne l_j\) and \(l_i,l_j \in {\mathcal {L}}_s\).

Proof

Since \(\theta (\alpha ,\beta )\) is semimetric and satisfies \(\theta (\alpha ,\beta )=\theta (\beta ,\alpha )\), we only consider \(l_i, l_{i+1},l_j, l_{j+1} \in {\mathcal {L}}_s\) where \(i\ge j\). Let

$$\begin{aligned} x_1&= l_{i + 1}-l_j,&x_2&= l_{i + 1}-l_{j + 1}, \\ x_3&= l_i-l_j,&x_4&= l_i-l_{j + 1} \end{aligned}$$

We have \(x_1>x_2\ge x_3 > x_4\), and \(x_1 - x_2=x_3 - x_4\). We can define

$$\begin{aligned} \lambda = \frac{x_3-x_4}{x_1-x_4}= \frac{x_1-x_2}{x_1-x_4}, \ \ \left( 0<\lambda <1\right) \end{aligned}$$
(28)

then, we get

$$\begin{aligned} x_3 = \lambda x_1 + \left( 1 - \lambda \right) x_4, \quad x_2=\lambda x_4 + \left( 1 - \lambda \right) x_1. \end{aligned}$$
(29)

If \(a=0\), i.e., \(X_s = [0,b]\), we have \(x_1,x_2,x_3,x_4 \in {X}_s\) according to the assumption in Theorem 1. Since g(x) is convex on \({X}_s\), with Eq. (29) we obtain

$$\begin{aligned} \begin{aligned} g\left( x_3\right)&\le \lambda g\left( x_1\right) +\left( 1-\lambda \right) g\left( x_4\right) , \\ g\left( x_2\right)&\le \lambda g\left( x_4\right) +\left( 1-\lambda \right) g\left( x_1\right) \end{aligned} \end{aligned}$$
(30)

Summing the two inequalities in (30), we get

$$\begin{aligned} g\left( x_2\right) +g\left( x_3\right) \le g\left( x_1\right) +g\left( x_4\right) \end{aligned}$$

Thus, \(\theta (l_{i + 1}, l_j) - \theta (l_{i + 1}, l_{j + 1}) - \theta (l_i, l_j) + \theta (l_i, l_{j + 1})\ge 0\) is satisfied for any pair of labels \(l_i\), \(l_j\in {\mathcal {L}}_s\).

If \(a>0\) (\(X_s = [a,b]\)), we prove the theorem in three cases: 1) \(i = j\); 2) \(i>j+1\); 3) \(i= j+1\).

  1. When \(i=j\), we have

    $$\begin{aligned}&\theta \left( l_{i + 1}, l_j\right) -\theta \left( l_{i + 1}, l_{j + 1}\right) -\theta \left( l_i, l_j\right) +\theta \left( l_i, l_{j + 1}\right) \\&\quad = \theta \left( l_{i + 1}, l_i\right) -\theta \left( l_{i + 1}, l_{i + 1}\right) -\theta \left( l_i, l_i\right) +\theta \left( l_i, l_{i + 1}\right) \\&\quad = 2\left( g\left( l_{i + 1}-l_i\right) -g\left( 0\right) \right) \\&\quad \ge 2\left( g\left( a\right) -g\left( 0\right) \right) \ge 0 \end{aligned}$$
  2. When \(i>j+1\), we have \(x_1,x_2,x_3,x_4 \in {X}_s\) according to the assumption in Theorem 1. Since g(x) is convex on \({X}_s\), with Eq. (29) we obtain

    $$\begin{aligned} \begin{aligned} g\left( x_3\right)&\le \lambda g\left( x_1\right) +\left( 1-\lambda \right) g\left( x_4\right) , \\ g\left( x_2\right)&\le \lambda g\left( x_4\right) +\left( 1-\lambda \right) g\left( x_1\right) \end{aligned} \end{aligned}$$
    (31)

    Summing the two inequalities in (31), we get

    $$\begin{aligned} g\left( x_2\right) +g\left( x_3\right) \le g\left( x_1\right) +g\left( x_4\right) \end{aligned}$$

    Thus, \(\theta (l_{i + 1}, l_j) - \theta (l_{i + 1}, l_{j + 1}) - \theta (l_i, l_j) + \theta (l_i, l_{j + 1})\ge 0\) is satisfied for any pair of labels \(l_i\), \(l_j\in {\mathcal {L}}_s\) with \(i>j+1\).

  3. When \(i=j+1\), we have

    $$\begin{aligned} x_1&= l_{j + 2}-l_j,&x_2&= l_{j + 2}-l_{j + 1}, \\ x_3&= l_{j + 1}-l_j,&x_4&= 0 \end{aligned}$$

Thus, we have \(x_1=x_2+x_3\), and \(x_1,x_2,x_3\in {X}_s\) but \(x_4 \notin {X}_s\).

With Lemma 8, we have

$$\begin{aligned} \frac{g\left( x_1\right) -g\left( x_3\right) }{x_1-x_3}\ge \frac{g\left( x_2\right) -g\left( 0\right) }{x_2}. \end{aligned}$$
(32)

Since \(x_1-x_3=x_2\) and \(x_4=0\) (so \(g(x_4)=g(0)\)), inequality (32) gives

$$\begin{aligned} g\left( x_2\right) +g\left( x_3\right) \le g\left( x_1\right) +g\left( x_4\right) \end{aligned}$$

and \(\theta (l_{i + 1}, l_j)-\theta (l_{i + 1}, l_{j + 1}) - \theta (l_i, l_j) + \theta (l_i, l_{j + 1})\ge 0\) is satisfied for any pair of labels \(l_i\), \(l_j\in {\mathcal {L}}_s\) with \(i=j+1\).

Therefore, \(\theta (l_{i + 1}, l_j)-\theta (l_{i + 1}, l_{j + 1}) - \theta (l_i, l_j) +\, \theta (l_i,l_{j + 1})\ge 0\) is satisfied for any pair of labels \(l_i\), \(l_j\in {\mathcal {L}}_s\). The proof is completed. \(\square \)

1.3 Proof of Corollary 1

Corollary 1

(of Theorem 1) Assume the interval [a, b] is a candidate interval. Then \(\{\alpha , \alpha +x_1, \alpha +x_1+x_2,\cdots ,\alpha +x_1+\cdots +x_m\}\subseteq {\mathcal {L}}\) is a submodular set for any \(\alpha \ge 0\), if \( x_1,\cdots ,x_m \in [a,b]\) and \(x_1+\cdots +x_m\le b\).

Proof

Let \({\mathcal {L}}_s=\{\alpha , \alpha +x_1, \alpha +x_1+x_2,\cdots ,\alpha +x_1+\cdots +x_m\}\). We consider a pair of labels \(\alpha _1\) and \(\alpha _2\), which can be any pair of distinct labels in \({\mathcal {L}}_s\). According to the definition, there always exist p, q (\(1\le p \le q\le m\)) such that

$$\begin{aligned} |\alpha _1-\alpha _2| = x_p+x_{p+1}+\cdots +x_q. \end{aligned}$$

Since \(x_i \in [a,b]\) for all \(i \in [p, q]\), we have \(|\alpha _1-\alpha _2|\ge a\).

Since \(x_1+\cdots +x_m \le b\), we have \(x_p+x_{p+1}+\cdots +x_q \le b\).

Thus, \(|\alpha _1-\alpha _2| \in [a,b]\) for any pair of labels \(\alpha _1\), \(\alpha _2 \in {\mathcal {L}}_s\).

Thus, \({\mathcal {L}}_s\) is a submodular set according to Theorem 1. \(\square \)
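
Corollary 1 thus gives a simple recipe for generating submodular subsets. A small illustrative sketch (the names are ours), assuming a candidate interval [a, b] is already known:

```python
# Illustrative recipe from Corollary 1: starting from a label alpha,
# steps x_1, ..., x_m drawn from the candidate interval [a, b] with
# x_1 + ... + x_m <= b yield a submodular subset.
def submodular_subset(alpha, steps, a, b):
    assert all(a <= x <= b for x in steps), "each step must lie in [a, b]"
    assert sum(steps) <= b, "the total spread must not exceed b"
    labels, cur = [alpha], alpha
    for x in steps:
        cur += x
        labels.append(cur)
    return labels

# With candidate interval [2, 8] and steps 2 + 3 + 3 = 8 <= 8:
print(submodular_subset(10, [2, 3, 3], a=2, b=8))  # [10, 12, 15, 18]
```

Any two labels in the returned subset differ by at least \(a=2\) and at most \(b=8\), so Theorem 1 applies.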

Proof of Proposition 1

Proposition 1

Let \({\mathcal {L}}_1,\cdots ,{\mathcal {L}}_k\) be a set of range swap moves, which cover all pairs of labels \(l_i,l_j\in {\mathcal {L}}\). Let \({\hat{f}}\) be a local minimum obtained by these moves. Then, \({\hat{f}}\) is also a local minimum for \(\alpha \beta \)-swap.

Proof

Suppose, for contradiction, that within one swap move on the pair of labels \(\alpha \), \(\beta \), \(\alpha \beta \)-swap can achieve a better solution \(f^*\) such that \(E(f^*)<E({\hat{f}})\).

Let \({\mathcal {L}}_i\) be a move that covers \(\alpha \), \(\beta \), i.e. \(\alpha \), \(\beta \in {\mathcal {L}}_i\). The range swap move on \({\mathcal {L}}_i\) minimizes:

$$\begin{aligned} E_{s}\left( f\right) = \sum \limits _{p \in {\mathcal {P}}_{{\mathcal {L}}_i}} {\theta _p\left( f_p\right) } + \sum \limits _{\left( p,q\right) \in {\mathcal {E}} ,\{p,q\} \cap {\mathcal {P}}_{{\mathcal {L}}_i} \ne \emptyset } {\theta _{pq}\left( f_p,f_q\right) }\nonumber \\ \end{aligned}$$
(33)

where \({\mathcal {P}}_{{\mathcal {L}}_i}\) denotes the set of vertices whose labels belong to \({\mathcal {L}}_i\). We have \({\mathcal {P}}_{\alpha },{\mathcal {P}}_{\beta }\subseteq {\mathcal {P}}_{{\mathcal {L}}_i}\).

Under this assumption, the swap move on \(\alpha \), \(\beta \) achieves a better solution with \(E(f^*)<E({\hat{f}})\). This means that the energy \(E({\hat{f}})\) can be decreased by changing the labels of some vertices \(p \in {\mathcal {P}}_{\alpha }\) to \(\beta \), or by changing the labels of some vertices \(p \in {\mathcal {P}}_{\beta }\) to \(\alpha \). Therefore, the range swap move on \({\mathcal {L}}_i\) can also decrease \(E({\hat{f}})\). This contradicts the fact that \({\hat{f}}\) is a local minimum of the range swap moves.

Thus, \(E({\hat{f}})\) cannot be decreased by any move in the \(\alpha \beta \)-swap, and the proof is completed. \(\square \)

Proof of Proposition 2

Proposition 2

Let \({\hat{f}}\) be a local minimum obtained by \(\alpha \beta \)-swap. With the initial labeling \({\hat{f}}\), the range swap moves on \({\mathcal {L}}'=\{{\mathcal {L}}_1,\cdots ,{\mathcal {L}}_k\}\) yield a local minimum \(f^\dag \) such that \(E(f^\dag )<E({\hat{f}})\), unless the labeling \({\hat{f}}\) exactly optimizes the energy:

$$\begin{aligned} E_{s}\left( f\right) = \sum \limits _{p \in {\mathcal {P}}_{{\mathcal {L}}_i}} {\theta _p\left( f_p\right) } + \sum \limits _{\left( p,q\right) \in {\mathcal {E}} ,\{p,q\} \cap {\mathcal {P}}_{{\mathcal {L}}_i} \ne \emptyset } {\theta _{pq}\left( f_p,f_q\right) } \end{aligned}$$

for each \({\mathcal {L}}_i \in {\mathcal {L}}'\).

Proof

Assume the labeling \({\hat{f}}\) does not exactly optimize the energy

$$\begin{aligned} E_{s}\left( f\right) = \sum \limits _{p \in {\mathcal {P}}_{{\mathcal {L}}_i}} {\theta _p\left( f_p\right) } + \sum \limits _{\left( p,q\right) \in {\mathcal {E}} ,\{p,q\} \cap {\mathcal {P}}_{{\mathcal {L}}_i} \ne \emptyset } {\theta _{pq}\left( f_p,f_q\right) }\nonumber \\ \end{aligned}$$
(34)

for some \({\mathcal {L}}_i \in {\mathcal {L}}^{'}\).

Obviously, \(E({\hat{f}})\) can be decreased by the range swap move on \({\mathcal {L}}_i\), since this move obtains a labeling \(f^*\) that is a global minimizer of \(E_{s}(f)\) in (34).

Thus, the proof of Proposition 2 is completed. \(\square \)

Proof of Lemma 1

Lemma 1

When edges \((p_a,p_{a+1})\) and \((q_b,q_{b+1})\) are in the st-cut C, that is, \(f_p\), \(f_q\) are assigned the labels \(l_a\), \(l_b\) respectively, let \(cut(l_a,l_b)\) denote the cost of the pairwise edges in the st-cut. We have the following relationship

$$\begin{aligned} cut\left( l_a,l_b\right) = \left\{ \begin{array}{ll} \sum \limits _{i=b+1}^a\sum \limits _{j=b+1}^ic\left( p_i,q_j\right) , &{} \hbox {if } l_a\ge l_b \hbox { ;}\\ \sum \limits _{i=a+1}^b\sum \limits _{j=a+1}^ic\left( q_i,p_j\right) , &{} \hbox {if } l_a<l_b \hbox { .} \end{array} \right. \end{aligned}$$
Fig. 12: The graph construction in the st-mincut problem to solve the range swap move. The edges in the st-cut are marked red when nodes p, q are assigned labels \(l_a\) and \(l_b\), respectively (Color figure online)

Proof

We show the proof for the case where \(l_a\ge l_b\); a similar argument applies when \(l_a<l_b\).

By the definition of an st-cut, the cut consists only of edges going from the source (s) side to the sink (t) side, and its cost is the sum of the capacities of the edges in the cut. As shown in Fig. 12, if nodes p, q are assigned \(l_a\) and \(l_b\), respectively, the st-cut is specified by the edges

$$\begin{aligned}&\left( p_a,p_{a+1}\right) \cup \left( q_b,q_{b+1}\right) \cup \{\left( p_i,q_j\right) ,\\&\quad b+1\le i \le a, b+1\le j\le i\}. \end{aligned}$$

where \((p_a,p_{a+1})\) and \((q_b,q_{b+1})\) are the unary edges, while \((p_i,q_j)\) denote the pairwise edges.

As a result, the cost of the pairwise edges in the st-cut is

$$\begin{aligned} cut\left( l_a,l_b\right) = \sum \limits _{i=b+1}^a\sum \limits _{j=b+1}^ic\left( p_i,q_j\right) , \text { where } l_a\ge l_b. \end{aligned}$$
(35)

The proof is completed. \(\square \)

Proof of Lemma 2

Lemma 2

For the graph described in Sect. 5.1, Property 2 holds true.

Proof

As described in Sect. 5.1, we construct a directed graph such that a set of nodes is defined for each \(p\in {\mathcal {P}}_s\). In addition, a set of edges is constructed to model the unary and pairwise potentials.

Assume that p and q are assigned the labels \(l_a\), \(l_b\) respectively; in other words, \(f'_p=l_a\) and \(f'_q=l_b\). For brevity, we only consider the case where \(l_a \ge l_b\); a similar argument applies when \(l_a < l_b\).

We observe that the st-cut will consist of only the following edges:

$$\begin{aligned} \left\{ \left( p_i,q_j\right) , b+1 \le i \le a, b+1 \le j \le i\right\} . \end{aligned}$$

Using Eq. 10 to sum the capacities of the above edges, we obtain the cost of the st-cut

$$\begin{aligned} \begin{aligned}&cut\left( f'_p,f'_q\right) =\sum \limits _{i=b+1}^a\sum \limits _{j=b+1}^ic\left( p_i,q_j\right) \\&\quad = \sum \limits _{i=b+1}^a\sum \limits _{j=i}^ic\left( p_i,q_j\right) +\sum \limits _{i=b+1}^a\sum \limits _{j=b+1}^{i-1}c\left( p_i,q_j\right) \\&\quad = \sum \limits _{i=b+1}^a\sum \limits _{j=i}^i \frac{\psi \left( i,j\right) }{2} +\sum \limits _{i=b+1}^a\sum \limits _{j=b+1}^{i-1}\psi \left( i,j\right) \\ \end{aligned} \end{aligned}$$
(36)

where \(\psi (i,j)=\theta (l_{i}, l_{j-1}) - \theta (l_{i}, l_{j}) - \theta (l_{i-1}, l_{j-1}) + \theta (l_{i-1}, l_{j})\) for \(1 < j \le i \le m\).

Since the pairwise potentials satisfy \(\theta (\alpha ,\beta )=0\Leftrightarrow \alpha =\beta \) and \(\theta (\alpha ,\beta )=\theta (\beta ,\alpha )\ge 0\), when \(i=j\), \(\psi (i,j)\) can be simplified as

$$\begin{aligned} \psi \left( i,j\right) = 2\theta \left( l_{i}, l_{j-1}\right) , i=j. \end{aligned}$$

Using the above equation, we have

$$\begin{aligned} \sum \limits _{i=b+1}^a\sum \limits _{j=i}^i \frac{\psi \left( i,j\right) }{2} = \sum \limits _{i=b+1}^a\theta \left( l_{i}, l_{i-1}\right) , \end{aligned}$$
(37)

and

$$\begin{aligned} \sum \limits _{i=b+1}^a\sum \limits _{j=b+1}^{i-1}\psi \left( i,j\right)&=\sum \limits _{i=b+1}^a\left\{ \theta \left( l_{i}, l_{b}\right) - \theta \left( l_{i-1}, l_{b}\right) - \theta \left( l_{i}, l_{i-1}\right) \right\} \\&=\theta \left( l_{a}, l_{b}\right) - \theta \left( l_{b}, l_{b}\right) - \sum \limits _{i=b+1}^a\theta \left( l_{i}, l_{i-1}\right) \\&=\theta \left( l_{a}, l_{b}\right) - \sum \limits _{i=b+1}^a\theta \left( l_{i}, l_{i-1}\right) \end{aligned}$$
(38)

Using Eqs. 36, 37 and 38, we have

$$\begin{aligned} cut\left( f'_p,f'_q\right) = \theta \left( l_{a }, l_{b}\right) \end{aligned}$$

This proves that Property 2 holds true. \(\square \)
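
Property 2 can also be verified numerically. The following is a minimal sketch (our illustration) that reproduces the telescoping computation in Eqs. (36)–(38) for a semimetric \(\theta \):

```python
# Illustrative check of Property 2: with the pairwise capacities
# c(p_i, q_j) = psi(i, j) (halved on the diagonal i == j) as in the
# proof of Lemma 2, the pairwise cut cost equals theta(l_a, l_b).
def cut_cost(labels, theta, a, b):
    """Pairwise cut cost when p, q take labels l_a, l_b (a >= b, 0-indexed)."""
    def psi(i, j):
        return (theta(labels[i], labels[j - 1]) - theta(labels[i], labels[j])
                - theta(labels[i - 1], labels[j - 1]) + theta(labels[i - 1], labels[j]))
    total = 0.0
    for i in range(b + 1, a + 1):
        for j in range(b + 1, i + 1):
            total += psi(i, j) / 2 if i == j else psi(i, j)
    return total

labels = [0, 2, 5, 9]
theta = lambda u, v: min(abs(u - v) ** 2, 100)  # truncated quadratic
assert cut_cost(labels, theta, a=3, b=0) == theta(labels[3], labels[0])
```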

Proof of Lemma 3

Lemma 3

When the pairwise function is a truncated function \(\theta (f_p,f_q) = \min \{d(|f_p-f_q|),T\}\), for the case \(f'_p\in {\mathcal {L}}_s\) and \(f'_q=f_q\notin {\mathcal {L}}_s\), we have the following properties:

  • If \(f_q > l_m\), we have

    $$\begin{aligned} \theta \left( f'_p,f'_q\right) \le cut\left( f'_p,f'_q\right) \le d\left( |f'_p - l_1|\right) + T. \end{aligned}$$
  • If \(f_q < l_1\), and \(f_p \in {\mathcal {L}}_s\) or \(f_p < l_1\),

    $$\begin{aligned} \theta \left( f'_p,f'_q\right)\le & {} cut\left( f'_p,f'_q\right) \\\le & {} \min \left\{ d\left( |f'_p - f'_q|\right) ,d\left( |f'_p - l_1|\right) + T\right\} \end{aligned}$$
  • If \(f_q < l_1\) and \(f_p > l_m\), we have

    $$\begin{aligned} \theta \left( f'_p,f'_q\right)\le & {} cut\left( f'_p,f'_q\right) \\\le & {} \min \left\{ d\left( |f'_p - f'_q|\right) \right. \\&+ \left. \frac{T}{2},d\left( |f'_p - l_1|\right) + T\right\} \end{aligned}$$

Proof

With Property 6, the cost of the st-cut is

$$\begin{aligned}&cut\left( f'_p,f'_q\right) \\&\quad = \left\{ \begin{array}{ll} \theta \left( l_a,l_1\right) +\sum \limits _{i=2}^ac\left( p_i,q_1\right) + \theta \left( l_1,f_q\right) , &{} f_p\in {\mathcal {L}}_s; \\ \theta \left( l_a,l_1\right) +\sum \limits _{i=2}^ac\left( p_i,q_1\right) + \theta \left( l_1,f_q\right) \\ +\delta , &{} f_p\notin {\mathcal {L}}_s. \end{array} \right. \end{aligned}$$

where \(f'_p = l_a\) and \(\delta =\max (0,\frac{\theta (f_p,f_q)-\theta (l_1,f_q)-\theta (f_p,l_1)}{2})\).

For brevity, we define

$$\begin{aligned} \eta = \left\{ \begin{array}{ll} 0, &{} f_p\in {\mathcal {L}}_s; \\ \delta , &{} f_p\notin {\mathcal {L}}_s, \end{array} \right. \end{aligned}$$

and the cost of the st-cut can be rewritten as

$$\begin{aligned} cut\left( f'_p,f'_q\right) = \theta \left( l_a,l_1\right) +\sum \limits _{i=2}^ac\left( p_i,q_1\right) + \theta \left( l_1,f_q\right) +\eta .\nonumber \\ \end{aligned}$$
(39)

As described in Sect. 5.2.2, the graph is constructed with the following edges

(40)

where we define \(\theta _\eta =\theta (l_1,f_q)+\eta \) for brevity. We have

$$\begin{aligned} \begin{aligned}&\theta \left( l_1,f_q\right) \le T \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \theta \left( l_1,f_q\right) +\delta&= \max \left( \theta \left( l_1,f_q\right) ,\right. \\&\qquad \qquad \left. \frac{\theta \left( f_p,f_q\right) +\theta \left( l_1,f_q\right) -\theta \left( f_p,l_1\right) }{2}\right) \\&\le T, \end{aligned} \end{aligned}$$

and thus

$$\begin{aligned} \theta _\eta =\theta \left( l_1,f_q\right) +\eta \le T. \end{aligned}$$
(41)

First, we prove that the following inequality holds for the case \(f'_p\in {\mathcal {L}}_s\) and \(f'_q=f_q\notin {\mathcal {L}}_s\).

$$\begin{aligned} \theta \left( l_a,f_q\right) \le cut\left( f'_p,f'_q\right) \le d\left( |l_a-l_1|\right) +T \end{aligned}$$
(42)

where \(f'_p=l_a\) and \(f'_q = f_q\).

When \(a = 1\), we have

$$\begin{aligned} cut\left( l_1,f'_q\right) = \theta \left( l_1,f_q\right) +\eta . \end{aligned}$$

Using the equation above and Eq. 41, we obtain

$$\begin{aligned} \theta \left( l_1,f_q\right) \le cut\left( l_1,f'_q\right) \le d\left( |l_1-l_1|\right) +T \end{aligned}$$

and thus, the inequality (42) is true when \(a=1\).

We assume inequality (42) holds true when \(a=k\) (\(k\ge 1\)), that is,

$$\begin{aligned} \theta \left( l_k,f_q\right) \le cut\left( l_k,f'_q\right) \le d\left( |l_k-l_1|\right) +T \end{aligned}$$
(43)

where

$$\begin{aligned} cut\left( l_k,f'_q\right) = \theta \left( l_k,l_1\right) +\sum \limits _{i=2}^kc\left( p_i,q_1\right) + \theta _\eta . \end{aligned}$$
(44)

When \(a = k+1\),

$$\begin{aligned} cut\left( l_{k+1},f'_q\right) = \theta \left( l_{k+1},l_1\right) +\sum \limits _{i=2}^{k+1}c\left( p_i,q_1\right) + \theta _\eta . \end{aligned}$$
(45)

Using Eqs. 40, 44 and 45,

$$\begin{aligned} \begin{aligned} cut\left( l_{k+1},f'_q\right)&=cut\left( l_k,f'_q\right) +\theta \left( l_{k+1},l_1\right) + c\left( p_{k+1},q_1\right) \\&\quad - \theta \left( l_k,l_1\right) \\&=cut\left( l_k,f'_q\right) +\theta \left( l_{k+1},l_1\right) - \theta \left( l_k,l_1\right) \\&\quad + \max \left( 0,\theta \left( l_{k+1},f_q\right) -\theta \left( l_{k+1},l_1\right) \right. \\&\quad \left. + \,\theta \left( l_k,l_1\right) -cut\left( l_k,f'_q\right) \right) \end{aligned} \end{aligned}$$

If \(\theta (l_{k+1},f_q) - \theta (l_{k+1},l_1) - cut(l_k,f'_q) + \theta (l_k,l_1) \le 0\), we have

$$\begin{aligned} \begin{aligned} cut\left( l_{k+1},f'_q\right)&=cut\left( l_k,f'_q\right) +\theta \left( l_{k+1},l_1\right) - \theta \left( l_k,l_1\right) ,\\&\le \theta \left( l_{k+1},l_1\right) - \theta \left( l_k,l_1\right) +d\left( |l_k-l_1|\right) +T \\&\le d\left( |l_{k+1}-l_1|\right) + T \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \theta \left( l_{k+1},f_q\right)&\le cut\left( l_k,f'_q\right) +\theta \left( l_{k+1},l_1\right) -\theta \left( l_k,l_1\right) , \\ \theta \left( l_{k+1},f_q\right)&\le cut\left( l_{k+1},f'_q\right) . \end{aligned} \end{aligned}$$

If \(\theta (l_{k+1},f_q) - \theta (l_{k+1},l_1) - cut(l_k,f'_q) + \theta (l_k,l_1)\ge 0\), we have

$$\begin{aligned} \begin{aligned} \theta \left( l_{k+1},f_q\right) \le cut\left( l_{k+1},f'_q\right)&=\theta \left( l_{k+1},f_q\right) \\&\le d\left( |l_{k+1} - l_1|\right) + T. \end{aligned} \end{aligned}$$

Thus, inequality (42) holds by induction, which completes the first part of the proof.

Next, we prove that if \(f_q < l_1\), it holds that

$$\begin{aligned} cut\left( f'_p,f'_q\right) \le d\left( |l_a - f'_q|\right) + \eta , \end{aligned}$$
(46)

where \(f'_p =l_a\).

When \(a=1\), it holds true that

$$\begin{aligned} cut\left( l_1,f'_q\right) = \theta \left( l_1,f_q\right) + \eta \le d\left( |l_a - f'_q|\right) + \eta . \end{aligned}$$

We assume that when \(a = k\) (\(k\ge 1\)), it holds true that

$$\begin{aligned} \begin{aligned} cut\left( l_k,f'_q\right)&= \theta \left( l_k,l_1\right) +\sum \limits _{i=2}^kc\left( p_i,q_1\right) + \theta \left( l_1,f_q\right) +\eta \\&\le d\left( |l_k - f'_q|\right) +\eta . \end{aligned} \end{aligned}$$

When \(a = k+1\),

$$\begin{aligned} cut\left( l_{k+1},f'_q\right) = \theta \left( l_{k+1},l_1\right) +\sum \limits _{i=2}^{k+1}c\left( p_i,q_1\right) + \theta _\eta . \end{aligned}$$
(47)

Using Eqs. 40 and 47,

$$\begin{aligned} \begin{aligned} cut\left( l_{k+1},f'_q\right) =&cut\left( l_k,f'_q\right) +\theta \left( l_{k+1},l_1\right) - \theta \left( l_k,l_1\right) \\&+ \max \left( 0,\theta \left( l_{k+1},f_q\right) -\theta \left( l_{k+1},l_1\right) \right. \\&\left. + \theta \left( l_k,l_1\right) -cut\left( l_k,f'_q\right) \right) \end{aligned} \end{aligned}$$

If \(\theta (l_{k+1},f_q) - \theta (l_{k+1},l_1) - cut(l_k,f'_q) + \theta (l_k,l_1) \le 0\), we have

$$\begin{aligned} \begin{aligned} cut\left( l_{k+1},f'_q\right)&=cut\left( l_k,f'_q\right) +\theta \left( l_{k+1},l_1\right) - \theta \left( l_k,l_1\right) ,\\&\le \theta \left( l_{k+1},l_1\right) - \theta \left( l_k,l_1\right) \\&\quad +d\left( |l_k-f'_q|\right) +\eta \\ \end{aligned} \end{aligned}$$

As \(l_1,l_k,l_{k+1}\in {\mathcal {L}}_s\), we have \(\theta (l_k,l_1)=d(|l_k-l_1|)\) and \(\theta (l_{k+1},l_1)=d(|l_{k+1}-l_1|)\). As \(d(\cdot )\) is a convex function and \(f'_q<l_1<l_k<l_{k+1}\), using Lemma 7, we have

$$\begin{aligned} \frac{d\left( |l_k-f'_q|\right) -d\left( |l_k-l_1|\right) }{\left( l_k-f'_q\right) -\left( l_k-l_1\right) }&\le \frac{d\left( |l_{k+1}-f'_q|\right) -d\left( |l_{k+1}-l_1|\right) }{\left( l_{k+1}-f'_q\right) -\left( l_{k+1}-l_1\right) },\\ d\left( |l_k-f'_q|\right) -d\left( |l_k-l_1|\right)&\le d\left( |l_{k+1}-f'_q|\right) -d\left( |l_{k+1}-l_1|\right) , \end{aligned}$$

where both denominators equal \(l_1-f'_q>0\).

Therefore,

$$\begin{aligned} \begin{aligned} cut\left( l_{k+1},f'_q\right)&\le \theta \left( l_{k+1},l_1\right) - \theta \left( l_k,l_1\right) +d\left( |l_k-f'_q|\right) \\&\quad +\eta \le d\left( |l_{k+1}-f'_q|\right) +\eta \end{aligned} \end{aligned}$$

If \(\theta (l_{k+1},f_q) - \theta (l_{k+1},l_1) - cut(l_k,f'_q) + \theta (l_k,l_1)\ge 0\), we have

$$\begin{aligned} \begin{aligned} cut\left( l_{k+1},f'_q\right)&=\theta \left( l_{k+1},f_q\right) \le d\left( |l_{k+1}-f'_q|\right) +\eta . \end{aligned} \end{aligned}$$

Therefore, if \(f_q < l_1\), and \(f_p \in {\mathcal {L}}_s\) or \(f_p < l_1\),

$$\begin{aligned} cut\left( f'_p,f'_q\right) \le d\left( |l_a - f'_q|\right) +\eta \text {, where } f'_p =l_a \end{aligned}$$

holds true, which completes the proof of inequality (46).

If \(f_p \in {\mathcal {L}}_s\), we have \(\eta =0\).

If \(f_q < l_1\) and \(f_p < l_1\), we have \(\theta (f_p,f_q)<\theta (l_1,f_q)\) and

$$\begin{aligned} \frac{\theta \left( f_p,f_q\right) -\theta \left( l_1,f_q\right) -\theta \left( f_p,l_1\right) }{2}< 0. \end{aligned}$$

Therefore, \(\delta =0\), and \(\eta =0\).

Using the above results and inequalities (42) and (46), we obtain the following results

$$\begin{aligned} \theta \left( f'_p,f'_q\right)\le & {} cut\left( f'_p,f'_q\right) \\\le & {} \min \left\{ d\left( |f'_p - f'_q|\right) ,d\left( |f'_p - l_1|\right) + T\right\} \end{aligned}$$

for the case where \(f_q < l_1\), and \(f_p \in {\mathcal {L}}_s\) or \(f_p < l_1\).

If \(f_q < l_1\) and \(f_p > l_m\), we have

$$\begin{aligned} \frac{\theta \left( f_p,f_q\right) -\theta \left( l_1,f_q\right) -\theta \left( f_p,l_1\right) }{2}< \frac{T}{2}. \end{aligned}$$

Using the above results and inequalities (42) and (46), we obtain the following results

$$\begin{aligned} \theta \left( f'_p,f'_q\right)\le & {} cut\left( f'_p,f'_q\right) \\\le & {} \min \left\{ d\left( |f'_p - f'_q|\right) + \frac{T}{2},d\left( |f'_p - l_1|\right) + T\right\} \end{aligned}$$

for the case where \(f_q < l_1\) and \(f_p > l_m\).

The proof of Lemma 3 is completed. \(\square \)


Cite this article

Liu, K., Zhang, J., Yang, P. et al. GRMA: Generalized Range Move Algorithms for the Efficient Optimization of MRFs. Int J Comput Vis 121, 365–390 (2017). https://doi.org/10.1007/s11263-016-0944-z
