Difference of convex functions algorithms (DCA) for image restoration via a Markov random field model


Abstract

In this paper, we introduce a novel approach in the nonconvex optimization framework for image restoration via a Markov random field (MRF) model. While image restoration is elegantly expressed in the language of MRFs, the resulting energy minimization problem has long been viewed as intractable: its energy function is highly nonsmooth and nonconvex with many local minima, and the problem is known to be NP-hard. The main goal of this paper is to develop fast and scalable approximate optimization approaches to a nonsmooth nonconvex MRF model corresponding to an MRF with a truncated quadratic (also known as half-quadratic) prior. To this end, we use difference of convex functions (DC) programming and the DC algorithm (DCA), a fast and robust approach in smooth/nonsmooth nonconvex programming that has been successfully applied in various fields in recent years. We propose two DC formulations and investigate the two corresponding versions of DCA. Numerical simulations show the efficiency, reliability and robustness of our customized DCAs with respect to the standard GNC algorithm and the Graph-Cut based method, a more recent and efficient approach to image analysis.
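To illustrate the flavor of a DCA iteration on this type of energy, consider a simplified one-dimensional restoration problem with a truncated quadratic prior. The sketch below is only illustrative: the DC split min(t^2, alpha) = t^2 - max(t^2 - alpha, 0), the function name dca_restore, and the parameters lam, alpha and iters are our own choices, and this is not one of the two DC formulations proposed in the paper.

import numpy as np

def dca_restore(d_obs, lam=1.0, alpha=1.0, iters=50):
    """Illustrative DCA sketch for a 1-D restoration energy with a
    truncated quadratic prior (not the paper's 2-D MRF formulations).

    Energy:  E(x) = ||x - d_obs||^2 + lam * sum_i min((x_{i+1} - x_i)^2, alpha)
    DC split E = g - h with
        g(x) = ||x - d_obs||^2 + lam * ||D x||^2        (convex quadratic)
        h(x) = lam * sum_i max((D x)_i^2 - alpha, 0)    (convex)
    Each DCA step picks y in the subdifferential of h at the current x and
    solves the convex subproblem min_x g(x) - <y, x>, here a linear system.
    """
    n = d_obs.size
    # forward-difference operator D of shape (n-1, n): (D x)_i = x_{i+1} - x_i
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    A = 2.0 * np.eye(n) + 2.0 * lam * D.T @ D   # Hessian of g (constant)
    x = d_obs.astype(float)
    for _ in range(iters):
        t = D @ x
        # subgradient of h at x: the quadratic part is active only where t^2 > alpha
        y = lam * D.T @ (2.0 * t * (t ** 2 > alpha))
        # solve grad g(x) = y, i.e. (2 I + 2 lam D^T D) x = 2 d_obs + y
        x = np.linalg.solve(A, 2.0 * d_obs + y)
    return x

Each iteration thus replaces the concave part by its linearization at the current iterate and solves a convex subproblem; this linearize-then-minimize structure is the generic DCA scheme.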



Acknowledgements

This research is funded by the Foundation for Science and Technology Development of Ton Duc Thang University (FOSTECT), website: http://fostect.tdt.edu.vn, under Grant FOSTECT.2015.BR.15. The authors would like to thank Drs Nguyen T. Phuc and Nguyen B. Thuy, who performed the comparative computational experiments between DCA and GC, Mr Tran Bach for the numerical tests on MINOS, and the referees for their valuable comments, which helped to improve the manuscript.

Author information


Corresponding author

Correspondence to Hoai An Le Thi.

Appendices

Appendix 1: Subdifferential of a function defined as the supremum of a family of convex functions

Lemma 2

Let \(f=\sup \left\{ f_{i}:i\in \varUpsilon \right\}\), where \(\left( f_{i}\right) _{i\in \varUpsilon }\) is a family of proper convex functions on \(\mathbb {R}^{n}\). Then

  1. (i)

    \(\partial f(x)\supset \overline{co}\left\{ \cup \partial f_{i}(x):i\in \varUpsilon (x)\right\}\), where \(\varUpsilon (x)=\left\{ i\in \varUpsilon :f_{i}(x)=f(x)\right\}\).

  2. (ii)

    If \(\varUpsilon\) is compact and there exists an open set \(\varOmega\) in \(\mathbb {R}^{n}\) such that the map

    $$\begin{aligned} (i,x)\in \varUpsilon \times \varOmega \mapsto f_{i}(x) \end{aligned}$$

    is finite and continuous on \(\varUpsilon \times \varOmega\), then f is continuous on \(\varOmega\) and

    $$\begin{aligned} \partial f(x)=\overline{co}\left\{ \cup \partial f_{i}(x):i\in \varUpsilon (x)\right\} \quad \text {for all } x\in \varOmega . \end{aligned}$$
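As a minimal illustration of the lemma, consider the absolute value function written as a supremum of two linear (hence convex) functions:

$$\begin{aligned} f(x)=|x|=\sup \{f_{1}(x),f_{2}(x)\},\quad f_{1}(x)=x,\quad f_{2}(x)=-x,\quad \varUpsilon =\{1,2\}. \end{aligned}$$

At \(x=0\) both functions are active, so \(\varUpsilon (0)=\{1,2\}\) and

$$\begin{aligned} \partial f(0)=\overline{co}\left\{ \partial f_{1}(0)\cup \partial f_{2}(0)\right\} =\overline{co}\{1,-1\}=[-1,1], \end{aligned}$$

which is the familiar subdifferential of \(|x|\) at the origin.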

Appendix 2: The formulation of the matrix B(0, 0)

$$\begin{aligned} B(0,0)= \begin{pmatrix} T_{0} & -I_{n+1} & 0 & 0 & \cdots & 0 & 0 & 0 \\ -I_{n+1} & T_{1} & -I_{n+1} & 0 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \cdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -I_{n+1} & T_{m-1} & -I_{n+1} \\ 0 & 0 & 0 & 0 & \cdots & 0 & -I_{n+1} & T_{m} \end{pmatrix} \end{aligned}$$
(47)

where

$$\begin{aligned} T_{i}= {\left\{ \begin{array}{ll} P+I_{n+1} & \text {if } i = 0 \text { or } i = m, \\ P+2I_{n+1} & \text {if } i = 1, 2, \ldots , m-1. \end{array}\right. } \end{aligned}$$
(48)

and

$$\begin{aligned} P= \begin{pmatrix} 1 & -1 & 0 & \cdots & 0 & 0 & 0 \\ -1 & 2 & -1 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \cdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -1 & 2 & -1 \\ 0 & 0 & 0 & \cdots & 0 & -1 & 1 \end{pmatrix} . \end{aligned}$$
(49)
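For concreteness, the block structure of Eqs. (47)-(49) can be assembled directly. The following NumPy sketch is our own illustration (the function name build_B00 and the dense representation are arbitrary choices; a sparse format would be preferable at image scale):

import numpy as np

def build_B00(m, n):
    """Assemble B(0,0) from Eqs. (47)-(49): a block-tridiagonal matrix with
    (m+1) diagonal blocks T_0, ..., T_m of size (n+1) x (n+1) and
    off-diagonal blocks -I_{n+1}."""
    size = n + 1
    # P as in Eq. (49): 1-D second-difference matrix with boundary diagonal entries 1
    P = 2.0 * np.eye(size) - np.eye(size, k=1) - np.eye(size, k=-1)
    P[0, 0] = 1.0
    P[-1, -1] = 1.0
    Id = np.eye(size)
    B = np.zeros(((m + 1) * size, (m + 1) * size))
    for i in range(m + 1):
        T_i = P + (Id if i in (0, m) else 2.0 * Id)      # Eq. (48)
        B[i*size:(i+1)*size, i*size:(i+1)*size] = T_i
        if i < m:                                        # off-diagonal -I blocks of Eq. (47)
            B[i*size:(i+1)*size, (i+1)*size:(i+2)*size] = -Id
            B[(i+1)*size:(i+2)*size, i*size:(i+1)*size] = -Id
    return B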

Cite this article

Le Thi, H.A., Pham Dinh, T. Difference of convex functions algorithms (DCA) for image restoration via a Markov random field model. Optim Eng 18, 873–906 (2017). https://doi.org/10.1007/s11081-017-9359-0
