Optimization Letters, Volume 6, Issue 7, pp 1251–1264

A Kronecker approximation with a convex constrained optimization method for blind image restoration

  • A. Bouhamidi
  • K. Jbilou
Original Paper


In many linear image restoration problems, the point spread function is assumed to be known, even though this information is usually not available. In practice, both the blur matrix and the restored image must be estimated from the observed noisy and blurred image; this problem is referred to as blind image restoration. In this paper, we propose a method for blind image restoration that uses convex constrained optimization techniques to solve large-scale ill-conditioned generalized Sylvester equations. The blur matrix is approximated by a Kronecker product of two matrices having Toeplitz and Hankel forms, and this Kronecker product approximation is obtained from an estimate of the point spread function. Numerical examples are given to show the efficiency of the proposed method.


Keywords: Blind image restoration · Convex optimization · Linear algebra
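The Kronecker approximation step mentioned in the abstract can be illustrated with the classical nearest-Kronecker-product construction of Van Loan and Pitsianis: rearranging the blocks of a matrix A so that its best Kronecker factors, in the Frobenius norm, come from a rank-one SVD approximation of the rearranged matrix. The sketch below shows only that generic construction; the paper's actual method additionally derives the factors from an estimated point spread function and imposes Toeplitz and Hankel structure on them, which this sketch does not attempt. The function name and block sizes are illustrative, not from the paper.

```python
import numpy as np

def nearest_kronecker(A, m1, n1, m2, n2):
    """Best Frobenius-norm approximation A ~ kron(B, C), with
    B of size (m1, n1) and C of size (m2, n2).

    Each (m2 x n2) block A_ij of A is flattened into one row of a
    rearranged matrix R; then ||A - kron(B, C)||_F equals
    ||R - b c^T||_F for b, c the flattened factors, so the optimal
    rank-one pair comes from the leading singular triplet of R
    (Van Loan-Pitsianis rearrangement).
    """
    R = np.zeros((m1 * n1, m2 * n2))
    for i in range(m1):
        for j in range(n1):
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[i * n1 + j, :] = block.flatten()
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    # Split the leading singular value between the two factors.
    B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    C = np.sqrt(s[0]) * Vt[0, :].reshape(m2, n2)
    return B, C

# If A is exactly a Kronecker product, the factors are recovered
# up to a simultaneous sign/scale flip, and kron(B, C) matches A.
rng = np.random.default_rng(0)
A = np.kron(rng.standard_normal((3, 3)), rng.standard_normal((4, 4)))
B, C = nearest_kronecker(A, 3, 3, 4, 4)
print(np.allclose(np.kron(B, C), A))
```

In the blind restoration setting, such a factorization replaces one large blur matrix by two much smaller factors, which is what makes the resulting generalized Sylvester formulation tractable at large scale.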





Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  1. University of Lille-Nord de France, ULCO, LMPA, Calais Cedex, France
