Minimizing the Moreau Envelope of Nonsmooth Convex Functions over the Fixed Point Set of Certain Quasi-Nonexpansive Mappings

Part of the book series: Springer Optimization and Its Applications (SOIA, volume 49)

Abstract

The first aim of this paper is to present a useful toolbox of quasi-nonexpansive mappings for convex optimization from the viewpoint of using their fixed point sets as constraints. Many convex optimization problems have been solved through elegant translations into fixed point problems. The underlying principle is to apply a certain quasi-nonexpansive mapping T iteratively and generate a sequence converging to one of its fixed points. However, such a mapping often has infinitely many fixed points, so the selection of a point from the fixed point set Fix(T) is of great importance. Nevertheless, most fixed point methods can only return an “unspecified” point from the fixed point set, and only after many iterations; it therefore seems unrealistic to hope for an “optimal” point in the fixed point set. Fortunately, viewing the collection of quasi-nonexpansive mappings as a toolbox, we can accomplish this challenging mission simply by the hybrid steepest descent method, provided that the cost function is smooth and its derivative is Lipschitz continuous. A question then arises: how can we deal with “nonsmooth” cost functions? The second aim is to propose a nontrivial integration of the ideas of the hybrid steepest descent method and the Moreau–Yosida regularization, yielding a useful approach to the challenging problem of nonsmooth convex optimization over Fix(T). The key is to smooth the original nonsmooth cost function by its Moreau–Yosida regularization, whose derivative is always Lipschitz continuous. In this way, the field of application of the hybrid steepest descent method is extended to the minimization of this ideal smooth approximation over Fix(T). We present the mathematical ideas of the proposed approach together with its application to a combinatorial optimization problem: the minimal antenna-subset selection problem under a highly nonlinear capacity constraint for efficient multiple-input multiple-output (MIMO) communication systems.

AMS 2010 Subject Classification: 47H10, 47H09, 49M20, 65K10
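
As a minimal sketch of the two ingredients combined above, stated here in generic notation (the precise assumptions on the index γ > 0, on the quasi-nonexpansive mapping T, and on the step sizes (λ_n)_{n≥1}, e.g. λ_n → 0 and Σ_n λ_n = ∞, are those given in the body of the chapter, not reproduced here): for a proper lower semicontinuous convex function f on a real Hilbert space H, the Moreau envelope (Moreau–Yosida regularization) and the associated proximity operator are

\[
  {}^{\gamma}\!f(x) \;:=\; \min_{y\in\mathcal{H}}\Bigl\{ f(y)+\tfrac{1}{2\gamma}\|x-y\|^{2}\Bigr\},
  \qquad
  \mathrm{prox}_{\gamma f}(x) \;:=\; \operatorname*{arg\,min}_{y\in\mathcal{H}}\Bigl\{ f(y)+\tfrac{1}{2\gamma}\|x-y\|^{2}\Bigr\},
\]

where the envelope is Fréchet differentiable with the (1/γ)-Lipschitz continuous gradient

\[
  \nabla\,{}^{\gamma}\!f(x) \;=\; \frac{x-\mathrm{prox}_{\gamma f}(x)}{\gamma},
\]

so the hybrid steepest descent method can be applied to this smooth surrogate over Fix(T) through an iteration of the form

\[
  x_{n+1} \;=\; T(x_{n}) \;-\; \lambda_{n+1}\,\nabla\,{}^{\gamma}\!f\bigl(T(x_{n})\bigr),
  \qquad n=0,1,2,\ldots
\]

The exact convergence statements, and the conditions under which (x_n) approximates a minimizer of the Moreau envelope over Fix(T), are those established in the chapter.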

Acknowledgements

The first author thanks Heinz Bauschke, Patrick Combettes, and Russell Luke for their kind encouragement and for inviting him to the dream meeting: the Interdisciplinary Workshop on Fixed-Point Algorithms for Inverse Problems in Science and Engineering, November 1–6, 2009, at the Banff International Research Station.

Author information

Correspondence to Isao Yamada.

Copyright information

© 2011 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Yamada, I., Yukawa, M., Yamagishi, M. (2011). Minimizing the Moreau Envelope of Nonsmooth Convex Functions over the Fixed Point Set of Certain Quasi-Nonexpansive Mappings. In: Bauschke, H., Burachik, R., Combettes, P., Elser, V., Luke, D., Wolkowicz, H. (eds) Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer Optimization and Its Applications, vol 49. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-9569-8_17
