Constructive Approximation, Volume 44, Issue 1, pp. 121–139

Schwarz Iterative Methods: Infinite Space Splittings

  • Michael Griebel
  • Peter Oswald

Abstract

We prove the convergence of greedy and randomized versions of Schwarz iterative methods for solving linear elliptic variational problems based on infinite space splittings of a Hilbert space. For the greedy case, we show a squared error decay rate of \(O((m+1)^{-1})\) for elements of an approximation space \(\mathscr {A}_1\) related to the underlying splitting. For the randomized case, we show an expected squared error decay rate of \(O((m+1)^{-1})\) on a class \(\mathscr {A}_{\infty }^{\pi }\subset \mathscr {A}_1\) depending on the probability distribution.
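The abstract's setting can be illustrated on a finite-dimensional toy case. The sketch below is not the paper's method for infinite splittings; it is a minimal finite analogue, assuming the simplest splitting of \(\mathbb {R}^n\) into one-dimensional coordinate subspaces, where the multiplicative Schwarz iteration reduces to coordinate descent on the energy functional \(F(x)=\tfrac{1}{2}x^{T}Ax-b^{T}x\). The greedy rule (Gauss–Southwell) picks the subspace with the largest energy decrease; the randomized rule samples the subspace from a fixed distribution (uniform here). All names (`schwarz_sweep`, `pick`, the 3×3 system) are illustrative choices, not from the paper.

```python
import random

# Toy SPD system A x = b; the subspaces V_i = span(e_i) form the space splitting.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
n = 3

def residual(x):
    """Residual r = b - A x of the current iterate."""
    return [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

def schwarz_sweep(x, pick, steps=200):
    """Multiplicative Schwarz with 1-D subspaces: each step solves exactly on V_i."""
    for m in range(steps):
        r = residual(x)
        i = pick(r, m)
        x[i] += r[i] / A[i][i]   # exact subspace correction on V_i
    return x

# Greedy (Gauss-Southwell): the energy decrease of step i is r_i^2 / (2 A_ii),
# so pick the index maximizing |r_i| / sqrt(A_ii).
greedy = lambda r, m: max(range(n), key=lambda i: abs(r[i]) / A[i][i] ** 0.5)

# Randomized: sample the subspace index from a fixed distribution pi (uniform here).
rng = random.Random(0)
randomized = lambda r, m: rng.randrange(n)

x_g = schwarz_sweep([0.0] * n, greedy)
x_r = schwarz_sweep([0.0] * n, randomized)
```

Both pick rules drive the residual to zero on this well-conditioned example; the paper's contribution is quantifying the decay rates of these two strategies for infinite splittings, on the classes \(\mathscr {A}_1\) and \(\mathscr {A}_{\infty }^{\pi }\), respectively.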

Keywords

Infinite space splitting · Subspace correction · Multiplicative Schwarz · Block coordinate descent · Greedy · Randomized · Convergence rates

Mathematics Subject Classification

65F10 · 65F08 · 65N22 · 65H10

Acknowledgments

M. Griebel was partially supported by the project EXAHD of the DFG priority program 1648 “Software for Exascale Computing” (SPPEXA) and by the Sonderforschungsbereich 1060 “The Mathematics of Emergent Effects” funded by the Deutsche Forschungsgemeinschaft.

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Institute for Numerical Simulation, Universität Bonn, Bonn, Germany
  2. Fraunhofer Institute for Algorithms and Scientific Computing (SCAI), Schloss Birlinghoven, Sankt Augustin, Germany
  3. Jacobs University Bremen, Bremen, Germany