Efficient Parallel Computation of the Stochastic MV-PURE Estimator by the Hybrid Steepest Descent Method

  • Tomasz Piotrowski
  • Isao Yamada
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7268)


In this paper we consider the problem of efficiently computing the stochastic MV-PURE estimator, a reduced-rank estimator designed for robust linear estimation in ill-conditioned inverse problems. Our motivation stems from the fact that reduced-rank estimation by the stochastic MV-PURE estimator, while avoiding the regularization parameter selection problem that arises in common regularization techniques for inverse problems and machine learning, poses a computational challenge due to the nonconvexity induced by the rank constraint. To address this problem, we propose a recursive scheme for computing the general form of the stochastic MV-PURE estimator that requires no matrix inversion and exploits the inherently parallel hybrid steepest descent method. We verify the efficiency of the proposed scheme in numerical simulations.
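To illustrate the kind of iteration the abstract refers to, the following is a minimal sketch of the hybrid steepest descent method in its basic form: minimizing a smooth convex function over the fixed-point set of a nonexpansive mapping via x_{n+1} = T(x_n) − λ_n ∇f(T(x_n)) with a slowly vanishing step sequence. The quadratic objective, unit-ball constraint, and all variable names here are hypothetical stand-ins chosen for illustration; this is not the paper's MV-PURE computation scheme.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Metric projection onto the closed Euclidean ball: a nonexpansive
    # mapping whose fixed-point set is the ball itself.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def hybrid_steepest_descent(grad, T, x0, n_iter=2000):
    # Basic HSDM iteration: x_{n+1} = T(x_n) - lambda_n * grad(T(x_n)),
    # with lambda_n -> 0 slowly enough (sum lambda_n diverges).
    x = x0
    for n in range(1, n_iter + 1):
        y = T(x)
        x = y - (1.0 / n) * grad(y)
    return x

# Toy problem: minimize f(x) = 0.5 * ||x - b||^2 over the unit ball.
b = np.array([3.0, 4.0])                       # hypothetical data
x_star = hybrid_steepest_descent(lambda x: x - b, project_ball, np.zeros(2))
# The minimizer is the projection of b onto the ball, here [0.6, 0.8].
```

Note that no matrix inversion appears in the loop, which is the property the proposed scheme exploits; only evaluations of the mapping T and the gradient are required per iteration.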


Keywords: stochastic MV-PURE estimator · reduced-rank approach · parallel processing · subspace extraction · hybrid steepest descent method





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Tomasz Piotrowski¹
  • Isao Yamada²
  1. Dept. of Informatics, Nicolaus Copernicus University, Toruń, Poland
  2. Dept. of Communications and Integrated Systems, Tokyo Institute of Technology, Tokyo, Japan
