
Relative Newton and Smoothing Multiplier Optimization Methods for Blind Source Separation

  • Michael Zibulevsky
Part of the Signals and Communication Technology book series (SCT)

We study a relative optimization framework for quasi-maximum likelihood blind source separation, with the relative Newton method as a particular instance. The special structure of the Hessian allows its fast approximate inversion. In the second part we present the Smoothing Method of Multipliers (SMOM) for minimization of a sum of pairwise maxima of smooth functions, in particular a sum of absolute-value terms. Incorporating a Lagrange multiplier into a smooth approximation of the max-type function, we obtain an extended notion of a nonquadratic augmented Lagrangian. Our approach does not require artificial variables and preserves the sparse structure of the Hessian. Convergence of the method is further accelerated by the Frozen Hessian strategy. We demonstrate the efficiency of this approach on an example of blind separation of sparse sources. The nonlinearity in this case is based on the absolute value function, which provides superefficient source separation.
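For orientation, the following is a minimal NumPy sketch of the relative optimization framework with a smoothed absolute-value nonlinearity. It is an illustrative sketch under stated assumptions, not the chapter's algorithm: it takes plain relative-gradient steps where the chapter uses a relative Newton direction with fast approximate Hessian inversion, and it uses a simple sqrt-based smoothing of |u| where SMOM incorporates Lagrange multipliers. All names (smooth_abs, contrast, relative_step) and the step size are invented for illustration.

import numpy as np

def smooth_abs(u, s):
    # Sqrt-based smooth approximation of |u|; s > 0 is the smoothing
    # parameter (an illustrative choice, not the chapter's SMOM smoothing).
    return np.sqrt(u * u + s * s) - s

def smooth_abs_grad(u, s):
    # Derivative of the smoothed absolute value above.
    return u / np.sqrt(u * u + s * s)

def contrast(W, X, s):
    # Standard quasi-ML BSS contrast: -log|det W| + (1/T) * sum_it h((W X)_it),
    # with h the smoothed absolute value.
    T = X.shape[1]
    return -np.log(abs(np.linalg.det(W))) + smooth_abs(W @ X, s).sum() / T

def relative_step(W, X, s, step=0.2):
    # Relative optimization: evaluate at the current estimate Y = W X and
    # update multiplicatively, W <- (I - step * G) W, where
    # G = (1/T) h'(Y) Y^T - I is the relative gradient at V = I.
    # The chapter's relative Newton method would replace -step * G here
    # with an approximate Newton direction.
    n, T = X.shape
    Y = W @ X
    G = smooth_abs_grad(Y, s) @ Y.T / T - np.eye(n)
    return (np.eye(n) - step * G) @ W

# Toy usage: recover two sparse (Laplacian) sources from a random mixture.
rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 5000))   # sparse sources
A = rng.normal(size=(2, 2))       # mixing matrix
X = A @ S
W = np.eye(2)
for _ in range(1000):
    W = relative_step(W, X, s=0.05)
print("contrast:", contrast(W, X, 0.05))
print("W A (should be close to a scaled permutation):\n", W @ A)

The multiplicative update is the point of the relative framework: each iteration restarts the problem at the current source estimate, so the method is equivariant with respect to the mixing matrix.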

Keywords

Outer Iteration · Blind Source Separation · Smooth Approximation · Newton Step · Short-Time Fourier Transform



Copyright information

© Springer 2007

Authors and Affiliations

  • Michael Zibulevsky, Department of Computer Science, Technion, Israel
