Group Sparse Inverse Covariance Selection with a Dual Augmented Lagrangian Method

  • Satoshi Hara
  • Takashi Washio
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7665)

Abstract

Sparse Inverse Covariance Selection (SICS) is a popular tool for identifying intrinsic relationships between continuous random variables. In this paper, we consider an extension of SICS to a grouped-feature model, for which the state-of-the-art SICS algorithms are no longer applicable. Such an extended model is essential when we aim to find group-wise relationships between sets of variables, e.g., unknown interactions between groups of genes. We tackle the problem with the Dual Augmented Lagrangian (DAL) technique, which provides an efficient method for grouped sparse problems. We further improve the DAL framework by combining it with the Alternating Direction Method of Multipliers (ADMM), which dramatically simplifies the entire DAL procedure and reduces its computational cost. We also provide empirical comparisons of the proposed DAL–ADMM algorithm against existing methods.
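As a concrete illustration of the underlying optimization problem, group SICS estimates a sparse precision matrix by solving min_{X ≻ 0} −log det X + tr(SX) + λ Σ_g ‖X_g‖_F, where S is the sample covariance matrix and each penalized block X_g collects the entries between two groups of variables. The sketch below implements a plain primal ADMM iteration for this objective; it is not the dual DAL–ADMM algorithm proposed in the paper, and the function name, parameter defaults, and group handling are illustrative assumptions.

```python
import numpy as np

def group_sics_admm(S, groups, lam, rho=1.0, n_iter=200):
    """Minimal primal ADMM sketch for group-sparse inverse covariance
    selection: min_{X > 0} -log det(X) + tr(S X) + lam * sum_g ||X_g||_F.
    `groups` is a list of index lists partitioning the variables; each
    penalized block X_g collects entries between two different groups."""
    p = S.shape[0]
    Z, U = np.eye(p), np.zeros((p, p))
    for _ in range(n_iter):
        # X-update has a closed form: eigendecompose rho*(Z - U) - S and
        # solve rho*x - 1/x = w elementwise on the eigenvalues w.
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        x = (w + np.sqrt(w ** 2 + 4.0 * rho)) / (2.0 * rho)
        X = (Q * x) @ Q.T
        # Z-update: group soft-thresholding of each between-group block of
        # X + U; within-group (diagonal) blocks are left unpenalized here.
        A = X + U
        Z = A.copy()
        for i, gi in enumerate(groups):
            for gj in groups[i + 1:]:
                blk = A[np.ix_(gi, gj)]
                scale = max(0.0, 1.0 - lam / (rho * np.linalg.norm(blk) + 1e-12))
                Z[np.ix_(gi, gj)] = scale * blk
                Z[np.ix_(gj, gi)] = scale * blk.T
        # Scaled dual-variable update.
        U = U + X - Z
    return Z
```

For instance, ten variables split into two groups would be passed as groups=[list(range(0, 5)), list(range(5, 10))], and the Frobenius-norm penalty then zeroes out the entire 5×5 between-group block at once rather than individual entries, which is exactly the group-wise relationship structure described above.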

Keywords

Sparse Inverse Covariance Selection · Dual Augmented Lagrangian · Alternating Direction Method of Multipliers

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Satoshi Hara (1)
  • Takashi Washio (1)
  1. The Institute of Scientific and Industrial Research (ISIR), Osaka University, Japan