Logistic Regression with the Nonnegative Garrote

  • Enes Makalic
  • Daniel F. Schmidt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7106)


Logistic regression is one of the most commonly applied statistical methods for binary classification problems. This paper considers the nonnegative garrote regularization penalty in logistic models and derives an optimization algorithm for minimizing the resulting penalty function. The search algorithm is computationally efficient and can be used even when the number of regressors is much larger than the number of samples. As the nonnegative garrote requires an initial estimate of the parameters, a number of possible estimators are compared and contrasted. Logistic regression with the nonnegative garrote is then compared with several popular regularization methods in a set of comprehensive numerical simulations. The proposed method attains excellent performance in terms of prediction rate and variable selection accuracy on both real and artificially generated data.
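The idea behind the nonnegative garrote can be sketched as follows: each coefficient of an initial estimate β̂ is rescaled by a nonnegative shrinkage factor c_j, and the factors are found by minimizing the penalized negative log-likelihood with penalty λ Σ c_j subject to c_j ≥ 0. The sketch below is a minimal illustration using projected gradient descent; it is not the paper's optimization algorithm, and the function name, step size, and iteration count are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def garrote_logistic(X, y, beta_init, lam=0.2, lr=0.05, iters=2000):
    """Nonnegative-garrote shrinkage factors for logistic regression.

    Minimizes  -loglik(c * beta_init) + lam * sum(c)  subject to c >= 0,
    via projected gradient descent (a simple stand-in for the paper's
    optimization procedure, which is not reproduced here).
    """
    n, p = X.shape
    Xb = X * beta_init            # column j of X scaled by initial estimate
    c = np.ones(p)                # start with no shrinkage
    for _ in range(iters):
        p_hat = sigmoid(Xb @ c)
        # gradient of the negative log-likelihood plus the linear penalty
        grad = Xb.T @ (p_hat - y) / n + lam
        c = np.maximum(c - lr * grad, 0.0)   # project onto the constraint c >= 0
    return c
```

Irrelevant coefficients are driven exactly to zero by the penalty, while informative ones retain a positive factor, so the garrote performs variable selection and shrinkage simultaneously; the final estimate is `c * beta_init`.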






Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Enes Makalic
    • 1
  • Daniel F. Schmidt
    • 1
  1. Centre for MEGA Epidemiology, The University of Melbourne, Carlton, Australia
