Large-Margin Thresholded Ensembles for Ordinal Regression: Theory and Practice

  • Hsuan-Tien Lin
  • Ling Li
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4264)

Abstract

We propose a thresholded ensemble model for ordinal regression problems. The model consists of a weighted ensemble of confidence functions and an ordered vector of thresholds. We derive novel large-margin bounds for common error functions, such as the classification error and the absolute error. In addition to studying some existing algorithms, we propose two novel boosting approaches for constructing thresholded ensembles. Both approaches are not only simpler than existing algorithms but also more closely connected to the large-margin bounds. Moreover, they perform comparably to SVM-based algorithms while enjoying the benefit of faster training. Experimental results on benchmark datasets demonstrate the usefulness of our boosting approaches.
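
As a rough illustration of the model described in the abstract, the sketch below shows how a thresholded ensemble could map an input to an ordinal rank, assuming the usual formulation in which the predicted rank is one plus the number of thresholds exceeded by the weighted ensemble score; all function and variable names here are illustrative and not taken from the paper.

```python
import numpy as np

def predict_rank(x, confidence_fns, alphas, thetas):
    """Map an input x to an ordinal rank in {1, ..., K}.

    confidence_fns : list of callables h_t(x) -> float (e.g. stump-like functions)
    alphas         : ensemble weights alpha_t
    thetas         : non-decreasing array of K-1 thresholds theta_k
    """
    # Weighted ensemble score H(x) = sum_t alpha_t * h_t(x).
    score = sum(a * h(x) for a, h in zip(alphas, confidence_fns))
    # Rank = 1 + number of thresholds the score exceeds (assumed convention).
    return 1 + int(np.sum(score > np.asarray(thetas)))

# Toy usage with two hypothetical stump-like confidence functions.
stumps = [lambda x: 1.0 if x[0] > 0.3 else -1.0,
          lambda x: 1.0 if x[1] > 0.7 else -1.0]
alphas = [0.8, 0.5]
thetas = [-0.5, 0.0, 0.9]   # K = 4 ordinal ranks
print(predict_rank(np.array([0.5, 0.2]), stumps, alphas, thetas))  # -> 3
```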

Keywords

Sigmoid Function · Benchmark Dataset · Base Learner · Ordinal Regression · Decision Stump

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Hsuan-Tien Lin ¹
  • Ling Li ¹

  1. Learning Systems Group, California Institute of Technology, USA
