A Cost Based Reweighted Scheme of Principal Support Vector Machine

Conference paper
Part of the Springer Proceedings in Mathematics & Statistics book series (PROMS, volume 74)

Abstract

Principal Support Vector Machine (PSVM) is a recently proposed method that uses Support Vector Machines to achieve linear and nonlinear sufficient dimension reduction under a unified framework. In this work, a cost-based reweighted scheme is used to improve the performance of the algorithm. We present basic theoretical results and demonstrate the effectiveness of the reweighted algorithm through simulations and a real-data application.
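The idea sketched in the abstract can be illustrated in code. PSVM dichotomizes the response at several cutpoints, fits a linear SVM for each resulting binary problem, and recovers the dimension-reduction directions from the SVM normal vectors; a cost-based reweighting assigns different misclassification costs to the two classes of each slice, which matters when a cutpoint makes the slice imbalanced. The sketch below is an illustration only, not the authors' code: the inverse-class-frequency costs are one natural choice and may differ from the paper's exact scheme, and the SVM is solved by plain subgradient descent for self-containedness.

```python
import numpy as np

def weighted_linear_svm(X, labels, w_pos, w_neg, lam=1e-3, lr=0.05, epochs=500):
    """Subgradient descent on  lam/2 * ||w||^2 + (1/n) * sum_i c_i * hinge_i,
    where c_i is the per-class misclassification cost (the reweighting)."""
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    c = np.where(labels > 0, w_pos, w_neg)      # per-observation cost
    for _ in range(epochs):
        viol = labels * (X @ w + b) < 1          # margin violators
        grad_w = lam * w - (c[viol, None] * labels[viol, None] * X[viol]).sum(0) / n
        grad_b = -(c[viol] * labels[viol]).sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w

# Simulated single-index model: Y depends on X only through one direction.
rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.standard_normal((n, p))
beta = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(2.0)   # true direction
y = X @ beta + 0.2 * rng.standard_normal(n)

normals = []
for q in (0.2, 0.35, 0.5, 0.65, 0.8):                        # slice the response
    labels = np.where(y > np.quantile(y, q), 1.0, -1.0)
    prop_pos = (labels > 0).mean()
    # Inverse-frequency costs: the rarer class pays more per violation,
    # so extreme cutpoints do not swamp the minority slice.
    normals.append(weighted_linear_svm(X, labels,
                                       1.0 / prop_pos, 1.0 / (1.0 - prop_pos)))

# The leading right singular vector of the stacked normals estimates
# the sufficient dimension-reduction direction.
_, _, Vt = np.linalg.svd(np.vstack(normals))
d_hat = Vt[0]
print(abs(d_hat @ beta))   # near 1 when the direction is recovered
```

With a single true direction, every slice's SVM normal points (up to noise) along `beta`, so the first principal direction of the stacked normals aligns with it regardless of the cutpoint used.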

Keywords

Support vector machine, Sufficient dimension reduction, Inverse regression, Misclassification penalty, Imbalanced data

Notes

Acknowledgements

Andreas Artemiou is supported in part by NSF grant DMS-12-07651. The authors would like to thank the editors and the referees for their valuable comments.


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Department of Mathematical Sciences, Michigan Technological University, Houghton, USA
  2. Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, USA
