
Supervised Nonlinear Factorizations Excel In Semi-supervised Regression

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8443)

Abstract

Semi-supervised learning is a prominent domain of machine learning that addresses real-life problems where labeled data instances are scarce. This paper extends existing factorization models into a supervised nonlinear factorization. The current state-of-the-art methods for semi-supervised regression are based on supervised manifold regularization. In contrast, the latent data constructed by the proposed method jointly reconstructs both the observed predictors and the target variables via generative-style nonlinear functions. Dual-form solutions of the nonlinear functions and a stochastic gradient descent technique that learns the low-dimensional latent data are introduced. The validity of our method is demonstrated in a series of experiments against five state-of-the-art baselines, clearly improving prediction accuracy on eleven real-life data sets.
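The joint reconstruction idea described above can be summarized by a hedged sketch of the objective; the full model is specified in the paper body, and all symbols below ($Z$, $f$, $g$, $\lambda$, $\gamma$, $\Omega$) are illustrative assumptions rather than the paper's exact notation:

$$\min_{Z,\,f,\,g}\; \sum_{i=1}^{n} \big\| x_i - f(z_i) \big\|^2 \;+\; \lambda \sum_{i \in \mathcal{L}} \big( y_i - g(z_i) \big)^2 \;+\; \gamma\, \Omega(Z, f, g)$$

Here $z_i$ is the learned low-dimensional representation of instance $i$ (labeled or unlabeled), $f$ and $g$ are nonlinear functions expressed in dual (kernel) form that reconstruct the predictors $x_i$ and the targets $y_i$, $\mathcal{L}$ indexes the labeled instances only, and $\Omega$ is a regularization term; the latent data $Z$ and the parameters of $f$ and $g$ would be updated by stochastic gradient descent, as the abstract indicates.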

Keywords

Supervised Matrix Factorization, Nonlinear Dimensionality Reduction, Feature Extraction



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. ISML Lab, University of Hildesheim, Hildesheim, Germany
  2. Department of Mathematics and Informatics, University of Elbasan, Elbasan, Albania
