
Supervised Nonlinear Factorizations Excel In Semi-supervised Regression

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8443)

Abstract

Semi-supervised learning is a prominent area of machine learning that targets real-life problems where labeled data instances are scarce. This paper extends existing factorization models into a supervised nonlinear factorization. The current state-of-the-art methods for semi-supervised regression are based on supervised manifold regularization. In contrast, the latent data constructed by the proposed method jointly reconstruct both the observed predictors and the target variables via generative-style nonlinear functions. We introduce dual-form solutions of the nonlinear functions and a stochastic gradient descent technique that learns the low-dimensional latent data. The validity of our method is demonstrated in a series of experiments against five state-of-the-art baselines, showing clear improvements in prediction accuracy on eleven real-life data sets.
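The joint-reconstruction idea in the abstract can be sketched as follows. This is a minimal *linear* illustration, not the paper's method (which uses nonlinear, dual-form functions): a latent matrix `Z` is learned by stochastic gradient descent so that it reconstructs the predictors `X` for all instances while simultaneously predicting the targets `y` for the labeled subset. All variable names, dimensions, and hyperparameters here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: n instances with m predictors; only the first n_lab targets
# are observed (the semi-supervised regime described in the abstract).
n, m, d, n_lab = 60, 8, 3, 15
X = rng.normal(size=(n, m))
y = X @ rng.normal(size=m)          # ground-truth linear targets

# Latent codes Z and weights, learned jointly: Z must explain the
# predictors X for *all* instances, and the targets y for labeled ones.
Z = 0.1 * rng.normal(size=(n, d))
W_x = 0.1 * rng.normal(size=(d, m))
w_y = 0.1 * rng.normal(size=d)

eta, lam = 0.01, 0.1                # step size and L2 penalty on the codes
for epoch in range(300):
    for i in rng.permutation(n):
        err_x = Z[i] @ W_x - X[i]   # predictor-reconstruction residual
        grad_z = 2 * err_x @ W_x.T + 2 * lam * Z[i]
        W_x -= eta * 2 * np.outer(Z[i], err_x)
        if i < n_lab:               # target loss only for labeled instances
            err_y = Z[i] @ w_y - y[i]
            grad_z += 2 * err_y * w_y
            w_y -= eta * 2 * err_y * Z[i]
        Z[i] -= eta * grad_z

# Targets for unlabeled instances follow from their learned latent codes.
y_hat = Z @ w_y
```

Because the unlabeled instances still shape `Z` through the reconstruction term, the target predictor `w_y` operates in a representation informed by all of the data, which is the mechanism that lets unlabeled instances help the regression.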




Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Grabocka, J., Bedalli, E., Schmidt-Thieme, L. (2014). Supervised Nonlinear Factorizations Excel In Semi-supervised Regression. In: Tseng, V.S., Ho, T.B., Zhou, ZH., Chen, A.L.P., Kao, HY. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2014. Lecture Notes in Computer Science(), vol 8443. Springer, Cham. https://doi.org/10.1007/978-3-319-06608-0_16


  • DOI: https://doi.org/10.1007/978-3-319-06608-0_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-06607-3

  • Online ISBN: 978-3-319-06608-0

  • eBook Packages: Computer Science (R0)
