
Conclusion and Future Work

  • Chapter
Machine Learning

Part of the book series: Advanced Topics in Science and Technology in China ((ATSTC))


Abstract

This chapter summarizes the book. We review its overall journey, which begins with two schools of thought in the machine learning literature and then motivates the resulting combined learning framework, including the Maxi-Min Margin Machine, the Minimum Error Minimax Probability Machine, and their extensions. We then present future perspectives, both within the proposed models and beyond the developed approaches.




Copyright information

© 2008 Zhejiang University Press, Hangzhou and Springer-Verlag GmbH Berlin Heidelberg

About this chapter

Cite this chapter

(2008). Conclusion and Future Work. In: Machine Learning. Advanced Topics in Science and Technology in China. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-79452-3_8


  • DOI: https://doi.org/10.1007/978-3-540-79452-3_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-79451-6

  • Online ISBN: 978-3-540-79452-3

  • eBook Packages: Computer Science (R0)
