Abstract
In this chapter, we summarize the book. We review the journey taken, which begins with two schools of thought in the machine learning literature and then motivates the combined learning framework that results, including the Maxi-Min Margin Machine, the Minimum Error Minimax Probability Machine, and their extensions. We then present future perspectives, both within the proposed models and beyond the developed approaches.
Copyright information
© 2008 Zhejiang University Press, Hangzhou and Springer-Verlag GmbH Berlin Heidelberg
Cite this chapter
(2008). Conclusion and Future Work. In: Machine Learning. Advanced Topics in Science and Technology in China. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-79452-3_8
DOI: https://doi.org/10.1007/978-3-540-79452-3_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-79451-6
Online ISBN: 978-3-540-79452-3