
Abstract

The main objective of this chapter is to explain, through examples, the core machine learning concepts: modeling and algorithms; batch learning and online learning; and supervised learning (regression and classification) and unsupervised learning (clustering). Modeling and algorithms are distinguished by how they divide the data domain; batch learning and online learning by whether the data domain is available all at once or arrives incrementally; and supervised learning and unsupervised learning by whether the data domain is labeled. This objective is then extended to a comparison of mathematical models, hierarchical models, and layered models using programming structures such as control structures, modularization, and sequential statements.
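As a concrete illustration of the batch versus online distinction, the following is a minimal sketch, assuming scikit-learn's SGDClassifier and NumPy (neither the library nor the toy data appears in the chapter itself): the batch model is fitted once on the complete data domain, while the online model is updated chunk by chunk with partial_fit.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
# Hypothetical two-class data: Gaussian clouds centered at (0, 0) and (3, 3).
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]

# Batch learning: the whole data domain is available before training begins.
batch_model = SGDClassifier(random_state=0).fit(X, y)

# Online learning: the data domain arrives in chunks; the model is updated
# in place with each chunk via partial_fit.
online_model = SGDClassifier(random_state=0)
for X_chunk, y_chunk in zip(np.array_split(X, 10), np.array_split(y, 10)):
    online_model.partial_fit(X_chunk, y_chunk, classes=np.array([0, 1]))

print("batch accuracy: ", batch_model.score(X, y))
print("online accuracy:", online_model.score(X, y))

Note that the first call to partial_fit must be given the full set of class labels, since later chunks are not guaranteed to contain every class.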
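Similarly, the labeling-based distinction between supervised and unsupervised learning can be sketched with the same kind of hypothetical two-Gaussian data: the classifier is guided by the labels y, while the clustering algorithm sees only X and must infer the grouping on its own.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical unlabeled-looking data: two Gaussian clouds in the plane.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised learning: the labels y guide the division of the data domain
# into class regions (classification here; regression is analogous).
clf = LogisticRegression().fit(X, y)
print("classification accuracy:", clf.score(X, y))

# Unsupervised learning: only X is given; clustering must recover the
# grouping without any labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("first ten cluster assignments:", km.labels_[:10])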


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Shan Suthaharan
  1. Department of Computer Science, UNC Greensboro, Greensboro, USA
