
Fundamental Knowledge of Machine Learning

Chapter in: Visual Quality Assessment by Machine Learning

Part of the book series: SpringerBriefs in Electrical and Computer Engineering (BRIEFSSIGNAL)


Abstract

This chapter introduces the basic concepts and methods of machine learning that are relevant to this book. Classical machine learning methods, such as neural networks, support vector machines (SVM), clustering, Bayesian networks, sparse learning, and boosting, are presented in this chapter, together with deep learning.
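The chapter itself develops these methods formally. As a quick, informal illustration of one of them, the following is a minimal SVM classification sketch in Python. It assumes scikit-learn and its bundled iris toy dataset, neither of which is prescribed by the book; the kernel and parameter choices are illustrative only.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy data: 150 iris samples, 4 features, 3 classes
# (an illustrative choice, not a dataset used in the book).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# An RBF-kernel SVM classifier; C and gamma are the library
# defaults, written out explicitly for clarity.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))

Swapping SVC for another estimator, for instance a clustering or boosting method from the same library, follows the identical fit/predict pattern, which is one reason these classical methods are often introduced together.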



Author information


Corresponding author

Correspondence to Long Xu.


Copyright information

© 2015 The Author(s)

About this chapter

Cite this chapter

Xu, L., Lin, W., Kuo, C.-C.J. (2015). Fundamental Knowledge of Machine Learning. In: Visual Quality Assessment by Machine Learning. SpringerBriefs in Electrical and Computer Engineering. Springer, Singapore. https://doi.org/10.1007/978-981-287-468-9_2


  • DOI: https://doi.org/10.1007/978-981-287-468-9_2

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-287-467-2

  • Online ISBN: 978-981-287-468-9

  • eBook Packages: Engineering, Engineering (R0)
