
Introduction to the Theory of Randomized Machine Learning

Chapter in: Learning Systems: From Theory to Practice

Part of the book series: Studies in Computational Intelligence (SCI, volume 756)

Abstract

We propose a new machine learning concept called Randomized Machine Learning, in which model parameters are assumed to be random and data are assumed to contain random errors. This approach differs from "classical" machine learning in that optimal estimation deals with the probability density functions of the random parameters and with the "worst-case" probability density of the random data errors. As the optimality criterion of estimation, Randomized Machine Learning employs the generalized information entropy, maximized on a set described by a system of empirical balances. We apply this approach to text classification and dynamic regression problems; the results illustrate the capabilities of the approach.
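The entropy-maximization principle underlying the abstract can be illustrated with a deliberately simple sketch. This is not the authors' algorithm (which estimates probability density functions under empirical balance constraints); it is the classic discrete maximum-entropy problem of Jaynes [9]: among all distributions on a finite support with a prescribed mean, the entropy maximizer has the Gibbs form, and its Lagrange multiplier can be found by bisection. The function name and example values here are our own.

```python
import math

def maxent_distribution(xs, target_mean):
    """Maximum-entropy distribution on support points xs with a fixed mean.

    The maximizer has the Gibbs form p_i proportional to exp(-lam * x_i);
    the multiplier lam is found by bisection, since the resulting mean is
    monotonically decreasing in lam.
    """
    def mean_for(lam):
        weights = [math.exp(-lam * x) for x in xs]
        z = sum(weights)
        return sum(x * w for x, w in zip(xs, weights)) / z

    # Bracket for lam: mean_for(-50) is close to max(xs), mean_for(50) to min(xs)
    lo, hi = -50.0, 50.0
    for _ in range(200):  # bisection to machine precision
        mid = (lo + hi) / 2.0
        if mean_for(mid) > target_mean:
            lo = mid  # mean too large -> increase lam
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    weights = [math.exp(-lam * x) for x in xs]
    z = sum(weights)
    return [w / z for w in weights]

# Jaynes' loaded-die example: faces 1..6 constrained to an empirical mean of 4.5
p = maxent_distribution([1, 2, 3, 4, 5, 6], 4.5)
```

Because the target mean 4.5 exceeds the uniform mean 3.5, the recovered distribution tilts toward the larger faces while staying as "spread out" as the constraint allows — the same trade-off that the chapter's generalized-entropy criterion formalizes for density functions.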

This work was partially supported by the Russian Foundation for Basic Research (project no. 16-07-00743).


Notes

  1. In particular, option transactions employ the mean values of financial instruments having a power dependence on random parameters [1].

  2. This treatment differs from the classical definition of robustness given in [7].

  3. Distribution of objects among n classes is reduced to \(C^2_n\) distributions among pairs of 2 classes out of the n.
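The pairwise reduction in Note 3 — replacing one n-class problem with \(C^2_n\) two-class problems — can be sketched as follows; the class labels and the enumeration are a hypothetical illustration of the counting argument, not code from the chapter.

```python
from itertools import combinations
from math import comb

classes = ["A", "B", "C", "D"]  # n = 4 classes, as an example

# One two-class subproblem per unordered pair of classes
pairs = list(combinations(classes, 2))

# The number of subproblems is C(n, 2) = n * (n - 1) / 2
assert len(pairs) == comb(len(classes), 2)
```

Each pair yields an independent two-class distribution problem, so solving all \(C^2_n\) of them covers every class boundary of the original n-class problem.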

References

  1. Avellaneda, M.: Minimum-relative-entropy calibration of asset-pricing models. Int. J. Theor. Appl. Finance 1(4), 447–472 (1998)

  2. Bishop, C.: Pattern Recognition and Machine Learning (Information Science and Statistics), 1st edn. 2006, corr. 2nd printing. Springer, New York (2007)

  3. Boucheron, S., Bousquet, O., Lugosi, G.: Theory of classification: a survey of some recent advances. ESAIM: Probab. Stat. 9, 323–375 (2005)

  4. Flach, P.: Machine Learning: The Art and Science of Algorithms that Make Sense of Data. Cambridge University Press (2012)

  5. Friedman, J., Hastie, T., Tibshirani, R.: The Elements of Statistical Learning. Springer Series in Statistics, vol. 1. Springer, Berlin (2001)

  6. Gonzalo, J.A., Muñoz, F.F., Santos, D.J.: Using a rate equations approach to model world population trends. Simulation 89(2), 192–198 (2013)

  7. Huber, P.J.: Robust Statistics. Springer (2011)

  8. Ioffe, A.D., Tikhomirov, V.M.: Teoriya ekstremal’nykh zadach (Theory of Extremal Problems). Nauka, Moscow (1974)

  9. Jaynes, E.T.: Information theory and statistical mechanics. Phys. Rev. 106(4), 620–630 (1957)

  10. Jaynes, E.T.: Probability Theory: The Logic of Science. Cambridge University Press (2003)

  11. Kaashoek, M.A., et al.: Recent Advances in Operator Theory and Its Applications: The Israel Gohberg Anniversary Volume, vol. 160. Springer Science & Business Media (2005)

  12. Kapur, J.N.: Maximum-Entropy Models in Science and Engineering. Wiley (1989)

  13. Kolmogorov, A.N., Fomin, S.V.: Elements of the Theory of Functions and Functional Analysis, vol. 1. Courier Corporation (1999)

  14. Kullback, S., Leibler, R.A.: On information and sufficiency. Ann. Math. Stat. 22(1), 79–86 (1951)

  15. Popkov, Y.S.: Macrosystems Theory and Its Applications (Equilibrium Models). Lecture Notes in Control and Information Sciences. Springer (1995)

  16. Popkov, Y.S., Dubnov, Y.A., Popkov, A.Y.: New method of randomized forecasting using entropy-robust estimation: application to the world population prediction. Mathematics 4, 1–16 (2016)

  17. Popkov, Y.S., Popkov, A.Y., Darkhovsky, B.S.: Parallel Monte Carlo for entropy robust estimation. Math. Models Comput. Simul. 8(1), 27–39 (2016)

  18. Racine, J.S., Maasoumi, E.: A versatile and robust metric entropy test of time-reversibility, and other hypotheses. J. Econom. 138(2), 547–567 (2007)

  19. Rosenblatt, F.: The Perceptron, a Perceiving and Recognizing Automaton (Project Para). Cornell Aeronautical Laboratory (1957)

  20. Rubinstein, R.Y., Kroese, D.P.: Simulation and the Monte Carlo Method, vol. 707. Wiley (2011)

  21. Tsypkin, Y.Z.: Osnovy teorii obuchayushchikhsya sistem (Foundations of the Theory of Learning Systems). Nauka, Moscow (1970)

  22. Tsypkin, Y.Z., Popkov, Y.S.: Teoriya nelineinykh impul’snykh sistem (Theory of Nonlinear Impulse Systems). Nauka, Moscow (1973)

  23. Vapnik, V.N.: Vosstanovlenie zavisimostei po empiricheskim dannym (Restoration of Dependencies Using Empirical Data). Nauka, Moscow (1979)

  24. Vapnik, V.N., Chervonenkis, A.Y.: Teoriya raspoznavaniya obrazov (Theory of Pattern Recognition). Nauka, Moscow (1974)

  25. Volterra, V.: Theory of Functionals and of Integral and Integro-Differential Equations. Dover Publications (2005)

  26. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann (2005)

  27. Yzerman, M.A., Braverman, E.M., Rozonoer, L.I.: Metod potentsialnykh funktsii v teorii obucheniya mashin (Method of Potential Functions in the Theory of Machine Learning). Nauka, Moscow (1970)


Author information

Correspondence to Yuri S. Popkov.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter

Cite this chapter

Popkov, Y.S., Dubnov, Y.A., Popkov, A.Y. (2018). Introduction to the Theory of Randomized Machine Learning. In: Sgurev, V., Piuri, V., Jotsov, V. (eds) Learning Systems: From Theory to Practice. Studies in Computational Intelligence, vol 756. Springer, Cham. https://doi.org/10.1007/978-3-319-75181-8_10

  • DOI: https://doi.org/10.1007/978-3-319-75181-8_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-75180-1

  • Online ISBN: 978-3-319-75181-8

  • eBook Packages: Engineering (R0)
