Predictive Learning, Knowledge Discovery and Philosophy of Science

  • Vladimir Cherkassky
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7311)

Abstract

Various disciplines, such as machine learning, statistics, data mining, and artificial neural networks, are concerned with the estimation of data-analytic models. A common theme among all these methodologies is the estimation of predictive models from data. In our digital age, an abundance of data and cheap computing power offer hope of knowledge discovery via the application of statistical and machine learning algorithms to empirical data. This data-analytic knowledge has similarities to, and differences from, classical scientific knowledge. For example, any scientific theory can be viewed as an inductive theory because it generalizes over a finite number of observations (or experiments). The philosophical aspects of induction and knowledge discovery have been thoroughly explored in Western philosophy of science; this analysis dates back to Kant and Hume. Any knowledge involves a combination of hypotheses/ideas and empirical data. In the modern digital age, the balance between ideas (mental constructs) and observed data (facts) has shifted completely toward data. Classical scientific knowledge was produced mainly by a stroke of genius (e.g., Newton, Maxwell, and Einstein). In contrast, much of modern knowledge in the life sciences and social sciences is derived via data-analytic modeling. We argue that such data-driven knowledge can be properly described by the methodology of predictive learning originally developed in VC-theory. This paper presents a brief survey of the philosophical concepts related to inductive inference and then extends these ideas to predictive, data-analytic knowledge discovery. We contrast classical first-principles knowledge, data-analytic knowledge, and beliefs. Several application examples illustrate the differences between classical statistical and predictive learning approaches to data-analytic modeling. Finally, we discuss the interpretation of data-analytic models under the predictive learning framework.
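
Although the paper itself is largely philosophical, the contrast it draws between classical statistical model fitting and predictive learning can be made concrete with a small illustration. The sketch below is not taken from the paper; it assumes scikit-learn, a synthetic data set, and an SVM classifier chosen purely for illustration. The point is the predictive-learning criterion: a model is judged by its prediction error on data it has not seen, rather than by how well it fits or explains the training sample.

```python
# Minimal sketch (illustrative only, not from the paper):
# the predictive-learning view scores a model by out-of-sample prediction error,
# in contrast to classical in-sample model fitting and interpretation.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A finite sample of labeled observations (the "empirical data").
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Hold out part of the data so the model can be evaluated on unseen cases.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Estimate a predictive model (here, an SVM with RBF kernel) from the training sample.
model = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

# Classical in-sample fit vs. the predictive-learning figure of merit.
print("training (in-sample) accuracy:", model.score(X_train, y_train))
print("test (out-of-sample) accuracy:", model.score(X_test, y_test))
```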

Keywords

Support Vector Machine, Knowledge Discovery, Mutual Fund, Support Vector Machine Model, Empirical Knowledge

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Vladimir Cherkassky
  1. Electrical & Computer Engineering, University of Minnesota, Minneapolis, USA
