A Note on Artificial Intelligence and Statistics

  • Katharina Morik
Chapter
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Abstract

Now that data science is receiving a great deal of attention, the three disciplines of data analysis, databases, and the sciences are being discussed with respect to the roles they play. In several of these discussions, I have observed misunderstandings of Artificial Intelligence. Hence, it may be the right time to give a personal view of AI and of the part machine learning plays within it. Since the relation between machine learning and statistics is so close that the boundaries are sometimes blurred, explicit pointers to statistical research are made. Although by no means complete, the references are intended to support further interdisciplinary understanding of the fields.

Acknowledgements

This work builds upon research in the collaborative research centre SFB 876 Providing Information by Resource-Constrained Analysis funded by the Deutsche Forschungsgemeinschaft (DFG), projects A1, B3, and C3—http://sfb876.tu-dortmund.de.

References

  1. Abujabal, A., Saha Roy, R., Yahya, M., & Weikum, G. (2018). Never-ending learning for open-domain question answering over knowledge bases. In Proceedings of the WWW Conference (pp. 1053–1062). Republic and Canton of Geneva, Switzerland.
  2. Barbieri, N., Manco, G., Ritacco, E., Carnuccio, M., & Bevacqua, A. (2013). Probabilistic topic models for sequence data. Machine Learning, 93(1), 5–29.
  3. Belkacem, A.N., Nishio, S., Suzuki, T., Ishiguro, H., & Hirata, M. (2018). Neuromagnetic decoding of simultaneous bilateral hand movements for multidimensional brain-machine interfaces. IEEE TNSRE, 26(6), 1301–1310.
  4. Bieri, P. (Ed.). (1981). Analytische Philosophie des Geistes. Hain Verlag.
  5. Bordes, A., Weston, J., Collobert, R., & Bengio, Y. (2011). Learning structured embeddings of knowledge bases. In AAAI (Vol. 6).
  6. Brill, E. (1992). A simple rule-based part of speech tagger. In Proceedings of the 3rd Conference on Applied Natural Language Processing (pp. 152–155). ACL.
  7. Bunse, M., Bockermann, C., Buss, J., Morik, K., Rhode, W., & Ruhe, T. (2017). Smart control of Monte Carlo simulations for astroparticle physics. In ADASS XXVII.
  8. Bunse, M., Piatkowski, N., Ruhe, T., Rhode, W., & Morik, K. (2018). Unification of deconvolution algorithms for Cherenkov astronomy. In 5th IEEE DSAA.
  9. Calders, T., & Goethals, B. (2002). Mining all non-derivable frequent itemsets. In T. Elomaa, H. Mannila, & H. Toivonen (Eds.), Proceedings of the 6th ECML PKDD (Vol. 2431, pp. 74–85). Berlin: Springer.
  10. Dorner, D., Lauer, R.J., FACT Collaboration, Adam, J., Ahnen, M., Baack, D., et al. (2017). First study of combined blazar light curves with FACT and HAWC. In Proceedings of the 6th GAMMA (Vol. 1792).
  11. Falkner, S., Klein, A., & Hutter, F. (2018). BOHB: Robust and efficient hyperparameter optimization at scale. In Proceedings of the 35th ICML (pp. 1436–1445).
  12. Getoor, L., & Taskar, B. (2007). Introduction to statistical relational learning. MIT Press.
  13. Glasmachers, T. (2017). Limits of end-to-end learning. In Proceedings of the 9th ACML (pp. 17–32).
  14. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
  15. Hastie, T., Tibshirani, R., & Friedman, J. (2008). The elements of statistical learning: Data mining, inference, and prediction. Berlin: Springer.
  16. Hoeppner, W., Morik, K., & Marburger, H. (1986). Talking it over: The natural language dialog system HAM-ANS (pp. 189–258). Berlin: Springer.
  17. Holder, L.B., Caceres, R., Gleich, D.F., Riedy, J., Eliassi-Rad, T., et al. (2016). Current and future challenges in mining large networks: Report on the second SDM workshop on mining networks and graphs. SIGKDD Explorations Newsletter, 18(1), 39–45.
  18. Horn, D., Schork, K., & Wagner, T. (2016). Multi-objective selection of algorithm portfolios: Experimental validation. In J. Handl, E. Hart, P.R. Lewis, M. López-Ibáñez, G. Ochoa, & B. Paechter (Eds.), Parallel Problem Solving from Nature – PPSN XIV (pp. 421–430). Berlin: Springer.
  19. Ikegami, T., Mototake, Y.I., Kobori, S., Oka, M., & Hashimoto, Y. (2017). Life as an emergent phenomenon: Studies from a large-scale boid simulation and web data. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 375(2109).
  20. Joachims, T. (2002). Optimizing search engines using clickthrough data. In Proceedings of KDD (pp. 133–142).
  21. Kersting, K. (2006). An inductive logic programming approach to statistical relational learning. AI Communications, 19(4), 389–390.
  22. Keshmiri, S., Sumioka, H., Nakanishi, J., & Ishiguro, H. (2018). Bodily-contact communication medium induces relaxed mode of brain activity while increasing its dynamical complexity: A pilot study. Frontiers in Psychology, 9, 7.
  23. Kietz, J.-U., & Wrobel, S. (1992). Controlling the complexity of learning in logic through syntactic and task-oriented models. In S. Muggleton (Ed.), Inductive logic programming (Vol. 38, Chap. 16, pp. 335–360). Academic Press.
  24. Kietz, J.-U., & Morik, K. (1994). A polynomial approach to the constructive induction of structural knowledge. Machine Learning Journal, 14(2), 193–217.
  25. Kotthaus, H., Richter, J., Lang, A., Thomas, J., Bischl, B., Marwedel, P., et al. (2017). RAMBO: Resource-aware model-based optimization with scheduling for heterogeneous runtimes and a comparison with asynchronous model-based optimization. In Proceedings of the 11th LION (pp. 180–195).
  26. Kutzkov, K., Bifet, A., Bonchi, F., & Gionis, A. (2013). STRIP: Stream learning of influence probabilities. In Proceedings of the 19th KDD (pp. 275–283). ACM.
  27. Lang, M., Kotthaus, H., Marwedel, P., Weihs, C., Rahnenführer, J., & Bischl, B. (2014). Automatic model selection for high-dimensional survival analysis. Journal of Statistical Computation and Simulation, 85(1), 62–76.
  28. LHCb Collaboration, Aaij, R., Schellenberg, M., Spaan, B., Stevens, H., et al. (2017). Measurement of \(CP\) violation in \(B^0\rightarrow J/\psi K^0_{\rm S}\) and \(B^0\rightarrow \psi(2S) K^0_{\rm S}\) decays. Journal of High Energy Physics, 11(170).
  29. Maennel, H., Bousquet, O., & Gelly, S. (2018). Gradient descent quantizes ReLU network features. arXiv e-prints. arXiv:1803.08367.
  30. Michalski, R.S., & Stepp, R.E. (1986). Conceptual clustering: Inventing goal-oriented classifications of structured objects. In R.S. Michalski, J.G. Carbonell, & T.M. Mitchell (Eds.), Machine learning: An artificial intelligence approach (Vol. II, pp. 471–498). Tioga Publishing Company.
  31. Mierswa, I., & Morik, K. (2005). Automatic feature extraction for classifying audio data. Machine Learning Journal, 58, 127–149.
  32. Morik, K., & Muehlenbrock, M. (1999). Conceptual change in the explanations of phenomena in astronomy (pp. 138–167). Pergamon.
  33. Muggleton, S., Srinivasan, A., King, R., & Sternberg, M. (1998). Biochemical knowledge discovery using inductive logic programming. In H. Motoda (Ed.), Proceedings of the 1st International Conference on Discovery Science. Berlin: Springer.
  34. Nickel, M., Tresp, V., & Kriegel, H.-P. (2011). A three-way model for collective learning on multi-relational data. In Proceedings of the 28th ICML (pp. 809–816).
  35. Nordhausen, B., & Langley, P. (1990). An integrated approach to empirical discovery. In J. Shrager & P. Langley (Eds.), Computational models of scientific discovery and theory formation (Chap. 4, pp. 97–128). Morgan Kaufmann.
  36. Pearl, J. (1996). A causal calculus for statistical research. In D. Fisher & H.-J. Lenz (Eds.), Learning from data (Chap. 1, pp. 23–34). Berlin: Springer.
  37. Pearl, J. (2019). The seven tools of causal inference, with reflections on machine learning. Communications of the ACM, 62(3), 54–60.
  38. Peters, J., Janzing, D., & Schölkopf, B. (2017). Elements of causal inference. MIT Press.
  39. Pumplün, C., Rüping, S., Morik, K., & Weihs, C. (2005). D-optimal plans in observational studies. Technical report, Sonderforschungsbereich 475 Komplexitätsreduktion in Multivariaten Datenstrukturen, Universität Dortmund.
  40. Pumplün, C., Weihs, C., & Preusser, A. (2004). Experimental design for variable selection in data bases. In Classification: The ubiquitous challenge. Proceedings of the 28th Annual Conference of the Gesellschaft für Klassifikation e.V. (pp. 192–199).
  41. Schmidhuber, J., Thórisson, K.R., & Looks, M. (Eds.). (2011). Proceedings of the 4th AGI (Vol. 6830). Berlin: Springer.
  42. Schneider, G. (2017). Automating drug discovery. Nature Reviews Drug Discovery, 17(2), 97–113.
  43. Shortliffe, E.H., & Buchanan, B.G. (1990). A model of inexact reasoning in medicine. In G. Shafer & J. Pearl (Eds.), Readings in uncertain reasoning (Chap. V, pp. 259–273). Morgan Kaufmann.
  44. Shwartz-Ziv, R., & Tishby, N. (2017). Opening the black box of deep neural networks via information. arXiv e-prints.
  45. Sparkes, A., Aubrey, W., Byrne, E., Clare, A., Khan, M., Liakata, M., et al. (2010). Towards robot scientists for autonomous scientific discovery. Automated Experimentation, 2(1).
  46. Steels, L. (1990). Towards a theory of emergent functionality. In Proceedings of the 1st SAB (pp. 451–461). Cambridge, MA: MIT Press.
  47. Stolpe, M., Blom, H., & Morik, K. (2016). Sustainable industrial processes by embedded real-time quality prediction. In K. Kersting, J. Lässig, & K. Morik (Eds.), Computational sustainability (pp. 201–243). Berlin: Springer.
  48. Surmann, D., Ligges, U., & Weihs, C. (2018). Predicting measurements at unobserved locations in an electrical transmission system. Computational Statistics, 33(3), 1159–1172.
  49. Swaminathan, A., & Joachims, T. (2015). Counterfactual risk minimization: Learning from logged bandit feedback. In Proceedings of the 32nd ICML (pp. 814–823).
  50. Tillmann, W., Vogli, E., Baumann, I., Kopp, G., & Weihs, C. (2010). Desirability-based multi-criteria optimization of HVOF spray experiments to manufacture fine structured wear-resistant \(75Cr_3C_2\)-25(NiCr 20) coatings. Journal of Thermal Spray Technology, 19(1–2), 392–408.
  51. Tsalouchidou, I., Bonchi, F., Morales, G., & Baeza-Yates, R. (2018). Scalable dynamic graph summarization. IEEE TKDE.
  52. Wainwright, M.J., & Jordan, M.I. (2008). Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2), 1–305.
  53. Weihs, C., Mersmann, O., & Ligges, U. (2013). Foundations of statistical algorithms: With references to R packages. Chapman and Hall/CRC.
  54. Wild, C.J., & Pfannkuch, M. (1999). Statistical thinking in empirical enquiry. International Statistical Review, 67(3), 97–113.
  55. Wirth, R., & Hipp, J. (2000). CRISP-DM: Towards a standard process model for data mining. In Proceedings of the 4th PADD (pp. 29–39).
  56. Yang, B., & Mitchell, T. (2016). Joint extraction of events and entities within a document context. In Proceedings of the 15th NAACL.
  57. Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society B, 67, 301–320.
  58. Zytkow, J.M. (2002). Automated scientific discovery. In W. Klösgen & J.M. Zytkow (Eds.), Handbook of data mining and knowledge discovery (pp. 671–679). Oxford University Press.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Fakultät für Informatik, Technische Universität Dortmund, Dortmund, Germany