How Theories of Induction Can Streamline Measurements of Scientific Performance

  • Slobodan Perović
  • Vlasta Sikimić


We argue that inductive analysis (based on formal learning theory and suitable machine learning reconstructions) and operational, citation metrics-based assessment of the scientific process can be justifiably and fruitfully brought together: the citation metrics used in the operational analysis can effectively track the inductive dynamics and measure research efficiency. We specify the conditions for the use of such inductive streamlining, demonstrate it in the cases of high energy physics experimentation and phylogenetic research, and propose a test of the method’s applicability.


Keywords: Induction · Formal learning theory · Scientometrics · Bibliometrics · High energy physics · Phylogenetics



This work was presented at the conference “Formal Methods of Scientific Inquiry” held at the Ruhr-University Bochum in 2017. We are grateful to the participants of the conference, the audience at the Center for Formal Epistemology at Carnegie Mellon University, Kevin T. Kelly, Oliver Schulte, Konstantine (Casey) Genin, and the anonymous referees and guest editors of the special issue for a number of comments and constructive criticisms. This work was supported by grant #179041 of the Ministry of Education, Science, and Technological Development of the Republic of Serbia.



Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. Department of Philosophy, University of Belgrade, Belgrade, Serbia
