How Theories of Induction Can Streamline Measurements of Scientific Performance

Abstract

We argue that inductive analysis (based on formal learning theory and suitable machine learning reconstructions) and operational, citation-metrics-based assessment of the scientific process can be justifiably and fruitfully brought together: the citation metrics used in the operational analysis can effectively track the inductive dynamics of a field and measure its research efficiency. We specify the conditions under which such inductive streamlining is warranted, demonstrate it in the cases of high energy physics experimentation and phylogenetic research, and propose a test of the method's applicability.



Notes

  1. At the same time, meeting the conditions for achieving it will take care of some of the difficulties OA typically encounters.

  2. We define this sort of efficiency more precisely in the next section.

  3. Citation patterns happen to supervene on the patterns of reasoning in the network in the HEP case because of the external conditions we will specify. This is not always so: citation metrics can be messy and out of tune with the actual patterns of reasoning.

  4. The results of this sort of research are typically published in science and research policy journals, with some recent overlaps with social epistemology. Notable examples relevant to our argument include Maruyama et al. (2015), Carillo et al. (2013), Corley et al. (2006), and Martin and Irvine (1984a, b). All these methods of analysis, including computer simulations, were originally developed in Organization Theory in industrial economics (Peltonen 2016).

  5. In other words, these hypothesis-driven simulations are based on theoretical considerations and can be used to show that a hypothesis about the efficiency of a scientific network is plausible. They stand in contrast to data-driven models, which are calibrated and tested with data.

  6. See e.g. the historical accounts of the major discoveries of the J/psi particle in the 1970s (Ting 1977) and the W and Z bosons in the 1980s (Darriulat 2004), or those of a number of other particles and their properties.

  7. The only recent significant exceptions are journals in astroparticle physics, where HEP results are relevant and cited by physicists outside HEP laboratories.

  8. It is also significant that the citations are tracked by the most advanced tracking system of its kind: INSPIRE-HEP categorizes citations into six categories and has been in place for decades, predating currently used citation trackers such as Google Scholar or Thomson Reuters' Web of Science (WoS).

  9. See also Bornmann and Daniel (2008) for a review of the various reasons, other than acknowledging the quality of the results, for which researchers cite papers.

  10. This is analogous to statistical significance in Neyman–Pearson hypothesis testing (see the reminder below). This fact could be exploited further, but doing so is not one of the goals of our analysis.
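For readers who want the target of the analogy spelled out, here is the standard Neyman–Pearson rule (textbook material, included only as a reminder; the symbols are generic, not notation from our analysis):

```latex
\[
  \text{reject } H_0 \;\Longleftrightarrow\;
  \frac{L(x \mid H_0)}{L(x \mid H_1)} \le k_{\alpha},
  \qquad
  k_{\alpha} \text{ chosen so that } P(\text{reject } H_0 \mid H_0) \le \alpha.
\]
```

A citation threshold playing the role of the significance level would, in the same way, bound how often a genuinely unremarkable result is treated as a significant one.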

  11. In disciplines in which several inductive methods are formally justified, disagreement in the field will be justified as well; we will thus not be able to speak of a reliable convergence of opinions.

  12. See e.g. Dissertori et al. (2003).

  13. Simplicity is defined as the number of constituents and the number of constituents per particle (Valdés-Pérez and Żytkow 1996, 54).

  14. There is no need to spell out the proofs here; they can be found in Schulte (2000).
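To make the idea behind the two preceding notes concrete, here is a minimal sketch of conservation-law inference as matrix search, in the spirit of Valdés-Pérez and Żytkow (1996) and Schulte and Drew (2010): a conserved quantity assigns a number to each particle type, and the assignment must sum to zero across every observed reaction, so the candidate laws span the null space of the reaction matrix. The particle set and the single reaction below are our own toy assumptions, not the cited authors' data or implementation.

```python
# Toy sketch: conservation laws as the null space of a reaction matrix.
# The particle list and the reaction are illustrative assumptions only.
from sympy import Matrix

particles = ["p", "n", "e-", "anti-nu_e"]

# One row per observed reaction; each entry is (count in the initial state)
# minus (count in the final state) for the corresponding particle.
# Here: neutron beta decay, n -> p + e- + anti-nu_e.
reactions = Matrix([[-1, 1, -1, -1]])

# Every vector q with reactions * q = 0 is conserved in all observed
# reactions; the dimension of the null space bounds how many independent
# conservation laws the data can support.
for q in reactions.nullspace():
    print({particle: q[i] for i, particle in enumerate(particles)})
```

On this encoding the familiar quantities (baryon number, electric charge, lepton number) all lie in the returned span, and a learner who always conjectures a maximally restrictive set of laws consistent with the observed reactions converges to the truth with few belief revisions, in line with the efficiency property Schulte's (2000) proofs establish.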

  15. We can thus identify a temporal constraint on the applicability of the citation metrics and the reasons behind it: the long expiry dates of citation-metric analysis in certain cases (e.g. HEP) are determined by the justifiably long-term convergence on the results of the pursuit, as the revision of beliefs is justifiably minimized. Apart from establishing the reliability of the results, IA has the potential to establish the computational properties of a scientific pursuit. For instance, Schulte has investigated the NP-hardness of finding a simplest linear causal network from conditional correlations.

  16. Experiments are similar, i.e. homogeneous in terms of techniques and other traits of the experimental process, yet varied in terms of their efficiency.

  17. Both are constructed in accord with even higher-level physical theories, Quantum Field Theory and Quantum Electrodynamics.

  18. Most experiments do not purport to establish the existence of new particles; rather, they explore the properties of the known ones. The Standard Model is the null hypothesis in the vast majority of experiments; it provides the expected background interactions, so exploratory experiments that do not turn up new particles will be null experiments, but they will still provide important information on the properties of known particles (e.g. energy scales) that the model does not deliver. Even when such an experiment is reported despite lacking significant results, it will not attract the number or quality of citations that accompany experiments with confirmatory results.

  19. This was certainly true of the citation patterns of the experiments from the late 1960s to the mid-1990s, the period analysed by the studies outlined above; research has since become so centralized that essentially all particle physicists are engaged in one mega-project.

  20. Historically, researchers constructed trees based solely on the 16S rRNA because of the difficulty of obtaining sequence information (Yang et al. 2016).

  21. This use accords with an account of parsimony in Kelly (2004, 2007).

References

  1. Alexander, J. M., Himmelreich, J., & Thompson, C. (2015). Epistemic landscapes, optimal search, and the division of cognitive labor. Philosophy of Science, 82(3), 424–453.

  2. Allen, L., Brand, A., Scott, J., Altman, M., & Hlava, M. (2014). Credit where credit is due. Nature, 508(7496), 312–313.

  3. Baltag, A., Gierasimczuk, N., & Smets, S. (2015). On the solvability of inductive problems: A study in epistemic topology. In R. Ramanujam (Ed.), Proceedings of the 15th conference on theoretical aspects of rationality and knowledge (TARK 2015) (pp. 65–74).

  4. Ben-Gal, I. (2005). Outlier detection. In O. Maimon & L. Rokach (Eds.), Data mining and knowledge discovery handbook: A complete guide for practitioners and researchers (pp. 131–146). Dordrecht/Berlin: Kluwer/Springer.

  5. Bonaccorsi, A., & Daraio, C. (2005). Exploring size and agglomeration effects on public research productivity. Scientometrics, 63(1), 87–120.

  6. Borg, A. M., Frey, D., Šešelja, D., & Straßer, C. (2017). An argumentative agent-based model of scientific inquiry. In S. Benferhat, K. Tabia, & M. Ali (Eds.), Advances in artificial intelligence: From theory to practice. IEA/AIE 2017. Lecture notes in computer science, Vol. 10350 (pp. 507–510). Cham: Springer.

  7. Bornmann, L. (2017). Measuring impact in research evaluations: A thorough discussion of methods for, effects of, and problems with impact measurements. Higher Education, 73(5), 775–787.

  8. Bornmann, L., & Daniel, H. D. (2008). What do citation counts measure? A review of studies on citing behavior. Journal of Documentation, 64(1), 45–80.

  9. Brainard, J., & You, J. (2018). What a massive database of retracted papers reveals about science publishing's 'death penalty'. Science. https://doi.org/10.1126/science.aav8384.

  10. Braun, T. (2010). How to improve the use of metrics. Nature, 465, 870–872.

  11. Campanario, J. M. (1993). Consolation for the scientist: Sometimes it is hard to publish papers that are later highly-cited. Social Studies of Science, 23(2), 342–362.

  12. Carillo, M. R., Papagni, E., & Sapio, A. (2013). Do collaborations enhance the high-quality output of scientific institutions? Evidence from the Italian Research Assessment Exercise. The Journal of Socio-Economics, 47, 25–36.

  13. Chickering, D. M. (2002). Optimal structure identification with greedy search. Journal of Machine Learning Research, 3, 507–554.

  14. Contopoulos-Ioannidis, D. G., Alexiou, G. A., Gouvias, T. C., & Ioannidis, J. P. A. (2008). Life cycle of translational research for medical interventions. Science, 321(5894), 1298–1299.

  15. Corley, E. A., Boardman, P. C., & Bozeman, B. (2006). Design and the management of multi-institutional research collaborations: Theoretical implications from two case studies. Research Policy, 35(7), 975–993.

  16. Darriulat, P. (2004). The discovery of W & Z, a personal recollection. European Physical Journal C, 34(1), 33–40.

  17. Dissertori, G., Knowles, I. G., & Schmelling, M. (2003). Quantum chromodynamics: High energy experiments and theory. Oxford: Clarendon Press.

  18. Genin, K., & Kelly, K. T. (2015). Theory choice, theory change, and inductive truth-conduciveness. In R. Ramanujam (Ed.), Proceedings of the 15th conference on theoretical aspects of rationality and knowledge (TARK 2015) (pp. 111–119).

  19. Goodman, S. N., Fanelli, D., & Ioannidis, J. P. A. (2016). What does research reproducibility mean? Science Translational Medicine, 8(341), 341ps12.

  20. Henikoff, S., & Henikoff, J. G. (1992). Amino acid substitution matrices from protein blocks. Proceedings of the National Academy of Sciences, 89(22), 10915–10919.

  21. Kelly, K. T. (2004). Justification as truth-finding efficiency: How Ockham's razor works. Minds and Machines, 14(4), 485–505.

  22. Kelly, K. T. (2007). A new solution to the puzzle of simplicity. Philosophy of Science, 74(5), 561–573.

  23. Kelly, K. T., Genin, K., & Lin, H. (2016). Realism, rhetoric, and reliability. Synthese, 193(4), 1191–1223.

  24. Kelly, K. T., Schulte, O., & Juhl, C. (1997). Learning theory and the philosophy of science. Philosophy of Science, 64(2), 245–267.

  25. Kitcher, P. (1990). The division of cognitive labor. The Journal of Philosophy, 87(1), 5–22.

  26. Kocabas, S. (1991). Conflict resolution as discovery in particle physics. Machine Learning, 6(3), 277–309.

  27. Koonin, E. (2016). Horizontal gene transfer: Essentiality and evolvability in prokaryotes, and roles in evolutionary transitions. F1000Research, 5, 1805.

  28. MacRoberts, M. H., & MacRoberts, B. R. (1989). Problems of citation analysis: A critical review. Journal of the American Society for Information Science, 40(5), 342–349.

  29. Martin, B. R., & Irvine, J. (1984a). CERN: Past performance and future prospects: I. CERN's position in world high-energy physics. Research Policy, 13(4), 183–210.

  30. Martin, B. R., & Irvine, J. (1984b). CERN: Past performance and future prospects: III. CERN and the future of world high-energy physics. Research Policy, 13(4), 311–342.

  31. Maruyama, K., Shimizu, H., & Nirei, M. (2015). Management of science, serendipity, and research performance: Evidence from scientists' survey in the US and Japan. Research Policy, 44(4), 862–873.

  32. Mayo, D. G., & Spanos, A. (2006). Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. The British Journal for the Philosophy of Science, 57(2), 323–357.

  33. Peltonen, T. (2016). Organization theory: Critical and philosophical engagements. Bingley, UK: Emerald Group Publishing.

  34. Perović, S., Radovanović, S., Sikimić, V., & Berber, A. (2016). Optimal research team composition: Data envelopment analysis of Fermilab experiments. Scientometrics, 108(1), 83–111.

  35. Prusiner, S. (1982). Novel proteinaceous infectious particles cause scrapie. Science, 216(4542), 136–144.

  36. Pusztai, L., Hatzis, C., & Andre, F. (2013). Reproducibility of research and preclinical validation: Problems and solutions. Nature Reviews Clinical Oncology, 10, 720–724.

  37. Rosenstock, S., O'Connor, C., & Bruner, J. (2017). In epistemic networks, is less really more? Philosophy of Science, 84(2), 234–252.

  38. Schulte, O. (2000). Inferring conservation laws in particle physics: A case study in the problem of induction. The British Journal for the Philosophy of Science, 51(4), 771–806.

  39. Schulte, O. (2018). Causal learning with Occam's razor. Studia Logica. https://doi.org/10.1007/s11225-018-9829-1.

  40. Schulte, O., & Drew, M. S. (2010). Discovery of conservation laws via matrix search. In Discovery science. DS 2010. Lecture notes in computer science, Vol. 6332 (pp. 236–250). Berlin/Heidelberg: Springer.

  41. Soto, C. (2011). Prion hypothesis: The end of the controversy? Trends in Biochemical Sciences, 36(3), 151–158.

  42. Thagard, P., Holyoak, K. J., Nelson, G., & Gochfeld, D. (1990). Analog retrieval by constraint satisfaction. Artificial Intelligence, 46(3), 259–310.

  43. Ting, S. C. C. (1977). The discovery of the J particle: A personal recollection. Reviews of Modern Physics, 49(2), 235–249.

  44. Valdés-Pérez, R. E., & Żytkow, J. M. (1996). A new theorem in particle physics enabled by machine discovery. Artificial Intelligence, 82(1–2), 331–339.

  45. van der Wal, R., Fischer, A., Marquiss, M., Redpath, S., & Wanless, S. (2009). Is bigger necessarily better for environmental research? Scientometrics, 78(2), 317–322.

  46. Van Noorden, R. (2014). Transparency promised for vilified impact factor. Nature News, 29, 2014.

  47. Voinnet, O., Rivas, S., Mestre, P., & Baulcombe, D. (2003). Retracted: An enhanced transient expression system in plants based on suppression of gene silencing by the p19 protein of tomato bushy stunt virus. The Plant Journal, 33(5), 949–956.

  48. Warner, J. (2000). A critical review of the application of citation studies to the Research Assessment Exercises. Journal of Information Science, 26(6), 453–459.

  49. Weisberg, M., & Muldoon, R. (2009). Epistemic landscapes and the division of cognitive labor. Philosophy of Science, 76(2), 225–252.

  50. Yang, Z., & Rannala, B. (2012). Molecular phylogenetics: Principles and practice. Nature Reviews Genetics, 13, 303–314.

  51. Yang, B., Wang, Y., & Qian, P. Y. (2016). Sensitivity and correlation of hypervariable regions in 16S rRNA genes in phylogenetic analysis. BMC Bioinformatics, 17(1), Article 135.

  52. Zollman, K. J. (2010). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17–35.

  53. Zur Hausen, H. (2009). The search for infectious causes of human cancers: Where and why. Virology, 392(1), 1–10.


Acknowledgements

This work was presented at the conference “Formal Methods of Scientific Inquiry”, held at the Ruhr-University Bochum in 2017. We are grateful to the participants of the conference, the audience at the Center for Formal Epistemology at Carnegie Mellon University, Kevin T. Kelly, Oliver Schulte, Konstantin (Casey) Genin, and the anonymous referees and guest editors of the special issue for a number of comments and constructive criticisms. This work was supported by Grant #179041 of the Ministry of Education, Science, and Technological Development of the Republic of Serbia.

Author information

Correspondence to Slobodan Perović.



About this article


Cite this article

Perović, S., Sikimić, V. How Theories of Induction Can Streamline Measurements of Scientific Performance. J Gen Philos Sci 51, 267–291 (2020). https://doi.org/10.1007/s10838-019-09468-4


Keywords

  • Induction
  • Formal learning theory
  • Scientometrics
  • Bibliometrics
  • High energy physics
  • Phylogenetics