Abstract
Citation counts have long been considered the primary bibliometric indicator for evaluating the quality of research—a practice premised on the assumption that citation count reflects the impact of a scientific publication. However, having identified several limitations in the use of citation counts alone, scholars have advanced the need for multifaceted quality evaluation methods. In this study, we apply a novelty indicator that quantifies the degree of citation similarity between a focal paper and pre-existing same-domain papers across various fields in the natural sciences, proposing a new way of identifying papers in the same domain as a focal paper using bibliometric data only. We also conduct a validation analysis, using Japanese survey data, to confirm its usefulness. Employing ordered logit and ordinary least squares regression models, this study tests the consistency between the novelty scores of 1871 Japanese papers published in the natural sciences between 2001 and 2006 and researchers’ subjective judgments of their novelty. The results show statistically significant positive correlations between novelty scores and researchers’ assessments of research types reflecting aspects of novelty in various natural science fields. As such, this study demonstrates that the proposed novelty indicator is a suitable means of identifying the novelty of various types of natural scientific research.


Notes
The journal field refers to the 22 scientific fields in the Essential Science Indicators (ESI) of Thomson Reuters.
The reclassification procedures of multidisciplinary field papers were as follows: (i) collecting the references of a focal paper in the multidisciplinary field; (ii) identifying the scientific field of each reference, where a field was identified based on the scientific fields of a journal; (iii) finding the most frequent scientific field in the references of the focal paper, except for multidisciplinary fields; and (iv) using the most frequent scientific field as the scientific field of the focal paper.
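Steps (i)–(iv) above can be sketched as follows. This is a minimal illustration, not the authors' actual implementation; the function and variable names are hypothetical, and the journal-to-field mapping is assumed to be available as a dictionary.

```python
from collections import Counter

def reclassify_multidisciplinary(ref_journals, field_of_journal):
    """Assign a scientific field to a focal paper in the multidisciplinary field.

    ref_journals: journal names of the focal paper's references (step i).
    field_of_journal: dict mapping a journal name to its ESI scientific field.
    Returns the most frequent non-multidisciplinary field, or None if no
    reference carries a usable field.
    """
    # (ii) identify the field of each reference via its journal
    fields = [field_of_journal.get(j) for j in ref_journals]
    # (iii) count fields, excluding multidisciplinary and unmatched journals
    counts = Counter(f for f in fields if f and f != "Multidisciplinary")
    if not counts:
        return None
    # (iv) the modal field becomes the focal paper's field
    return counts.most_common(1)[0][0]
```

Ties between equally frequent fields would need an explicit tie-breaking rule, which the procedure as described leaves open.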
These correspond to focal papers that have no reference papers or no same-domain papers. For such focal papers, the novelty score is either not calculable or becomes zero (the latter case is rare in our study; there are only two observations).
As shown in Tables 2 and 3, our novelty scores are close to 1 and their variances are small. Previous research indicators (i.e., those used by Dahlin and Behrens (2005) and Trapido (2015)), which are the basis of our indicators, also have similar features. The small variation in the scores may make it difficult to interpret whether novelty is high or low, especially for the practical use of the indicators. On this point, applying methods such as standardization would help interpret the indicators. Figure 2 is one such example where we adopted percentile representation for the horizontal axis.
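The percentile transformation mentioned above can be sketched as follows. This is an illustrative example only (the function name and sample scores are hypothetical); it maps raw scores clustered near 1 onto ranks in [0, 100], which spreads out small differences for easier interpretation.

```python
import numpy as np

def percentile_ranks(scores):
    """Convert raw novelty scores (close to 1, low variance) to
    percentile ranks in [0, 100]. Assumes at least two scores."""
    scores = np.asarray(scores, dtype=float)
    ranks = scores.argsort().argsort()  # 0-based rank of each score
    return 100.0 * ranks / (len(scores) - 1)
```

Standardization (subtracting the mean and dividing by the standard deviation) is an equally simple alternative when the distribution's shape should be preserved.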
This tendency is also confirmed in the other citation windows.
The ordered logit and OLS regression models use the same dependent and independent variables with robust standard errors.
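As a self-contained sketch of the OLS side of this setup, the following computes coefficient estimates with heteroskedasticity-robust (HC1) standard errors. This is not the authors' code; the data and function name are illustrative, and in practice one would use a statistics package (e.g., statsmodels, which also provides an ordered logit model).

```python
import numpy as np

def ols_robust(y, X):
    """OLS with an intercept and HC1 heteroskedasticity-robust
    standard errors (sandwich estimator with n/(n-k) correction)."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    # "meat" of the sandwich: X' diag(resid^2) X
    meat = X.T @ (X * resid[:, None] ** 2)
    cov = n / (n - k) * XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))
```

The ordered logit model handles the discrete survey responses, while OLS treats them as cardinal; reporting both, with the same regressors and robust standard errors, is a standard robustness check.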
References
Ahmed, T., Johnson, B., Oppenheim, C., & Peck, C. (2004). Highly cited old papers and the reasons why they continue to be cited: Part II. The 1953 Watson and Crick article on the structure of DNA. Scientometrics, 61, 147–156.
Baird, L. M., & Oppenheim, C. (1994). Do citations matter? Journal of Information Science, 20(1), 2–15.
Bornmann, L., Schier, H., Marx, W., & Daniel, H. D. (2012). What factors determine citation counts of publications in chemistry besides their quality? Journal of Informetrics, 6(1), 11–18.
Bornmann, L., Tekles, A., Zhang, H. H., & Fred, Y. Y. (2019). Do we measure novelty when we analyze unusual combinations of cited references? A validation study of bibliometric novelty indicators based on F1000Prime data. Journal of Informetrics, 13(4), 100979.
Clarivate Analytics. (2020). Web of science core collection help. https://images.webofknowledge.com/images/help/WOS/hp_subject_category_terms_tasca.html. Accessed 16 October 2020.
Dahlin, K. B., & Behrens, D. M. (2005). When is an invention really radical? Defining and measuring technological radicalness. Research Policy, 34(5), 717–737.
Fleming, L. (2001). Recombinant uncertainty in technological search. Management Science, 47(1), 117–132.
Hicks, D., Wouters, P., Waltman, L., De Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431.
Igami, M., Nagaoka, S., & Walsh, J. P. (2015). Contribution of postdoctoral fellows to fast-moving and competitive scientific research. The Journal of Technology Transfer, 40(4), 723–741.
Kaplan, S., & Vakili, K. (2015). The double-edged sword of recombination in breakthrough innovation. Strategic Management Journal, 36(10), 1435–1457.
Lee, Y.-N., Walsh, J. P., & Wang, J. (2015). Creativity in scientific teams: Unpacking novelty and impact. Research Policy, 44(3), 684–697.
MacRoberts, M., & MacRoberts, B. (1996). Problems of citation analysis. Scientometrics, 36(3), 435–444.
Mednick, S. (1962). The associative basis of the creative process. Psychological Review, 69(3), 220–232.
Murayama, K., Nirei, M., & Shimizu, H. (2015). Management of science, serendipity, and research performance: Evidence from a survey of scientists in Japan and the US. Research Policy, 44(4), 862–873.
Nagaoka, S., Igami, M., Eto, M., & Ijichi, T. (2010). Knowledge creation process in science: Basic findings from a large-scale survey of researchers in Japan. IIR Working Paper, WP#10–08. Japan: Institute of Innovation Research, Hitotsubashi University.
Nelson, R. R., & Winter, S. G. (1982). An evolutionary theory of economic change. Belknap Press of Harvard University Press.
Nieminen, P., Carpenter, J., Rucker, G., & Schumacher, M. (2006). The relationship between quality of research and citation frequency. BMC Medical Research Methodology. https://doi.org/10.1186/1471-2288-6-42
Oppenheim, C., & Renn, S. P. (1978). Highly cited old papers and reasons why they continue to be cited. Journal of the American Society for Information Science, 29, 225–231.
Romer, P. M. (1994). The origins of endogenous growth. Journal of Economic Perspectives, 8(1), 3–22.
Schumpeter, J. A. (1939). Business cycles: A theoretical, historical and statistical analysis of the capitalist process. McGraw-Hill Book Company.
Simonton, D. K. (2003). Scientific creativity as constrained stochastic behavior: The integration of product, person and process perspectives. Psychological Bulletin, 129(4), 475–494.
Tahamtan, I., & Bornmann, L. (2018). Creativity in science and the link to cited references: Is the creative potential of papers reflected in their cited references? Journal of Informetrics, 12(3), 906–930.
Thelwall, M. (2017). Web indicators for research evaluation: A practical guide. Synthesis Lectures on Information Concepts, Retrieval and Services, 8(4), i1–i155.
Trapido, D. (2015). How novelty in knowledge earns recognition: The role of consistent identities. Research Policy, 44(8), 1488–1500.
Uddin, S., Khan, A., & Baur, L. A. (2015). A framework to explore the knowledge structure of multidisciplinary research fields. PLoS ONE, 10(4), e0123537. https://doi.org/10.1371/journal.pone.0123537
Uzzi, B., Mukherjee, S., Stringer, M., & Jones, B. (2013). Atypical combinations and scientific impact. Science, 342(6157), 468–472.
Verhoeven, D., Bakker, J., & Veugelers, R. (2016). Measuring technological novelty with patent-based indicators. Research Policy, 45(3), 707–723.
Walsh, J. P., & Lee, Y. N. (2015). The bureaucratization of science. Research Policy, 44(8), 1584–1600.
Wang, J., Lee, Y.-N., & Walsh, J. P. (2018). Funding model and creativity in science: Competitive versus block funding and status contingency effects. Research Policy, 47(6), 1070–1083.
Wang, J., Veugelers, R., & Stephan, P. (2017). Bias against novelty in science: A cautionary tale for users of bibliometric indicators. Research Policy, 46(8), 1416–1436.
Zdaniuk, B. (2014). Ordinary least-squares (OLS) model. In A. C. Michalos (Ed.), Encyclopedia of quality of life and well-being research. Springer.
Acknowledgements
We wish to thank Natsuo Onodera for his invaluable insights regarding the measuring of the novelty score.
Funding
None.
Author information
Contributions
All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by KM. The first draft of the manuscript was written by KM, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Appendix
See Tables 8, 9 and 10.
About this article
Cite this article
Matsumoto, K., Shibayama, S., Kang, B. et al. Introducing a novelty indicator for scientific research: validating the knowledge-based combinatorial approach. Scientometrics 126, 6891–6915 (2021). https://doi.org/10.1007/s11192-021-04049-z