Correlating article citedness and journal impact: an empirical investigation by field on a large-scale dataset

Despite previous research demonstrating the risks involved, and counsel against the practice dating back to at least 1997, some research evaluations continue to use journal impact alone as a surrogate for the citations of hosted articles when assessing the latter's impact. Research administrators and policy-makers have also taken up this usage, with serious implications. The aim of this work is to investigate the correlation between the citedness of a publication and the impact of its host journal. We extend the analyses of previous literature to all STEM fields, and further assess whether this correlation varies across fields and whether it is stronger for highly cited authors than for lowly cited ones. Our dataset consists of almost one million authorships of 2010–2019 publications by about 28,000 professors in 230 research fields. Results show a low correlation between the two indicators, lower still for lowly cited authors than for highly cited ones, although differences occur across fields.


Introduction
In seeking publication, researchers aspire to highly prestigious (often referred to as high-impact or high-quality) scientific journals, aiming at the widest possible distribution of their new knowledge, the greatest access to their scientific community, and the best hopes of career progress and resulting continued scientific output. Considering the strict review processes and high rejection rates, the achievement of publication in such journals is indeed a great source of satisfaction. Reviewers are chosen from among the most qualified scholars in the relevant fields, and typically offer observations, suggestions and even criticisms that serve in improving the specific paper and as further stimuli to research. On the side of the scientific community as audience, readers look to prestigious journals with respect and high expectations in terms of the reference value of the contents. High-impact journals attract and select the better works, likely to attract more citations (Traag, 2021), and therefore achieve better diffusion of the knowledge.
In research organizations, it is not uncommon to come across more or less formalized incentive systems that encourage researchers to publish in prestigious journals, and include aspects of evaluation in consideration of the numbers of their articles in such journals. Irrespective of their contents, there can be a tendency to transfer the prestige of the journals to the articles and their authors, so that those who succeed in reaching publication in high-impact journals are looked upon with admiration, even a sort of envy.
Peer-reviewed journals serve in the verification of new knowledge, encoded in written form (articles), and then its transfer from producers (authors) to consumers (scholars, practitioners). The process is the same as that for other channels, serving in quality-control and transfer of products of all kinds from maker to consumer. However, just as consumers resist being fooled by fine displays (buyer beware!, all that glitters is not gold), and investors by a flashy prospectus (what's the bottom line?), we, as scientists, evaluators and research administrators, should exercise caution in considering that a fine journal guarantees the high impact of all the published products on future advances in knowledge. Quoting Nobel laureate Aaron Ciechanover (2013): "It doesn't matter where you publish; it matters what you publish". Eugene Garfield, the father of the journal impact factor (JIF), was also well aware of this issue of the true citedness of the published articles: "It would be more relevant to use the actual impact (citation frequency) of individual papers in evaluating the work of individual scientists rather than using the JIF as a surrogate" (Garfield, 2001). Authoritative scholars, and even entire scientific associations and congresses, have continued to express similar positions (DORA, 2012; Hicks et al., 2015; Seglen, 1997; Wouters et al., 2015).
In the absence of any other information, the prestige of the journal of publication might serve as a kind of certification, or guarantee of quality: the higher it is, the more soundly one might assume that the articles published are of high quality. Moreover, in research evaluation exercises, the available citation time windows are often not long enough to guarantee the robustness of citation impact indicators, especially in certain disciplines: in such cases, in fact, compared to the early citations registered, the JIF can function as a better predictor of the future impact of an article (Abramo et al., 2010), or can serve well along with the citations as a co-predictor (Abramo et al., 2019).
Still, as we have noted, in evaluating the work of individual researchers and their institutions, caution is due in taking the JIF alone as a surrogate for citations, particularly in the case of articles above 2 years old, where its use can prove insidious (Abramo et al., 2010). It is also well known that the high prestige of a journal, as measured by the average citedness of the articles published in it, is usually determined by a small share of highly cited articles (Antonoyiannakis, 2020; Egghe, 2005; Kiesslich et al., 2021; Larivière & Sugimoto, 2019; Milojević et al., 2017; Seglen, 1989, 1992, 1994, 1997; Waltman, 2016; Zhang et al., 2017). Apart from these few articles, most others have a citedness that does not deviate much from that of articles published in less prestigious journals. It is also not uncommon to find articles published in high-impact journals, but which are never cited (Larivière et al., 2016; Lozano et al., 2012; Zhang et al., 2017).
It is therefore no surprise that, before us, bibliometricians have investigated the relationship between journal prestige and the citedness of hosted articles (de Oliveira Silva et al., 2021; Seglen, 1989, 1994, 1997; Zhang et al., 2017). In this paper we return to the issue, but extending the scale and disciplinary scope of the population under observation, and also deepening the investigation in several respects, by testing: (i) the correlation between the citedness of a publication and the impact of the host journal, not just overall, but also at field level (RQ1); (ii) whether such correlation varies among researchers within each field (RQ2); (iii) whether this JIF-citedness correlation is stronger for highly cited than for lowly cited authors (RQ3).
The work is rooted in the seminal articles by Seglen (1989, 1994, 1997), who analyzed the JIF-citedness relationship on a dataset of 16 Norwegian principal investigators in biomedical research. Twenty years later, Zhang et al. (2017) returned to the same methods, but on a larger sample of around 600 investigators and 18,000 publications, again from Norway and in the biomedical field, for the period 1992-2013, with the aim of verifying Seglen's earlier results. In fact, the findings were in line with Seglen's. At the level of the publication portfolio of individual authors, Zhang et al. found a moderate correlation on average, in any case characterized by extreme variability between authors. Finally, their analysis confirmed Seglen's conjecture that the relationship between article citedness and journal impact is much stronger for top-cited authors than for less cited colleagues. More recently, de Oliveira Silva et al. (2021) investigated a sample of 4022 sports sciences articles, showing that altmetric scores have a stronger relationship with the number of citations than the JIF does.
The current work extends the analysis to all STEM fields, verifying the results of our predecessors on a broader and more diversified population, and above all investigating the question of potential differences across fields. The dataset of analysis consists of 28,000 Italian professors and their scientific production indexed in Web of Science (WoS) in the period 2010-2019, totaling nearly one million authorships. The Italian case is convenient for the analysis, because of the uniquely fine-grained field classification of all professors (370 fields) and the availability of a highly accurate authorship disambiguation algorithm.
The next section of the paper reviews the literature relevant to the current study. "Methods" section describes the data and methods designed to respond to the research questions. "Results" section provides the results of the analysis and "Discussion and conclusions" section concludes with comments on the main findings and certain of their implications.

Literature review
In the bibliometric field, the oldest and most widely used indicators are at the journal level. The introduction of the JIF some 40 years ago, by Eugene Garfield's Institute for Scientific Information (ISI), was intended to provide US university librarians with quantitative, objective methods for the selection of journals. Beginning as the main indicator of the Journal Citation Reports (JCR), the JIF then spread unstoppably, assuming new purposes in research evaluation across all fields of the hard sciences (Garfield, 2001), in the meantime transforming the publishing industry and significantly influencing recruitment practices, as well as resource allocation and even the directions of research activities (Archambault & Larivière, 2009). Individual researchers use the JIF not only as an indicator of journal quality, but also for other purposes, such as narrowing the huge offer of papers down to a selection for viewing, reading and potential citation (de Rijcke et al., 2016). Its success has fueled the proliferation of multiple variants, including the ones coined by the publishers of all the different bibliographic repertories. Paradoxically, a considerable number of studies have illustrated critical aspects of the indicator: the asymmetry between the numerator (citations of citable items) and the denominator (citable and non-citable items); the many differences between disciplines in publication and citing habits; the problems of an insufficient citation window; the asymmetry of citation distributions; journal self-citations; and the lack of transparency of its calculation, which casts doubt on the results (Larivière & Sugimoto, 2019; Mingers & Leydesdorff, 2015; Seglen, 1997). Added to these are the weakening relationship between the JIF and the citedness of papers due to the advent of digital repositories (Lozano et al., 2012), and the increasing recourse to new channels of communication on research results, i.e. via social media, which can affect the citations to an article more than the prestige of the publishing journal (de Oliveira Silva et al., 2021). Last but not least, the JIF is related to the mean of the distribution of citations of hosted publications, which is notoriously highly skewed (Bornmann & Leydesdorff, 2017; Glänzel & Moed, 2002; Leydesdorff, 2008; Kiesslich et al., 2021; Redner, 1998). For these reasons, most scholars have urged caution when using journal indicators for individual evaluation (Brito & Rodríguez-Navarro, 2019; Jarwal et al., 2009; Larivière et al., 2016; Marx & Bornmann, 2013; Moed, 2020; Moed & van Leeuwen, 1996; Paulus et al., 2018; van Leeuwen & Moed, 2005).
The pioneering investigations of Seglen (1989, 1992, 1994, 1997) on the relationship between article citedness and JIF have been particularly influential in the scientific community, as regards the effects of such indicators in evaluation processes, and have inspired several important initiatives, such as the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for research metrics, and The Metric Tide review on the role of metrics in research assessment and management (DORA, 2012; Hicks et al., 2015; Wouters et al., 2015).
In a 1997 article, Seglen summarized four criticalities: "(i) Use of JIF conceals the difference in article citation rates (articles in the most cited half of articles in a journal are cited 10 times as often as the least cited half); (ii) JIF is determined by technicalities unrelated to the scientific quality of their articles; (iii) JIF depends on the research field: high JIFs are likely in journals covering large areas of basic research with a rapidly expanding but short-lived literature that use many references per article; (iv) Article citation rates determine the JIF, not vice versa". Traag (2021), however, argued that simply removing consideration of the JIF does not negate the influence of journals on the citedness of hosted articles.
Recently, several studies have returned to favoring the use of journal-based indicators under certain conditions (Abramo et al., 2010, 2019; Bonaccorsi, 2020; Waltman & Traag, 2021). Waltman and Traag (2021) also demonstrated through computer simulations that "depending on the assumptions that are made, the JIF may be a more accurate indicator of the value of an article than the number of citations of the article". Abramo et al. (2019) showed that with very short citation windows (0 to 2 years) the JIF together with early citations can play a useful role in predicting the future impact of publications, whereas for longer windows the weight of early citations dominates and the JIF is no longer informative.
Decades after Seglen first raised his criticisms and began his investigations, the global research community, and that of research assessment in particular, continues to debate and study his original questions. Callaway (2016), for example, reviews some of the main complaints, and reports anew the cases of the journals Nature and Science, where 75% of the articles hosted in 2013-2014 garnered fewer citations than the journals' 2015 JIF. Kim et al. (2020) effectively illustrate the methodological difficulties in estimating journal influence on article performance. The approach proposed by the authors consists of using a proxy for individual article quality that is unrelated to the publishing journal's reputation/impact: i.e. the number of citations to the preprint posted on arXiv.org. Using the title and digital object identifier (DOI), all articles in arXiv on high energy physics, astrophysics, and condensed matter were linked to the corresponding published versions indexed in Microsoft Academic Graph (MAG). The authors showed that "estimates of the effect of journal reputation on an individual article's impact (measured by citations) are likely inflated", and found "little systematic evidence that the role of journal reputation on article performance has declined." Traag (2021), aiming to uncover the causal mechanism between journal impact and article citedness, also looked at citations to arXiv preprints, but applied a more sophisticated model for comparing these to the citations received by the full versions, as recorded in Scopus. The results show that journal impact "filtering" does not cancel out the influence of journals on the citedness of hosted articles, and that therefore: (i) articles that attract more citations are more likely to be selected and published in high-impact journals; (ii) articles in high-impact journals will be cited even more frequently because of the publication venue.
Zhang et al. (2017) adopted a different approach for verification of the pioneering studies by Seglen (1992, 1994, 1997), some 20 years later and on a larger sample of scientists. In particular, the authors examined: (i) the skewness of article citedness; (ii) the correlation between article citedness and the impact of the hosting journal; (iii) the real benefit for scholars, in terms of received citations, of publishing in journals with higher impact. Whereas Seglen's work was based on the study of 16 biomedical principal investigators affiliated with a single Norwegian institution, who authored 907 publications, Zhang et al. (2017) consider all scientists in biomedical research working at Norwegian institutions (approximately 600) and their production of approximately 18,000 publications between 1992 and 2013.
The results confirm Seglen's findings that "there is no consistent positive relationship between individual article citedness and the JIF of the journal in which the article is published". Citation distributions are skewed across journals: less cited articles and highly cited articles can appear in any journal. However, while most articles are rarely cited, the few highly cited articles appear more often in high-impact journals. The correlation between the two indicators referring to the scientific portfolios of individual authors is moderate on average and characterized by extreme variability. Finally, the lack of consistent positive relationships is more evident among the majority of researchers with average or lower overall citedness.

Data
The dataset for the current study consists of the scientific publications by Italian academics over the period 2010-2019 that are indexed in WoS. In the Italian university system, all academics are classified in one and only one field, named scientific disciplinary sector (SDS), 370 in all. The SDSs are grouped into 14 disciplines, named university disciplinary areas (UDAs). The current analysis is limited to the SDSs for which the WoS coverage is acceptable for bibliometric assessments: 230 out of the total 370 (62%), comprised within a total of 11 UDAs.1 The data concerning professors were extracted from the database of Italian university personnel maintained by the Ministry of Universities and Research (MUR). For each professor, this database provides information on their gender, affiliation, field classification, and academic rank at the end of each year.2 In addition to the limitation on disciplinary sectors, a further selection was made at the level of the academics, limiting these to professors tenured over the entire 2010-2019 period, totaling 28,040.
Data on output and relevant citations are extracted from the Italian Observatory of Public Research, a database developed and maintained by the authors, derived under license from the Clarivate Analytics WoS Core Collection. Beginning with the raw data of the WoS, and applying a complex algorithm for the reconciliation of authors' affiliations and the disambiguation of their true identities (D'Angelo et al., 2011), each publication (articles, letters and reviews hosted by journals with a JIF) is attributed to the university professor who produced it.3
Table 1

Indicators and data analysis plan
To pursue our objectives, we correlate two indicators for all publications in the dataset:
- the number of citations received, normalized to the average citations of all cited WoS publications of the same year and WoS subject category, referred to as AII (article impact index);4
- the JIF of the host journal, normalized to the average JIF of the journals of the same year and subject category, referred to as JII (journal impact index).
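The construction of the two normalized indicators can be sketched in a few lines of Python (the paper's own pipeline is built on licensed WoS data; the data frame, field names, and the use of sample means as stand-in baselines below are illustrative assumptions):

```python
import pandas as pd

# Illustrative records only: raw citation counts and host-journal JIFs,
# with publication year and WoS subject category (SC).
pubs = pd.DataFrame({
    "year":      [2015, 2015, 2015, 2016],
    "sc":        ["Oncology", "Oncology", "Oncology", "Oncology"],
    "citations": [10, 2, 30, 5],
    "jif":       [4.0, 4.0, 8.0, 2.0],
})

# The paper normalizes against world averages per (year, SC); lacking the
# WoS reference set, we use the sample means of each group as baselines.
grp = pubs.groupby(["year", "sc"])
pubs["AII"] = pubs["citations"] / grp["citations"].transform("mean")
pubs["JII"] = pubs["jif"] / grp["jif"].transform("mean")
```

A value of 1 for either indicator thus means "exactly at the average of the reference distribution", which is what makes publications from different fields and years comparable.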
Note that the use of normalized indicators is an absolute necessity in comparing citations/impact factors of publications from different fields and different years (Waltman, 2016).
For the first indicator, the citation count is taken as of 31/12/2021. For the second, the JIF is that of the year of publication. The two indicators are by definition correlated, but because of the high skewness of citation distributions within journals, a strong correlation is not expected.
In the following section we investigate this correlation: (i) at the overall and field levels; (ii) with reference to the publication portfolio of each professor in the dataset; (iii) distinguishing between the top 10% and the bottom 10% of professors by citedness. The data analysis, which uses scatter plots and Pearson ρ correlations between the two indicators, is performed using the STATA 12 statistical software package. Outliers were not removed except where specifically reported in the text.
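The core computation, including the re-estimation after excluding extreme outliers reported below, can be sketched in Python (the paper used STATA 12; the synthetic log-normal data here are an assumption purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Synthetic stand-ins for JII and AII: log-normal, positively related but noisy,
# mimicking the skewed distributions typical of citation data.
jii = rng.lognormal(mean=0.0, sigma=0.6, size=n)
aii = jii * rng.lognormal(mean=0.0, sigma=1.0, size=n)

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

rho_all = pearson(aii, jii)

# As in the article-level analysis, re-estimate after excluding
# extreme outliers (publications with AII > 100):
mask = aii <= 100
rho_trimmed = pearson(aii[mask], jii[mask])
```

Because Pearson ρ is sensitive to extreme values in skewed distributions, the trimmed estimate can differ noticeably from the full-sample one, which is why the paper reports both.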

Results
We present the results taking two perspectives: from a first analysis with the article as the fundamental unit of investigation, next with the author as fundamental unit. The first analysis aims at answering RQ1, the second at answering RQ2 and RQ3.

Analysis at the article level
Overall (424,309 dataset publications), the Pearson ρ correlation between AII and JII is 0.308. Excluding the 17 publications with AII greater than 100, the correlation coefficient rises to 0.345. The data scatterplot (Fig. 1) confirms a weak association between the two dimensions under analysis.
The correlation between AII and JII can also be represented by generating a heat-plot matrix, in which each cell indicates the relative frequency of the AII-JII row-column combination. For brevity, rather than presenting the entire distribution of absolute values for the two indicators, Table 2 shows their percentiles with respect to the reference distribution (world publications of the same year and SC),5 also highlighted by grey gradation. If there were a strong correlation between the two dimensions of analysis, we would expect an equally strong concentration of high relative frequencies on the main diagonal. Table 2 shows no such concentration, except in the upper left part of the matrix, the part characterized by the "top" deciles of both AII and JII. It is no surprise that the highest relative frequency (5.2%) concerns precisely the top publications in terms of both indicators. This is due to the relative positioning of Italian scientific production, which features significantly higher average impact than the world average (Consiglio Nazionale delle Ricerche, 2019). Note that the entire matrix is populated, even cell AII = 1-JII = 10, albeit with only 138 publications out of 424,309. Scrolling down column AII = 1, we observe decreasing frequencies, but still greater than 1% up to the fourth decile of JII. Likewise, scrolling across row JII = 1, frequencies again decrease but remain greater than 1% up to the fifth decile. It may also be interesting to analyze the distribution of JII of publications in the tenth decile (D10) by AII, or of AII of publications in D10 by JII (Figs. 2, 3).
In D10 by AII (Fig. 2) we have 21,857 publications6 (5.2% of the total 424,309), of which 6.7% (1467 publications) are hosted in journals falling in D1, and fully 41.7% in journals with JII greater than or equal to the median. In D10 by JII (Fig. 3) we find 8508 publications (2.0%), of which only 1.6% (i.e. 138 papers) place in D1 by AII, and only 24.8% above the median.
Given the shape of the distributions in Figs. 2 and 3, very briefly: regardless of the high impact factor of the host journal, a publication may be scarcely or never cited; at the same time, journals with a very low impact factor are much more likely to host publications that are scarcely or never cited.
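The decile heat-plot matrix of Table 2 reduces in outline to a cross-tabulation of decile ranks; the following Python sketch uses synthetic data and sample deciles (in the paper, deciles refer to the world distribution of the same year and SC, not to the sample):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 50_000
# Synthetic AII/JII values with a weak positive association (illustrative only).
jii = rng.lognormal(0.0, 0.6, n)
aii = jii ** 0.4 * rng.lognormal(0.0, 1.0, n)
df = pd.DataFrame({"aii": aii, "jii": jii})

# Decile of each publication (1 = bottom, 10 = top); ranking first breaks ties.
df["aii_dec"] = pd.qcut(df["aii"].rank(method="first"), 10, labels=range(1, 11))
df["jii_dec"] = pd.qcut(df["jii"].rank(method="first"), 10, labels=range(1, 11))

# Relative-frequency matrix (percent): a strong correlation would concentrate
# mass on the main diagonal; a weak one leaves the whole matrix populated.
heat = pd.crosstab(df["aii_dec"], df["jii_dec"], normalize="all") * 100
```

Reading the resulting 10x10 matrix down a column or across a row corresponds to the "scrolling" inspection described for Table 2.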
Moving to the disciplinary level, Fig. 4 shows an example SDS, in this case the scatterplot of the 8319 publications authored by professors in MED/11 (cardiovascular diseases).7 The Pearson correlation between AII and JII is 0.319. The graph shows 15 publications with JII values below one and AII greater than 5 (top left box), but also 107 publications with JII greater than 5 (bottom right box) and AII values less than 1. The size of this second set is substantial, amounting to slightly less than a quarter of the total 433 publications with JII greater than 5. Thus, the presence of publications hosted by journals with very high JIF (five times the expected value) but never or scarcely cited is by no means marginal. Figure 5 shows the box plot of the distribution of Pearson correlations between AII and JII when the same analysis is repeated for all 230 SDSs. The highest correlation is seen in GEO/12 (Oceanography and atmospheric physics, 364 publications), where the Pearson coefficient is 0.594. The second outlier is ING-IND/24 (Principles of chemical engineering), with a Pearson ρ of 0.573 for the 1816 publications by professors of that SDS.
The average Pearson ρ for the 230 SDSs is 0.334 (median 0.329), with variability between maximum 0.594 (as noted, in GEO/12) and minimum 0.107 (in SECS-S/05, Social statistics), and a rather small interquartile range of 0.116.
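Computing one correlation per field, as done here for all 230 SDSs, reduces to a grouped Pearson calculation; a Python sketch with three synthetic fields follows (the SDS codes are taken from the text, but the data are invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Synthetic publications for three fields (SDS codes from the text; values invented).
frames = []
for sds in ["GEO/12", "ING-IND/24", "SECS-S/05"]:
    jii = rng.lognormal(0.0, 0.6, 500)
    aii = jii * rng.lognormal(0.0, 1.0, 500)
    frames.append(pd.DataFrame({"sds": sds, "aii": aii, "jii": jii}))
df = pd.concat(frames, ignore_index=True)

# One Pearson rho per field; the paper repeats this for all 230 SDSs and
# summarizes the resulting distribution with a box plot (Fig. 5).
rho_by_sds = {s: g["aii"].corr(g["jii"]) for s, g in df.groupby("sds")}
```

The distribution of these per-field coefficients (mean, median, interquartile range) is what the boxplot summary in the text describes.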
With reference to RQ1, we can conclude that the correlation between citedness and journal impact is rather weak at the overall level. At the field level, although varying substantially, it is never strong.

Analysis at author level
In this section, aiming to answer RQ2 and RQ3, we investigate the relation between article citedness and hosting-journal impact factor for the publication portfolios of all professors; Fig. 6 shows an example. By grouping the professors of the dataset by SDS, we obtain distributions of correlation indices that may reveal the presence or absence of elements of differentiation among fields, useful for the response to RQ2. To ensure robustness of results, we consider only professors with a portfolio of at least 10 publications.
Taking the example of professors in the eight SDSs of the Physics UDA, Fig. 8 shows the box plots of the correlation distributions, and Table 3 the descriptive statistics. We note significant variability in the data among these SDSs, with median values ranging from a minimum of 0.184 in FIS/06 to a maximum of 0.433 in FIS/03. Many portfolios (around 10% of the total) show negative AII-JII correlation, most of all in FIS/06, where this occurs for fully 30.4% of professors (14 in number). However, the confidence intervals all show a positive lower bound. The dispersion within SDSs is quite marked, with standard deviation values of the same order of magnitude as the averages, and coefficients of variation above 1 in FIS/02, FIS/05, and FIS/06.
The last column of Table 3 shows the Pearson correlation between the average AII and average JII of each of the eight Physics SDSs. With the exception of FIS/04 (Nuclear and subnuclear physics), this is always higher than mean ρ at individual level (column 4): for instance, in FIS/01 (Experimental physics) the correlation is 0.503, against 0.363 of the average ρ registered at individual level. Although the strength of the relationship varies across SDSs, it is quite clear that the authors who average more citations also publish in higher impact journals.
From Table 4, for the 230 SDSs, the highest average Pearson ρ coefficients are in "Electrical converters, machines and switches" (ING-IND/32), then "Blood diseases" (MED/15) and "Electrical energy systems" (ING-IND/33); the lowest are in "Physics for earth and atmospheric sciences" (FIS/06), "Nuclear plants", and "Plastic surgery" (MED/19).
Table 5 shows the descriptive statistics for the entire dataset, disaggregated by UDA. The average values of the correlation indices range between 0.264 (UDA 1-Mathematics) and 0.382 (UDA 3-Chemistry). Mathematics presents distinctive characteristics: it is the UDA with the highest dispersion (std dev. 0.294) and the highest number of observations with negative correlation, for 249 out of a total 1288 professors (19.3%). Cases of negative correlation are seen throughout all the UDAs, and overall concern 9% of the dataset. The maximum values of the correlation indices are never less than 0.94.
With reference to RQ2, on the AII-JII correlation for researchers within each field, we can certainly state that the variation is considerable. While average values show a moderate relationship between AII and JII, the point values shift significantly from one author to another. In practically all SDSs, there is at least one professor with a decidedly negative correlation and at least one with a very strong positive correlation.
To answer RQ3, we now classify professors according to the total impact of their scientific production, measured as the sum of the AII of all their publications: professors in the top 10% by total impact in their SDS are tagged as "highly cited" (HC); those in the bottom 10% as "lowly cited" (LC).
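The HC/LC tagging can be sketched as a percentile-rank cut within each SDS; the following Python fragment uses a hypothetical ten-professor field (the names, AII values, and the intermediate "mid" label are illustrative assumptions, not the paper's data):

```python
import pandas as pd

# Hypothetical authorship records: one row per publication (AII) per professor.
pubs = pd.DataFrame({
    "sds":  ["FIS/01"] * 10,
    "prof": list("ABCDEFGHIJ"),
    "aii":  [9.0, 5.0, 4.0, 3.5, 3.0, 2.5, 2.0, 1.5, 1.0, 0.2],
})

# Total impact per professor within each SDS ...
totals = pubs.groupby(["sds", "prof"])["aii"].sum().reset_index(name="total_aii")

# ... then tag the top 10% "HC" and the bottom 10% "LC" via percentile ranks.
totals["pct"] = totals.groupby("sds")["total_aii"].rank(pct=True)
totals["group"] = pd.cut(totals["pct"], bins=[0, 0.1, 0.9, 1.0],
                         labels=["LC", "mid", "HC"])
```

Ranking within the SDS, rather than globally, is what keeps the HC/LC tags comparable across fields with different citation norms.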
For these two groups, we plot the median AII of their publications against "classes" of JII (Fig. 9). The data plot very clearly indicates a positive trend in the citedness of a paper as a function of the impact of the hosting journal. However, the curve for HCs is systematically above the curve for LCs, and even at high levels of journal impact (95th percentile and above) the citedness of the former remains significantly higher than that of the latter.

Discussion and conclusions
Starting from the seminal works by Seglen (1989, 1994, 1997) and the update by Zhang et al. (2017), this work delves into questions of the correlation between the citedness of a publication and the impact of the host journal, extending the analysis to all STEM fields in the Italian academic system. The analysis confirms the results of Zhang et al. (2017), but now also in fields beyond the biomedical sciences. The low correlation between the two indicators, however, tends to differ between fields, with the Pearson ρ distribution ranging between 0.107 and 0.594, though in any case quite concentrated around the central value (mean 0.334, std dev. 0.097). Adopting individual publications as the unit of analysis, the correlation between citedness and journal impact is weak overall (Pearson ρ = 0.308; 0.345 excluding outliers). It is notable that the number of publications with little or no citation, even though hosted by journals in D1 for impact factor, is ten times that of publications that rank in the top 10% for citedness but are hosted in journals with impact factor in the bottom 10%.
Taking professors as the unit of analysis, the correlation is extremely variable both between and within fields, in some SDSs spanning as much as − 0.7 to + 1. Aggregating professors and their relative publications by discipline, we see cases of negative AII-JII correlation in all the UDAs, overall affecting 9% of the professors.
Finally, the analyses confirm that the correlation between citedness and prestige of the hosting journal, although not strong, is certainly more significant for highly cited authors than for lowly cited ones: i.e. although the overall data trend shows a positive correlation between citedness and journal prestige, the correlation is higher for the specific population of highly cited authors. This difference in citedness, given the same JII, would evidently be attributable more to the quality of the authors and their articles than to the presence of a hypothetical "journal effect". Interpretations that seek a causal effect of journal impact factors on citation impact could prove problematic. Indeed, as noted by Bornmann and Leydesdorff (2017), if a publication is of high quality, it could be accepted for publication in a journal with high JII and receive a high number of citations. In this case, the correlation between JII and AII is spurious, because the relationship between the two variables is mediated by a third variable, which is the publication quality. In essence, it is not the journals that are cited, but rather the quality articles they publish. This led Zhang et al. (2017) to observe that: "It is the highly cited authors that provide these articles, perhaps by being conscious about where they publish their more significant results". One could thus hypothesize the presence of a self-selection bias: the highly cited authors would be more aware of their own value and inclined to submit their research products to journals with higher JII. The opposite could hold true for lowly cited authors.
Furthermore, the continued occurrence of a significant citedness gap between highly cited and lowly cited authors, even in journals with very high JII, leads to the hypothesis of phenomena of "scientist stratification", i.e. a manifestation of the Matthew effect. For the lowly cited authors, even when they manage to publish in a high-impact journal, this does not seem to earn them premiums of citedness, although we could certainly imagine some gains in prestige among their scientific community.
The empirical evidence supports the conclusion that the most important criterion in seeking a journal for the publication of our manuscripts is the fit between the article content and the expectations of the journal's target audience. Among the journals addressing the relevant "scientific public", if we wish, we can then aim for the ones of greater prestige.

Funding The research project received no funding from third parties.
Data availability Web of Science raw data used in this study have been made available under license by Clarivate Analytics. The authors are not allowed to redistribute WoS data which therefore cannot be made available.

Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in the manuscript. Giovanni Abramo and Ciriaco Andrea D'Angelo are members of the Editorial Board of the journal.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.