
Taking scholarly books into account, part II: a comparison of 19 European countries in evaluation and funding

Published in: Scientometrics

Abstract

In May 2016, an article published in Scientometrics, titled ‘Taking scholarly books into account: current developments in five European countries’, introduced a comparison of book evaluation schemes implemented within five European countries. The present article expands upon this work by including a broader and more heterogeneous set of countries (19 European countries in total) and adding new variables for comparison. Two complementary classification models were used to point out the commonalities and differences between each country’s evaluation scheme: first, we employed a double-axis classification to highlight the degree of ‘formalization’ of each scheme; second, we classified each country according to the presence or absence of a bibliographic database. Each country’s evaluation scheme possesses its own unique merits and details; nonetheless, this study identified four main types of book evaluation systems, leading to the following main conclusions. First, countries may be differentiated on the basis of whether or not they use a formalized evaluation system. Countries that do use a formalized evaluation system have a supra-institutional database, quality labels for publishers and/or publisher rankings in place to harmonize the evaluations. Countries that do not use a formalized system tend to rely less on quantitative evaluation procedures. Each evaluation type has its advantages and disadvantages; therefore, an exchange between countries might help to generate future improvements.

Fig. 1


Notes

  1. Note that these elements apply only to those countries that have a formalized book evaluation system.

  2. A fourth element, common to most though not all of the evaluation procedures studied here, is the formal requirement that published items have ISBN codes, either prior to their inclusion in databases or labels or as a condition for evaluation.

  3. Additionally, an empirical clustering by Joint Correspondence Analysis (JCA) of the six variables created from the questionnaire is proposed as a complementary approach (see “Appendix 2”). The JCA suggests a differentiation between countries that have a book quality label in place and those that do not, while indicating that differentiated quality levels of publication channels, such as publisher rankings, are a less distinguishing feature.

  4. We refer to “ideal types” in the Weberian sense as theoretical concepts that can take (slightly) different forms in reality, rather than in a normative sense.

  5. A well-known exception is the UK’s Research Excellence Framework (REF). It is formalized and based on peer review panels that attribute a score to a scholarly work/unit to formalize the evaluation. However, the recurring investigations into how to increase the use of indicators in the REF (see e.g. Adams 2009; Wilsdon et al. 2015; for a historic overview, see Williams and Grant 2018) point to the fact that a formalized system calls for a more technocratic approach than formative evaluations.

References

  • Adams, J. (2009). The use of bibliometrics to measure research quality in UK higher education institutions. Archivum Immunologiae Et Therapiae Experimentalis, 57(1), 19–32. https://doi.org/10.1007/s00005-009-0003-3.


  • Camiz, S., & Gomes, G. C. (2013). Joint correspondence analysis versus multiple correspondence analysis: A solution to an undetected problem. In Classification and data mining (pp. 11–18). Berlin: Springer.

  • Engels, T. C. E., Ossenblok, T. L. B., & Spruyt, E. H. J. (2012). Changing publication patterns in the social sciences and humanities, 2000–2009. Scientometrics, 93, 373–390.


  • Geuna, A., & Martin, B. R. (2003). University research evaluation and funding: An international comparison. Minerva, 41(4), 277–304.

  • Giménez-Toledo, E., Mañana-Rodríguez, J., Engels, T. C. E., Ingwersen, P., Sivertsen, G., Verleysen, F. T., et al. (2016). Taking scholarly books into account: Current developments in five European countries. Scientometrics, 107(2), 685–699. https://doi.org/10.1007/s11192-016-1886-5.


  • Giménez-Toledo, E., Sivertsen, G., & Mañana-Rodríguez, J. (2017). Peer review as a delineation criterion in data sources for the assessment and measurement of scholarly book publishing in social sciences and humanities. In 16th International conference on scientometrics and informetrics. Wuhan.

  • Gorraiz, J., Purnell, P. J., & Glänzel, W. (2013). Opportunities for and limitations of the Book Citation Index. Journal of the American Society for Information Science and Technology, 64(7), 1388–1398. https://doi.org/10.1002/asi.22875.


  • Greenacre, M. (2007). Correspondence analysis in practice (2nd ed.). Boca Raton, FL: Chapman & Hall.


  • Kousha, K., Thelwall, M., & Rezaie, S. (2011). Assessing the citation impact of books: The role of Google Books, Google Scholar, and Scopus. Journal of the American Society for Information Science and Technology, 62(11), 2147–2164.


  • Michavila, F. (ed.). (2012). La Universidad española en cifras. Madrid: CRUE. http://www.crue.org/Publicaciones/Documents/UEC/LA_UNIVERSIDAD_ESPANOLA_EN_CIFRAS.pdf. Accessed Sept 2017.

  • Sello de calidad en edición académica (CEA/APQ). http://www.selloceaapq.es/. Accessed Apr 2018.

  • Sīle, L., Guns, R., Sivertsen, G., & Engels, T. C. E. (2017). European databases and repositories for social sciences and humanities research output. Antwerp: ECOOM & ENRESSH. https://doi.org/10.6084/m9.figshare.5172322.

  • Sivertsen, G. (2017). Unique, but still best practice? The Research Excellence Framework (REF) from an international perspective. Palgrave Communications, 3, 17078.


  • Sivertsen, G. (2018). Why has no other European country adopted the Research Excellence Framework? Available at the London School of Economics and Political Science blog. http://blogs.lse.ac.uk/politicsandpolicy/why-has-no-other-european-country-adopted-the-research-excellence-framework/.

  • Williams, K., & Grant, J. (2018). A comparative review of how the policy and procedures to assess research impact evolved in Australia and the UK. Research Evaluation, 27, 93–105. https://doi.org/10.1093/reseval/rvx042.


  • Wilsdon, J., et al. (2015). The metric tide: Report of the independent review of the role of metrics in research assessment and management. https://doi.org/10.13140/rg.2.1.4929.1363.

Download references

Acknowledgements

The authors want to thank all ENRESSH participants in the survey for their valuable contribution to this work: Croatia: Jadranka Stojanovski; Czech Republic: Jiří Kolman and Petr Kolman; France: Ioana Galleron; Israel: Judit Bar-Ilan, Saul Smiliansky and Sharon Link; Italy: Ginevra Peruginelli; Latvia: Arnis Kokorevics and Linda Sīle; Lithuania: Aldis Gedutis; Montenegro: Sanja Pekovic; Portugal: Luisa Carvalho and Ana Ramos; Serbia: Dragan Ivanovic; Slovakia: Alexandra Bitusikova; Slovenia: Andreja Istenic Starcic; and Switzerland: Sven Hug.

Funding

This article is based upon work from ENRESSH (European Network for Research Evaluation in the Social Sciences and Humanities, COST Action (CA15137)), supported by COST (European Cooperation in Science and Technology).

Author information


Corresponding author

Correspondence to Jorge Mañana-Rodríguez.

Appendices

Appendix 1: Access interface and text version of the online questionnaire

[Figures a–d: screenshots of the online questionnaire omitted]

Appendix 2: Clustering by Joint Correspondence Analysis (JCA) (see e.g. Greenacre 2007 or Camiz and Gomes 2013 for a comparison with multiple correspondence analysis)

The empirical clustering along the variables presented in Table 1 shows that the variables can be reduced quite comprehensively to two dimensions: the first dimension reflects the degree of formalization and the use of metrics; the second dimension represents the use of rankings and labels.

Figure 2 shows the map of the JCA. The positions of the variables show a strong relationship between the existence of a national or institutional database and a formalized evaluation system (the two lie very close to each other in the map), while the non-existence of a specific book evaluation system lies directly opposite. These variables define the x-axis. The y-axis is defined by the existence of a quality label or a publisher ranking on the one hand, and by the use of peer review panels on the other. The rankings and peer review panels, however, are also spread along the x-axis.

Fig. 2

Two dimensions of evaluation processes in 19 countries using Joint Correspondence Analysis. Note: Formal stands for formalized evaluation procedure, Supra DB stands for supra-institutional database, Inst. DB stands for institutional database, Rank stands for publisher ranking, QualiLab stands for quality label, Panels stands for expert panels, and Book Eval stands for specific procedure for book evaluation (see Table 1). [ES: Spain; FI: Finland; BEFL: Flanders (Belgium); LV: Latvia; DK: Denmark; NO: Norway; SK: Slovakia; LT: Lithuania; ME: Montenegro; CZ: Czech Republic; PL: Poland; IL: Israel; CH: Switzerland; RS: Serbia; PT: Portugal; FR: France; IT: Italy; HR: Croatia; SI: Slovenia]

The countries form four clusters. In the top right quadrant, we find France, Italy, Portugal and Serbia. These countries do not have a specific book evaluation procedure in place. In the bottom right quadrant, Israel, Latvia and Switzerland form a cluster of countries that have no formal evaluation procedure for books and no supra-institutional databases; books are evaluated by peers in ex-post evaluations. There are small differences between these countries: Switzerland has databases at the institutional level, Israel includes books in evaluations systematically, while in Latvia neither is the case. In the bottom left quadrant, the Czech Republic, Denmark, Lithuania, Montenegro, Norway, Poland, Slovenia and Slovakia form a cluster of countries that have a formal evaluation system as well as comprehensive databases at a supra-institutional level. This cluster, however, divides into three groups: the first has a ranking in place (DK, NO, SI), the second additionally evaluates using peer review panels (SK), while the third has neither a ranking in place nor uses peer review (CZ, LT, ME, PL). Finally, in the top left quadrant, Finland, Flanders (BE) and Spain form a cluster with a formalized evaluation system, a quality label and a publisher ranking in place. Croatia is a special case: it has a formal evaluation system and a supra-institutional database, but no other metric instruments are used and books are evaluated by expert panels, pulling Croatia toward the middle of the x-axis.

The empirical clustering thus reveals different insights than the theoretical clustering represented in Fig. 1. While the axes represent similar dimensions, the countries are grouped differently; the emphasis lies more on the differentiation between quantification and peer review, for example. Nevertheless, the results are, not surprisingly, similar: rotating Fig. 1 by 45° yields a comparable solution.
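The dimension reduction used here can be approximated with plain correspondence analysis of an indicator matrix (countries × yes/no categories). The following is a minimal numpy sketch under stated assumptions: the data are hypothetical toy values, not the article’s questionnaire responses, and the iterative adjustment of the Burt-matrix diagonal that distinguishes JCA from simple correspondence analysis is omitted.

```python
import numpy as np

def correspondence_analysis(Z):
    """Correspondence analysis of a 0/1 indicator matrix.

    Returns principal row coordinates (one row per country) and the
    singular values. JCA would additionally re-fit the diagonal blocks
    of the Burt matrix; that refinement is omitted in this sketch.
    """
    Z = np.asarray(Z, dtype=float)
    P = Z / Z.sum()                     # correspondence matrix
    r = P.sum(axis=1)                   # row masses
    c = P.sum(axis=0)                   # column masses
    # standardized residuals: remove the trivial dimension by centering
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    F = (U * s) / np.sqrt(r)[:, None]   # principal row coordinates
    return F, s

# Hypothetical coding: 5 countries x 3 binary variables (formalized
# procedure, supra-institutional DB, publisher ranking), each coded
# as a yes/no column pair -- illustrative values only.
Z = np.array([
    [1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 1],
    [0, 1, 0, 1, 0, 1],
    [0, 1, 0, 1, 1, 0],
])
coords, sv = correspondence_analysis(Z)
print(coords[:, :2])  # positions on the first two dimensions
```

Plotting the first two columns of `coords` gives a map analogous to Fig. 2, with nearby points indicating countries with similar evaluation profiles.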


About this article


Cite this article

Giménez-Toledo, E., Mañana-Rodríguez, J., Engels, T.C.E. et al. Taking scholarly books into account, part II: a comparison of 19 European countries in evaluation and funding. Scientometrics 118, 233–251 (2019). https://doi.org/10.1007/s11192-018-2956-7
