Scientometrics, Volume 109, Issue 3, pp 2161–2164

Special issue: papers from the 20th International Conference on Science and Technology Indicators

  • Benedetto Lepori
  • Sybille Hinze
  • Wolfgang Glänzel

DOI: 10.1007/s11192-016-2128-6


The 20th International Conference on Science and Technology Indicators (STI) took place from September 2 to 4, 2015, at the Università della Svizzera italiana in Lugano. The conference series, first hosted in 1988 by CWTS, University of Leiden, became an annual event in 2010 under the auspices of the European Network of Indicator Designers (ENID; www.enid-europe.org), thus broadening its scope to different areas of the development and use of STI indicators, including policy analysis and evaluation, studies of research funding and governance, and indicators for the study of higher education systems and institutions. With roughly 140 participants from 35 countries in attendance, the breadth and reach of the conference series were on full display.

While every STI conference deals with the development of science and technology indicators, each edition focuses on a specific topic or approach in order to provide new directions and impulses to the community in the field. In Lugano, the conference specifically discussed the development and use of S&T indicators to characterize and understand the behaviour of research organizations, including higher education institutions, public research organizations (PROs) and research funding organizations. It thereby reflected a long-standing effort to develop indicators that characterize the profile and position of organizations in the broader field of science and technology, the so-called positioning indicators (Lepori et al. 2008). Large initiatives in this direction include the development of databases on higher education institutions, the use of bibliometric indicators for ratings and rankings of institutional research performance, new approaches to measuring and assessing academic patenting, the development of datasets covering different aspects of research activities—from research projects to research programs—and the assessment of research impact.

To advance this interdisciplinary endeavour, the paper presentation sessions and the well-attended poster session were complemented by invited keynotes by Filippo Wezel (Universities between Bureaucracy and Market Pressures: Selected Reflections from an Organization Theory Perspective), Barry Bozeman (Bureaucratic Red Tape and Organizational Pathologies in Academic Research; Bozeman and Feeney 2011) and Joseph Lampel (Trapped by Information: Performance Rank Ordering and Red Queen Imitation; Giachetti and Lampel 2010). These keynotes focused specifically on the contribution of organizational theory to our understanding of research organizations. Another highlight of the conference was the special session on the Dynamics of Standards, which opened with a keynote by David Seidl on Standards as Socio-economic Constructions (Brunsson et al. 2012). The session promoted a collective and interdisciplinary reflection on how standards are constructed and adopted in evaluation processes, and on their implications for the outcomes of evaluation and the behaviour of actors, taking as its starting point recent publications on standards such as the Leiden Manifesto published in Nature (Hicks et al. 2015) and the independent review of the role of metrics in research evaluation commissioned by the Higher Education Funding Council for England.

The backbone of the conference was formed by around 35 oral presentations of full papers, selected through a strict peer review process and organized into thematic sessions on major topics such as academic careers, research funding, higher education institutions and bibliometric indicators, as well as by a poster session displaying ongoing research activities and emerging developments. Additionally, four thematic panels on mixed methods, researchers' careers, standards in funding evaluation, and responsible research and innovation introduced novel research insights and potentially promising data sources (as in the case of careers). The audience was invited to engage actively in discussions on central topics of STI research, as well as on its practical relevance, for example with regard to the evaluation standards adopted by research councils.

Finally, the conference offered an opportunity to advance the discussion on the establishment of a European data infrastructure to overcome the fragmentation and limitations of currently available datasets, particularly concerning interoperability and access to data sources. In this context, the STI community faces dramatic developments, such as the availability of digital sources, advanced information retrieval methods and new approaches to data handling and analysis from the data sciences, which might fundamentally change how STI indicators are produced and handled in the future. Such issues are at the core of the EU-FP7 project on Research Infrastructure for Research and Innovation Policy Studies (RISIS; http://risis.eu/), which promoted this debate.

The papers in this special issue

This special issue does not aim to provide a full overview of the results presented at the conference. Rather, papers were selected for their topicality with respect to the focus of Scientometrics and for their scientific quality. Although the papers had already undergone peer review as a basis for their acceptance to the conference, all papers submitted to the special issue were subject to a further rigorous round of peer review.

Given the focus of the journal, most of the papers deal with the design and use of bibliometric indicators. The paper by Glänzel, Thijs and Chi addresses a major shortcoming of many bibliometric analyses: owing to the coverage of the major tools at hand, they are largely based on journal publications, which results in biases, particularly against the social sciences and humanities. Analysing the new Thomson Reuters Book Citation Index, the authors conclude that most advanced models and indicators developed for periodicals also work for books, albeit with three major limitations: issues of coverage and data quality, the lack of information on author affiliations, and the inability to apply citation counts consistently at the chapter level.

Two papers in this issue focus on applications of bibliometric indicators—mostly combined with other information sources—to analyse the structure of research systems. Tijssen and Winnink develop an alternative to the OECD distinction between ‘basic’ and ‘applied’ research by classifying Web of Science (WoS) publications in terms of their ‘domain of application’, based on the share of authors located in the university, hospital and industry sectors. Using these categories, they analyse shifts in the ‘domain orientation’ of worldwide science since 2007. While overall changes are limited, they highlight the increasing role of a ‘hospital’ or medical orientation in WoS publications. Valeria Aman provides an in-depth analysis of the relationships between the major research organizations in Germany, particularly between large PROs and universities, based on citation flows derived from WoS data. The results display the high level of interconnectedness within the German research system, but also show that the relationships between sectors depend strongly on the field. This analysis is therefore exemplary of how bibliometric data can be combined with information on the organizational structure of research to provide a richer understanding of national research systems.

Two more papers focus on the value of bibliometric indicators for policy evaluation and analysis. Möller, Schmidt and Hornbostel tackle the difficult issue of evaluating the scientific impact of a policy measure, namely the German Excellence Initiative, which specifically aims to foster excellent research. They devise a refined methodology to identify the publication output that can be attributed to this initiative and, in order to assess whether its objective was reached, compare the bibliometric characteristics of the funded output with national benchmarks. The outcome of their analysis is somewhat ambiguous: on the one hand, the funding program succeeded in concentrating excellent research and in fostering collaborations between universities and the non-university research sector; on the other hand, the overall changes in the publication output and impact of the German university and research system were modest. Lindahl and Danell empirically analyse the predictive value of early career productivity, i.e. the extent to which it allows future productivity to be predicted and may therefore support hiring decisions. The contribution of their paper is to provide an analytical framework for systematically comparing selection criteria: their data show that some decision-making scenarios are better at avoiding false positives (cases where high productivity is predicted but not realised), while others are better at avoiding false negatives (cases where low productivity is predicted but high productivity is realised). Decision-makers are therefore well advised to tailor their selection criteria to their specific goal.

Piro and Sivertsen contribute empirical arguments to the long-running debate on the reliability of international research rankings. To this end, they perform a disaggregated analysis of the scores obtained by a sample of Scandinavian universities in two of the most widely used international rankings, the Times Higher Education ranking and the Shanghai ranking. Their analysis shows that, while both rankings combine various indicators, only a few of these account for most of the differences in universities’ positions in the aggregated scores, and that such differences are largely due to size or historical effects (as in the case of Nobel Prizes). They conclude that, owing to the aggregated nature of the ranking scores and the unavailability of the individual indicators, such rankings are of limited use for university management and strategy building.

The two remaining papers substantially broaden the scope of this special issue by adding different perspectives on higher education management and on knowledge creation and dissemination, respectively. Lepori, Wise, Ingenhoff and Buhmann develop a multi-level model of strategic decision-making in university research institutes. Their model shows how university decisions on the allocation of professorial positions constrain the development of research institutes, despite the availability of third-party funding, and thereby sheds light on the implications of the dual funding system of European universities for the development of research units. By combining different indicators for a set of research units, they provide preliminary evidence for their model.

Finally, Nobuya Fukugawa analyses the role of Local Public Technology Centers (LPTCs) in Japan in the transfer of technology and knowledge to small and medium-sized enterprises (SMEs). Based on patent analysis, he provides evidence that LPTCs indeed function as a significant source of knowledge for SMEs in regional innovation systems and outperform universities in this respect. While demonstrating the potential of patent analysis for the quantitative evaluation of regional innovation policies, he also concludes that patent data should be complemented with more qualitative information, as most of the activity in such centres lies in technical consultation, which is hardly reflected in patents.

We thank all authors and reviewers for their engagement in preparing this special issue, and we look forward to welcoming participants to the STI 2016 Conference on “Peripheries, Frontiers and Beyond”, which will take place in Valencia from September 14 to 16, 2016.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2016

Authors and Affiliations

  • Benedetto Lepori (1)
  • Sybille Hinze (2)
  • Wolfgang Glänzel (3, 4)

  1. Interdisciplinary Institute for Data Science, Faculty of Communication Sciences, Università della Svizzera Italiana, Lugano, Switzerland
  2. German Centre for Higher Education Research and Science Studies (DZHW), Berlin, Germany
  3. Centre for R&D Monitoring (ECOOM), Department of MSI, KU Leuven, Louvain, Belgium
  4. Department of Science Policy and Scientometrics, Library of the Hungarian Academy of Sciences, Budapest, Hungary
