
Science in Transition: How Science Goes Wrong and What to Do About It


Science in Transition, which started in 2013, is a small-scale Dutch initiative that presented a systems approach, comprising analyses and suggested actions, based on experience in academia. It built on the writings of early science watchers and on more recent theoretical developments in the philosophy, history and sociology of science and in STS on the practice and politics of science. This chapter includes my personal experiences as one of the four Dutch founders of Science in Transition. I will discuss the message and its various forms of reception over the past six years by the different actors in the field, including administrators in universities, academic societies and the Ministries of Higher Education, Economic Affairs and Public Health, but also leadership in the private sector. I will report on my personal experience of how these myths and ideologies play out in the daily practice of 40 years of biomedical research: in policy and decision making in lab meetings, at departments, at grant review committees of funders, and in board rooms and the rooms of Deans, Vice Chancellors and Rectors.

The previous chapters have made clear that the ideology and ideals we are brought up with are not valid and are not practiced, despite the fact that even in 2020 they are still somehow ‘believed’ by most scientists and even by many science watchers and journalists, and are used in politically correct rhetoric and policy making by science’s leadership. In that way these ideologies and beliefs, mostly implicitly but sometimes explicitly, determine debates about the internal policy of science and about science policy in the public arena. They include the all-time classic themes: the uniqueness of science compared to any other societal activity; the ethical superiority of science and scientists based on Mertonian norms; the vocational, disinterested search for truth; autonomy; value and moral (political) neutrality; the dominance of internal epistemic values; and unpredictability with regard to impact. These ideas have influenced debates about the ideal and hegemony of natural science, the hierarchy of basic over applied science and of theoretical over technological research, and, at a higher level in academic institutions and at the funders, the widely held supremacy of STEM over SSH. This has directly determined the attitudes of scientists in interactions with peers within the field, but has also shaped the politics of science, both within science and with policy makers, with stakeholders from the public and private sector, and in interactions with popular media.

Science, it was concluded, was suboptimal because of growing problems with the quality and reproducibility of its published products, due to failing quality control at several levels. Because of too little interaction with society during the phases of agenda setting and the actual process of knowledge production, its societal impact was limited, which also relates to the lack of inclusiveness, multidisciplinarity and diversity in academia. The production of robust and significant results aimed at real-world problems is mainly secondary to academic output relevant for an internally driven incentive and reward system steering for academic career advancement at the individual level. Similarly, at the higher organizational and national level this reward system is skewed towards types of output and impact that improve positions on international ranking lists. This incentive and reward system, with its flawed use of metrics, drives a hyper-competitive social system in academia, which results in a widely felt lack of alignment and little shared value in the academic community. Empirical data, most of it from within science and academia, showing these problems in different academic disciplines, countries and continents have been published on virtually a weekly basis since 2014. These critiques focus on the practices of scholarly publishing, including Open Access and open data, and on the adverse effects of the incentive and reward system, in particular its flawed use of metrics. Images, ideologies and politics of science were exposed that insulate academia and science from society and its stakeholders, and that distort the research agenda and subsequently its societal and economic impact.

3.1 The Royal Response (1)

In the fall of 2012, there were a few high-profile academic public events related to the discovery, the year before, of a few serious fraud cases in The Netherlands in biomedicine and social psychology. The latter case was shocking and notorious for the unflinching arrogance with which it had been carried out over many years. Because of its size and impact, it became known worldwide. I was present at the meeting held in September at the Royal Academy of Arts and Sciences where Kees Schuyt, a prominent sociologist and law scholar, as chair of a committee of the Royal Academy presented the committee’s advice, which focussed on the responsible handling of research data (KNAW, 2012). The conclusions of the advice and of the meeting at the Royal Academy were that fraud and violation of the principles of integrity in research were believed to be very rare, but that this should be investigated. The feeling was that education of researchers about integrity should be promoted and enabled, as should the technically proper handling of data in the institutions. Very cautiously, the idea was mentioned of obliging researchers to make the data supporting claims in a journal paper available, to improve peer review. Finally, it was concluded that informal peer pressure in the community and, in the later stages, more formal peer review should be improved. Despite a classical reference to the ‘leading values of science which are distinct from any other social activity’ and cautious conclusions, the committee did pose a series of critical questions that they believed should not be evaded. They suggested that the social system in which individual researchers do their work might allow or even invite misconduct. In that context they mention the incentive and reward system with its academic hierarchies and publication pressure (p. 60). The panel, with members of the Academy and the Young Academy, largely agreed.
Of interest was the mention of some examples of serious fraud in physics (among others the ‘Schön case’). In response, a very senior Academy member from the natural sciences remarked that of course this issue of quality is typical for ‘the soft sciences and biomedicine, but not for us in the hard sciences, because in physics, through our experimentation, we ask a question of nature and nature gives a clear answer, so physics is beyond fraud’. The chairman, a theology scholar who early in his career had become a professional university administrator, and who knew about the problem of foundations, decided to let that one go. At the conclusion of the debate, I made a short critical remark from the floor: something is really wrong with science if we focus on the rare fraud cases but look away from the growing evidence of a large ‘grey zone’ of shoddy science, also in disciplines other than biomedicine and social psychology. This grey zone is not populated by fraudsters or bad people who are to blame, but by honest researchers trying to survive in our crazy academic system driven by perverse incentives and rewards. This, I thought, should be acknowledged and discussed. What I had in mind then was in fact to become one of the cornerstones of Science in Transition and of this book. The chairman’s reply was ‘that may be so, but we cannot change a whole system’, and then there were drinks, gossip and appetizers (typically Dutch ‘bitterballen’) in the foyer.

Kees Schuyt was interviewed in a national newspaper and, to my relief, was much more open about the likely systemic cause of the problems. In October, at a meeting held in Spui 25, a University of Amsterdam open podium/debate centre, Huub Dijstelbloem took part in a panel discussion with Kees Schuyt and Andre Knottnerus, an authority in the Dutch health science and governmental science advice system. The debate was much more open and critical and did not evade the problems of the system.

On November 28, 2012, at the Royal Academy again, a committee chaired by Pim Levelt, a former President of the Academy, presented its investigation of the fraud and misconduct of Diederik Stapel. This case, together with a case at Erasmus Medical Centre, had dominated the debate about trust in science in the country since their discovery in September and December 2011. The committee revealed the technical and methodological aspects of the case in great detail. In their final comments they state that ‘Committees that have evaluated the research of social psychology have not recognized some of the signals that the committee in this report describes. They simply relied on peer review, both with respect to methodology and contribution to theory. Another issue in this context is to what degree these evaluation committees are instrumental in sustaining the assumed undue publication pressure and connected mores and behaviours. This specifically concerns requirements on numbers of publications, the order of authors, responsibilities of co-authors and repeated publication of similar results.’ (translation FM).

The Science in Transition Team

A year later, in November 2013, the public start of Science in Transition took place at the same prestigious venue of the Royal Academy of Arts and Sciences, on one of the canals in the centre of Amsterdam. The Science in Transition team had started its work in January 2013. Huub Dijstelbloem, whom I already mentioned, had in the years before been very active in national debates about incentives and rewards, focussed on inclusive indicators and methods for the evaluation of the impact of research. He also studied public participation and policy making, which is discussed in Chap. 5. The other three members of the group that started Science in Transition were Jerome Ravetz and professors Frank Huisman and Wijnand Mijnhardt. The five of us did not really know each other, but we were brought together by our shared thinking about science.

Jerome Ravetz (1929), Jerry, as we call him, replied promptly and enthusiastically, full of energy and looking for action, when in the fall of 2012 I had sent him my little book about science, Science 3.0: Real Science, Significant Knowledge (Miedema, 2012). I did not know him, but knew his 1971 book (see Chap. 2). In 1993, Ravetz and a small group of colleagues had published a paper describing another way of doing science, explicitly aimed at policy issues of high risk and high uncertainty, for which science is critical but the time for deliberation is limited. They coined the name Post-Normal Science for an approach set in an integrated and democratized process in which all relevant knowledge, social values and relevant publics are fully acknowledged and participate (Funtowicz & Ravetz, 1993). In the months that followed, Jerry received a Fellowship of the Descartes Centre of Utrecht University, which brought him and his wife frequently to Utrecht. On his first visit, to Amsterdam on January 4, 2013, we talked the whole day and part of the evening about his work, his thoughts about science in 2013 and the actions to be taken.

[Figure: Frank Huisman, Huub Dijstelbloem and Jerome Ravetz. Amsterdam, February 2013]

Frank Huisman (1956) is at Maastricht University, in the interdisciplinary group for Science, Technology and Society Studies (MUSTS), and has since 2006 been full professor of the History of Medicine at UMC Utrecht. His interest is the history (and sociology) of modern medicine. Together, in 2009, we started a selective advanced PhD course on the philosophy and sociology of science, called This Thing Called Science. The course proved an immediate success: 120 PhD candidates applied for a course that had room for only 45, and students declared it the best course offered by the Graduate School of Life Sciences. There was clearly a great need among PhD students to learn about the history, philosophy, ethics and politics of science, and to be socialized into the biomedical sciences in a different way. We felt very happy to be able to create this new awareness among a new generation of biomedical researchers.

Frank Huisman introduced Wijnand Mijnhardt (1950) to me at the end of November.


Wijnand Mijnhardt is an internationally well-known historian of culture and science, who at that time served as Chair of Comparative History of the Sciences and the Humanities. He is the founder and past director of the Descartes Centre for the History and Philosophy of the Sciences at Utrecht University. I told Wijnand I was honoured that he came to my room, and I pulled his leg, saying that ‘his Centre, to the best of my knowledge, preferably studied scientists and scholars who had passed away a long time ago. I understand’, I said, ‘that this nicely avoids the political issues that in our time trouble academia and society, but my goals are quite the opposite. Our thinking about science should, in the good tradition of pragmatism, lead to action in the real world to improve the academic lives of our stakeholders: graduates, post-docs, students and professors in our universities and those in society alike’. Wijnand appreciated the humour and loved the idea for the project. In the following years he eloquently brought his strong opinions, with colourful flavours, to the table in the context of Science in Transition.

[Figure: Sarah de Rijcke. Photo by Bart van Overbeeke]

Regarding the composition of the team, we were criticized, and had to admit that we had a problem: we were five, and later four, older white males who had each done well in the system. This was partly corrected very soon, when Sarah de Rijcke, a well-known researcher in STS at CWTS Leiden and an expert on all the issues Science in Transition was addressing, joined the team. We did not have graduate students or early-career scientists in the team. Our best defence was that, given that changing a social system means going against the elites and the most powerful in that very system, we were not vulnerable to the classical framing of ‘being a couple of losers complaining about the system in which they had failed’. Many who question the mores and rules of the system are indeed told: ‘If you cannot stand the heat, get out of the kitchen’.

3.1.1 Science in Transition

We started from the optimistic, some thought naïve, perspective that it is possible to improve science. Our analysis of the problem had a broad scope, ranging from quality issues, fraud, and poorly conducted or irrelevant science to agenda setting and responsiveness to issues in society, and the assumptions, ideologies and hierarchies that distorted the system, both internally in academia and in its interaction with society. We wanted at all costs to avoid the well-known type of general academic discussion about the problems of ‘the university’ and the easy blaming of ‘incompetent’ administrators, lazy students or neoliberal economics. Angry complaining and blaming without realistic directions for improvement would stifle our initiative, as has happened to many initiatives before. For all of us it was clear that these problems had to be approached in the larger context of the socioeconomics of the institutional organization of science. From the start it was clear that we needed to discuss more specifically the contribution of the incentive and reward system. The persistence of specific problems, in our view, seemed related to the system of research evaluation in institutions and at funders as it had gradually developed since the 1980s. Our focus was very much on research, but within the incentive and reward structures of academia this is tied to the poor appreciation of teaching and teaching careers, so this was discussed as well.

None of these issues was new on its own, but we believed that an integral approach, treating the issues as parts of one social system, would be quite unique. Many science writers and philosophers had discussed their favourite views and worries, but a consistent systems approach to science was, to our knowledge, very rare, if available at all. Even without such an explicit awareness, we felt confident as a team that we had enough complementary experience in science and academia, in both theory and practice, to take on this ambitious project. We decided that we first had to arrive at a proper analysis and comprehensive picture. We agreed that, going from there, long-lasting improvements would require concerted actions of the community. This involved systemic institutional change in which academic leadership at universities, especially Rectors, Deans, Royal Academies and prominent scholars, as well as public and private funders, should be engaged and committed.

Three workshops were held, on Image and Trust, Quality and Corruption, and Communication and Democracy, in April, May and June 2013. In addition to the initiators, about ten scientists from the Netherlands were invited to participate in each of the workshops. Participants were hand-picked by us, known for their expertise, critical thinking and outspoken views about science. (See the website for the lists of participants and the workshop presentations.) Based on the results of these three workshops, a draft position paper was produced by the initiators in the summer of 2013, mainly through exchanges via email.

3.2 The Royal Response (2)

We were not alone in this endeavour. In the Royal Academy, in that same period, a committee was working on a report on trust in science. With reference to the recent high-profile fraud cases, the Ministry of Education, Culture and Science (OCW) had in January 2012 formally asked the Royal Academy to advise the Ministry regarding trust in science. The request specifically asked for advice on possible actions by the main actors in the domain of science, including research institutes, funders and government, that could help improve integrity and trust. This committee started in March 2012 and published its advice in May 2013. The report was presented by the committee chair, Keimpe Algra, a humanities scholar who was to become Dean of the Faculty of Humanities of Utrecht University the next year (KNAW, 2013). In response to the questions from the Ministry, the committee had taken a broad approach, explicitly including the wider system and community of science. They concluded that the Mertonian rules were under pressure because of changes in the scientific system over the past 30 years, with consequences for the practice of research. This agrees with the analysis of the legacy of Merton presented in Chap. 2. From this analysis the committee concluded that, with respect to integrity and quality, the individual has a duty to show ‘honesty about the research goals and intentions’. At the same time, and with even more emphasis, the institutions, universities and funders were urged to take responsibility for the culture of science where it did not promote, or even obstructed, proper behaviour and the integrity of researchers. Several times the committee suggested that it would be a good idea to have institutional accreditation for research, to help institutions set up and uphold the relevant practical policies. The example of quality assurance policies in health care was mentioned.
Although the committee was cautious with respect to top-down programming of research, they made it clear that not only should research be done right, but the right research should also be done, which brought ‘agenda setting’ and external values into the discussion as a novel dimension. It was proposed to invest in more practical awareness and social control, in the form of positive peer pressure, with important roles for the research communities in universities and research institutes. For this, they said, an open and safe academic culture was required.

The committee, in contrast to previous reports, discussed the problematic effects of external forces on the practice of academia. This related to the allocation of funds and to collaborations with private commercial partners. The increasing influence of short-term funding schemes on academic life, even for tenured staff, the focus on ‘sexy topics and hypes’ and the researcher’s temptation to promise unrealistic impact and novelty were mentioned as distortions of the dynamics of academia. The induced bias against replication and negative results, also at the journals, works against long-term, more difficult research. The committee states that this is reflected in national research evaluations, which reinforce these practices and the focus on numbers of publications. In concluding with constructive suggestions, however, the committee did not really rethink the issue of cause and effect regarding the problems discussed. They were nearly there, but did not take the logical next step of concluding, or at least suggesting, that the institutional organization with its incentive and reward system, and specifically its indicators for excellence, critical for decisions on funding and career advancement, might provoke strategic behaviours that caused or at least promoted many of these interdependent problems.

This advice, ‘Trust in Science’, was discussed at a meeting at the Royal Academy in September 2013, where I was invited to give a talk and presented the Science in Transition Position Paper and ‘A Toolbox for Science in Transition’ to reassure the audience, mainly early-career scientists, that national and international change was possible (Supplement 1).

3.3 Science in Transition Position Paper, October 2013

A final version of the Position Paper, incorporating the comments we had received thus far, was published on the website on October 17, 2013.

The Position Paper is composed of the chapters: Images of Science, Trust, Quality, Reliability and Corruption, Communication, Democracy and Policy, University and Education, and a brief Conclusion paragraph.

3.4 Science in Transition: A Systems Approach

Science in Transition, as an initiative and movement to improve the impact of science and research, entered a field where many had gone before. We were heavily inspired and influenced by many different scientists and scholars who had been writing about science and society, as the reference list of the Position Paper duly reflects. These writings and actions go back to the 1970s and deal with the ideology of science and its Legend, the sociology and social organization of science, and the problems of science in and with society. In the years before 2012, major initiatives with respect to quality and reproducibility in research had started, in reaction to increasing evidence from empirical research showing poor quality and unexpectedly low reproducibility in biomedicine and psychology, but also in other fields of research (Altman, 1994; Begley & Ellis, 2012; Ioannidis, 2005; Ioannidis et al., 2012; Moffitt et al., 2011; Moore et al., 2017; Nosek et al., 2012; Prinz et al., 2011). I will not discuss the history of this meta-science work on poor-quality research and replication, as that has been done by experts before. Our interest in the context of Science in Transition was to understand why poor research is being done and published. It had not decreased even as awareness of it grew, but had apparently been increasing rapidly in recent years. Many studies had already shown the relation between the use of bibliometric indicators in research evaluation, and thus in the incentive and reward system, and the strategic behaviour of researchers (Hammarfelt & de Rijcke, 2014; Moore et al., 2017; Wilsdon, 2016; Wouters, 1999, 2014; Wouters et al., 2015).

It is this problem that is addressed, if more implicitly, by the San Francisco Declaration on Research Assessment, known by its acronym DORA, which started in December 2012:

There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties. To address this issue, a group of editors and publishers of scholarly journals met during the Annual Meeting of The American Society for Cell Biology (ASCB) in San Francisco, CA, on December 16, 2012. The group developed a set of recommendations, referred to as the San Francisco Declaration on Research Assessment.

The debate about the quality and impact of research in biomedicine reached a novel international level in January 2014 with a series of articles under the heading Research: Increasing Value, Reducing Waste in the Lancet on January 8, 2014. This was the result of an initiative of a group of very established biomedical researchers, who in some cases had already focussed for many years on quality issues related to methodology, design and problem choice in clinical studies in humans, but also in animal studies. Internationally, the best known are John Ioannidis (METRICS, Stanford University), Doug Altman (Oxford University), Iain Chalmers (Oxford University) and Paul Glasziou (Bond University); the initiative is called the REWARD Alliance.

In the same month, a paper was published in Nature by the NIH leadership, Francis Collins and Lawrence Tabak, announcing the NIH reproducibility initiative. This was a reaction to the seminal study by Begley and Ellis, published two years before, also in Nature, on the poor reproducibility of pre-clinical biomedical research published in Nature, Science and Cell, and to an earlier study by Prinz et al. The debate was boosted in the spring of 2014 by a paper in PNAS with the ominous title Rescuing US biomedical research from its systemic flaws, written by very high-profile authors from the US biomedical science community. The best known include Bruce Alberts, a former long-serving editor of Science; Shirley Tilghman, a former president of Princeton; and Harold Varmus, a former Director of NIH, former president of Memorial Sloan Kettering Cancer Center, then director of the National Cancer Institute of NIH and, last but not least, winner of the 1989 Nobel Prize.


3.5 How Scientists Get Credit

This discussion about the quality of reporting and the actions to be taken to improve science had, in line with the initiators’ professional backgrounds, for years mostly focussed on methodology, statistics and trial design. However, with these papers in various so-called prestigious ‘high impact factor’ journals, which attracted quite some international attention, the discourse broadened to take into account another critical aspect. To the best of my knowledge, for the first time the distorting systemic effects of research evaluations were explicitly mentioned in public debates and discussions in academic circles. Indeed, that is the most dangerous of the ‘elephants in the room’ of science and academia, one that almost all writers of papers, policy reports and Royal Academy advice about trust and quality had evaded.

We in Science in Transition were convinced that without including this crucial part of the system in our analyses and actions, little progress could be expected. It was a cornerstone of the Position Paper and we argued strongly for it, although the critique was that it would be impossible to change because too many different players with divergent and contrasting interests are involved. Most of the problems pointed out by Science in Transition and by the national and international initiatives described above are at least maintained, or even institutionalized, by the incentive and reward system. Since 2013, ‘metrics’, the use or in fact abuse of bibliometric indicators, has been a central issue. We had the fortune of having Paul Wouters, an internationally distinguished researcher in the field of bibliometrics, in our team; he had been appointed Director of the Centre for Science and Technology Studies (CWTS) at Leiden University in 2012. Soon, as remarked before, Sarah de Rijcke, also affiliated with CWTS, joined the team.

Use and Abuse of Metrics

The five initiators of Science in Transition have now been introduced, and I will introduce some of our fellow travellers as the narrative progresses. From the summary of Paul Wouters’ contribution to the second workshop it is clear that his expertise and broad experience with both the theory and practice of bibliometrics and with the social organization of science were of utmost importance. In the public debate, Paul was very visible and strongly connected with the Incentives and Rewards theme of Science in Transition. Paul has had a very interesting and colourful career. He holds a Masters in biochemistry (Free University of Amsterdam, 1977) and a PhD in science and technology studies (University of Amsterdam, 1999). His PhD thesis, The Citation Culture (1999), is on the history of the Science Citation Index and of scientometrics, and on the way the criteria of scientific quality and relevance have been changed by the use of performance indicators. In between these degrees he worked as a science journalist and as editor-in-chief of a daily newspaper, De Waarheid. This was the daily newspaper of the Dutch Communist Party (CPN), which ceased publication in 1990 when the CPN merged into the Green Left political party. From 2010 to 2019 he was Director of the Centre for Science and Technology Studies (CWTS). From 2016 on, Wouters served on several EU expert groups set up by DG Research and Innovation to advise on the transition to Open Science. Since January 2019, Paul Wouters has been Dean of the Faculty of Social Sciences, Leiden University. (Source and citations: the Leiden University website.)

In the second workshop of Science in Transition, held in June 2013, Paul Wouters gave a seminar largely based on his article ‘The Citation: From Culture to Infrastructure’, published the following year (Wouters, 2014). Wouters presented an overview of his own work and of major studies by other bibliometricians on the different effects of research evaluation, and specifically on the use and abuse of bibliometric indicators. As virtually every debate about incentives and rewards is still dominated by the use and abuse of metrics, this became a cornerstone of the analyses we made in the context of Science in Transition.

In the 1960s, at the advent of bibliometrics, its focus was on studying the dynamics of the different fields of scientific research, to help understand in near real time where science and scientists were going: what scientists were working on, what the big questions in the different fields were and, also of interest, what they did not (yet) study. Dynamics was measured in terms of changing numbers of papers and authors, and thus of researchers and funding. Citations and citation patterns were tracked to discover networks of researchers working on related problems, and to gauge the relative importance of specific research questions based on citations to that work. Paul Wouters has studied the history of the Science Citation Index (SCI), which was developed and launched by Eugene Garfield in the early 1960s. I remember them from my first visits to NIH, and readers of my age will remember those enormous yellow SCI books in the library. They allowed you to track who had recently cited your papers and which papers of colleagues and competitors were cited or not. It took some time for the SCI to be used by a larger part of the community beyond bibliometricians. ‘This use increased markedly’, as Wouters wrote in his 2017 obituary of Garfield, ‘after the Journal Impact Factor was marketed in the SCI Journal Citation Reports starting in 1975’. Garfield, like many other bibliometricians, ‘was uncomfortable with their misuse as performance indicators’ (Wouters, 2017).
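For readers unfamiliar with how the Journal Impact Factor mentioned above is computed: the standard two-year JIF divides the citations a journal receives in a given year to its items from the previous two years by the number of citable items it published in those two years. A minimal sketch (the function name and the example figures are illustrative, not taken from any real journal):

```python
def journal_impact_factor(citations_this_year: int,
                          citable_items_prev_two_years: int) -> float:
    """Standard two-year JIF: citations received this year to items
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 1200 citations in 2014 to its 2012-2013 output,
# which comprised 400 citable items.
print(journal_impact_factor(1200, 400))  # 3.0
```

The simplicity of the formula is part of the problem the bibliometricians identified: a single journal-level average says little about any individual paper or author, yet it is routinely read that way.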

The bibliometricians were not naïve; they did not believe that science was guided by a neutral 'invisible hand' or by Polanyi's autonomous 'Republic of Science'. The use of 'their' indicators in the evaluation of research, of research institutions, and even of individual scientists as performance indicators, was an unwanted and mostly incorrect use of their work. From the early papers on this issue one sometimes gets the impression that they had not fully anticipated this cross-over between bibliometrics and the sociology of science, and later research management and research governance. In that cross-over, the indicators were used not to understand the dynamics of research by looking back at its recent past, but to steer and manage the direction and the agenda of research in a forward-looking approach (Whitley & Gläser, 2007). This had wide-ranging effects outside academia. Wouters argued that there is convincing evidence that, worldwide, research evaluations and the indicators they employ have since the 1980s to a great extent shaped the research agenda ('problem choice') at universities and funders. The greatest impact of these performance assessments, as will be discussed below, is their direct effect on the allocation of research funds at the EU, national and university levels. Until 2000 this was relatively rare, but Wouters cites Diana Hicks on this point: 'By late 2010, 14 countries had adopted a system in which research funding is explicitly determined by research performance' (Hicks, 2012). Indirect effects of performance indicators have also been described, for instance for the Standard Evaluation Protocol (SEP) in the Netherlands. Under the SEP, evaluations comparing research done in similar fields across all universities have been carried out, and since 1990 these have increasingly been based on quantitative metrics.
In this national research evaluation, funding is not directly distributed on the basis of such rankings, but the effects on reputation, esteem and standing in the field are well recognized and anticipated (Van der Meulen, 1997). It has thus become common practice to show in resumes a publication list with JIFs and the most current h-index. The latter has since its launch in 2005 (Hirsch, 2005) seen very rapid, worldwide uptake, which as Wouters said 'makes the h-index itself an indicator of indicator proliferation'.
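Hirsch's definition is easy to state: a researcher has index h if h of their papers have at least h citations each. A minimal sketch, assuming nothing more than a plain list of per-paper citation counts (the example numbers are illustrative):

```python
def h_index(citations):
    """h = the largest h such that h papers have >= h citations each
    (Hirsch, 2005). Sort descending and find the last rank at which
    the citation count still meets or exceeds the rank."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:
            h = rank
        else:
            break
    return h

# Six papers; four of them have at least 4 citations, so h = 4.
print(h_index([25, 8, 5, 4, 3, 1]))  # 4
```

Part of the index's appeal, and of its proliferation, is exactly this simplicity: one integer that ignores venue, field, career age and authorship position alike.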

Even more important, in a natural human reflex, researchers anticipate the use of these metrics when their work will be judged, and start to show various kinds of strategic behaviour. It has been shown that when evaluations focus on the number of articles and on the JIF, that is, on the venue where the articles are published, this directly affects the type of output researchers produce. If books, articles in professional journals or publications in the national language are not valued and do not score in the system, researchers will put far less effort in that direction, whatever intrinsic interest or possible impact such output may have for specific fields (Butler, 2007; Laudel & Glaser, 2006). Even more important is the effect on the choice of research topics and on engaging in multidisciplinary work. As will be argued and demonstrated in later chapters, the use of metrics can have detrimental effects on more applied types of research that have large and urgent societal impact but do not bring credit points to the researchers, because the results do not get published in journals with a high JIF. Active researchers, dependent on grant money, recognize these survival mechanisms immediately, and most of them, as we shall discuss later, obviously find this very frustrating. When talking to university administrators, board members of academic medical centres and directors of funding agencies, I have many times heard them say in all honesty that this description of the behaviour of researchers must be a gross exaggeration. One may safely assume, they said, that the behaviour and choices of our highly educated scientists and staff are not likely to be so simply influenced by these metrics and indicators. Having served on committees with scientists that evaluate resumes for academic promotions or grant proposals, I know this behaviour is explicitly visible and audible, in the committee members, in the candidates and in the materials under review.

In the final paragraph of the 2014 paper, Wouters says that it remains to be seen how these behaviours will develop in the future and whether they will persist. As we know now, they did persist, and the metrics are used to rank universities worldwide (Hazelkorn, 2011). The topic of the perverse effects of the abuse of metrics, and of how they invite or even enforce strategic behaviour by scientists, was and still is hot. This is sad but not unexpected, given the analyses in this book that show how hard it is to change the indicators in order to make the system more inclusive, qualitative and fair.

Wouters, as Director of CWTS, was prominently interviewed by NRC, Trouw and de Volkskrant in the months after the first symposium. In May 2014, together with John Ioannidis, we were interviewed for an article about incentives and rewards in 'Medisch Contact', a Dutch weekly widely read by the medical profession, published by The Royal Dutch Medical Association (RDMA).

At CWTS, Paul Wouters and colleagues had launched a research programme in 2012 that took the role of metrics in science head-on. Paul Wouters and Sarah de Rijcke were also members of the team that during 2014 and 2015 wrote a thorough 'Independent Review of the Role of Metrics in Research Assessment and Management' for the UK Research Excellence Framework, with the title 'The Metric Tide' (Wilsdon, 2016). This report came, on the basis of broad and detailed research, to conclusions quite similar to those of Science in Transition, but with an additional strong focus on scholarly publishing and bibliometrics. In the accompanying literature review, new empirical studies are cited that paint a detailed picture of how metrics are not only being taken up in research management and decision-making, but also feed into quite run-of-the-mill choices scientists make on the shop floor: metrics-infused decisions that structurally influence the terms, conditions and content of their research (Rijcke et al., 2015; Wouters et al., 2015). Around the same time, Paul Wouters and Sarah de Rijcke, then both at CWTS Leiden, together with three colleagues, among them Diana Hicks, published The Leiden Manifesto (Hicks et al., 2015). It calls for a different way to evaluate research, based on an inclusive set of ten principles (Supplement 3).

Entering the Field (1)

I did bench research on bacterial vaccines during my military conscription, at The Netherlands Institutes of Health (RIVM), just outside Utrecht. That is how I entered the field of science as a researcher, and I was pretty much immediately introduced to the credit cycle. Trained in immunology in Groningen, at RIVM I joined a small research team that made a new type of experimental (so-called conjugated) vaccine against bacteria (Neisseria meningitidis) that cause disease among children and young adults. The compounds were tested for induction of a protective immune response in mice. In that year, 1980, we quickly did a lot of experiments with nice positive results that were presented at meetings and published in the years after I had already left, with me as a second or third author. The senior investigator had set up a very productive and original collaboration with a high-profile pyrolysis mass-spectrometry group at AMOLF in Amsterdam, then 'the heaven of physics' in the Netherlands. A grant was obtained for me to pursue our work as a PhD student. AMOLF wanted to do more biophysical life science research, so in the fall of 1980 I did a job interview with the director. I was quite nervous, since the director was professor Jaap Kistemaker, an impressive man, well known for having developed the principle and technology of uranium enrichment by ultracentrifugation. I was offered the job but chose not to join AMOLF, which I had to tell Kistemaker over the phone. I preferred a job offer, per January 1981, for a PhD position at CLB in Amsterdam. CLB was then regarded as one of the finest immunology institutes in the country. Kees Melief, an MD PhD then in his early forties, was heading the unit. He had returned from the US after a very productive stay in Boston. He had published well and was considered one of the new generation of biomedical scientists with a strong vision and a modern view of science and research.
Melief brought with him the American research culture and knew how to lead his team to the top of the field. We were a modern immunology lab where strategic choices about what to study were made consciously. We closely followed the international fronts of the field and the developments at the funders. In those days funding for biomedicine was rapidly increasing on the promise of new insights from molecular biology. The lab culture was driven by results and publications, was already aimed at the top journals, and we played the journal impact factor game. We were conscious of national competition and of competitors abroad, and thus highly competitive. New technologies like the generation of monoclonal antibodies, molecular biology, oncogenes, and novel methods in molecular virology and immunology were immediately incorporated; experiments were done in the mouse and with blood samples obtained from patients.

figure e

Figure adapted from (Hessels et al., 2011)

3.6 The Credibility Cycle: Opening Pandora’s Box!

Despite being heavily criticized by reports like The Metric Tide and The Leiden Manifesto, and despite high-profile and widely endorsed actions from within the community of science, such as DORA, the abuse of metrics clearly is still common practice around the globe. To understand this persistence, we have to understand the role of incentives and rewards, and the critical role of metrics therein, in the institutional and social organization of science and academia. As Paul Wouters argued, since the 1950s the sociology and management literature on the institutionalization and organization of science, in academia, universities and the various funding organisations, has mentioned incentives and rewards as an important component of the governance of the community of science. As discussed in Chap. 2, that literature was too respectful of science, and even more of scientists, and was normative. In the early days, from Merton to Popper and Polanyi, science was believed to be at the discretion of the community of science; interference was not appreciated. The reward system was part of the 'Black Box of Science', not to be questioned by outsiders, who anyway were believed not to understand science at all. This Black Box was part of the magic of the 'science knows best' narrative of Vannevar Bush after the Second World War. Especially basic science, which also goes by the names of 'blue skies', 'curiosity-driven' or 'free science', when left alone, of course well supported, so it goes, will have huge returns for society, for the military and the economy. As discussed in Chap. 1, this was the power of the marketing and sales of basic natural science between 1945 and 1960, and the basis for the distinction between the natural and biomedical sciences on the one hand and the humanities and the social sciences on the other.

In the late 1950s, Peter Winch, in his The Idea of a Social Science, was the first to point out that research in the social sciences and the humanities is also science, albeit of a different form than natural science, and should not be judged by the frame of 'the scientific method' of the natural sciences (Winch, 1958). Winch, like many others, apparently was still under the impression, or at least left it open, that the natural sciences were indeed successful because they had a unique formal, well-founded and infallible method. This is, as I discussed in Chap. 2, not strange in the positivist context of those days. The 'successes of the natural sciences' were the main reason why it was in those days, and still is, mind-boggling even for most philosophers to admit that even in the natural and biomedical sciences there is no general, validated, formal, universal and timeless method. I already pointed out that Ernst Nagel, in his influential textbook of 1961, discussed this problem in general terms, as well as other methods of inquiry appropriate for the social sciences (Nagel, 1961).

As we have seen, when philosophers, historians and sociologists after 1970 started to study science as practice, they eventually also came closer to the social system and to the Black Box that hid the reward system from the eyes of outsiders. Stephen Toulmin (1972), John Ziman (1978) and a few other authors, after Winch, explicitly expressed a conceptual critique of the reward system of science and of the indicators used in research evaluations when comparing academic disciplines. This was at that time not yet about the type of metrics, but, as discussed in Chap. 2, about the myth of the method of the natural sciences in contrast to the hermeneutics (interpretative methods) and reasoning ('the vague methods') of the humanities and the social sciences. As a consequence of this belief in the supremacy of the method of the natural sciences, these authors concluded, the social sciences and humanities were systematically undervalued. They were getting a bad deal in academia. Toulmin, in his Human Understanding (Toulmin, 1972), was one of the first to take this insight to the 'corridors of power' of academia, to firmly attack the positivist Cartesian dominance of the natural sciences, and to point to it as the cause of this unequal fight between the disciplines in academia. He believed this was the poverty of academia, a major problem for the enterprise of science and scholarship in society. Ziman in his early work criticizes the ideology of the Legend and the natural sciences, but struggles with the idea that SSH have their own field of inquiry with proven methods and huge impact. It is of interest to note, as I did in Chap. 1, that only 13 years earlier C.P. Snow had criticized the humanities for their snobbery regarding the natural sciences (Snow, 1993).

We see the connection between the Legend, with its philosophy of science, and the way science became organized and governed after 1945. From the 1960s on, but definitely in the past 40 years, a multitude of complex, often antagonistic interactions between society, academia, universities and knowledge institutes has shaped science in all possible meanings of the word 'science'. In these interactions, communications, debates and conflicts, contracts and agreements, serious power relations are at play that shape science and the growth of knowledge at many levels. This involves science as the national and global system of public knowledge production, and science as the total of disciplines organized in the structures of academia, including the natural sciences and the social sciences and humanities (Guston, 2000; Rip, 1994; Rip & van der Meulen, 1996; Whitley, 2000).

3.7 Distinction

At the institutional level, virtually the whole of academia became organized as a social system that is most adequately described by Bourdieu's concept of 'a field' (Bourdieu, 1975, 2004). It is a truly social game of stratification, elites and distinction, based on indicators of professional quality and excellence but also on habitus and subtle social rules. We have seen in Chap. 1 that the idea of 'pure' and 'applied' science has been, and still is, an ideological concept that is called upon by both sides in debates about science policy. Bourdieu, in his seminal book 'Distinction', published in French in 1979 and in English in 1984, provides amazing insight into and understanding of the different cultural, political and social tastes and preferences of the two main social classes (Bourdieu, 2010). Based on empirical sociological research performed in France in the 1960s, this is primarily discussed for tastes in the arts: painting, literature, furniture and music. The ideas of 'pure', 'abstract', 'universal', 'disinterested' and 'distance to necessity' are indicators of the distinction of 'high culture'. It is clear that this 'distance to necessity', which provides the economic freedom for useless and free thinking, is a privilege of the middle and upper classes. Bourdieu shows how members born into families of these economically, socially and culturally distinct classes fare in education and academia. Building on these insights and on the concepts of habitus and field, a host of research has shown that this is not typically French. In Chap. 1, I already discussed the influence of class distinction in England on the preference for pure over applied science, which was criticised 60 years ago by C.P. Snow and Peter Medawar (Medawar, 1982; Snow, 1993).

This schism runs deep, historically and philosophically. Plato, of course, is mentioned first by Bourdieu as a source (p47), but in a postscript (p487–502) this distinction between 'pure' and 'vulgar' is taken to philosophy and academia, with many citations from Kant's Critique of Judgement. In classical Greek natural philosophy, the indicators of high culture are the opposite of the charming, the easy (pleasure and listening), the facile, bodily pleasure and the common (as in common knowledge). 'Pure' thus suggests more difficult, requiring more perseverance, compared to 'applied', which is crude, easy and yields results readily obtained. The 'taste of reflection' is opposed to 'the taste of the senses'. I like to use 'high church' versus 'low church' to designate this distinction.

Five years after Distinction, Bourdieu published Homo Academicus, in which he studied how citizens born into different social classes achieve in their respectively preferred educational trajectories leading to academia, with distinct preferences for specific faculties and for jobs in and outside academia (Bourdieu, 1988). Finally, Stokes presented in his Pasteur's Quadrant a critical survey of the idea of pure and applied science in relation to technological innovation (Stokes, 1997). Stokes discussed how, since the times of classical Greek philosophy, philosophy by definition ought to be 'pure' and not deal with mundane and real-world problems. He cites A.C. Crombie saying 'it remained characteristic of Greek scientific thought to be interested primarily in knowledge and understanding and only very secondarily with practical usefulness' (p29). Stokes shows that this idea of pure science and research survived, next to the rise of technology and applied science in the nineteenth century following Bacon, with sharp ideological and organizational separations in the system, mainly in France and Germany.

The 'pure' and 'applied' distinction, like the schism between the 'hard' and the 'soft' sciences, has been adopted world-wide to a great extent and is very much alive within the natural sciences and biomedical research, but also within the social sciences and the humanities, and has in the past 40 years in addition been institutionalized by the corresponding metrics. It still comes with the whole connotation of professional scientific, but also political and cultural, distinctions of 'high' and 'low church' and, as described by Bourdieu, cannot be underestimated as part of the power games of the academic field. If a scientist explains that he or she does fundamental or basic science, this implicitly but really means to say that he or she belongs, within his or her field, to the class of scientists with the highest reputation and standing. During the Covid-19 crisis, experienced scientists from all the different academic disciplines spontaneously started research in multidisciplinary teams to fight the virus and its public health, social and economic crises. Virtually all of the scientists whom we saw in the media and who did the work had spent most of their professional lives doing research on, for instance, biology, epidemiology or mathematical modelling in the applied context of infectious diseases. Still, scientists from the 'hard' and 'pure' sciences argued that COVID-19 had demonstrated once again that it was fundamental science that made the major contributions to our dealing with the crisis, and that basic science should receive increased funding. Mind you, in most cases 'basic science' in this type of political statement refers to the basic natural sciences.

Advancing in the Field (2)

We learned 'science the modern way' by doing. We learned how to write, how to present and how to do our networking at meetings. We learned by watching how Melief organized the lab, how he was critical regarding novelty, rigor and quality, how he played the game of networking and publishing, how he dealt with peer review, and how he wrote his grants. In the days before the internet, we combined meetings in the US with visits to relevant labs to present our work. My first roundtrip was in December 1982, with visits to Mount Sinai NY, NIH/NCI at Bethesda, Stanford, a cellular immunology meeting at Asilomar near Monterey, and a laboratory at UCSF. Melief showed us how to move and discuss, pointed out the competition, criticized bad talks and introduced us to famous colleagues.

  • figure f

    Melief Lab retreat Spring 1981

  • figure g

    At Asilomar, December 1982

Grants, we learned, have to deal with the short cycle and be risk-avoiding. You should pick problems that are considered relevant but are not too complex or too difficult. If the grant is received, after the typical 4 years the grant is running you must have something to show in order to be able to secure new grants. 'Something to show?' 'Yes, at least three accepted papers in good journals.' 'In four years???' If the work takes longer, this may not allow for these papers, and then you have failed and will not be funded anymore. Career over! I often close this part of my talk a bit ironically: 'For him and me it worked well. I was first author, Melief last author; he moved on to his next job and became a professor in 1986. I stayed behind, became a Fellow of the Royal Society, wrote my own grants and started my own lab on HIV/aids, was last author on the papers and became a professor in 1996. Science is as simple as that.' Of course, Melief's research style was not unique, even in 1981. True, he was an early adopter of the way biomedical research was to be done after 'the molecular turn'. This was the real Science in Action (Latour, 1987) that Latour described, which in those days I read straight from the press. I must confess, I loved science from the very start. Some of the team and the department did not and still do not like it at all, as is also described by Latour (p155). They hated the need for networking and seeking allies, having to listen to the slick presentations of sometimes too-weak data by competitive group leaders at meetings, the discussions with peer reviewers riding their hobby horses, and other aspects of marketing and sales techniques. In their view this was embarrassing and even pathetic behaviour, more fit for short-term politics, and surely not appropriate for the solid research they were doing at the bench, which had attracted them to a career in science.

In the power struggle to enter a field and to move upwards within it, indicators and criteria for excellence are employed within science not by voting or a democratic process, but by colleagues (peers) in committees, advisory boards and promotion committees populated by the elites of the various academic disciplines at any given time (Bourdieu, 2004; Polanyi, 1962). This is also how professional credit, reputation, academic positions and, last but not least, financial credit, that is research funds, are distributed. This concept of a field and its credibility cycle was taken from the work of Bourdieu and visually depicted by Latour and Woolgar in their seminal study of the daily practice of knowledge production by biomedical scientists at the Salk Institute in San Diego (Latour & Woolgar, 1979).

3.8 Of High Church, Low Church

Over the years since the 1980s, the system of science has increasingly been held accountable for its claims and promises about the return on investments. The external political causes relate to the growth of the system in numbers of researchers, the ever-increasing volume of investments required, and the need for governments to make choices that could be explained and defended, based on data, to show results in relation to societal and, since 1990, dominantly economic needs. In that development the life sciences and engineering thrived, while physics, which had done well in the Cold War with user-inspired basic research, suffered (Stokes, 1997). Research in the environmental sciences has also kept growing ever since. As described (Rip & van der Meulen, 1996; Wouters, 2014), the national aim to compete, militarily during the Cold War and later mainly economically, by investing in science, technology and development called for ways to quantitatively measure the impact of science. Since societal impact takes a long time to show, short-term quantitative measures were used, mainly of publications and their impact via citations, and of numbers of patents. Gradually, from 1980 on, the use of these metrics became dominant in measuring the performance of the system at the national and institutional levels, down to the level of departments, laboratories and research groups.

Gradually, since the 1980s, it became normal practice in academia to use these indicators also for the evaluation of the research of individual scientists. The choice of indicators was never discussed beforehand, neither in small committees nor in larger conferences and meetings. They evolved over the years, and their use became established by the legendary 'invisible hand': an interplay of concepts of science, and of interests and powers in the different academic communities, as discussed in the previous chapters. Implicit and explicit ideas about hierarchies of journals had evolved and were linked to journal impact factors, which became the measure not only for the journal at large but for the individual research paper published in the given journal. Not unexpectedly, in the natural, biological and biomedical sciences the idea of excellence became linked to a specific type of research inspired by the Legend. It was modelled after the quantitative, formal and analytical type of work done in physics. The emphasis, as in the natural sciences, was on more basic work resulting in general findings and in theories of a more abstract and theoretical type, suitable for international English-language journals with a broad readership. These journals by definition had a higher impact factor, and they started to actively game this process in order to become the hottest journals for researchers to publish in. They started, for example, to solicit more reviews on topical issues and focussed on and invited 'sexy' research papers presenting novelty about hot topics, which changed over time given developments in the field. 'Normal' solid science was rejected and advised to go to 'speciality journals', in my field for instance those for immunology, virology and infectious diseases. In the same vein, qualitative scholarly work, applied research and papers reporting negative results became less valued and less easy to publish properly.
This translated into a shift to reductionist formal methods of research in other fields as well, like economics, the geosciences, social psychology, sociology, linguistics and even the humanities. As 'high church' research scored in higher-impact-factor journals and was thus better regarded by career advancement and funding committees, researchers converted these academic credits into monetary (funding) credits, which in turn were used to produce more of the required type of papers. At the higher organizational level, this type of academic output is important for the institution's position on international ranking lists.

Of note, this trend came from the gradually changing mores of the researchers serving on committees, and via that route it became policy in committees at universities and funding agencies to use quantitative bibliometric indicators of quality that referred to internal academic excellence, not to societal value and impact. This all relates to the accumulation of credit, the scientific and social capital required for career advancement at the individual level, and it has resulted in an academic culture characterized by the massive production of papers, a bibliometrics game driving for particular types of publications. Metrics are even changing how scientists define quality, relevance and originality in the first place (Müller & de Rijcke, 2017; Wouters, 1999). The production of robust and significant knowledge and results is secondary to short-term output complying with a quantitative credit system for academic career advancement. This is primarily evaluated at the individual level, which works against collaboration and multidisciplinary team science in departments. There is, based on empirical data, wide consensus that this is the main factor determining the semi-economic behaviour of researchers regarding problem choice, collaborations, networking, grantsmanship and publication strategies, funding and outreach (Bourdieu, 2004; Latour, 1987; Stephan, 1996). This highly competitive social system results in a widely felt lack of alignment and shared value in the academic community (Fitzpatrick, 2019). These normative, opposing and often conflicting ideas about what science should be and about the type of research excellent scientists should be doing indeed still cause many problems in academia.
Within the field (the social game) of science and research, this has resulted in unsound competition, power struggles, elitism, stratification and hierarchy between academic fields and, of note, within disciplines, based on obsolete or simply wrong ideas about science and research. Numerous studies have now shown that across academia, because of the massive growth in the numbers of scientists and in investments, and because of hyper-specialisation, social and quality control by institutional and peer review fails. This has led to a frustration felt by the majority of scientists in academia, which the academic leadership, however, did not immediately recognise: they flatly denied it, or acknowledged it but rebutted it with 'this is how science is; if you cannot stand the heat, get out of the kitchen'. When it was acknowledged, one was advised by mentors and colleagues not to address it openly, in order 'not to hurt one's own career chances in academia'. In this way it has had, and still has, a major impact, in particular on the lives and careers of students and of young and mid-career scientists.

figure h

3.9 Physics Envy

It is clear that this system does not incentivise and reward investigators who work in too close a connection with ('messy') problems in the real world, as much as it appreciates more fundamental ('pure') formal work in the natural sciences and biomedicine, but also in the social sciences. The criteria and norms of excellence, and concomitantly the dominant metrics used to evaluate science and scientists across the institutions of academia, were and still are strongly determined by the classical ideas about science, with the historical preference for the methodology and the type of formal products of the natural sciences. In academia this forms a major, well-known disadvantage for SSH compared to STEM and the biomedical sciences. In a response to survive and compete, researchers in the social sciences, economics and even the humanities have in the past 20 years taken refuge in research with more quantitative methods, aiming for more general theories and insights. This 'physics envy' serves to show that their methods and conclusions are 'hard' science as well. As a consequence, in these academic disciplines, including the biomedical sciences, a visible gradient developed from quantitative physics-like research to classical scholarly humanities work using not math but reasoning and argumentation. This is the gradient from 'high church' to 'low church', as I call it.

Playing the Games of the Field (3)

Getting his attention!

On a spring morning in 1991, Hanneke Schuitemaker and I had, after heavy negotiation with his secretariat, an 8.00 am meeting scheduled for just 15 minutes. Knowing that he started in the office at 6 am, worked till very late and mostly dealt with formally more important dossiers, we were prepared. I had adopted the practice of visiting NIH nearly every year to discuss with the important researchers at NIAID, then and still (!) led by Dr. Anthony (Tony) Fauci and his collaborators. Fauci, now 79 years of age, is still very much in that job, now in daily White House press briefings because of the COVID-19 pandemic. He was already busy then and extremely efficient with his time, always one day 'in and out' of conferences, giving his famous ultra-speed keynote talks on data from his own laboratory. That morning we were going to show Fauci unpublished work from Hanneke with evidence for two different strains of HIV with pathological and clinical implications. We knew we had to talk for 15 minutes straight without a pause to inhale, because we feared that Fauci would otherwise take over and start to tell us about his work. It was a rehearsed marketing and sales pitch for our SI and NSI viral phenotypes. We apparently succeeded. Many years later, after the molecular confirmation and the identification of their receptors by many labs, Fauci referred to our pitch at a meeting. So the advice is: spread the news about your 'important' work at visits around the globe, in the corridors during coffee breaks, on ski lifts, and especially at the 'gossip sessions' at the bar during meetings, at speaker dinners and of course on TV shows if you get the chance. You never know for which major journals these people are reviewers or on which committees they might serve.
I hasten here to give Fauci the credit he deserves for his current role in dealing with the COVID-19 pandemic in the US, and, in the context of the early days of the AIDS pandemic, for engaging with the gay community in New York and truly listening to their complaints and needs. Fauci has been more or less personally responsible for the formidable budgets coming to NIH to fight HIV and AIDS. In those early years, when the US government was not that receptive, the gay community in their frustration unfairly put the blame on Fauci, but they soon recognized him to be a loyal partner in the fight against HIV/AIDS.

figure i

Photo: National Institutes of Health Library

3.10 Science in Transition: The Initial Reception

Before the official international start, for which a symposium was planned on 7 and 8 November, we first organized a small-format, invitation-only meeting on September 25, 2013, to get a first response to a near-final draft of the Position Paper and commitment from the field. We had invited representatives of the various players in the domain of science and society. These included the Association of Universities in the Netherlands (VSNU), the Royal Academy (KNAW), the governmental funder the Dutch Science Council (NWO/ZonMw), the representative of the joint federation of Dutch charities, and directors of the intermediate institutes that advise the government on science, innovation and development. The latter included the Netherlands Scientific Council for Government Policy (WRR), the Netherlands Environmental Assessment Agency (PBL) and the Rathenau Institute. The reactions were, as anticipated, quite mixed. Some, especially the representatives of the Royal Academy, the universities and the Dutch Science Council, felt that the tone was harsh and suggestive of a crisis for which data, they thought, were lacking, since mostly anecdotal stories were reported. Some felt offended, and even doubted that anything was wrong at all. In general, though, the fact that our position paper brought this debate into the open was appreciated, although fear of backlash from politics and society was abundant in the group. It was believed that more empirical evidence was needed to better estimate the size of the various problems and to get a feel for the international and historical perspectives. It was agreed that the relation between research and teaching, and the interaction with society, needed more attention. Finally, it was felt that, given the issues that were brought up, the adverse effects of critical parts of 'the system' needed more research.
Bert van der Zwaan, from the geosciences and then Rector of Utrecht University, after being critical and irritated about the logic and impolite tone of our paper, clearly agreed with our idea that actions should be undertaken to change the incentive and rewards system. Hans Clevers, an internationally well-known researcher in biomedicine and then President of the Royal Academy, said that he, as an active researcher in stem cell and cancer biology, recognized the issues and was sympathetic to the proposed actions.

Rutger Bregman, historian and journalist at De Correspondent, announced that he would start practicing investigative journalism into science, in analogy to how Joris Luyendijk had researched the financial industry of the City of London, to find out whether the system was in crisis. I repeated this idea of Bregman's as an invitation to the participants of a meeting of Dutch science journalists held in October at the Royal Academy. In the weeks before the symposium of 7 and 8 November, de Volkskrant, a major national newspaper, announced an investigative series on how science really works. The Utrecht University journal DUB started a science blog around the Science in Transition debate. The Economist came in October with an impressive, well-researched issue on how science goes wrong: 'Scientific research has changed the world. Now it needs to change itself.'

The articles in The Economist, much to our surprise, largely followed the main criticisms of our position paper, with evidence. Our response was, 'hey, they stole our thunder', but we were also pleased, because those who questioned our analyses and called for more evidence were being served. Had we only known how much more of that evidence was to come in the next few years! Already in the days immediately before Thursday 7 November there was media coverage. On Saturday, 2 November, NRC Wetenschap had a very constructive main article about Science in Transition. Hendrik Spiering, Chief Editor of Science News/Wetenschap of NRC and a columnist on Fridays, had written a main editorial on Science in Transition in the newspaper of Wednesday 6 November. On the morning of 7 November, de Volkskrant featured a large interview about Science in Transition, in which I frankly explained the perverse incentives and argued for a more socially responsible research agenda to make research more relevant for society. DUB and Folia, the magazines of the Universities of Utrecht and Amsterdam, announced the symposium as well. As a surprise at breakfast on the Saturday morning after the meeting, de Volkskrant ran a large piece with a figure showing the credit cycle! This was based on a slide I had started to use in those years and still use, adapted from Laurens Hessels (Hessels et al., 2009). Each day the symposium was attended by approximately 200 people. In the subsequent days and weeks it was covered in many newspapers and radio interviews. On the evening of 7 November, I gave a nine-minute live interview on Nieuwsuur, a high-quality late-night news program. Some in the science community were absolutely not amused by the tone and style in which we presented our conclusions and our case for change: 'Not so much that there are no issues, but research is by far not as grim as your story suggests, and this is going to undermine trust and is going to decrease funding from government.'

Prof. Jan Vandenbroucke, in an exchange earlier that year, disapproved of the contrast between the Legend of science, the positivistic idea of the objective 'scientific method' and its Mertonian norms (Chap. 2), and the less romantic social reality of how knowledge is produced in the workplaces of science and research. He argued that both are part of the more realistic practice of science and that 'fierce competition and jealousy' do not inhibit or interfere with the growth of knowledge. It is, he says, exactly criticism and strong debate that are needed to arrive at reliable knowledge. He cites Stephen Jay Gould, who in the context of the Science Wars argued that these views of research can be understood as parts of our daily research practice, and that this is the social way in which we produce 'objective' (or did Gould mean 'intersubjective'?) knowledge that we accept as 'truth'. With this I agree. I have argued in Science 3.0 and the Position Paper that once we leave the positivistic Legend behind, we can explain in honesty, as Gould does, how we arrive at accepted claims that are not absolute timeless truths but always subject to tests and criticism. So, where is the problem between Vandenbroucke and us? In an email to me in response to the Position Paper in September 2013, and in follow-up of the debate, Vandenbroucke clarified the issue. He does not, as I do, believe that the positivistic Legend has a deforming effect on the practice of science. I am fighting a ghost, he says.

Science in Transition Conference: November 7 and 8, 2013, KNAW Amsterdam

Over the next few years, science will have to make a number of important transitions. There is a deeply felt uncertainty and discontent on a number of aspects of the scientific system: the tools measuring scientific output, the publish-or-perish culture, the level of academic teaching, the scarcity of career opportunities for young scholars, the impact of science on policy, and the relationship between science, society and industry.

The checks and balances of our scientific system are in need of revision. To accomplish this, science should be evaluated on the basis of its added value to society. The public should be given a better insight in the process of knowledge production: what parties play a role and what issues are at stake? Stakeholders from society should become more involved in this process and have a bigger say in the allocation of research funding. This is the view of the Science in Transition initiators Huub Dijstelbloem (WRR/UvA), Frank Huisman (UU/UM), Frank Miedema (UMC Utrecht), Jerry Ravetz (Oxford) and Wijnand Mijnhardt (Descartes Centre, UU).

Location: Tinbergenzaal, KNAW Trippenhuis, Kloveniersburgwal 29, Amsterdam.

Key notes by Sheila Jasanoff (Pforzheimer Professor of Science and Technology Studies, Harvard Kennedy School) and Mark Brown (Professor in the Department of Government at California State University, Sacramento); Column: Hendrik Spiering (Chef Wetenschap/Editor NRC Science): Nieuwe tijden, nieuwe wetenschap

Speakers: Sally Wyatt (Professor of Digital Cultures in Development, Department of Technology and Society Studies, Maastricht University); Henk van Houten (General Manager Philips Research); Hans Altevogt (Greenpeace); Jeroen Geurts (Chairman Young Academy KNAW, Professor of Translational Neuroscience, VU Medical Center); Rudolf van Olden (Director Medical & Regulatory, GlaxoSmithKline Netherlands); Peter Blom (CEO Triodos Bank); Jasper van Dijk (Member of Parliament, Socialist Party); Hans Clevers (President of the Royal Netherlands Academy of Arts and Sciences (KNAW)). Panel discussion with: Jos Engelen (Chairman, Netherlands Organisation for Scientific Research (NWO)); André Knottnerus (Chairman, Scientific Council for Government Policy (WRR)); Lodi Nauta (Dean, Faculty of Philosophy, Professor in History of Philosophy, University of Groningen); Wijnand Mijnhardt (Director, Descartes Centre for the History and Philosophy of the Sciences and the Humanities/Professor of Comparative History of the Sciences and the Humanities, Utrecht University)

3.11 Science in Transition on Tour

After the symposium, Jerome Ravetz left the scene; he had done his job and found it too difficult to participate any longer from his home in Oxford. We were invited to organize an afternoon session on Science in Transition at the 2013 WTMC annual meeting, on November 29. Huub Dijstelbloem, Frank Miedema, Paul Wouters and Hans Radder presented for a, at least for me, quite intimidating audience of scholars, including the members of the WTMC International Advisory Board: Aant Elzinga, Tom Gieryn, Steven Shapin and Andrew Webster. My point to them was: 'You have been studying and writing about science and its institutions. STS has over the past 30 years obtained the status of a well-respected discipline in SSH and academia. Now it is time to translate this "pre-clinical" knowledge to the "clinic" where the patients are. We have a problem in university, and we need you and your knowledge badly.'

The Dutch initiators received and accepted many invitations to present and explain the message of Science in Transition at universities in the country. On our website we kept an agenda of these activities, to show interested people the reception and that the movement was alive. In 2014, one of us presented and debated at virtually every university and academic medical centre. In those days the audience recognized the issues and urged us to present more of the interventions needed. The Boards of universities, we were told at some of these meetings, were not all amused; they feared it could cause unrest. This held particularly for the use of metrics, which clashed with the fact that all institutes were heavily playing the Shanghai Ranking. I must be honest here, since I, as researcher, professor and institutional administrator, had until very recently also been 'addicted to the Journal Impact Factor', a confession I still often use to start my seminars with. It must be said that the Rectors of the University of Amsterdam (UvA) and Leiden University supported the initiative in their Dies speeches in January and February. De Jonge Academie of the KNAW presented in February a vision on science and research that echoed many of the issues. Folia, the weekly of the University of Amsterdam, featured Dijstelbloem and me in a discussion with UvA professors who were quite critical.

We were invited for a discussion with Jet Bussemaker, the Minister of Higher Education, who was very interested. We discussed at the Royal Academy with directors of the KNAW institutes, where we were met with support and interesting suggestions for improvement, but also heard the familiar objections: that 'if we engage the public they will not allow for basic science and novel programmes', that the public does not understand science, and of course, from the natural sciences: 'When I am hiring, I judge scientists on the JIF of their publications. If that is abandoned, what shall we use instead? Anyhow, it will take much more time.' We tried with: '…uhhh, just an idea, what about reading their selected papers?'

We met with the Board of NWO, the major Dutch government funder, who were really not amused at all. In a meeting with the chair and director of the Association of Universities in the Netherlands (VSNU), who were already much more engaged, we discussed the effects of the current incentive and rewards system. We pitched at the 'Night of Science' of the UvA, and Hans Clevers in his annual speech as President of KNAW discussed some of the hot topics. In June 2014 we published our evaluation of an academic year of Science in Transition and announced we would continue, because of the enormous support and because we were even more convinced of the urgency and need.

The Elephant in the University Board Room

‘It was a bright and sunny afternoon in June 2014 when members of the Science in Transition team met with the Rectors of the Dutch universities at Utrecht University’s Academiegebouw. The meeting took place 7 months after the first symposium, which had inspired a national discussion about the state of affairs in science and academia. The message of Science in Transition was initially met with a lot of sympathy by those who recognized the problems and their potential causes. Many liked the interventions suggested by Science in Transition to improve science and academia. But some complained about the polemical way the message had been delivered in the media. While they agreed with the analysis, they were afraid that it might backfire on science and scientists.

figure j

Others said the analyses were not new at all, as they had been discussed for years already. Lastly, there were those who rejected the analyses of SiT altogether, arguing that there was no need to change: science is an international endeavour, and the Netherlands was doing an excellent job in the rankings. All of these criticisms were aired that Thursday in June during the first 30 minutes of our meeting. Then the Rector of the University of Amsterdam, Dymph van den Boom, intervened. She stopped the discussion and said: ‘Dear colleagues, let’s face it, there is a big elephant in the room. It may not have been particularly nice how our guests talked about our science and our universities, but they definitely have a point’. That started the conversation.’

In some respects the Rectors have to be excused for their slow response. Just before our public debate in 2013, Hans Radder, who had been engaged with us, had published, together with Willem Halffman, an Academic Manifesto which put all the blame on the university administrators (Halffman & Radder, 2015). In their view the administrators had sold out academia to the neoliberal evil of private interests, driving for patents (patenting, they believed, should be abandoned anyway) and financial gains, and had turned scientists into capitalist entrepreneurs instead of servants of the public good. It may be that the Rectors, also regarding Science in Transition, sensed that something much worse was in the air. Indeed, 9 months later a far more radical and uncontrolled uprising started at the University of Amsterdam with the squatting of the Maagdenhuis, the home of the University Board, which resulted in the Board stepping down. This movement, called Re-Think, was more in line with Halffman and Radder’s Manifesto: many complaints and a call for academic autonomy and for the democratization of university government, in a sense arguing for insulation from influences from society. In their eyes we, Science in Transition, were not to be trusted because we were too close to the people in power in academia. In our eyes they were not forward-looking and did not present a clear integrated vision on science and academia in the twenty-first century.

In the summer of 2014 the European Commission, through the Directorate-General for Research and Innovation (RTD) and DG Communications Networks, Content and Technology (CONNECT), started a public consultation under the heading ‘Science 2.0: Science in Transition’. The accompanying background document, written by René von Schomberg and Jean-Claude Burgelman, presents an analysis of the current state of science and of how science could change to be more efficient and contribute more to society (EU, 2014). In a section called Science in Transition, a few ongoing initiatives driving for change are discussed. Many of the issues are in agreement with the Science in Transition analysis, and the authors state: ‘In the Netherlands, an intensive debate has evolved on the basis of a position-paper entitled “Science in Transition”. The ongoing debate in the Netherlands addressed, among others, the issue of the use of bibliometrics in relation to the determination of scientific careers. However, this debate went actually beyond the scope of what is described in this consultation paper as “Science 2.0” and included also discussions on the democratisation of the research agenda, the science-policy interface and calls for making research more socially relevant.’ This questionnaire and the very informative analysis of its results were the start of the EU Open Science program in 2015. It appeared that many stakeholders not only preferred ‘Open Science’ as a term over ‘Science 2.0’, but, more importantly, wanted to see science make the transition to the practice of Open Science. This policy transition to Open Science by the EU was, in my mind, critical and will be discussed in more detail in Chap. 7.

A presentation on Science in Transition was given in September in Brussels for the policy advisors of Science Europe, the European association of public research performing and research funding organisations. One of them said that she liked the ideas and plans a lot, but asked: ‘Do you know why the ERC was established next to FP7 and Horizon 2020? To serve those who want to get ample funds to do free curiosity-driven research and not be bothered.’

The Dutch Ministry of Education, Culture and Science, with reference to the debate elicited by Science in Transition, organized debates to prepare an integral vision and mission for research and science for the new government. Their Science Vision was proudly presented in November 2014. On December 3, the second symposium was held at KNAW, about transitions, with international and national discussants. On that occasion the Association of Dutch Universities signed DORA (for the first time).

Level Playing Field? (4)

The popular image of science, as we saw in Chap. 2, is based on a community of researchers with, if not unique, then surely exceptional integrity and altruism. They follow their professional vocation to search for truth and do this openly, disinterestedly and with great unselfish honesty. As Merton admitted, there is the Matthew Effect, there is inequality and there are elites. It was believed that especially the top scientists are endowed with exceptional integrity, to serve as role models for those who are in the heat of the daily competition. Advancing in the field, scientists realize there is more at stake than finding significant insights and knowledge. It is very much about who discovered an insight first. Moreover, major novel insights are threatening, as they overthrow major previous results of leaders in the field, and are generally resisted and not immediately accepted. When you are not generally seen as a major player, work has to be done to make the community aware of an interesting result and to get the credits badly needed to survive in the system. During my first years as a group leader I learned some ‘tricks of the trade’ for pushing the findings of your laboratory, which after reading Jim Watson’s The Double Helix were not that surprising anymore.

In 1987, in a collaboration with Hidde Ploegh and his colleagues, then at the Netherlands Cancer Institute, we observed that inhibiting enzymes that are important for the sugar coating of the HIV envelope protein disturbed the interaction with the receptor on human T cells. HIV was rendered non-infectious. This was biochemically of interest and opened up avenues for anti-viral drug development. Hidde was the major and thus last author and decided ‘to go for Nature’. The review reports, at that time delivered by airmail, were not all that favourable. No problem for Hidde, who at that time already had broad international experience and standing in the field as a top biochemist and immunologist. In my presence he simply called the editor, they discussed the comments, and Hidde explained why he thought not all reviewers appreciated the significance of the work. A fourth expert was asked to review, and on November 5, the day after my oldest son was born, the paper was published; it was prominently featured in de Volkskrant, a respected national newspaper (Gruters et al., 1987).

Nine years later, in January 1995, two major, very innovative papers were published in Nature that shed new light on the dynamics of HIV infection and urged us to rethink the immunopathogenesis of AIDS (Ho et al., 1995; Wei et al., 1995). The authors were interviewed on CNN and made headlines in major newspapers around the world. We had been engaged in experiments to test the old hypothesis and came to the conclusion that it was wrong, but our data also provided unexpected, amazing evidence against the major immunological component of the new hypothesis proposed by Ho et al. As David Ho was then one of the major scientists in the field, I anticipated resistance from reviewers to our data and decided to take bold action. In a rooftop restaurant overlooking the harbour of Vancouver, on the occasion of the XIth International AIDS Conference in July 1996, I met with an editor of Science. At the conference, the new hypothesis was by far the hottest topic, with, in the meantime, new papers by the same authors in major journals.

Over dinner I explained our data and its implications in detail. She was very interested and, after the dessert and coffee, asked me to submit as soon as possible. As anticipated, the reviewers found the data intriguing, but they were not sure and in the end found the data hard to believe. ‘Because’, one said, ‘if this is true then even the new immunology hypothesis is not correct’. The paper was improved by taking these comments into account and was published in Science in November 1996 (Wolthers et al., 1996). Fortunately, our data were confirmed very soon.

You think I was addicted to the JIF? Yes, I was, because we knew that papers in these journals were regarded as very important and were instrumental in convincing the community and our peers in the national review boards of our findings. They also definitely helped me to get my appointment as professor that same year. I hope that for experts it was not the JIF but our data that made the difference. Speaking about impact: David Ho, the major principal investigator and advocate of the new hypothesis of the Nature papers, was elected Man of the Year 1996 by Time Magazine.

3.12 Metrics Shape Science

The style of the ‘high church’ remained the style of research with the highest esteem in academia and public research institutes. Accordingly, a credibility cycle with indicators derived from that type of esteem and excellence was dominant in the distribution of reputation and funds, in heavy competition through classical peer-review schemes. This is reflected in the appreciation of pure/basic over applied science, and of formal quantitative (modern) over qualitative and argumentative research. Think also of the scientific status of the ‘hard’ over the ‘soft’ sciences, and correspondingly of the potential impact of investments in the natural and biomedical sciences over those in the humanities and social sciences. This system, with its dominant indicators, thus has major effects on the agenda setting of our research. Since these problems have been put forward by a now increasing number of writers from within academia, the issues are also increasingly experienced by administrators in universities, funding agencies, government, and elite key opinion leaders in academia. In reaction to that conservative view and reward system, alternative institutes and funding schemes were developed, initially mainly by governments, to accommodate mission-driven science for which, next to scientific excellence, quality criteria related to reliability, robustness in practice and thus to societal impact were important. Here, researchers work in national and international teams and consortia on complex real-world problems, often in collaboration with private partners and citizens. This was, and to a large extent still is, regarded by the academic elites as ‘low church’ research because it is done with less competitive, soft money; these types of grants, such as those from FP7 or Horizon 2020, thus come with much less esteem than a grant from the ERC.
This is just the old academic elitist game being played over and over, here on the distinction between pure and applied science and on winning in competition. It needs no explanation that research done with whatever type of money can of course result in excellent research in its own right.

Distortions of the Practice of Science and Research

STEM dominate over Social Science and Humanities

Theoretical & pure science dominate over applied science and technology

Curiosity-driven research is believed the best for solving societal problems

Scientific knowledge is neutral and value-free, and science should be autonomous, not bothered by external publics or politics and their problems. Scientists cannot be held responsible for the knowledge they do or do not produce

Quality, Replication, Relevance and Impact are subordinate to novelty and quantity

Individual Hyper-competition works against Team-Science, Multidisciplinarity and Diversity

Universities outsource talent management to funders based on flawed metrics, instead of having a research strategy according to their mission

Short-termism and risk aversion are rife because of four-year funding and evaluation cycles

Fields with high societal impact, but low impact in the metrics system, suffer (applied vs basic; local vs international)

The national and institutional research agenda is not properly reflecting societal (clinical) needs and disease burden

Open Science research practices are just ‘nice to have’: stakeholder engagement, FAIR DATA, Open Code and Open Access

Who Sets the Research Agenda of the Field? (5)

When in 1981 the first AIDS cases presented in the US, and later all over the world, it was quickly understood that an infectious agent, most likely a virus, was the cause. It was transmitted sexually and by body fluids, like blood and thus also blood products, for which good evidence was produced early on. Patients presented and died because of compromised immunity, which soon appeared to be associated with a loss of a specific population of white blood cells, so-called helper CD4 T cells. At CLB, one of the predecessors of Sanquin, the Dutch Blood Supply Foundation, the new virus was a serious threat to the safety of the blood supply and called for immediate action. Virology at that time was not a big thing. In times of COVID-19, knowing what has happened since 1980 with HIV, SARS, Ebola and major flu pandemics, that is hard to believe. At that time it was thought that we had won the war against viruses, and not much academic reputation and funding was to be obtained in human virology. There was, driven by medical microbiologists, an effort on Hepatitis B Virus and to some extent on non-A, non-B Hepatitis Virus, which was later called Hepatitis C Virus. Medical microbiology was a very applied art, important for patient care and public health but academically regarded as a done job. Identifying new viruses, for instance in seals, which our now famous colleague Ab Osterhaus was doing at that time, was pitied by scientists and compared to ‘collecting rare stamps’.

The Melief lab, where I worked, was involved in tumour immunology in murine models. Given his career, Melief, an MD who was raised in the setting of blood transfusion and blood products, was open to moving into human research. He studied the development of murine leukaemia caused by mouse retroviruses, following the then widely held belief that viruses caused cancers in humans as well. In the past 40 years much more evidence for that has accumulated, but at that time it had been shown for Epstein-Barr Virus, causing Burkitt’s Lymphoma, and for chronic Hepatitis C Virus infection, associated with liver cancer. Retroviruses related to those known to cause tumours in mice and cats were sought in humans but had not been found. This changed in 1980, when the first bona fide novel human retrovirus was identified by NIH researchers led by Bob Gallo and by a group in Japan led by Hinuma. This virus caused a rare cancer of white blood cells prevalent in the populations of Japan and the Caribbean. It happened that my project was on human T-cell leukaemia, and Melief started a collaboration with colleagues who treated leukaemia patients in the Caribbean communities in Amsterdam and London to study the involvement of the virus. I brought tests detecting immune responses to HTLV-1 to the lab from London and indeed found evidence for the presence of the virus in T-cell leukaemia patients. In 1982, when the first AIDS patients also presented in the Netherlands, a claim appeared in the literature that HTLV-1 might be involved. We started a collaboration with Jaap Goudsmit, a medical microbiologist and virologist at AMC, who was keen to find an interesting and challenging new research topic and had spotted AIDS as an ideal candidate. We tested whether evidence could be found for HTLV-1 infection in AIDS patients in Amsterdam. There was no convincing evidence, but my career had already made a dramatic turn to research on HIV/AIDS.
From then on, I worked on the viro-immunology of HIV and AIDS, driven by the urgent problem that HIV posed for the safety of various blood products. Murine virology at the institute had stopped in 1984, as Melief had followed the field to work on oncogenes in mice and humans. Oncogenes had just been discovered in models of murine and Rous sarcoma tumour viruses, research that was propelled by enormous technical progress in molecular biology in the late 1970s. So, in 1984, Melief and his group logically left for the Netherlands Cancer Institute.

3.13 Is It Contagious?

Could this view and this practice of science, the reader might secretly hope, not be a 'Dutch Disease', driven by a dangerous liaison between Calvinism and neoliberal capitalism? The answer, I am afraid, is a clear no. This system of incentives and rewards, informed by the Legend and its legacy of the myth of the scientific method and reductionism, has proven highly contagious and has been disseminated like an infectious disease by academics travelling all over the globe. In the past 20 years it has become common practice in Europe, Canada, Australia, India, Indonesia, China, Singapore and Hong Kong, Latin America and Sub-Saharan Africa, most notably South Africa. The introduction of international rankings, especially the Shanghai Ranking in 2006, has accelerated the use and abuse of metrics in the incentive and rewards system. As we have seen, this puts most of the weight on the more basic sciences and on the publication and citation cultures of STEM. Science nowadays must be 'international' to score; work on urgent national and regional problems normally does not get published in the English-language top journals. The effect is that, in order to climb the rankings, research in universities in, for instance, Indonesia or South Africa is steered towards topics that score in international high-impact journals, at the expense of research on topical problems and the needs of local publics. I do not even mention that most institutions in developing countries cannot afford the subscription fees of the top journals, most of which are not open access. Results of our HIV/AIDS research done in Amsterdam were not accessible to researchers and medical specialists in developing countries that bore the greatest disease burden, with social disruption and literally millions of deaths from AIDS. Only in an acute crisis such as the COVID-19 pandemic that we are experiencing at this time are all data and papers made immediately open and accessible to all. Will this openness be only temporary?

3.14 Interventions Needed

Most of the components of the analysis of Science in Transition, as outlined above, were at that time not new or original at all. I can cite many more well-written and well-documented texts, in journal articles and books, that tell us analytically the same. A fine example is the European Science Foundation Science Policy Briefing (ESF, 2013), written in 2012/2013, at the same time as our position paper, by seven top experts, among others Ulrike Felt, Alan Irwin and Arie Rip. The paper explicitly discusses the adverse effects of metrics on problem choice and the fact that public engagement is not considered part of the research, and it suggests interventions, such as DORA (pp. 20–21). The authors, however, did not take the next step of listing a series of concrete actions to be taken by the administrators and scientists who are responsible for that problematic and limited system of research evaluation. I have pointed out that, in general, most of these authors stayed at a safe distance from the proverbial elephant in the room.

Discussions about changes in this part of the governance system of science and research have intensified in recent years. It was pretty normal in the 1960s and 1970s to talk about science in terms of power, elites and money. Since the 1990s such talk was seemingly taboo. It seems to me that since 2015 or so the taboo has been broken, hopefully for good. We needed to open the black box of how science as an industry is being run, and by whom, to expose and make visible the machinery of what the classical sociologists of science called 'the invisible hand'. As in an unregulated economy, the invisible hand, not unexpectedly, when made visible appears to belong to the powerful and the elites of the day: in this case, very much the scientists who did well in the social system described above. They strongly believe in the Legend and its metrics, some honestly and for real, while others use it as a masquerade, a 'front stage' mythical image that still sells science well to public and politics. We have seen that the myth is scientifically but also socially untenable. In 'modernity', that is to say in our modern times, the public, through the new social media, is in uncompromising open interaction with science in its many forms. On the boundary between the science of complex societal problems and society there is no consensus, no absolute truth, and the public increasingly gets to see more of the backstage practice of science, where the discussions have not settled but are raging as they always did. In these reflexive times we need a more reflexive narrative about how we do science and research, with whom, and for whom. There are, as we will see in the next chapter, many small-scale ongoing movements and actions to build this reflexive narrative. In many of these, this is done together with people from outside academia who have a stake in the research, because it is their problem that is to be investigated.
In these transitions there is awareness that the publics will talk back. We need to let go of the idea of 'The Quest for Certainty' and relate to 'The Public and its Problems' in order to produce not absolute truths, but significant, reliable knowledge that benefits us all.

3.15 Sensing the Zeitgeist

During the Christmas break of 2014, reflecting on the start and the reception of the message of Science in Transition in its first year in The Netherlands, we were surprised and amazed. We had expected some reactions and a bit of media attention when we prepared the paper and the symposium. We had not anticipated the enormous and sustained support from academia and beyond, nor the media attention and exposure. What we had expected was a typical half-hearted response from the leadership, with the standard reflex that this 'was all already known and adequately addressed' by the Boards and Deans. After that, we thought, our message would surely fade away quickly, replaced by other news. We had even been prepared for straightforward denial and rejection by the establishment. Some of these reflexes were indeed heard and seen in writing. The response, however, was generally positive, from many different corners and echelons inside and outside academia. Our analysis was widely recognized and brought palpable relief that it was now acceptable to openly discuss these issues without being scorned as a complaining loser. In addition, the debates did not stop at pointing out the problems but included actions and interventions at the systemic level.

This description of the reception is provided here not to show how unique or enormously clever we were, because in fact we weren't, as some colleagues were happy to point out. It is to illustrate the widespread criticism, critical insights and frustration that became tangible and had apparently been building up in academia over the years. Obviously, this was not the effect of our initiative. It was already in the air, after years of critical thinking and writing by many colleagues in different countries. In addition, it was fuelled by increasing massification and digitalization, and by the distorting effects of the neoliberal knowledge economy and its New Public Management. This had somehow been brewing for a decade, and the science community was ready for this broad and international call for change. It was this Zeitgeist that had activated us to take action, to give, like others were doing at the same time elsewhere, a small push.


Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

© 2022 The Author(s)

Miedema, F. (2022). Science in Transition How Science Goes Wrong and What to Do About It. In: Open Science: the Very Idea. Springer, Dordrecht.