1 Introduction

1.1 Our Moral Duty

If you’re reading this book, it’s likely you’re a social sciences student. Perhaps you’ve only recently embarked upon your journey through the land of the learned and the learners, or perhaps you’re well into your undergraduate education, with graduate school or the job market waiting just around the corner. All the same, we hope you’re just as excited about becoming a scientist as we were when we embarked upon our own scientific careers.

Your immersion in science is, as you surely know, part of a larger, collective human endeavor: understanding and explaining the world in a scientific way. As such, you must approach your work academically, without prejudice or bias, and as free from preconceived ideas as possible. The problem is that this is not self-evident.

Science in general is about great ideas and technical innovations, but it comes with a moral duty: to be thoughtful and critical of your own and other people’s ideas. The motto of the British Royal Society, founded in 1660, captures this concept well: Nullius in verba (take nobody’s word for it).

It is to Nullius in verba, the skeptical and self-critical approach of the scientific community, that we turn in this book. It is what we, the authors of this volume, but also the academic community at large, consider the moral duty of any scientist (Fig. 2.1).

Fig. 2.1
An illustration of two dogs wearing crown collars, lifting a throne made of leaves on their shoulders and holding a medieval shield with their legs. A crown with a bird on top rests on the throne.

Motto of the British Royal Society. (Source: Wikicommons)

1.2 Understanding of Ethics

In the chapters that follow, we offer an introduction to the ethics of social science research as an instrument for systematically exploring this moral duty of skepticism and self-critique. We probe the most common moral dilemmas that social scientists encounter while conducting research, and we discuss several possible solutions to them, although often no one solution satisfies completely.

Many of the dilemmas discussed in this book are not specific to the social sciences, and the questions they raise are common across many disciplines. Different disciplines struggle with questions regarding how to treat participants in research with respect, how to ensure that data is collected and stored safely, or how to deal with deception. However, the way these questions are understood and explored may differ from one field of expertise to the next.

To mention just one example in somewhat more detail: intrusive research is a concern for many scientists. But what is considered ‘intrusive’ in the social sciences (something that arouses unpleasant, even painful experiences or memories in the participant) differs from its meaning in the medical sciences (something that jeopardizes the integrity of the participant’s body).

In short, to properly understand the ethical questions of a particular field, we need to have a grasp of certain ‘qualities’ within that field. By qualities, we mean a (historic) understanding of what science and scientific knowledge mean to that field, what the aims of its research are, what rules it follows, and whether there are particular questions, discussions, and issues to which it is especially sensitive.

The purpose of the present chapter and the next is to explore the first of these ‘qualities’, while the particular ‘sensitivities’ will be the subject of discussion in later chapters.

In this chapter, we briefly explore the history of science, with a particular focus on the social sciences, and examine differing perspectives on knowledge. In the next chapter, we explore a number of important perspectives on ethics and outline several important principles thereof, including a discussion of modern ‘codes of conduct’. What these chapters do not offer is an extensive introduction to the history and philosophy of social science, nor do they discuss ethics extensively from a philosophical point of view. For a more exhaustive exploration of these topics, we gladly refer the reader to the ever-expanding body of fascinating literature on these subjects (see Suggested Reading).

2 Science

2.1 The Beginning

Europe’s first universities date back to the twelfth century, but they were not the first to be founded. Already in the fifth century, ancient universities flourished in India. In Nalanda, for example, the ruins of one of the first great universities in recorded history can be found. It once attracted thousands of students and is believed to have housed a library with over nine million books.

The function of these early universities was principally scholastic, focused upon the articulation and defense of clerical dogmas. However, in a period now known as the ‘Scientific Revolution’ (sixteenth and seventeenth centuries), the work of knowledge-producing university scholars changed dramatically in Western Europe (Fig. 2.2).

Fig. 2.2
A photograph of a brick building at the University of Nalanda. A flight of stairs leads to a tall, dense brick structure.

University of Nalanda. (Source: Wikicommons)

It was as if the human imagination had been liberated. From this period onward, natural philosophers (the precursor to ‘scientists’, a term first used in the nineteenth century) were allowed to ‘wonder’ without dogma – performing experiments conceived first in the mind and controlled through rationality. The mystery of the cosmos offered more than a feeling of awe and amazement, becoming a backdrop for a cascade of questions: Why do the celestial bodies move in the way they do? What makes them move? What are they even? What is light? What are the ‘natural forces’? How large is the universe? How old is it? Where does humanity fit into it all? How do we think? Why do we think? What am I? I think therefore I am – right? Minds were being blown, one question at a time.

These broad questions that helped form the basis for what we today call ‘science’ have kept generations of scholars busy ever since. But what do we mean exactly when we talk about ‘science’?

A common characterization of science is that it’s an attempt to explain reality and offer knowledge that can help predict or prepare for future events. But is this unique to science? Note that there are other organized systems of thought (such as world religions) that aim to do the same.

A more elaborate answer would be that science (a) produces a body of robust knowledge by way of (b) a certain methodology, and it does so within (c) an infrastructure of physical institutions (such as universities, laboratories, etc.).

These three dimensions of science (knowledge, methods, and infrastructure) presuppose a fourth dimension, which is particularly relevant in the context of this book. Knowledge, methods, and infrastructure require (d) a set of moral values, embedded in our academic way of thinking. Moral values structure the scientists’ activities. Producing robust knowledge within the framework of an institution means you must adhere to certain rules, regulations, and appropriate methodologies.

On the one hand, the procedures governing the act of actually ‘doing’ science are institutionalized in the regulations and protocols of each discipline: a moral compass defined on paper. On the other hand, these values are more implicit, relying on the moral virtues of the individual researcher, and are thus more difficult to identify (more on this in Chap. 3). Both ways of considering the set of moral values in science, institutionalized and implicit, are played out on the center stage of this book, representing the ethics of scientific research and the integrity of the researcher, respectively.

In the next section, we briefly outline the above-mentioned dimensions of science – knowledge, methodology and infrastructure – as framed against the background of the developing social sciences as they emerged in the nineteenth and twentieth centuries.

2.2 A Very Brief History of the Social Sciences

Before the social sciences entered the academic arena in the nineteenth century, the ‘project of science’ was closely connected to the natural sciences. From Newton’s law of gravity to X-rays, and from gunpowder to penicillin, science was all about great discoveries. Even the idea of ‘discovery’ is connected to science: the very concept did not exist before the Scientific Revolution (Wootton 2015).

Powered by science and its instruments, such as compasses, cannons, and cartography, Western countries set their sights on world domination and established, largely during the long nineteenth century of colonialism, their global empires. The scientists themselves were predominantly white males from privileged backgrounds.

From the mid-nineteenth century onward, several disciplines professionalized and institutionalized into different subdisciplines. For each separate field of scientific inquiry, particular methodologies were prescribed, and dedicated societies and journals were founded. Astronomers built telescopes and observatories to study the stars and began to develop theories on the origin of life; biologists probed the world unseen by human eyes with microscopes and developed specializations such as botany and zoology. Nearly every other knowledge-producing discipline followed similar patterns, and scientific inquiry developed in divergent directions.

Within this large network, most of the actors shared the positivistic ideal, meaning they believed that progress in science is understood as an accumulation of true and empirically confirmed, factual knowledge. Science had become not only a new arbiter in matters of truth and falsehood but was also seen as a strong instrument for improving the human condition. It had taken up a position which was previously, and rather exclusively, the realm of religious systems.

It was during the nineteenth century that the social sciences stepped onto the stage. The social sciences first emerged in the shape of political economy, sociology, and what was then called the ‘moral sciences’ (an early form of psychology). Early social scientists sought to transform the rising nation-states of the world into stable, governable economies.

The second half of the nineteenth century revealed a need for analytical insights into the inner workings of capitalism, the state, and its growing bureaucracies. The quest for this knowledge laid the foundation for the modern-day social sciences. In order to quantify human behavior and to get a grip on the emerging patterns of modern societies, early social scientists employed their own discipline-specific tools, such as statistics, which proved to be a valuable instrument for their cause.

By the early twentieth century, social scientists were already studying a multitude of topics, spanning a wide breadth of human-related matters: from perception and consciousness, psychopathology, and public administration, to problems of recruitment and selection, the mysteries of religion, and the supernatural. From these different areas of interest, a variety of new disciplines, subdisciplines, and schools of thought emerged.

For example, within psychology in the 1920s alone, there were Gestalt psychologists, behaviorists, experimental psychologists, industrial psychologists, even ‘parapsychologists’ (who studied the spiritual dimension of life), not to mention psychoanalysts (who had their roots in medicine). All these subdisciplines and their corresponding schools of thought developed their own institutions, established their own journals, and formed their own methodologies.

Similar developments took place in sociology, anthropology, and economics, as well as in philosophy, history, and theology, all disciplines that were then still considered bastions of the social sciences. A number of subdisciplines that formed during this time, such as what we would now call clinical psychology and neuropsychology, were not yet regarded as part of the social sciences, but rather as part of psychiatry. Educational studies were only in their infancy, and political science, gender studies, and interdisciplinary studies would not emerge until much later, generally after World War Two (Repko et al. 2014). Which disciplines belonged to the social sciences and which did not has long been a subject of debate, and it continues today. This illustrates that the social sciences as a whole are still a collection of rather loosely connected fields of developing knowledge.

By the second half of the twentieth century, two developments further shaped the field of social science. For one, the social sciences had come to be regarded as an independent ‘discipline’ and were no longer considered the offspring of other disciplines (almost all early psychologists trained in the nineteenth century were physicians, for example). While the methods and corresponding ‘objects of knowledge’ the social sciences sought were situated between the natural sciences (explaining the world by means of natural laws and experimentation, resulting in objective knowledge) and the humanities (understanding the world with ideographic methods, resulting in more subjective narratives), their object, human behavior, was unique.

Secondly, a strong impetus towards independence came via a post-war surge in the popularity of the social sciences. There had been only a handful of students interested in psychology or sociology in the years prior to 1940, but this changed dramatically in the 1950s, ramping up further from the 1960s on. Thousands of students began enrolling in social science disciplines like psychology, sociology, educational sciences, and political science to meet the growing demand for social scientists. Applied science became one of the social sciences’ most valuable additions, delivering countless new therapists, educationalists, human resource managers, test psychologists, and policy makers every year.

This rapid influx allowed the social sciences to establish themselves firmly in the post-war framework of modern universities, where they persist today. Scores of professorships were created, large research institutions were established, and considerable sums of money began flowing into the social sciences. These processes of institutionalization and professionalization went hand in hand with the formalization of research procedures, reflected in stricter and more formalized views on ethics and an increased concern with scientific misconduct (discussed in Chaps. 4, 5 and 6).

Approaching the end of the twentieth century and beginning of the twenty-first, new developments set in motion a series of changes that transformed the outlines of the social sciences once again. Neoliberal politics caused budgets to dwindle, forcing social scientists to collaborate with other disciplines, cross-pollinating their work. Many strove to ‘valorize’ their work, emphasizing its commercial value and thus allowing others to influence their research agendas, intentionally or not.

These tendencies, though grossly oversimplified here, had clear consequences for the social sciences’ fundamental commitment to understanding the world. While some argued that the social sciences were acquiring a newfound importance in society, others doubted that the knowledge they produced could withstand tests of validity; in response, a ‘replication crisis’ was declared, a charge many sought to counter (Nussbaum 2010). We will return to these arguments in Chaps. 8 and 9 (Box 2.1).

Box 2.1: ‘What Do Social Sciences Study?’

The Study of Humans. Debates about the validity of social science knowledge exemplify the challenges of the scholarly study of humans. One of the oldest and arguably most notable disciplines in the social sciences, sociology, focuses on collective human activity, social relationships, and social interaction. As it is situated at the interplay between social structure and individual agency, one of its most fundamental issues concerns the existence of social structures and how they objectively influence our lives.

Psychology, on the other hand, often seeks to understand and predict individual human behavior in a way resembling the ‘hard’ (natural) sciences. The workings of the mind, cognitive processes, and the functioning of the brain have all been the subject of psychological research. Subdisciplines such as neuropsychology, developmental psychology, social psychology, and clinical psychology are all devoted to different dimensions of individual behavior.

Taking a longer view, anthropology studies the rituals, values, and practices of human societies and cultures, forming subdisciplines in cultural, social, medical, and linguistic anthropology. Because cross-cultural analysis plays such an important role in the study of anthropology, questions regarding cultural relativism (to what extent are someone’s values to be understood as a product of their culture) have always received a lot of attention from university scholars.

3 Knowledge

3.1 The Role of Universities

If science’s most important task is the production and reproduction of knowledge by use of certified methodologies within a structural framework of institutions, then how should universities prepare students for this feat?

Gabelnick (1990) proposes that we should view universities as learning communities. Universities are institutions populated by professors, teachers, researchers, staffers, managers, and of course students, who are all committed to the same objective – the accurate production and reproduction of knowledge.

The perspective that universities are learning communities takes for granted that academic institutions are bureaucratic organizations seeking cost-efficiency. In order to do what they must do, they need strict curricular structures, consisting of well-defined teaching programs with formal learning objectives, prescribed assessment criteria, and quality control agencies. Such environments make productivity their goal, allowing for little deviation from the norms they put in place. Indeed, modern teaching programs at universities often wield knowledge as their instrument and regard students as passive consumers of it.

Dissatisfied with such a restricted view of the student’s role in the university, Etienne Wenger proposed an alternative perspective, one that has been influential in higher education circles since the 1990s. Instead of regarding learning as a formalized activity, carried out by isolated members of an institution, Wenger proposed that learning is a shared and situated activity that requires communities of practice.

In communities of practice, people are actively engaged with each other, constructing knowledge together. Participants in these communities ‘share a concern, a set of problems, or a passion about a topic, and deepen their knowledge and expertise in this area by interacting on an ongoing basis’ (Wenger et al. 2002, p. 4).

Participation, sharing, and interacting in communities of practice are essential elements in learning, since it is through participation that identity and practices develop. Participants in communities of practice ‘learn by doing’ (instead of learning by absorbing or consuming knowledge).

In this book, we too adhere to such a constructivist perspective, and we invite the reader to be actively involved with the normative questions raised here, developing their own solutions to moral dilemmas. Of course, books are interactive in only a limited sense, but hopefully the case studies offered in the following chapters, along with the exercises that accompany them, enable students to become involved in these debates. We want them to be able to discuss their ideas and engage with classmates, co-constructing their own solutions to the problems posed here (Fig. 2.3).

Fig. 2.3
An illustration of a group of around six men and women seated close together.

Communities of practice

3.2 Knowledge Construction

If your role at university is to ‘co-construct knowledge’, then what exactly should you develop? What does true knowledge consist of?

Fundamentally, knowledge is simply any information about the world (e.g. ‘Moscow is the capital of Russia’, ‘Water consists of two parts hydrogen and one part oxygen’). In an academic context, however, knowledge is more precisely defined as (a) a body of discipline-based theories, concepts, and methodologies, and (b) any number of practical generalizations and principles that apply to fields of professional action (Eraut 1994, p. 43).

Thus, what psychologists, sociologists, and anthropologists claim to know is a result of how they define the world and of how they operate in their fields of research. Accordingly, not just their view of the world, but also the properties they ascribe to it, may differ radically from one discipline to the next.

Acquiring knowledge implies much more than simply learning a set of theories and concepts. Being immersed in academic education, you’ll follow a process of gradual mastery.

At first, learning is about understanding the fundamentals of a field of knowledge. At this stage, little ownership is involved. Research procedures are learned, reporting preferences are practiced, and the existing historiography is read.

Quickly thereafter, these fundamentals need to be applied to practical situations, and the knowledge of other disciplines becomes indispensable. An increased sensitivity to the explicit and implicit norms and expectations across disciplines becomes a tool for collaboration. Despite a sharp learning curve, by the end of your education, you are expected to display analytical skills, propose your own ideas, and develop insights in your own right. Only then have you become a trusted and productive member of the academic community, a co-constructor of knowledge.

Benjamin Bloom’s taxonomy of cognitive knowledge-based learning attempts to grasp this gradual development (see Fig. 2.4).

Fig. 2.4
A triangle model with, from top to bottom, the elements creating, evaluating, analyzing, applying, understanding, and remembering.

Revised taxonomy of Bloom’s knowledge-based learning. (In Anderson et al. 2001)

3.3 Risk and Reflexivity

A key factor in becoming a ‘trusted and productive member of the academic community’ is reflexivity: the ability to critically reflect on the responsibilities of both yourself and others.

Reflexivity isn’t just some invitation to be cautious or thoughtful. We are living in an age of increased accountability, meaning that more than ever, there is an obligation on individuals, businesses, and institutions to explain and justify the choices they make. For scientific researchers, this means that you can and will be held accountable, or even be liable, in a case of wrongdoing, intentional or not.

The monitoring of risk is therefore a crucial aspect of reflexivity (Giddens 1991). Risk assessment is no longer an individual responsibility, but a collectively carried burden, and this reality has grown in significance over the years.

Much of this collective responsibility has been written into regulation at an international level. In Europe, for example, the General Data Protection Regulation (or GDPR for short; see Box 2.2) constitutes a set of binding rules that protect the rights of human participants in research, ensuring that researchers and research institutions actively assume responsibility.

At the level of local institutions, special independent controlling bodies have been installed to coordinate the ethical dimensions of research. Most universities and research institutions today require that researchers submit their proposals to these Institutional Review Boards (IRBs), which apply strict procedures when considering research applications. Funding agencies will often demand compliance with these bodies, and many journals require approval from them before they publish an article.

Box 2.2: ‘GDPR’

Under the European General Data Protection Regulation, individuals have the right to access their personal data, the right to be informed and/or to be forgotten by those who use their data, the right to object to or restrict (further) use of their data, and the right to be notified in case a data breach has taken place.

GDPR requires that concrete and appropriate procedural and technical measures be taken to protect these rights. Enacted in May 2018, it has far-reaching consequences for all institutions (including universities) that use personal data. Institutions are required to:

  • Create a comprehensive privacy policy;

  • Appoint data protection officers and representatives;

  • Adopt specific codes of conduct;

  • Maintain records of all data processing activities.

4 Ethos

4.1 Science’s Ethos

We started this chapter with the observation that science’s mission is to understand the world systematically and methodically, while being as unbiased as possible. Furthermore, every scientist has a moral duty to be skeptical and critical. We now return to this grounding principle and ask: Are there any general guidelines that scientists must follow that allow them to be both critical and methodical? Yes, there are.

One of the first demands scientists must meet is the need to remain autonomous. It has long been advocated that universities should safeguard their independence. They must seek objectivity and establish as much self-regulation as possible. Science is not to serve interested parties, whether driven by ideological, political, or commercial motives.

Another important principle is that scientists must fulfill their tasks carefully and reliably. The methods and procedures of science should be transparent, its studies replicable, and its results accessible to all. This open character of scientific knowledge is pivotal to its mission.

In his now famous 1942 article ‘A Note on Science and Democracy’, American sociologist Robert Merton formulated several essential principles which, if followed, he argued would ensure science a secure and autonomous place in society. He wrote this at a time when Western civilization was at the threshold of being radically transformed politically, culturally, and economically, with the free exercise of science anything but self-evident. As such, Merton (1942) proposed four ‘imperatives’ that make up the ‘ethos of science’ (Fig. 2.5).

Fig. 2.5
A photograph of Robert K. Merton wearing an academic cap and gown.

Robert K. Merton at Leiden University, the Netherlands, on the occasion of receiving an honorary doctorate, 2 July 1965. (Source: Wikicommons)

  1. Communism, later dubbed communalism. Because knowledge is the product of collective effort, substantive findings of science are assigned to the community. They constitute a common heritage, and therefore ‘property rights’ are held down to a bare minimum.

  2. Universalism. The acceptance or rejection of scientific claims should not depend on any personal or social attributes of the researcher. Only pre-established, impersonal criteria should be used to determine a truth claim.

  3. Disinterestedness. Scientists should act for the benefit of a common scientific enterprise, not for their own gain. Self-interest should play no part in science.

  4. Organized skepticism. Scientists should be skeptics, suspending judgement until the facts are at hand. Logical and empirical criteria allow for a detached inspection of any claims, which are exposed to critical scrutiny before being accepted.

  5. Later, a fifth imperative was added: Originality. Researchers must create new scientific knowledge, and not just reproduce established findings (Ziman 2000).

These five imperatives are known by the acronym CUDOS. They have found their way into various ‘codes of conduct’ (to be discussed in the next chapter) and existing scientific practices and procedures (see the last chapter of this book for a detailed discussion).

The principle of communism, for example, is realized through the practice of academic publishing, which allows researchers to share their findings. The principles of universalism and disinterestedness, in turn, are grounded in the widely accepted practice of ‘peer review’.

The all-important practice of peer reviewing (which means that authors submit their work to a forum of experts before it gets published) was not yet utilized during Merton’s time, but gradually became standard practice in the decades after World War Two. From then on, it was the reviewers who decided whether or not a paper met accepted standards. Reviews are as a rule ‘blind’, which means the identity of the author remains unknown to the reviewers, ensuring fair judgement. Peer reviewing itself is considered part of one’s ‘academic duty’ and an expression of the imperative of organized skepticism, for reviewers don’t get paid (Box 2.3).

Box 2.3: ‘Climate Change or Nudging. A Dilemma’

Since the early 2000s, climate change, and the totality of challenges associated with climatic variability, has been recognized among scientists as an indisputable fact, even though there may be debate among researchers about specific causes, implications, or future scenarios.

In an attempt to anticipate some of the most pressing issues related to climate change, scientists have proposed a wide range of ideas and solutions that could be implemented by policymakers. Social scientists have also contributed their fair share to these solutions. The challenge they face is in finding ways to alter human behavior on a massive scale. In response, the UK-based Behavioral Insights Team (BIT) was founded in 2010. One of the solutions it developed was small, low-cost ‘nudges’ that focus on making subtle changes to people’s environments. For example, loft insulation helps reduce energy waste, but few people were installing it. When households were offered low-cost labor to clear their lofts, however, the number of loft insulation installations increased fivefold.

Nudges may only impart a slight change in human behavior, but they are cheap to implement and, when used on a large scale, can still have a significant impact. Nudges are contrasted with traditional government levers for behavior change, which include vast mechanisms for interfering in economic and ecological systems, many of which are much more resource and cost intensive.

Recently, Irish Prime Minister Leo Varadkar adopted the ‘BIT approach’ and argued that the government’s pathway to zero emissions by 2050 was to ‘nudge people and businesses to change behavior and adapt new technologies through incentives, disincentives, regulations and information’ (The Irish Times, June 17th 2019).

We may all agree that climate change is real, and that ‘something needs to be done’ about it, but the questions here are: what role should social scientists play in this discussion, and should they get involved with politics at all?

Proponents could argue that the social sciences have a moral obligation to invent instruments that help drive human behavior in the right direction. Opponents could argue that nudging does not involve consent: people are ‘gently pushed’ in a certain direction and may not even be aware of it. They may contend that it’s not up to the social sciences to invent instruments that steer people’s behavior; it should be up to each individual to make their own decisions.

Where do you stand in this debate?

4.2 Ethos or Arena?

The question that emerges at the end of this chapter is whether these Mertonian guidelines (communalism, universalism, disinterestedness, organized skepticism, and originality; CUDOS) still meet our expectations. As we indicated in our brief history of the social sciences, much has changed in the past two or three decades. So then, what is the value of scientific knowledge, however ‘true’, if it isn’t supported by politicians, policymakers, or other stakeholders?

Social science in the early twenty-first century feels like a battle in an arena. And this arena is anything but a level playing field. It is populated not only by professional and highly competitive scientists, but also by networks of stakeholders, financers, managers, journalists, and career officers, all operating with their own interests in mind.

These developments, while grossly oversimplified here (we pay greater attention to some of them in later chapters), clearly threaten to undermine science’s fundamental commitment to an unbiased, systematic, and methodical understanding of the world.

True, science’s mission has remained largely the same throughout the past few centuries, but that doesn’t mean it ever was self-evident, nor is it today. To accomplish its goals, science must relate to society, adapting to its needs and wants, its political, and even commercial pressures. At the same time, it must seek to retain its integrity and autonomy. And for this, we need ethical reflection.

5 Conclusions

5.1 Summary

This chapter started out with a definition of science’s original mission: to understand the world without prejudice or bias, free from preconceived ideas or dogmas. From this, an accompanying obligation was derived: to be skeptical towards one’s own ideas and the ideas of others.

A brief summary of the history of science and the history of the social sciences outlined a picture of a field still in development. A view of universities as learning communities was contrasted with one of universities as communities of practice. Furthermore, a constructivist perspective, in which students are regarded as active co-constructors of knowledge, was proposed. It was argued that being immersed in academic education implies a process of gradual mastery and increasing ownership, in which reflexivity plays a crucial role.

Finally, Merton’s ‘institutional imperatives’ (CUDOS) were discussed: communism, universalism, disinterestedness, and organized skepticism, later extended with originality, as well as how they extend into scientific practices such as peer review procedures.

5.2 Discussion

We investigated the ‘parameters’ of scientific practice, and we questioned the conditions that enable it. We found that universities as institutions have changed since their early days, though at its core the mission of the scientist has remained the same: to understand and explain the world, free from bias and dogmas. We further identified one value that we considered pivotal for science to fulfil its mission: to remain autonomous, at least to some degree. One question we did not ask: how will science succeed in remaining autonomous? The answer can be found in the subject matter of this book: its ethics.