Science and Engineering Ethics, Volume 19, Issue 3, pp 685–701

Research Integrity and Everyday Practice of Science

Original Paper


Science traditionally is taught as a linear process based on logic and carried out by objective researchers following the scientific method. Practice of science is a far more nuanced enterprise, one in which intuition and passion become just as important as objectivity and logic. Whether the activity is committing to study a particular research problem, drawing conclusions about a hypothesis under investigation, choosing whether to count results as data or experimental noise, or deciding what information to present in a research paper, ethical challenges inevitably will arise because of the ambiguities inherent in practice. Unless these ambiguities are acknowledged and their sources understood explicitly, responsible conduct of science education will not adequately prepare the individuals receiving the training for the kinds of decisions essential to research integrity that they will have to make as scientists.


Keywords: Responsible conduct of research · Science education · Science policy · Philosophy of science


In 2002, the National Academies Institute of Medicine (IOM) published a report called Integrity in Scientific Research (National Academies—Institute of Medicine 2002). The IOM committee wrote that “Integrity in research embraces the aspirational standards of scientific conduct rather than simply the avoidance of questionable practices.” More than mere compliance, understanding and commitment to research integrity should be an institutional output along with the research itself (Figure 3.1 in the IOM report). Education in responsible conduct of research (RCR) was viewed by the IOM committee as the essential means to encourage research integrity.

RCR training in the United States began in 1989 when the National Institutes of Health (NIH) announced that NIH training programs should teach the principles of scientific integrity as an integral part of training efforts (U.S. Department of Health and Human Services 1989). For many years, most details regarding how to conduct RCR education and what subjects to cover were left to the institutions providing the instruction. Beginning in 2011, NIH provided expanded guidance concerning format, overall subject matter, faculty participation, duration and frequency of RCR instruction (National Institutes of Health 2011). Beginning in 2010, the National Science Foundation (NSF) also introduced an RCR training requirement for undergraduates, graduate students, and postdoctoral fellows receiving NSF support (National Science Foundation 2010). The new NIH and NSF requirements strongly encourage institutions to make RCR instruction part of the core educational curriculum rather than an ancillary component.

This essay concerns the fundamental orientation of RCR education. The perspective presented is that research integrity should be taught in the context of everyday practice of science (Grinnell 1992, 2009). Consider the following example that will be discussed later in detail. Research papers frequently contain only a small portion of the data collected. Moreover, the data presented often are arranged to tell the best story even though doing so is historically inaccurate. What are students to think—what do they learn—when they realize these features? Is the research paper honest? In an intellectual sense, yes, and consistent with the conventions of everyday practice. But in an absolute sense, the paper is false.

The 2010 Singapore Statement on Research Integrity begins with the principle Honesty in all aspects of research (2nd World Conference on Research Integrity 2010). Such aspirational documents about research integrity as well as RCR courses based on theory of science will not by themselves adequately prepare the individuals receiving RCR training for the kinds of decisions essential to research integrity that they will have to make as scientists. Because of ambiguities inherent in everyday practice such as the example of research papers mentioned above, ethical challenges will arise in every aspect of research. RCR education that fails to acknowledge these ambiguities and make explicit their sources loses its relevance for the students receiving the training.

In what follows, I begin by contrasting everyday practice with the textbook and linear models of science. Then, I discuss the problem of objectivity in science given the influence of personal biography on what one does and how one does it. Subsequent sections will focus on the two central practices of science, discovery and credibility. Discovery means learning new things about the world. Credibility means convincing others of the correctness of what one has learned. Final sections will deal with the sociopolitical environment in which research is conducted and with the question of whether RCR education should begin earlier in the science curriculum.

Everyday Practice of Science

Towards the beginning of his classic work Against Method, Paul Feyerabend wrote,

The history of science will be as complex, chaotic, full of mistakes, and entertaining as the ideas it contains, and these ideas will be as complex, full of mistakes, and entertaining as are the minds of those who invented them. Conversely, a little brainwashing will go a long way in making the history of science duller, simpler, more uniform, more ‘objective’ and more easily accessible to treat by strict and unchangeable rules… Science education as we know it today has precisely this aim. (Feyerabend 1975)

Science comes in three versions: textbook, linear model, and everyday practice. From the first two versions, one can learn a lot about the theory of science but not so much about what actually happens in the laboratory or the field. Textbooks offer facts with little possibility to understand their origins. As Harvard University president James Conant commented in Science and Common Sense:

The stumbling way in which even the ablest of the scientists of every generation have had to fight through thickets of erroneous observations, misleading generalizations, inadequate formulations, and unconscious prejudice is rarely appreciated by those who obtain their scientific knowledge from textbooks. (Conant 1951)

The linear model offers a description of science in which the path from hypothesis to discovery follows a direct line guided by objectivity and logic as if the facts were waiting to be observed and collected by dispassionate researchers. The linear model exemplifies how scientists communicate with each other when they make their research findings public in papers and lectures. Formulated as the scientific method, the linear model provides the traditional focus of science education.

“It is commonly believed,” wrote Harold Schilling—introducing the image of science as cranking out discoveries,

that science is a sort of intellectual machine, which, when one turns a crank called ‘the scientific method,’ inevitably grinds out ultimate truth in a series of predictably sequential steps, with complete accuracy and certainty (Schilling 1958).

Schilling’s description exactly fits the expectation that arises from science education based on the linear model.

Everyday practice focuses on what actually happens in the conduct of research. Rather than textbooks and research papers, the best place to find this version of science is in scientific memoirs and other historical accounts. When Sir Peter Medawar reviewed Jim Watson’s memoir The Double Helix, Medawar commented on how the book offered key insights about the practice of science,

No layman who reads this book with any kind of understanding will ever again think of the scientist as a man who cranks a machine of discovery. No beginner in science will henceforward believe that discovery is bound to come his way if only he practices a certain method, goes through a certain well-defined performance of hand and mind. (Medawar 1968)

Reading The Double Helix, one quickly learns that the path to discovery is anything but linear, and that the researchers involved are anything but disinterested.

Biography and Personality

Research programs vary in focus from highly descriptive to mathematical and theoretical. They range in size from one or several individuals working in a single laboratory to hundreds of collaborators interacting world-wide. Whatever the focus and size of the research program, responsible conduct of research begins with the individual investigator.

Thomas Kuhn is best known for his book The Structure of Scientific Revolutions in which he makes a distinction between normal and revolutionary science and describes paradigms as community-shared sets of beliefs and acceptable ways of problem solving that guide practice during periods of normal science (Kuhn 1962). Although less well appreciated, Kuhn also emphasized that beyond values shared by the community, scientific judgment by individuals depends on biography and personality (Kuhn 1979). The interface between prevailing beliefs of the community and individual biography and personality always will be the starting point for subsequent work. Others have reached similar conclusions and offered operational terms for the influence of biography and personality, e.g., schemata (Piaget 1970), thematic presuppositions (Holton 1973), and thought styles (Fleck 1979).

The term thought style comes from Ludwik Fleck’s book Genesis and Development of a Scientific Fact (Fleck 1979), which traces the transformation of beliefs about syphilis from the 15th to 20th century. Fleck described a researcher’s thought style as the cluster of education, experience, temperament, and life situation that establishes the particular standpoint from which the person approaches the work at hand. Here, intuition blends with logic; conviction blends with skepticism.

At every step of the discovery and credibility process, the thought style will influence what the person experiences, how experience is interpreted, and what actions are taken in response. These actions include making decisions about what research problems to study; how to design experiments; and how to distinguish between data and noise.

Decisions produce commitments. For instance, deciding to study a particular research question takes for granted:
  • That prior research was somehow incomplete or incorrect leaving behind an unanswered question to be investigated.

  • That adequate methodological, infrastructure, personnel, and financial resources are available to answer the question.

  • That finding the answer will be worth the effort.

Because resources of time, money and personnel are limiting, carrying out one project almost always means that something else will not be accomplished. What if the investigator’s starting assumptions are wrong, i.e., the problem already has been solved by others; or finding a solution is beyond the abilities of the research group; or even if the problem is solved successfully, the community thinks it is unimportant? Being wrong ultimately can result in failure to achieve the accomplishments that advance one’s career as a scientist. Recognizing the influence of biography and personality makes it clear why researchers can never be objective and dispassionate in the way imagined by the linear model of science. Indeed, given the context of biography and personality, one wonders—what is the source of objectivity in science?


Discovery

Creativity, originality, novelty—these words reflect the gold standard of science. The goal of discovery is to be first to know something new about the world—to go where no one has gone before. By contrast, researchers usually find little reward (and therefore little incentive) in simply replicating and confirming what others already have done. Recognition and fame are symbols of success in carrying out new-search, not re-search. But achieving new-search is not easy. Indeed, creative thinking is hard to teach and rarely part of the science education curriculum (DeHaan 2011).

In everyday practice, the path to discovery frequently is convoluted with lots of dead ends. Failure is frequent. The pressure to produce is great. Why is discovery so hard? The Greek philosopher Plato argued that discovery is not just difficult, but impossible! In the Dialogues, Meno asks Socrates:

How will you look for it, Socrates, when you do not know at all what it is? How will you aim to search for something you do not know at all? If you should meet with it, how will you know that this is the thing that you did not know?

Socrates answers:

I know what you want to say, Meno…that a man cannot search either for what he knows or for what he does not know. He cannot search for what he knows – since he knows it, there is no need to search – nor for what he does not know, for he does not know what to look for. (Plato 380 B.C.E.)

The exchange between Meno and Socrates points to the paradox of discovery. If an investigator searches for and finds what he already knows and can recognize, then nothing new has been discovered. Discovery requires searching for what is beforehand unknown and not yet recognizable, but how is that possible?

Plato’s paradox captures the problem that every researcher encounters. The already known and expected can act as an impediment to discovery by constraining investigators from seeing and thinking anything more. Claude Bernard, one of the first experimental physiologists, emphasized the foregoing difficulty in his classic 1865 work, An Introduction to the Study of Experimental Medicine.

Men who have excessive faith in their theories or ideas are not only ill prepared for making discoveries; they also make very poor observations. Of necessity, they observe with a preconceived idea, and when they devise an experiment, they can see, in its results, only a confirmation of their theory. In this way they distort observations and often neglect very important facts because they do not further their aim. (Bernard 1957)

For Plato, discovery is an event with two outcomes: “It’s one of them,” or “I didn’t see anything.” In practice, discovery occurs as a process. Through this process, new things can be noticed even if they are not understood at the moment: “It looks like one of them, but I’m not sure,” or “I don’t know what it is; I’ve never seen one before.” Noticing what does not fit into one’s expectations becomes the starting point. Learning to see how after all it does fit becomes the discovery.

Sometimes, one notices and learns something new without being explicitly aware of having done so. “We know more than we can tell,” said Michael Polanyi, describing the tacit dimension of knowledge (Polanyi 1983). Discovery results from getting in touch with that tacit knowledge. Commenting on the 50th anniversary of the discovery of the Operon in gene regulation, Nobel Laureate François Jacob wrote that a moment occurred when he sensed “in a flash” the relationship between the research going on at the two ends of the corridor where he worked at the Pasteur Institute in Paris (Jacob 2011).

Our breakthrough was the result of “night science”: a stumbling, wandering exploration of the natural world that relies on intuition as much as it does on the cold, orderly logic of “day science.”

Frequently, it is not the individual alone, but the individual engaged with others that leads to recognition of what has been learned. Fleck describes these interactions in terms of an expanding community of thought styles.

Thoughts pass from one individual to another, each time a little transformed, for each individual can attach to them somewhat different associations. Strictly speaking, the receiver never understands the thought exactly in the way the transmitter intended it to be understood. After a series of such encounters, practically nothing is left of the original content. Whose thought is it that continues to circulate? (Fleck 1979)

Disagreements about “whose thought is it” can easily lead to disputes regarding priority of discovery and authorship.

Not only do we know more than we can tell, but also we do more than we intend. In this case, noticing new things is made possible by unintended experiments. Nobel Laureate Max Delbrück facetiously called doing more than we intend the principle of limited sloppiness (Hayes 1982). By sloppiness, Delbrück did not mean technical error, although the history of science shows that important discoveries sometimes occur through technical error. Rather, Delbrück was commenting on the openness of experimental design. Because knowledge is limited and researchers do not know exactly what they are looking for, experimental design can result in unexpected outcomes. The decision to study old problems with new technologies is a useful strategy to increase opportunities to notice something new (de Solla Price 1983). Once noticed, unanticipated outcomes can become the first step towards important new discoveries.

Charles Peirce gave the name abduction, in contrast to deduction or induction, to the logic of discovery by unintended experiments. Neither deduction nor induction, he argued, could result in any new ideas (Peirce 1958). Peirce formulated the logic of abduction as follows:

The surprising fact C is observed.

But if A were true, C would be a matter of course.

Consequently, there is ground to suspect that A is true.

According to Peirce’s analysis, experimental results can be thought of as pieces of a jigsaw puzzle. An unexpected result (C)—noticed because it is “surprising”—can be seen as fitting into a puzzle (A) that had not been under investigation at the time the experiment was planned and carried out. Noticing the new piece (C) opens the possibility of focusing on the new puzzle (A) in which the new piece appears to fit.


Experiments

As experiments proceed, they divide into three categories: heuristic, demonstrative, and—most common—failed. Heuristic experiments offer researchers new insights into the problem under investigation, including evidence that disproves a new hypothesis. Demonstrative experiments re-work heuristic findings, if necessary, into a form suitable for making discovery claims public. Failed experiments arise when results are inconclusive or uninterpretable, which may occur for many reasons including technical errors, uncertain methods, or poor study design. Success in science frequently depends on turning failed experiments into new starts.

Carrying out any experiment requires guessing what will be the outcome. The guess becomes the basis for study design. Because the answer is not known in advance (otherwise why do the experiment—Plato’s paradox revisited), every experiment tests both the investigator’s explicit hypothesis about how things are and an implicit hypothesis about the type of methodology and design adequate to answer the question under investigation. Finding the “right” answer always will be a work in progress.

In his memoir The Statue Within, François Jacob describes the failures that he and his colleagues encountered as they tried to demonstrate the existence of mRNA.

We were to do very long, very arduous experiments… But nothing worked. We had tremendous technical problems… Full of energy and excitement, sure of the correctness of our hypothesis, we started our experiment over and over again. Modifying it slightly. Changing some technical detail.

Our confidence crumbled. We found ourselves lying limply on a beach, vacantly gazing at the huge waves of the Pacific crashing onto the sand. Only a few days were left before the inevitable end. But should we keep on? What was the use? Suddenly, Sydney gives a shout. He leaps up, yelling, “The magnesium! It’s the magnesium!” Immediately we get in Hildegaard’s car and race to the lab to run the experiment one last time…Sydney had been right. It was indeed the magnesium that gave the ribosomes their cohesion. But the usual quantities were insufficient…This time we added plenty of magnesium. (Jacob 1988)

The hypothesis had been correct; the method used to test the hypothesis had been incorrect.

Karl Popper suggested that research advances by falsification of hypotheses (Popper 1959). Because every experiment tests both the hypothesis and the adequacy of the experiment to test the hypothesis, conclusions always will be open to interpretation and debate. One does not give up a good hypothesis just because the data do not fit, at least not at first. Popper’s notion of falsification might work for linear science, but in everyday practice, the significance of falsification is aspirational. Researchers should be open to the possibility of being wrong.

Besides the uncertainty of experimental methodology and design, a second aspect of “don’t give up the hypothesis just because the data don’t fit” concerns anomalous research findings. In The Structure of Scientific Revolutions (Kuhn 1962), Kuhn describes how anomalous findings accompany research during normal science and accumulate until their presence becomes so overwhelming that a crisis occurs, which is the point at which revolutionary science begins. However, during normal science, the natural tendency for researchers will be to overlook anomalous findings. Here is what Nobel Laureate Rita Levi-Montalcini wrote regarding her research that ultimately led to discovery of nerve growth factor, a key regulator of cell growth and development.

Even though I possessed no proof in favor of the hypothesis, in my secret heart of hearts, I was certain that the [cancerous] tumors that had been transplanted into the embryos would in fact stimulate [nerve] fiber growth.

The tumors stimulated fiber growth but, unexpectedly, Levi-Montalcini found a similar effect with normal tissue fragments. She described this observation as “the most severe blow to my enthusiasm that I could ever have suffered.”

After suffering the brunt of the initial shock at these results, in a partially unconscious way I began to apply what Alexander Luria, the Russian neuropsychologist has called ‘the law of disregard of negative information’… facts that fit into a preconceived hypothesis attract attention, are singled out and remembered. Facts that are contrary to it are disregarded, treated as exception, and forgotten. (Levi-Montalcini 1988)

If Levi-Montalcini had focused on the anomalous result, she might have abandoned the research project. Instead, she continued to believe in the unique importance of the factor in tumor biology. Years later, she returned to study the importance of nerve growth factor with normal tissues.

The foregoing examples turned out to be success stories for the researchers involved. However, sticking to one’s hypothesis in the face of contrary data or ignoring anomalous data is risky business. The challenge is to be aware of the choices that one is making and to know when to give up a hypothesis even though it is a good one. Error is common. Failure is frequent.

Research Papers

The adage publish or perish concerns both researchers and the discoveries that they make. When researchers do not publish, they put their careers at risk. When discoveries are not made public, it is as if the work was never performed.

Research papers provide the formal mechanism by which investigators make public the details of their discovery claims. Papers typically describe the small collection of demonstrative experiments carried out, with the much larger set of failures omitted. It would not be unusual for ten research notebooks’ worth of experiments to become a ten-page research paper.

As the research proceeds, investigators and co-workers make judgments about experimental outcomes and decide what results count as data versus experimental noise. Unlike high school and college science experiments, no one knows the right answer in advance. Heuristic principles can help distinguish data from noise but rarely are sufficient. Given the uncertainty, experience and intuition (biography and personality) become equally important.

In any particular case, the way that results are selected and used by one investigator might appear self-serving and inappropriate to another. As the National Academies 1992 report Responsible Science comments, “The selective use of research data is another area where the boundary between fabrication and creative insight may not be obvious.” (National Academies Panel on Scientific Responsibility and the Conduct of Research 1992)

In his work describing the charge on the electron, Nobel laureate Robert A. Millikan based his calculations on 58 out of 140 oil drop experiments, which he called the “golden events” (Holton 1973). In reporting his findings, Millikan said that he included “all” the data in making his calculation. The discrepancy (58 vs. 140) led the Sigma Xi Research Society in its pamphlet Honor in Science to describe Millikan’s published work as “one of the best known cases of cooking” (i.e., falsifying data by unrepresentative selection; Jackson 1984).

Several years after Honor in Science was published, Sigma Xi awarded its annual McGovern Science and Society Award to physicist David Goodstein. Goodstein entitled his award lecture “In the case of Robert Andrews Millikan.” He argued that the 58 drops selected by Millikan were all of the drops that fit the criteria Millikan had used to distinguish which results counted as data. The other drops did not count for one reason or another (Goodstein 2001).

In addition to presenting only a selected set of data, research papers also typically rewrite history to present a logical and internally consistent account of the studies. Just as failed experiments are omitted, so will be failed hypotheses that have been discarded and older experiments at one time believed to be demonstrative but reinterpreted or discarded in light of later findings.

The paper published in Nature in which Jacob and co-workers describe the evidence for mRNA (Jacob 1988) offers a much different account from that found in Jacob’s memoir. Instead of “sure of the correctness of our hypothesis,” one reads early on, “A priori, three types of hypothesis may be considered to account for the known facts of phage protein synthesis…” Then comes a logical discussion of how these three types of hypotheses might be distinguished. When the subject turns to magnesium,

The bulk of the RNA synthesized after infection is found in the ribosome fraction, provided that the extraction is carried out in 0.01 M magnesium ions¹². Lowering of the magnesium concentration in the gradient, or dialyzing the particles against low magnesium, produces a decrease of the B band and an increase of the A band. At the same time, the radioactive RNA leaves the B band to appear at the bottom of the gradient.

The beach and Sydney’s shout are gone. The citation to reference 12 makes it appear as if the experiments had been carried out from the beginning with high (0.01 M) magnesium as recommended previously by others. The summer of “But nothing worked. We had tremendous technical problems.” becomes an intentional control experiment showing the result of lowering the magnesium concentration.

“Writing a paper” writes Jacob,

is to substitute order for the disorder and agitation that animate life in the laboratory… To replace the real order of events and discoveries by what appears as the logical order, the one that should have been followed if the conclusions were known from the start. (Jacob 1988)

The paper converts the process of discovery into an announcement of a discovery claim, a scientific short story whose plot is none other than the scientific method.

In an essay called Is the scientific paper a fraud? Sir Peter Medawar complained that research publications distort science—“a totally mistaken conception, even a travesty, of the nature of scientific thought.” One cannot learn from the publications the “adventures of the mind” leading researchers to make their discoveries (Medawar 1963). While RCR education would benefit from teaching about the adventures of the mind leading to discoveries, the function of research papers is not to teach about the nature of scientific thought but rather to make it as easy as possible for new discovery claims to be understood and used.

Notwithstanding his essay, Medawar was no different from everyone else when it came to his own practice. Rupert Billingham, Medawar’s younger collaborator in the work establishing the field of transplantation immunology for which Medawar won the Nobel Prize, published an autobiographical essay containing a section called “The most important lecture I’ve ever attended.”

In 1947 Medawar gave a joint paper on our work… It was obvious to me that in this lecture the only items that were sacrosanct or inviolable were actual factual observations. Hypotheses could be invented or rejected at will, and the chronology of the experiments conducted and the reasons for embarking upon them could be altered to make the best possible story.

In a subsequent session of this symposium Dr. J. H. Woodger gave what amounted to a lecture on scientific method. I was amused when he cited Medawar’s contribution as a model of its kind, exemplifying how an investigation should be tackled. Obviously, it had never occurred to Woodger that Medawar’s narrative represented a gross travesty of the true history of the project. (Billingham 1974)

If one is looking for a historically accurate picture of a particular piece of research, then research notebooks would be the only potential source of information. Because research publications contain only a representative selection of data and present the findings in a logical rather than historically accurate fashion, keeping and preserving a historical record is essential for research integrity. Existence of the detailed notebook historical record allows researchers potentially to share all of their data with others and to explain and justify their choices if asked to do so.


Trust

Discussions of research integrity frequently emphasize the importance of trust. For instance, the aspirational code of ethics of the American Society of Biochemistry and Molecular Biology begins,

Members … are engaged in the quest for knowledge … with the ultimate goal of advancing human welfare. Underlying this quest is the fundamental principle of trust. The [society] encourages its members to engage in the responsible practice of research required for such trust by fulfilling the following obligations. (American Society of Biochemistry and Molecular Biology 1998)

Trust between investigators plays a central role in the discovery and credibility processes. It begins in each research group, where a necessary feature for normal functioning is that investigators trust each other to do what they say and to get the results that they describe. When multiple groups are involved in a collaborative research project, the degree of trust required increases with the physical and disciplinary distances between them.

When investigators make their work public in the form of scientific manuscripts or submitted grant applications, reviewers trust that the completed work or preliminary studies were carried out as described and led to the results reported. Researchers, on the other hand, trust that reviewers of their submitted manuscripts and research grant proposals will not misuse the information that they learn. Here the situation becomes tricky. The contents of submitted manuscripts and grant applications are privileged and confidential, but reviewers will not be able to unlearn what they have learned during the review process. Moreover, the best peer-reviewers often are those most knowledgeable about a subject, who have the most to gain by advance knowledge of others’ work and ideas. Scientific peer-review resembles a bizarre version of poker in which competitors show each other their cards for analysis and comment but expect that everyone will continue to play their own cards unaffected by what has been seen.

The National Academies defines conflict of interest as:

any financial or other interest which conflicts with the service of the individual because it (i) could significantly impair the individual’s objectivity or (ii) could create an unfair competitive advantage for any person or organization. (The National Academies 2003)

According to part (ii), the scientific peer-review system is inherently conflicted. Yet the rigor of peer review generally is credited with the success of contemporary science.


Once a discovery claim is made public, the credibility process can begin. Because scientists bring biography and personality to their work, discovery claims can be no more than protoscience, inseparable from the subjectivity and ownership associated with the investigator or research group that makes the claim. For a discovery claim to become a scientific discovery, the researcher must turn towards the larger community.

Making a discovery claim public allows individual researchers to transcend their own subjectivity through intersubjectivity. Intersubjectivity is at the base of all social interactions. People live in a shared world and experience the world in similar ways. If two individuals interchange places, then they will (more or less) see, hear, and think similar things, a reciprocity of perspectives (Schutz 1967). In science, intersubjectivity means that researchers will be able to verify and validate each other’s work if it is correct. Through the credibility process, the individual researcher’s existential me/here/now becomes the scientific community’s anyone/anywhere/anytime. Objective knowledge is the goal, not the starting point. The community rather than the individual provides the source of objectivity in science. Paraphrasing William James’ pragmatic conception of truth (James 1975),

Credible discoveries are those that we can assimilate, validate, corroborate, and verify… The credibility of a discovery is not a stagnant property inherent in it. Credibility happens to a discovery. It becomes credible, is made credible by events … Its verity is the process of verification. Its validity is the process of validation. (paraphrased)

That the credibility process goes on and on is the self-correcting feature of science. As a consequence, what was believed at first to be correct and important frequently turns out later to be wrong or overstated (Ioannidis 2005).

At the beginning of the credibility process, the attitude of the community towards discovery claims can be summed up by the inscription on the coat of arms of the Royal Society, Nullius in verba, which Sir Peter Medawar translated as Don’t take anybody’s word for it! Because of the novelty associated with discovery, intersubjectivity can act as a double-edged sword. Nobel Laureate Albert Szent-Györgyi characterized discovery as seeing what everybody else has seen and thinking what nobody else has thought. Szent-Györgyi’s idea is captured in René Magritte’s 1936 oil painting Perspicacity, which shows a seated artist staring at a solitary egg on a draped table but painting a bird in full flight on the canvas. The more novel a discovery claim, the more likely it is to challenge prevailing scientific beliefs, with the outcome of non-reciprocity of perspectives, which can lead, at least at first, to skepticism if not outright rejection.

The history of Nobel Prizes includes many examples of novel discoveries that were either ignored or disputed for years, e.g., tumor viruses (Nobel Prize in 1966), chemiosmotic theory (Nobel Prize in 1978), transposable elements (Nobel Prize in 1983), catalytic ribonucleic acid (Nobel Prize in 1989), and prions (Nobel Prize in 1997).

Nobel Laureate Barbara McClintock explicitly commented in her Nobel banquet speech about why her research went unaccepted for so long,

I have been asked, notably by young investigators, just how I felt during the long period when my work was ignored, dismissed, or aroused frustration…. My understanding of the phenomenon responsible for rapid changes in gene action… was much too radical for the time. A person would need to have my experiences, or ones similar to them, to penetrate this barrier. (McClintock 1983)

Ralph Steinman shared (posthumously) in the 2011 Nobel Prize in Physiology or Medicine. After he won the 2007 Lasker Prize for his work, the chair of the Lasker Committee wrote,

Why were Steinman’s early studies ignored, neglected and often denigrated by the immunological community? Longstanding dogma… made it easy for immunologists to brush aside Steinman’s experiments and ideas on dendritic cells, and to view them as some type of Victorian curiosity with little or no relevance to the mainstream of immunology. Fortunately, Steinman’s passionate belief in his data and his unshakable self-confidence propelled him forward despite the criticisms of his colleagues. (Goldstein 2007)

Ironically, when skepticism about novel discoveries causes one’s research findings to be ignored, neglected, and denigrated, success sometimes requires the individual to become an advocate, a passionate advocate, for the work. The challenge becomes how to advocate passionately for one’s work and yet remain intellectually honest. Of course, in the end the community might be right.

The Research Environment

While responsible conduct of research begins with the individual investigator, the overall research environment can exert a profound influence (National Academies—Institute of Medicine 2002). The complex intersection of society, government and research institution (academic, independent, industrial) creates the moral climate in which the individual does the work. In general, attitudes towards one’s surrounding moral climate influence individual behavior. Science is no different. Empirical evidence suggests that scientists’ perceptions of the research environment impact their commitment to research integrity (Martinson et al. 2006).

Political, economic and cultural factors exert an important influence on what science will be done, who will do it, and how the work will be financed. For instance, soft money support of US academic researcher salaries began in 1960, when the President’s Science Advisory Committee (PSAC) recommended that federal agencies provide support for basic research and graduate education. However, PSAC envisioned a block grant support mechanism and commented about “the need for avoiding situations in which a professor becomes partly or wholly responsible for raising his own salary” (President’s Science Advisory Committee 1960). Contrary to PSAC’s recommendation, what has evolved is a system in which it is precisely the professors who are now responsible for raising their own salaries. Success in doing so can influence getting a job, keeping a job, and keeping one’s salary even if one’s job is tenured.

Linking a researcher’s position and salary to the ability to win external research funding creates potential conflicts of interest and commitment. Satisfying the demands of one’s external funding review panel can become more important than satisfying the internal demands of the university or research center such as teaching, clinical practice and service. Because graduate students and postdoctoral fellows frequently also are supported by research grants, they straddle the line between employee and trainee. What is in the best interests of a trainee’s education may not be in the best interests of research productivity.

In today’s environment, the grant-supported research group increasingly resembles a small business. The principal investigator (owner/operator) generates grants and contracts (income) to support students, postdoctoral fellows and other staff (employees) who produce papers and patents (products) described in seminars and scientific conferences (advertising) to the scientific community (potential buyers). Acting as small business owners, scientists take on new roles beyond research director. They become personnel and business managers, activities that come with their own ethical challenges. And when principal investigators transition from scientists to business entrepreneurs, patent and prosper replaces the traditional goal of publish or perish (Schachman 2006).

Should RCR Education Begin Earlier in the Science Curriculum?

The overarching aim of this essay has been to show the importance of orienting RCR education towards everyday practice rather than theory of science. The examples presented demonstrate that in every aspect of research, ethical challenges inevitably will arise because of the ambiguities of practice. One way to help teach about these ambiguities would be to incorporate scientific memoirs and other historical accounts of science into RCR education. A separate issue, raised in this final section, is whether RCR education should begin earlier in the science curriculum in connection with activities such as science laboratory experiments and science fair.

In a chemistry laboratory with an unknown to investigate, getting anything close to the predicted result sometimes is impossible regardless of how carefully the student works, given time constraints and the quality of reagents. Yet getting the “right” result often determines the grade. As a result, writing a lab report based on the calculated amount of end product, not the amount actually determined, sometimes becomes the path to success (B. Fisher, Survival skills and ethics program, University of Pittsburgh, Personal communication, 2011). Does this sort of chemistry lab experience encourage students to falsify data?

In the physics laboratory, pendulum experiments exemplify the potential for ambiguity. A National Academies study, America’s Lab Report: Investigations in High School Science, describes the problem as follows,

[W]hen discussing a pendulum in class, a physics teacher may ignore without discussion a host of variables that may affect its operation. However, when a student starts doing a simple experiment with a pendulum, these variables suddenly become relevant… The student may feel betrayed by the apparent mismatch between the neatness of a phenomenon as presented in a textbook and the inherent messiness and ambiguity of the same phenomenon encountered in the laboratory. (National Academies National Research Council 2006)

So how are these difficulties managed?

To reduce the potential confusion and to help students attain one goal—mastery of subject matter—a typical high school pendulum activity is “cleaned up.” This activity is designed to guide students toward making observations that will verify the accepted scientific principle that the period of a pendulum (the time it takes to swing out and back) depends on the length of the string and the force of gravity. It focuses only on science content. (National Academies National Research Council 2006)
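The “accepted scientific principle” that such a cleaned-up activity is designed to verify is the standard small-angle pendulum formula, relating the period T to the string length L and the gravitational acceleration g:

```latex
% Small-angle approximation: the period depends only on string length and gravity,
% and is independent of the bob's mass.
T = 2\pi \sqrt{\frac{L}{g}}
```

Notably, the formula itself embodies the cleanup: it holds only when swing amplitude is small, air resistance is negligible, and the string is effectively massless, precisely the variables the student’s messy apparatus refuses to ignore.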

Similar to the chemistry laboratory, the emphasis is on the right answer, not the right practice. Ironically, some research suggests that the inherent openness and ambiguity of science laboratory experiments, including opportunities to design one’s own messy experiments, have the potential to promote the kinds of critical thinking skills and collaborative problem solving that take place in everyday practice of science (e.g., Roth 1994; Etkina et al. 2010).

Science fair has a different set of problems. Studies in the education literature, albeit few in number, have demonstrated that outright scientific misconduct occurs in science fair, especially when student participation is compulsory. Pressed for time, having difficulty choosing a topic, or lacking resources that they feel they need, about one in five students makes up their data (Shore et al. 2007). A recent student-conducted survey of science fair research integrity at duPont Manual High School in Louisville, KY, reported that 65 % of the respondents admitted to falsifying their data. The survey also concluded that 20 % of the students had “abused the scientific method” by altering their hypothesis after finishing their study (Manoharan 2011). Altering one’s hypothesis in response to the data may be an abuse of the linear scientific method, but it is not an abuse of what researchers do in practice. Lumping together falsification of data with changing a hypothesis to fit the data reflects a mistaken impression of what doing science entails, just the sort of misunderstanding that RCR education carried out in connection with science fair might correct.

Earlier introduction of RCR education into the science curriculum should be considered for science laboratory experiments and science fair. Doing so could provide unique opportunities to clarify misimpressions on the part of students (and their teachers) about the nature and practice of science.


If science really were a linear process based on logic and carried out by objective observers following the scientific method, then aspirational documents about research integrity along with courses based on theory of science would be sufficient for RCR training. However, because practice of science is a more ambiguous enterprise, a more nuanced approach to research integrity education is required, one that acknowledges and makes explicit the ambiguities inherent in practice and the ethical challenges to which they give rise. Achieving research integrity requires creating a research environment that openly recognizes and engages these ethical challenges and makes explicit their sources.



Thanks to Mark Frankel, Kenneth Pimple, William Snell and Thomas Mayo for their helpful comments and suggestions regarding this essay.


  1. 2nd World Conference on Research Integrity. (2010). Singapore statement on research integrity.
  2. American Society of Biochemistry and Molecular Biology. (1998). Code of ethics.
  3. Bernard, C. (1957). An introduction to the study of experimental medicine (1865). New York, NY: Dover Publications, Inc.
  4. Billingham, R. E. (1974). Reminiscences of a “transplanter”. Transplantation Proceedings, 6, 5–17.
  5. Conant, J. B. (1951). Science and common sense. New Haven, CT: Yale University Press.
  6. de Solla Price, D. (1983). The science/technology relationship, the craft of experimental science, and policy for the improvement of high technology innovation. In National Science Foundation (Ed.), Role of basic research in science and technology. Washington, DC: U.S. Government Printing Office.
  7. DeHaan, R. L. (2011). Science education: Teaching creative science thinking. Science, 334, 1499–1500.
  8. Etkina, E., Karelina, A., Ruibal-Villasenor, M., Rosengrant, D., Jordan, R., & Hmelo-Silver, C. E. (2010). Design and reflection help students develop scientific abilities: Learning in introductory physics laboratories. Journal of the Learning Sciences, 19, 54–98.
  9. Feyerabend, P. (1975). Against method. New York: Verso.
  10. Fleck, L. (1979). Genesis and development of a scientific fact (1935). Chicago, IL: University of Chicago Press.
  11. Goldstein, J. L. (2007). Creation and revelation: Two different routes to advancement in the biomedical sciences. Nature Medicine, 13, 1151–1154.
  12. Goodstein, D. (2001). In the case of Robert Andrews Millikan. American Scientist, 89, 54–60.
  13. Grinnell, F. (1992). The scientific attitude (2nd ed.). New York, NY: Guilford Press.
  14. Grinnell, F. (2009). Everyday practice of science: Where intuition and passion meet objectivity and logic. New York: Oxford University Press.
  15. Hayes, W. (1982). Max Ludwig Henning Delbruck. Biographical Memoirs of the Fellows of the Royal Society, 28, 58–90.
  16. Holton, G. (1973). Thematic origins of scientific thought: Kepler to Einstein. Cambridge, MA: Harvard University Press.
  17. Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2, e124.
  18. Jackson, C. I. (1984). Honor in science. New Haven, CT: Sigma Xi, The Scientific Research Society.
  19. Jacob, F. (1988). The statue within. New York, NY: Basic Books Inc.
  20. Jacob, F. (2011). The birth of the operon. Science, 332, 767.
  21. James, W. (1975). Pragmatism’s conception of truth (1907). In Pragmatism and the meaning of truth (pp. 95–113). Cambridge, MA: Harvard University Press.
  22. Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.
  23. Kuhn, T. S. (1979). Objectivity, value judgement, and theory choice. In T. S. Kuhn (Ed.), The essential tension. Chicago, IL: University of Chicago Press.
  24. Levi-Montalcini, R. (1988). In praise of imperfection. New York, NY: Basic Books Inc.
  25. Manoharan, J. (2011, April 22). Scientific misconduct starts early.
  26. Martinson, B. C., Anderson, M. S., Crain, A. L., & de Vries, R. (2006). Scientists’ perceptions of organizational justice and self-reported misbehaviors. Journal of Empirical Research on Human Research Ethics, 1, 51–66.
  27. McClintock, B. (1983). Nobel banquet speech—December 10, 1983. In T. Frängsmyr (Ed.), Les Prix Nobel. Stockholm: Almqvist & Wiksell International.
  28. Medawar, P. B. (1963). Is the scientific paper a fraud? The Listener (September 12), pp. 377–378.
  29. Medawar, P. B. (1968). Lucky Jim. The New York Review of Books, March 28, 1968.
  30. National Academies National Research Council. (2006). America’s lab report: Investigations in high school science. Washington, DC: National Academies Press.
  31. National Academies Panel on Scientific Responsibility and the Conduct of Research. (1992). Responsible science: Ensuring the integrity of the research process. Washington, DC: National Academies Press.
  32. National Academies—Institute of Medicine. (2002). Integrity in scientific research: Creating an environment that promotes responsible conduct. Washington, DC: National Academy Press.
  33. National Institutes of Health. (2011). Update on the requirement for instruction in the responsible conduct of research.
  34. National Science Foundation. (2010). Chapter IV—grantee standards; Part B. Responsible Conduct of Research (RCR).
  35. Peirce, C. S. (1958). Harvard lectures on pragmatism (1903). In C. Hartshorne, P. Weiss & A. Burks (Eds.), Collected papers of Charles Sanders Peirce (Vols. 1–6, 5, pp. 188–189). Cambridge, MA: Harvard University Press.
  36. Piaget, J. (1970). Genetic epistemology (E. Duckworth, Trans.). New York, NY: W.W. Norton & Co.
  37. Plato. (380 B.C.E.). Meno 80 d-e.
  38. Polanyi, M. (1983). The tacit dimension (1966). Gloucester, MA: Peter Smith Publishers.
  39. Popper, K. R. (1959). The logic of scientific discovery. New York, NY: Basic Books Inc.
  40. President’s Science Advisory Committee. (1960). Scientific progress, the universities, and the federal government. Washington, DC: U.S. Government Printing Office.
  41. Roth, W.-M. (1994). Experimenting in the constructivist high school physics laboratory. Journal of Research in Science Teaching, 31, 197–223.
  42. Schachman, H. K. (2006). From “publish or perish” to “patent and prosper”. Journal of Biological Chemistry, 281, 6889–6903.
  43. Schilling, H. K. (1958). A human enterprise. Science, 127, 1324–1327.
  44. Schutz, A. (1967). The phenomenology of the social world (G. Walsh & F. Lehnert, Trans.). Evanston, IL: Northwestern University Press.
  45. Shore, B. M., Delcourt, M. A. B., Syer, C. A., & Schapiro, M. (2007). The phantom of the science fair. In B. M. Shore, M. W. Aulis, & M. A. B. Delcourt (Eds.), Inquiry in education, volume II: Overcoming barriers to successful implementation. New York, NY: Routledge.
  46. The National Academies. (2003). Policy on committee composition and balance and conflicts of interest for committees used in the development of reports (May 12, 2003).
  47. U.S. Department of Health and Human Services. (1989). Requirement for programs on the responsible conduct of research in national research service award institutional training programs.

Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  1. Department of Cell Biology, Program in Ethics in Science and Medicine, UT Southwestern Medical Center, Dallas, USA
