
1 Introduction

1.1 The Expert’s Sin

If plagiarism is the crime of the novice, then fabrication is the expert’s specialty. Unlike ‘falsifying’ (the subject of the next chapter), which is the deliberate misrepresentation of data, fabrication involves the presentation and reporting of fake or non-existent research procedures, data, and findings. Fabrication is a form of cheating. It turns science upside down: it starts, rather than ends, with the answer to a question.

Fabrication probably occurs less frequently than plagiarism, but it is a much more serious form of misconduct. In a systematic review of the literature on the prevalence of scientific misconduct, Daniele Fanelli (2009) found that 2% of scientists admit to serious forms of misconduct, such as fabricating or modifying data, at least once. Additionally, 14% of respondents observed this misconduct in colleagues. The discrepancy between these findings, in which people perceive themselves to be more honest than their peers, is known as the ‘better-than-average effect’ (see Festinger 1954).

There is another bias in these figures. Self-reporting tends to underestimate the real frequency of scientific misconduct, so the incidence of fraud may be higher than we know. This triggers one’s imagination, spawning a number of questions: How many more cases actually exist that just haven’t been discovered yet? How likely is it that fabrication is eventually discovered (unlike with plagiarism, there is no ‘fabrication detection software’ available)? How vulnerable are the social sciences to this type of fraud? Or is it more inherent to certain research environments? As Jennifer Crocker (2011) notes, fraud ‘starts with a single step,’ an observation relevant to the infamous case of Diederik Stapel, which we will discuss in our case study.

While the take-home message of this chapter is that we must arm ourselves against these forms of fraud, we should also realize that the dividing line between proper and fraudulent behavior can be thin. Many fraudsters start their criminal careers with small transgressions that gradually increase in scale, especially when no one stops them in their tracks.

In this chapter, we will explore three specific forms of fabrication: forgery, cheating, and ghostwriting. We will then discuss the factors that facilitate fabrication, concluding with an examination of institutional counterstrategies.

2 Forgery

2.1 The Manufacturing of Science

The invention of complete datasets and the fabrication of entire cohorts of respondents and their responses may be more difficult to accomplish than it appears at first sight. A ‘successful’ fraud must not only know what ‘good results’ are but must also know how to make data convincingly corroborate conclusions. How does forgery work? What are its tell-tale signs? And what happens once the fraud is exposed?

2.2 Telltale Signs of Fraud

Diederik Stapel, a prolific writer and charismatic figure in social psychology in the Netherlands, succeeded in conning many of his colleagues in what is considered one of the greatest cases of fraud in the social sciences. He was exposed after three junior colleagues found his findings suspicious. The affair created a shockwave throughout the world of social psychology, leading to what has been called a ‘crisis of confidence’ among the public.

Were there any tell-tale signs in Stapel’s publications that indicated fraud? A commission that later investigated his work found sloppy mistakes and ‘unbelievably high factor loadings’ (a statistical term understood as an indication of an item’s relative importance).

The question was raised as to why peer reviewers had never noticed his fraud. Interestingly, the tell-tale signs of fraud were revealed in a linguistic analysis of his work, in which Stapel’s fraudulent studies were compared with his genuine work. The fraudulent writing contained ‘significantly higher rates of terms related to scientific methods and empirical investigation,’ suggesting that fraudulent papers involve an ‘overproduction of scientific discourse’ (Markowitz and Hancock 2014, p. 2). In other words, Stapel’s studies were not only too good to be true, they were often also too wrapped up in scientific jargon to be true.
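To give a rough sense of how such a linguistic comparison works, the sketch below computes the rate of ‘method-related’ terms in two toy text samples. This is a minimal illustration under our own assumptions, not Markowitz and Hancock’s actual procedure: the word list METHOD_TERMS and the example sentences are hypothetical, and the real analysis relied on validated dictionaries and statistical controls.

```python
# Illustrative sketch only: a crude version of the kind of comparison
# described above, NOT Markowitz and Hancock's method or word lists.
import re

# Hypothetical lexicon of method-related terms (for illustration only)
METHOD_TERMS = {"sample", "participants", "measured", "procedure",
                "experiment", "manipulated", "condition", "analysis"}

def method_term_rate(text: str) -> float:
    """Share of words in a text that belong to the method-related lexicon."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in METHOD_TERMS for w in words) / len(words)

# Hypothetical snippets standing in for genuine vs. suspect writing
genuine = "We asked students about their preferences and discussed the results."
suspect = ("Participants in each condition were measured following a strict "
           "procedure, and the experiment manipulated the sample directly.")

print(f"genuine-style rate: {method_term_rate(genuine):.3f}")
print(f"suspect-style rate: {method_term_rate(suspect):.3f}")
```

A higher rate in the second snippet mirrors, in toy form, the ‘overproduction of scientific discourse’ the authors report.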

2.3 The Student Who Almost Got Away with It (Until Another Student Blew It)

The following story, reported by Jesse Singal in New York Magazine on May 29th 2015, gives us insight into the case of a student involved in data fabrication, and of another student who blew the lid off.

Michael LaCour was a political science student at UCLA (University of California, Los Angeles), who rose to fame in 2013 when he discovered information that contradicted everything that was then known about ‘canvassing.’ ‘Canvasses’ are short conversations in which one person attempts to persuade the other, often occurring during political campaigns. Typically, these forms of contact are known to have little to no lasting effect on an individual’s political ideals. That is, until LaCour claimed to have found that brief talks (lasting roughly 10 minutes) about marriage equality, with a canvasser who revealed during the chat that they are gay, had a significant, lasting effect on the voter’s views (as measured by an online survey administered before and after the conversation).

LaCour managed to get his results published in the prestigious journal Science (with senior co-author Donald Green). It instantly attracted nationwide attention. When LaCour discussed his work with David Broockman, a third-year political science grad student at Berkeley, the latter was so impressed that he sought to replicate the study. It wasn’t long before Broockman became suspicious. Not only did he fail to replicate the original findings, he also found irregularities in LaCour’s original data. They were ‘too orderly.’ When he subsequently contacted the firm that supposedly performed the surveys for LaCour, he learned that they had undertaken no such survey.

Broockman discussed his misgivings with Neil Malhotra, a professor at Stanford’s business school, who advised him not to blow the whistle in order to avoid possible repercussions. Broockman decided to come forward with his findings regardless, contacting LaCour’s co-author Green. Green confronted LaCour, who failed to alleviate any of his doubts. Thereupon Green requested that their paper be retracted (against LaCour’s wishes).

The story ended badly for LaCour. An offer to become an assistant professor at Princeton was rescinded. But Green too suffered repercussions, seeing a fellowship worth $200,000 fall through. Broockman, on the other hand, got a tenure-track professorship at Stanford University.

With the event behind him, Broockman spoke with Jesse Singal, the journalist who covered the case, reflecting on his experience as the whistleblower. Broockman compared it to what he went through when, as a teenager, he came out as gay. ‘Part of the message that I want to send to potential disclosers of the future is that you have a duty to come out about this, you’ll be rewarded if you do so in a responsible way […].’ (quoted in Singal 2015).

2.4 Whistleblowing

Whistleblowers such as Broockman fulfill an important but often risky and thankless role in science. Unlike in Broockman’s case, the outcome for many whistleblowers falls far short of a happy ending. Perhaps this is because in many cases, whistleblowers are in a vulnerable position (which was why Malhotra advised Broockman against it).

Consider the case of Saskia Vorstenbosch, a PhD student at the Leiden University Medical Centre (LUMC) in the Netherlands (the following details are drawn from reporting by De Vrieze, 2017).

Vorstenbosch worked with a cell biologist in the early 2000s. One day, while preparing a presentation, she discovered anomalies in a number of experiments performed by the biologist. She ‘started digging’ and found evidence that suggested some of the data had been ‘manipulated.’ After reexamining her findings, Vorstenbosch reported her suspicions to the head of the department, who was reluctant to start an investigation. The department head only initiated an investigation after Vorstenbosch insisted she would report the case with or without him.

After an 18-month investigation, the integrity commission at LUMC indeed found irregularities, but only in one of the biologist’s papers. Dissatisfied with this outcome, Vorstenbosch took the report to the integrity commission of the Dutch National Academy of Sciences (KNAW), which investigated the case more thoroughly and concluded that other forms of misconduct had taken place. The commission advised that four of the biologist’s articles be retracted. By this point, the researcher no longer worked in the Netherlands and managed to keep her name out of the press. No actions were taken against her, nor were any more of her publications withdrawn.

Tragically, Vorstenbosch, whose area of study was partly based on the biologist’s research, had only succeeded in undermining her own work, because her data were now also contaminated. She withdrew from science altogether even though LUMC offered her a new PhD trajectory. Speaking with a reporter about her experience, she reflected: ‘People don’t seem eager to undertake action [against fraud] because it might damage their own name. It’s true that I too have been damaged, but should that have been reason for me to say: I’ll leave it like that, I’ll just keep silent? Sure enough [after fraud is discovered] publications are going to be withdrawn, but you don’t want your name attached to something which you know is not true, do you?’ (Fig. 5.1)

Fig. 5.1
A sketch of a man's face with hair and a beard and 2 human shapes behind windows, one with outstretched arms and the other leaning with arms stretched in front.

The Whistleblower

2.5 Exposing Fraud

From cases such as these, two provisional conclusions can be drawn. The first is that scientific frauds oftentimes betray themselves, leaving traces of their misdeeds. That’s because fraudulent researchers mimic real research. However, since they work backward, from conclusions to data, their results are often unnaturally orderly. Ironically then, a successful fraud must build in imperfections and create small deviations from the expected outcome. This might actually involve more work than performing real research.

The second conclusion is that exposing fraud can prove to be surprisingly difficult. Fraudsters are often unwilling to hand over their material, so how is fabrication proven without this data? Regularly, when suspicion of misconduct does arise, seemingly valid excuses are produced to explain the lack of material: ‘it was a long time ago’; ‘the original data was destroyed’; or ‘my computer crashed’. These excuses are sometimes accompanied by appeals to authority, like ‘who are you to criticize a tenured researcher?’. Sometimes they even descend into outright threats, along the lines of ‘this will destroy your career.’ Facing these types of situations undoubtedly makes whistleblowing an unattractive, if not risky, undertaking (Box 5.1).

Box 5.1: Self-Correction: A Dilemma

A student reaches out for help on ‘r/AskAcademia’, a discussion platform on the website Reddit. The student writes: ‘I graduated three months ago and now my teacher wants to publish the paper. While most of the data is accurate and real, for some of it I made an educated guess using some economic forecast data. I was hoping I could postpone having it published until I find accurate data, but because this is an economic topic that is so new I wasn’t able to do that. So what do I do? Is there a realistic chance of me being found out? Do I have him submit the paper?’

Of several dozen responses, here are four answers posted on the message board (paraphrased by us). Which one do you prefer and why?

  (a) Do not under any circumstance allow that paper to be published! You have somehow missed the point that it is your professor’s reputation on the line here.

  (b) For the university’s sake, tell your professor the truth. They will be so relieved that they didn’t publish fabricated data that they will forgive you, and possibly even praise and appreciate your honesty. Everybody screws up every now and then. But we need to try to fix our screw-ups when possible.

  (c) You should say something like: I revisited the analysis and I found out that I made a critical error. I’m sorry, I should have checked more completely before turning in the assignment but I’m glad I caught it before we published it.

  (d) I suggest you just keep quiet and let it publish. The Chinese GDP data and a lot of developed world data is made up, twisted, or seasonally adjusted. If concerned, build in an appendix explaining how some data was created as ‘line of best fit’ based on your assumptions.

3 Cheating

3.1 Cheating: A Shortcut to Knowledge?

There is good reason to consider the practice of cheating on a test to be akin to data fabrication, rather than a form of plagiarism or falsification (although, admittedly, there is an overlap between these categories – see Chaps. 3 and 6).

Just as scientific claims need to be grounded in real research findings, the results of an exam must also be based on ‘real work.’ Thus, cheating as a ‘short cut to knowledge’ is nearly synonymous with presenting a conclusion based on fictitious data – whether the outcome is correct or not makes no difference.

What exactly is cheating? Lim and See (2001) offer a list that covers a wide range of betrayals of academic integrity. To name a few, cheating can come in the form of using unauthorized material, stealing exams, lying about circumstances (to get special consideration), allowing team members to do the bulk of the work, inventing data, listing unread or even nonexistent sources, copying from a neighbor during a test, or allowing a neighbor to copy from you.

A discussion of the many tricks used by students to cheat on exams can be found in Harold Noah and Max Eckstein’s instructive 2001 book Fraud and Education: The Worm in the Apple. The strategies they identified are far-ranging and many involve a fair share of creativity: scribbling notes on one’s skin, tapping codes on the floor, stealing test papers, printing cheat sheets and attaching them to the inside of a water bottle’s label, and even sending impersonators to take tests on one’s behalf.

In the decades since Noah and Eckstein published their book, cheating strategies have likely come to exploit the tools of the digital age, such as messaging apps, smart watches, and Bluetooth earbuds. Vincent Versluis and Arie de Wild (2015), of the Rotterdam University of Applied Sciences, investigated ‘digital cheating’ during exams in higher education and concluded that institutions seriously lag behind. Neither teachers nor administrators seemed aware of the scale or magnitude of modern forms of fraud, let alone how to counter them.

3.2 Dealing with Deception

Before exploring the prevalence of cheating, we must first examine a few actual cases that have come before a university board of examination. What are the common forms of cheating that universities experience and how do they respond?

Cheat sheets Recently, a student at Utrecht University was caught using the oldest trick in the book, a cheat sheet. They had scribbled extensive notes and figures in their dictionary, and were caught during a routine patrol of the room. The case was reported to the board of examination, which decided to annul their exam. Furthermore, they received an official slap on the wrist that went into their record, and they were excluded from the course for a year (Figs. 5.2 and 5.3).

Fig. 5.2
A photograph of a page in a student note. It contains the brain diagram with parts marked and some text above it.

Cheat sheet of a psychology student. Sample confiscated by a teacher and reproduced with permission of the UU board of examination

Fig. 5.3
A photograph of a page in the note. There are 3 blocks of text.

Cheat sheets. Sample found by the author between the pages of a second-hand book. The actual size of the sheets is about 3 x 4 cm.

Falsifying grade lists A law student wanted to switch from Erasmus University to Leiden University and believed it would be necessary to falsify their grades before applying. In the process, they also forged the signature of a university employee. This was regarded as a criminal act when the forgeries were discovered, and the student ended up in court. Before the judge, they dramatically declared: ‘I saw no other way out. It felt like either a diploma or death.’ They received a suspended jail sentence of 2 weeks and 60 hours of community service for the forgery (Bonger, 2015).

Scheming Two students at Utrecht University developed the following scheme. Both showed up at the same exam and when it was time for submission, they got up in unison, proceeded to the examiner’s table, and bumped into each other ‘by accident’ on the way, dropping their paperwork on the floor. While they scooped up their belongings, they swapped papers, thus allowing one to hand in the exam of the other. The other, never having enrolled in this class, slipped away unnoticed in the confusion.

The scheme would have worked had two fellow students not witnessed the deceit and decided to report it to the teachers. The two schemers were thereupon interviewed, but they categorically denied all allegations. A forensic expert was then consulted, whose examination of their handwriting confirmed the allegations. Both students were expelled from the university on account of severe academic misconduct (case reported to the author by a member of the board of examination at UU).

Photographing exams In 2012, at Tilburg University, a student was reported by several anonymous peers for photographing tests with their cell phone at multiple exams and placing the images on Facebook. When confronted with the accusations, they confessed, maintaining that they ‘had not wanted to profit from the situation, financially or otherwise’. They said they merely wanted to be able to study the questions at home, though they agreed it was wrong. The teacher, however, suspected the photos had already been put up on Facebook during the exam, and believed the student might have been soliciting help from the outside. This could not be proven, though.

The board of examination ruled that for the two exams where fraud could be proven, the results would be annulled. The student was furthermore excluded from all courses for the remainder of the year, as well as the entirety of the next. The student appealed against the ruling, claiming that the sentence was ‘disproportional’ and that they would be unable to finish their BA in time, thus facing a significant financial drawback. They could additionally lose their position in the master’s program the next year. The appeal was dismissed on grounds of the fraud being ‘extraordinarily serious’ (ruling 972 of the board of examination at Tilburg University 2012).

Logging in twice In April 2014, large-scale fraud was detected during a digital exam in a statistics course for business and economics students at Amsterdam University. Students would log into the exam twice, using two different browsers. The first browser was used to work on the questions. Once the questions were answered, the digital exam revealed the correct responses. Students would then enter the now-known answers into the exam still open in the second browser. Some 400 students passed the exam with abnormally high marks, which led to suspicion of fraud. Closer inspection further revealed that the students had completed the test unrealistically fast. The exam was annulled for all 400 students (Anonymous, 2014).

Exams annulled In October 2016, some 100 pedagogy students at Salzburg University completed a test but never received the results. The entire examination was annulled after it was discovered that a number of students had discussed the multiple-choice questions used in previous exams in a closed Facebook group. Students protested against the ruling, and a discussion arose as to whether their behavior was in fact illegal. The vice-rector of the university said that copying questions and distributing them in itself wasn’t wrong as long as the answers were not included. The course coordinator discovered, however, that all of his examination questions (a total of 14 pages) had been photographed by students, including the answers, and hence the annulment remained (Anonymous, 2016).

3.3 Is There a Cheating Crisis?

In May 2016, the Irish Mirror, using the Freedom of Information Act, revealed that between 2012 and 2015, over 800 students had been caught cheating across seven universities in Ireland, and that only a few students had been reprimanded, with none being expelled. ‘Cheating’ was broadly understood here to cover a range of exam conduct violations, including plagiarism, impersonation, and ghost writing.

Just a few months earlier, English newspapers, taking advantage of the same legislation, reported that almost 50,000 students at British universities had been caught cheating in the previous 3 years. A disproportionate percentage of those caught, it was added, were international students from outside the EU. Similarly, in April 2016, The Adelaide Advertiser reported that more than 1800 students at Flinders and Adelaide Universities in South Australia had been caught copying one another’s work and cheating on exams since 2010.

Has dishonest behavior among university students reached endemic proportions? Current research on academic misconduct seems to support this dramatic conclusion, at least to a certain point. Diekhoff et al. (1996) found a significant rise in (self-reported) cheating attitudes and behaviors between 1984 and 1994 in a group of students at midwestern universities in the United States. The prevalence of cheating behaviors (on exams, quizzes, or assignments) went up from 54.1% in 1984 to 61.1% in 1994. Twenty years later, Vandehey et al. (2007) repeated the study and found a slight decrease rather than an increase among a similar student demographic. Overall, they concluded that the incidence of cheating behavior dropped to 57.4%, but, strikingly, it was still reported by a majority of students.

Similar trends have been described by fellow researchers in the study of academic dishonesty. McCabe and Trevino (1996) observed that self-reported admissions of academic misconduct (cheating on an exam, for example) saw a substantial increase from 39% to 64% between 1963 and 1993. While some researchers reported more conservative figures, others painted an even darker picture, claiming that only a small minority of students didn’t engage in some form of cheating (of note, it is difficult to assess how accurate these reports are; see Franklin-Stokes and Newstead 1995). Following the findings of Anderson and Murdoch (2007), it can be safely said that cheating is both fairly common, and at the same time, seriously underestimated by teachers.

Is the ‘cheating crisis’ perceived universally in universities around the world? This question is hard to answer. It has been reported, for example, that post-communist countries in Central and Eastern Europe have a higher prevalence of cheating behavior than other European countries (Pabian, 2015), and that Hong Kong business students are less likely to engage in cheating behavior than American business students (Chapman and Lupton 2004).

These isolated comparative studies of specific academic communities reveal little about the national character of academic (mis)conduct. Given the confusion over what exactly constitutes ‘cheating’ (see Box 5.2 for an overview of academic dishonesty), the scarcity of studies into academic cheating, and the notorious unreliability of self-reporting, on which most studies are based, the exact magnitude and impact of the ‘cheating crisis’ will probably remain clouded for some time to come.

Box 5.2: ‘Classification of Forms of Academic Dishonesty’ (List Compiled from Different Encyclopedic Works)

  • Cheating: Use of illegal tools, attempt to obtain external assistance during an examination, and use of unauthorized prior knowledge.

  • Deception: Providing false information to an instructor concerning a formal academic exercise—i.e., giving a false excuse for missing a deadline or falsely claiming to have submitted work.

  • Fabrication: Presentation and reporting of fake or non-existent research data and findings.

  • Facilitation: Helping or attempting to help another commit an act of academic dishonesty.

  • Ghostwriting: Submitting work written by a third party.

  • Impersonation: Assuming another student’s identity with the intent to provide the student an advantage.

  • Plagiarism: Appropriation of someone else’s work (or ideas) and passing it off as one’s own.

4 Ghostwriting

4.1 Ghost in the Machine

‘Ghostwriting’ is a practice that emerged in the 1980s and 1990s. David Healy of the University of Wales College of Medicine, UK, described one of his experiences with the phenomenon. He once received an email from a pharmaceutical company with a paper attached, its premise based on Healy’s own published work. It looked as if he had written it himself; in fact, it was ‘a recognizable Healy piece’ (2005, p. 41). The paper was offered to him as an article he could publish under his own name. He declined on the grounds that it is unethical to publish papers you haven’t written yourself.

However, when a different company sent him a similar offer some 2 years later, he decided to see what would happen if he accepted but altered the content of the paper significantly. In spite of the assurance that he was ‘free to edit the original article,’ his changes were not accepted. Healy thereupon withdrew his name from the article. The paper, written for him but not by him, was eventually published under someone else’s name.

Horace Freeland Judson reveals in The Great Betrayal (2004) the motives behind this type of ‘ghostwriting’ (presenting finished manuscripts to acknowledged scientists as a ‘gift’): the papers are created by large pharmaceutical companies to present their products in a favorable light. The ready-made ghostwritten papers invariably report positively on a specific product (a certain drug, therapy, or medication). These papers could constitute a form of ‘product placement.’ They are well written and mimic real studies, and are therefore difficult to distinguish from proper research.

Ghostwriting has not only dramatically increased in frequency, it has also ‘professionalized.’ Sismondo (2009) writes how pharmaceutical companies now plan publications strategically in advance of the actual research taking place. Companies map out key messages, determine information relevant to various audiences and journals, and identify potential authors for their papers. Once research becomes available, ‘publication planners’ hire writers, negotiate with potential authors, and ‘shepherd the papers through journals’ submission and review procedures’ (p. 175).

This may seem shady enough, but defenders of this practice claim that science is a collaborative enterprise. ‘Jointly authored papers are the rule in science, not the exception, and medical writers often produce clearer, more readable papers than medical researchers themselves’ (Moffatt and Elliott 2007, p. 21).

At the other end of the authorship spectrum, there is a second form of ghostwriting that targets unsuccessful authors. Instead of being compensated for having their name on a paper, these authors are offered an opportunity to pay for a ‘slot’ in a paper written by someone else. In a publish-or-perish culture, researchers are sometimes willing to go to great lengths to keep up with the pace. Science reported in its January 10, 2014 issue on how Chinese brokers sell ‘co-authorship’ in papers already accepted for publication. Fees range from $1600 to $26,300, depending on the impact factor of the journal (Fig. 5.4).

Fig. 5.4
A sketch of a man lying down on the ground with head lifted up and his hand holding a pencil.

The Ghostwriter

To be clear, journal editors and peer reviewers do not appreciate either form of ‘ghost authorship,’ but they are sometimes hard-pressed by industries and publishing companies to accept the practice as a fact of modern life. Thus, one publisher is quoted as saying, in response to his editor’s opposition to ghostwriting: ‘Fine, you may have that view, but what you’re actually doing is driving it underground. It’s far better to be transparent and get this out into the open’ (quoted in Sismondo 2009, p. 181).

Ghost authorship is most prominent in the medical sciences, where the interests at stake are far greater than anywhere else. Specialized agencies offer to write research proposals for researchers on a no-cure-no-pay basis. They claim a percentage of the grant money if the application is accepted. Few would consider this ‘cheating,’ yet one must ask where the involvement of such ghostwriters will end. Should they be made responsible for the formulation of research questions or the development of instruments too? Wouldn’t this allow them to steer research in a particular direction? (Box 5.3).

Box 5.3: ‘Putting the Supervisor First: A Dilemma’

You have just finished your master’s project and you want to submit your thesis as an article to a journal. Next year you plan to continue as a PhD student at the same institution. The supervisor of your master’s project has announced that if you are accepted into the program, they will also be the supervisor of your PhD project. Additionally, they tell you that they want their name on your article as a first author, even though they contributed little to the project. This will improve your chances for the PhD position, they inform you. How do you respond? Choose one of the following options and prepare an argument defending your selection.

  1. Ignore the request and submit the paper in your name only, running the risk of not getting accepted to the PhD program.

  2. Accept the request and put forth the supervisor’s name as a first author.

  3. Report the incident as unethical behavior to the integrity officer.

  4. Go to the dean of the faculty and discuss it with them first.

[Adapted with permission from the Erasmus Ethical Dilemma game].

4.2 Hiring a Helping Hand

At various stages in their academic careers, students are sometimes confronted with ‘ghost authorship’ as well. Paid services offer a ‘helping hand’ in writing papers or preparing for exams by providing ‘exercise materials.’ In the last decade, commercial ‘abstracting desks’ have materialized in and around universities, advertising summaries and abstracts of course books, practice questions, and even lecture notes.

Most of these texts are written by students who get paid a nominal fee for their work, which is then offered for sale to other students. With little or no quality control, many of these texts are subpar. Despite this, there are students who claim to have successfully finished courses relying solely on these commercially produced abstracts, without even so much as opening up the course book.

Some desks offer additional ‘writing guidance’ to students, assuming (part of) the task of teachers, in service of helping students improve their writing skills. Others go one step further. They are called ‘paper mills’ or ‘essay mills’ and offer fully formed essays for sale. The first practice (writing guidance) is wholly legitimate, the second (paper mills) clearly isn’t (Dickerson 2007).

Are students aware of the ethical dilemmas attached to these services? Zheng and Cheng (2015), who were themselves students when they did the research for their article, interviewed peers at the University of San Francisco on their perspectives on hiring ghostwriters, and found to their surprise that a number of them (especially international students with English as a second language) did not see the practice as cheating, as long as it was only done once or twice. Some would argue that ‘ghostwriting is a cooperative form of work and both parties [i.e. the student who gets paid for his or her services and the one who pays] gain mutual benefits.’ Other students using ghostwriters agreed it was wrong but said in their defense that they did so because they were pressed for time, found the assessment too difficult or unclear, or just wanted a good grade.

When Zheng and Cheng subsequently interviewed a ghostwriter and asked how they felt about their work, the ghostwriter appeared not at all troubled by the ethical implications. Matter-of-factly, they remarked: ‘the good thing that I’ve gained from this job is not just money but also the writing skill’ (2015, p. 128) (Box 5.4).

Box 5.4: ‘Paper Mills’

‘Paper mills’ or ‘essay mills’ are sketchy organizations that claim to offer ‘original’ and even ‘custom-made’ essays on any topic, at any level. Allegedly, they work with authors who have earned a PhD degree, or possess otherwise respectable credentials. However, many of these agencies operate in the shadows, and some are downright swindlers.

Paper mills could pose a greater threat to academic integrity than plagiarism (Thomas, 2015), and the production quality of these organizations often leaves much to be desired, at least judging from the complaint of one client, who goes by the nickname ‘Thanatos’, about a company called ‘IVY dissertations writing services’:

I used IVY thesis recently and not only did they send me a paper copied and pasted from other sources, it was the wrong paper all together! After I complained to them about the paper, they sent me a ‘revised’ paper on the correct subject, but again it was a simple copy and paste from 1 or 2 different websites. A simple Google search revealed they didn’t even attempt to change the writing from the original websites. When I complained to IVY, they sent me a ‘revised’ paper that was exactly the same as the first, but now it had misspelled words and words used in the wrong context throughout the paper. I complained again, and they sent the exact same paper without any revisions made. After that point, they stopped responding to my e-mails (Thanatos, 2006).

4.3 Where to Draw the Line?

With regard to ghostwriting, there are two ethical issues that must be considered. One affects the individual researcher, who must decide where to draw the line. Hiring others to do your work is wrong, but on the other hand, collaboration is becoming more and more common practice in science (even though it’s often not properly acknowledged, see Farrell 2001). This raises the question of who involved in the research should be granted co-authorship. Would that include fellow researchers? The project manager? Even the lab technician?

The second question affects the academic community, which has an obligation to protect objectivity and transparency. Peer review plays an important role in this obligation: it subjects submitted work to critical scrutiny for internal and technical flaws. Some fear that ghostwriting bypasses this critical process. Virginia Barbour, chairperson of COPE (Committee on Publication Ethics), expressed her concern that academic peer review is being subverted by ‘almost industrial attempts by groups outside of normal publishing’ (Barbour 2017) (Box 5.5).

Box 5.5: ‘Free Riding: Misunderstood or Underreported?’

Imagine you are working with two other students on an assignment. There are certain items each of you needs to work on, and the deadline is in 3 weeks. During that time, you realize that Alexandra, one of your teammates, keeps finding new excuses for why she can’t do her share of the work. Last week, she explains, she fell ill; before that she had to move; yesterday her computer crashed; and this morning she got into a fight with her boyfriend. When she finally does deliver, her work is inadequate. You and your other teammate end up doing the lion’s share of the work, even rewriting parts of Alexandra’s section. When the paper authored by your team gets a ‘B-’, you feel cheated by Alexandra.

Most students are all too familiar with behavior like Alexandra’s, known as ‘free riding.’ It tops the list of common student annoyances, although the ‘better-than-average’ principle seems to be in operation here as well: the free rider is always the other student.

Free riding is known to be demotivating even for diligent students, causing the entire team to perform suboptimally (a phenomenon called the ‘sucker effect’, see Swaray, 2011). Most universities recognize free riding as a negative side effect of group work and find it neither desirable nor acceptable in an academic environment. Thus the board of examination at the University of Twente declared that ‘free-riding behavior, that is benefiting from other people’s efforts in group assignments while not putting in the same effort as the other group members, can be considered as fraud’ (source: www.utwente.nl/en/bms, emphasis added).

Indeed, free riding is no different than cheating and should therefore be filed under ‘fraud’ – but is it treated as such at universities? In general, the answer is no. Firstly, free riding often flies under the radar, since students (understandably) don’t want to ‘snitch’ on their peers. Secondly, even when reported, it is difficult to prove. Alexandra, in our example, could claim that she did contribute to the team’s effort. Is it her fault that the other two decided to rewrite her work?

Although acknowledging the problem, many universities find it hard to counter free riding. However, recently developed programs seem to be helping reduce its prevalence. Swaray (2012) reports that randomly selecting one group member to present the group’s work increases participation, and cooperative learning is stimulated as a result. Further research by Maiden and Perry (2011) reports that identifying individual contributions to group work is important, especially because groups can then require underperformers to account for their behavior.

Romy Nefs (2019) proposed the following strategies to counter free riding:

  • Use of small groups to allow for easy identification of individual contributions

  • Clear assignments with well-structured schedules and strict deadlines

  • Team kickoff meetings with mandatory division of labor taking place right away

  • Team progress evaluation at midway point

  • Evaluation of the team’s work at the project’s end

  • Training on how to give constructive feedback to team members

5 Fraud Facilitating Factors

5.1 What Causes Fraud?

We return to the question posed at the beginning of this chapter: what circumstances or factors lead researchers to fabricate research findings (and students to cheat)? Researchers find this a particularly difficult question to answer, offering a host of explanations. We review four dimensions of the problem.

5.2 Psychological Dimension

There are indications that scientific misconduct may be a sign of ‘moral weakness,’ as the virtue approach would predict (see Chap. 3). A modern-day idiom for ‘moral weakness’ could be ‘anti-social personality disorder,’ which is associated with irresponsible behavior, grandiose feelings of self-worth, and a general lack of guilt. An example of this can be seen in the case of the fraud Cyril Burt (see Box 5.6), who was later described as a ‘sick and tortured’ man; the enormity of his trickery was anything but rational (Gould 1981, p. 236).

Were frauds to be understood in terms of their psychological condition only, it would help explain why their behavior can be so reckless and self-destructive. After all, how could high-profile authors producing fraudulent studies expect their deceit to go undetected? But would this explanation also help us understand less serious forms of fraud (such as cheating on an exam), which people from all walks of life may commit?

Social psychologist Scott Wowra of the University of Florida probed first-year psychology students at a southeastern university in the US, using an ‘integrity scale’ to measure the strength of their ‘moral identity’ (i.e. the incorporation of ideals of justice and fairness). He related students’ moral identity to their recall of anti-social behavior, including academic dishonesty, and found a negative correlation. Thus, the ‘relative centrality’ of a college student’s moral identity appears to affect his or her willingness to engage in academic dishonesty (Wowra 2007, p. 317).

5.3 Situational Dimension

From an economic perspective, fraud in science may be anything but irrational. To those seeking the highest outcome for the lowest cost, misconduct may be considered rational behavior. Economist James Wible of the University of New Hampshire argues that statistically inclined, opportunistic scientists ‘estimate the probability and the expected utility of successful evasion from discovery and then make a conscious choice to commit or not commit fraud’ (1992, p. 21).

If this is true, then their decision-making depends on (a) the relative gains of committing fraud, (b) the probability of getting caught, and (c) the sort of punishments one can expect to encounter when caught. In this calculation, the chances that one will engage in fabricating data can be expected to decrease with the probability of being discovered and the weight of the penalty.
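Wible’s reasoning can be captured in a minimal expected-utility sketch (our notation, not Wible’s own formalism). Writing G for the gain from undetected fraud, p for the probability of being caught, and C for the cost incurred if caught, the expected payoff of committing fraud is roughly

$$ \mathbb{E}[U_{\text{fraud}}] = (1 - p)\,G - p\,C $$

On this stylized account, fraud looks ‘rational’ only when this quantity exceeds the expected payoff of honest work; raising either the detection probability p or the penalty C pushes the calculation back toward honesty, which is exactly the dependence described above.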

The same applies, of course, to students who may be seduced into engaging in cheating behavior when the situation appears inviting or rewarding enough. Consequently, it can be argued that a lack of reliable systems in place to monitor for cheating, unfamiliarity with university policies, and the atmosphere of secrecy that so often surrounds fraud at universities, all contribute to the continuation of the conditions that breed cheating.

5.4 Cultural Dimension

A third approach explaining academic fraud places emphasis on the institutional teaching and research cultures at universities. Various cultural factors have been said to influence the incidence of fraud.

One such factor is publication pressure. According to sociologist Patricia Woolf of Princeton University, academic ‘publication is no longer just a way to communicate information. It has come to be a way of evaluating scientists; in many cases it is the primary factor in professional advancement’ (1986, p. 254). In the decades since this was written, competition among universities, individual researchers, and even students has risen, as has the drive toward more scientific productivity and the call for ‘excellence.’ Some argue that this pressure causes scientists to cut corners (see Fanelli, 2010a, b, 2012). We return to this issue in Chap. 9.

Another factor is peer culture, the pressure one feels to conform to the prevailing attitudes of their peers. In a survey of US college students, Rettinger and Kramer found that decisions to cheat depended at least partly on one’s perception that others were cheating too. They concluded that ‘seeing cheating is the beginning of a social learning process. New students learn how to behave by observing their peer(s)’ (2009, p. 310). More particularly, performance-oriented teaching styles in class, coupled with poor instruction, can lead students to justify cheating (Murdock in Rettinger and Kramer 2009). Similarly, Shu Ching Yang (2012, p. 235), who examined academic dishonesty among Taiwanese students, found that the behaviors and attitudes of peer groups influenced student decision making regarding such conduct.

Box 5.6: ‘The Case of Cyril Burt’

Educational psychologist Cyril Burt (1883–1971) has been regarded as one of the greatest frauds of the social sciences, at least until Diederik Stapel later assumed this dubious distinction.

A leading figure in his field between the 1940s and 1960s, Burt conducted his most important research on the heritability of intelligence. In particular, his work on monozygotic (or identical) twins was considered groundbreaking at the time. Having collected data on identical twins from 1909 to 1930, Burt used then state-of-the-art statistics to calculate the correlation of the Intelligence Quotient (IQ) of identical twins who had been raised together and those who had not, comparing these with the IQ correlations of fraternal (non-identical) twins. Based on these findings, he claimed that intelligence has a very strong genetic driver.

In papers published between 1943 and 1966, Burt reported IQ correlations of 0.771 for identical twins raised apart, and 0.944 for identical twins raised together, fueling rhetoric that compensatory education is ‘wasted money.’ In the 1960s, Arthur Jensen, following Burt’s lead, argued that ‘for many people, there is nothing they can learn that will repay the cost of the teaching’ (quoted in Tucker, 1997, p.156).

However, just months after Burt’s death in 1971, Leon Kamin, a Princeton psychologist, pointed out several problems in Burt’s work. For one, the number of monozygotic twins raised apart grew with every publication. Burt had started with a mere 15 pairs in 1909 and ended roughly 50 years later with 53, even though he had long since stopped collecting data, and identical twins separated at birth are a rare commodity. Additionally, Kamin found that the correlations reported remained exactly the same across publications. He mused that the chances of finding the exact same correlation every time are close to zero. Remarkably, Kamin identified even more of Burt’s foibles, finding that the two assistants he had supposedly worked with were seemingly nonexistent. Further still, it appeared Burt’s data was constructed from ideal statistical distributions, rather than measured in reality (Gould, 1981, p. 235). To make matters worse, Burt burnt his scientific papers shortly before his death, making foul play difficult to prove.
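To see why Kamin found identically repeated correlations so implausible, consider the following simulation. It is purely illustrative and not Kamin’s own calculation: the sample size, number of trials, and the assumed ‘true’ correlation are hypothetical round numbers.

```python
# Illustrative simulation (not Kamin's calculation): how often does a fresh
# sample of twin pairs reproduce a correlation of exactly 0.771 (to three
# decimals)? Sample size, trial count, and true correlation are assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_r, n_pairs, trials = 0.77, 50, 50_000
cov = [[1.0, true_r], [true_r, 1.0]]

hits = 0
for _ in range(trials):
    # Draw one hypothetical "study" of n_pairs twin IQ pairs
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n_pairs).T
    if round(float(np.corrcoef(x, y)[0, 1]), 3) == 0.771:
        hits += 1

print(f"fraction of samples with r rounding to 0.771: {hits / trials:.4f}")
# A single exact match is already rare; the same three-decimal value
# recurring in publication after publication is effectively impossible.
```

Even under these generous assumptions, only a small fraction of samples land exactly on 0.771, so the probability of the same value recurring across several independent studies is vanishingly small, which is the core of Kamin’s suspicion.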

Burt’s supporters attempted to explain away some of the most ostensible problems, interpreting them as ‘sloppiness,’ not fraud, and accusing ‘left-wing environmentalists’ of slandering his name. However, even Burt’s official biographer, Leslie Hearnshaw, who had access to his diaries, gradually came to the realization that Burt’s research was completely fraudulent. By the late 1970s, the verdict was accepted that Burt, once called the ‘dean of the world’s psychologists,’ had likely fabricated most of his data. A meticulous historical analysis of the case by William Tucker (1997) showed that Burt was a fraud beyond reasonable doubt (Fig. 5.5).

Fig. 5.5
A photograph of sir Cyril Burt.

Sir Cyril Burt in the 1930s. (Source: Wikicommons)

5.5 Integrative Perspective

Various attempts have been undertaken to integrate personal, situational, and cultural dimensions into a unified model for analyzing cases of fraud. One such model, presented by Donald Cressey (1973), is known as the ‘Fraud Triangle.’ It combines three factors: incentives and contextual strains that push toward fraud (‘pressure’), circumstances that make fraud possible (‘opportunity’), and the perception of an action as fitting into one’s personal code of ethics (‘rationalization’). Thus, when students claim to be unclear about what behaviors constitute academic dishonesty or say a particular course ‘isn’t relevant for their future career,’ they rationalize. When they cite increased competition for academic positions, they perceive pressure. And when they make use of a gap in an exam’s procedures, they take advantage of an opportunity (Hayes et al. 2006).

Becker, Connoly, Lentz, and Morison (2006) found that all three factors predict dishonest behavior in business students (who rank among the most likely to cheat). Their conclusion was largely confirmed by Choo and Tan (2008), who likewise found that all three factors influence a student’s propensity to cheat (Figs. 5.6 and 5.7).

Fig. 5.6
A triangle model with ethical risk in the center triangle, and perceived pressure, rationalization, and perceived opportunity in the surrounding edges.

Fraud Triangle. (After Cressey, 1973)

Fig. 5.7
4 photographs of conversations between a woman and a young lady, Alex. The lady wonders about Alex writing her own thesis and exclaims to Reinier about it. Alex is frustrated thinking they are making a joke of her.

‘Alex, why are you so stressed out?’. (Photo cartoon © Ype Driessen, 2019, reproduced with permission from the author)

Breaking the Fraud Triangle (opportunity, pressure, rationalization) is regarded as key to deterring fraud. Since the three elements strongly interact, removing one would significantly reduce the risk of unethical behaviors emerging. Of the three, opportunity is ‘most directly affected by the system of internal controls and generally provides the most actionable route to deterrence of fraud’ (Cendrowski, Martin, & Petro, The Handbook of Fraud Deterrence, 2007, p. 41) (Box 5.7).

Box 5.7: ‘Hoaxing’

A ‘hoax’ is a prank, a small con committed on an individual or group of people, who are made to believe something only to find out that the joke’s on them. Hoaxes typically involve the production of some form of falsehood, but they aren’t classified as ‘fraud’ because the intention is not to profit from the deceit.

The notorious 1996 ‘Sokal Hoax’ was a practical joke played on French postmodernist sociologists and their followers. Alan Sokal, professor of physics at New York University, composed a text, entitled ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,’ which was made up largely of (attributed as well as non-attributed) quotations from prominent French postmodernists, including, to name a few, Gilles Deleuze, Jacques Derrida, Jacques Lacan, and Bruno Latour. In the paper, Sokal argued that ‘physical “reality”, no less than social “reality”, is at bottom a social and linguistic construct’ (Sokal and Bricmont 1998, p. 2).

Sokal submitted the article to Social Text, a leading American cultural-studies journal, despite believing it to be complete gibberish and full of logical errors. Shortly after Social Text accepted the article and ran it in the Spring 1996 issue, Sokal came out, declaring it a ‘parody.’ He proclaimed that his intention had been to expose postmodernist discourse as pretentious drivel. Sokal argued that despite frequent references to subjects like quantum mechanics, string theory, and Einstein’s general theory of relativity, postmodernists possessed a completely flawed understanding of the natural sciences. A follow-up book, entitled Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science (Sokal and Bricmont 1998), spelled out his argument in further detail.

Following in the footsteps of Sokal, Peter Boghossian and two of his colleagues at Portland State University carried out a similar but more elaborate hoax in 2018, known as the ‘grievance studies affair.’ They wrote no fewer than 20 articles promoting deliberately absurd ideas on morality and morally questionable acts, and submitted them to journals on post-colonial theory, gender studies, queer theory, and intersectional feminism – which they dubbed ‘grievance studies’ because in these fields ‘grievances are put ahead of objective truth.’ Seven articles were accepted (four even published), nine were rejected, and the remainder were under review or in the process of resubmission when the hoax was revealed.

The hoax, aimed at exposing the lack of scientific rigor in postmodern research, backfired when Boghossian and his colleagues were criticized on the same grounds: they had not included a control group in their experiment, and they even faced a research misconduct inquiry for conducting human-subject research without approval and for fabricating data.

Hoaxes such as these are not just ‘practical jokes.’ They are meant as critiques of scientific practices, directed at the shortcomings of quality control in the publishing process, and purported to raise awareness of the lack of critical faculties in some academic circles. However, they also raise questions themselves. Is it, for example, ethically acceptable to waste resources in this way? And do these authors not act in bad faith, deliberately misrepresenting the fields of research they purport to expose?

6 Clearing Science: Measures to Counter Fabrication

6.1 Fake Science

Fabrication is a form of academic misconduct that belongs in the realm of ‘fake science.’ It is a deliberate attempt at deceit. What can be done to counter it? We discuss three general strategies.

6.2 Academic Peers

An important role in exposing fraud is reserved for the academic community. Both Stapel and Burt were unmasked by fellow researchers, and cheating students are often exposed by their peers. But whistleblowing, as we have seen, is an unappealing option, and it is not always appreciated in the academic community.

Lim and See (2001) report that Singaporean students are quite tolerant of academic dishonesty, with the majority of them preferring to ignore the problem rather than report it. One student commented: ‘Nobody will report another student for cheating as you may be the one cheating someday’ (p. 272). Malgwi and Rakovski (2009) found that American students were just as reluctant to report academic dishonesty to the relevant authorities, and preferred other counter measures (including stronger penalties, parental notifications, or use of an anonymous tip line).

6.3 Proctoring or Disciplining?

Many notorious frauds were in a position to hide their actions. Would putting more checks in place, removing the opportunity to fabricate data in the first place, help diminish the ethical risks?

With forms of online and distance learning rapidly expanding at universities, ‘proctoring’ (supervising students taking exams, verifying their identities, and other forms of vigilance) becomes indispensable if cheaters are not to be ‘given an opportunity.’

Research by Prince, Fulton, and Garsombke (2009) suggests that some form of vigilance is justifiable, but it can easily transform into ludicrous distrust, as the ‘Classroom Management and Student Conduct’ page of WikiHow reveals. On the page with tips for teachers, we find suggestions such as this: Greet the student as they come into the classroom, look them in the eye, and watch for signs of nervousness, while simultaneously inspecting their arms to see if notes are written on them. Also: Know that some female students might write on their legs, but be aware that observing this behavior might lead to an accusation of harassment.

In a climate of mistrust and suspicion, students will complain that campus integrity policies are biased against them (see McCabe 2005). Or worse, argues Zwagerman (2008, p. 6909), in a climate that is entirely designed to eliminate every opportunity to cheat, the suppression of academic dishonesty becomes ‘more important than anything that might be sacrificed in the effort – including education.’

6.4 Sanction or Honor Code?

Would increasing penalties help to decrease the incidence of cheating? In an examination of several classical cases of fraud, Bridgestock (1982) argues to the contrary. He observed that for many offenders, career pressure, or even an unusual commitment to a certain set of ideas, overrides considerations of ethics. Sanctions are ‘at best a partial deterrent to fraud’ (pp. 378–9).

Stephen Davis (1993) corroborates this finding. Confronted with the question: ‘If a professor has strict penalties and informs the class about them at the beginning of the semester, would this prevent you from cheating?’ some 40% of male students responded ‘no.’ Female students were only slightly more responsive. Closer examination of the data showed that the majority of the students who responded with a ‘no’ had reported previously cheating in college. In short: ‘if students have cheated in the past and plan to cheat again, there is precious little that will sway their course of action’ (1993, p. 28).

On the other hand, would an approach that capitalizes on fairmindedness and justice help? Can cheating be deterred if students are made more familiar with academic integrity, and offered an honor code to abide by? Jordan (2001) finds that indeed, non-cheaters have a greater understanding of institutional policies than cheaters do, but since cheaters and non-cheaters received the same information, the difference between them seems to lie in their attitude towards it.

McCabe advocates for a ‘just community approach,’ which cherishes democratic values and promotes moral reasoning. He further adds that it’s not just students that need to be enlightened: ‘the real key to building and sustaining an atmosphere of student integrity on any campus may be involving all members of the campus community – students, faculty, and administration’ (1993, p. 656).

7 Conclusions

7.1 Summary

This chapter dealt with a very serious form of fraud in science, namely the fabrication of data, research findings, and test results, which can be accomplished in a wide variety of ways. Well-known cases of forgery by disgraced academics such as Cyril Burt and Diederik Stapel were discussed, and the question of how to identify and expose these frauds was explored. On the flip side, the fate of those who do the exposing, the ‘whistleblowers,’ also received our attention.

Cheating among students, as a fraudulent ‘shortcut to knowledge,’ was discussed, and examples of cheating were presented. From this, we examined whether cheating has increased over the years and whether there is in fact a ‘cheating crisis,’ as some proclaim.

Furthermore, debates were presented on the practice of having others write your papers, hoaxes as a specific form of fabrication and whether they have a cleansing function, and how institutional mechanisms can help liberate academia from fraud.

7.2 Discussion

Cheating, fabrication, and the forging of research data, among other forms of fraud, have plagued science from its humble beginnings, but has such fraud increased in the past few decades? Is there truly a ‘cheating crisis,’ perhaps even beyond academia?

There are certainly indications that such a crisis exists, but at the same time, the scientific community appears more concerned with research ethics than ever before. From this, we identify two important questions to ponder. Are we doing enough to prevent or at least combat this crisis? And have conditions in science changed such that fabrication has become more lucrative or attractive? Both questions will be the subject of further discussion in subsequent chapters.