After Reading This Chapter, You Will:
- Have an understanding of how modern scientific research is influenced by economic interests
- Be able to identify conflicts of interest and conflicts of values
- Be capable of recognizing competing interests
- Understand conflicts of ownership
- Know how various conflicts are resolved
- Academic capitalism
- Disclosure (of interest)
- Conflict of interest
- Competing interests
- Funding bias
- Reporting bias
- Research alliance
1.1 Science in a Global Society
In the last quarter of the twentieth century, scientists began noticing that the social and political landscape was changing rather rapidly. This change was beginning to impact the organization of science too, including the way it is financed. Also, the process of production and dissemination of scientific knowledge transformed, calling into question the very values of science.
Two related trends brought about these changes. One was the onset of ‘globalization,’ a series of processes that caused the world’s economies, cultures, and populations to become more interdependent, simultaneously intensifying interactions among people, companies, and governments on a global scale (Bartelson 2000; Hoekman 2012; Mosbah-Natanson and Gingras 2013). The other was the introduction of neoliberalism in politics (a resurgence of nineteenth-century laissez-faire economics and free-market capitalism), which soon found its way into university administration (see Fine and Saad-Filho 2017 for a discussion of neoliberalism, with its internal contradictions, tensions, and sources of dynamics).
During this period, more emphasis was placed on ‘output,’ ‘quality control,’ and ‘cost-efficiency,’ on science’s ‘added value’ to society. Many scientists saw new opportunities in these developments and began to establish new collaborative relationships, especially with commercial parties. More space for entrepreneurial activity was created, and universities began to enhance their impact on society. Incubators (start-up organizations and businesses involved with advanced technology) found their way onto college campuses, and universities flourished as creative business environments.
At the same time, new challenges and tensions emerged, having to do with the growing competition over dwindling public funding and a ‘pressure to produce.’ A trend towards the valorization of research was also observed, meaning that the economic value of scientific activities began to predominate over their intrinsic value (see Box 8.1). Some even began to speak of ‘academic capitalism’ (Slaughter and Rhoades 2004) (Fig. 8.1).
It has been argued that a growing emphasis on the protection of intellectual property, and a strong preference for profit, have corroding effects on some of the core values in science, such as openness, transparency, and autonomy. In turn, these invited a host of unwanted behaviors in research, ranging from fraud to questionable decisions, which we will discuss in greater detail below (see Washburn 2005; Healy 2002; Greenberg 2009).
In this chapter, we explore the links between the three parties involved in these developments, namely science, private industry, and government. Traditionally, these three parties had clear, distinct roles. Science’s task was restricted to research and investigation; industry focused on development and production; and government concentrated on creating and enforcing regulation. In the age of academic capitalism, however, the boundaries between these roles gradually dissolved, resulting in a number of emergent conflicts. It is these various conflicts that we will discuss in this chapter.
Box 8.1: Valorization and Academic Careers
‘Universities, Academic Careers, and the Valorization of “Shiny Things”’ is the title of a 2016 paper by Joseph Hermanowicz, professor of sociology at the University of Georgia. Hermanowicz researched the careers of some 60 physicists employed at universities across the U.S. between the mid-1990s and mid-2000s, when neoliberalism ‘enabled academic capitalism to flourish with its attendant effects in privatization and marketization’ (2016, pp. 303–4). How did this affect their careers and aspirations? Here are two notable changes Hermanowicz found:
Publish and Perish Institutions began to favor research, which brought about a ‘press for productivity.’ By the time the older generation (who entered university before 1970) became tenured professors, they had published an average of about 20 papers. The younger generation, in comparison, had published over 40 by the time they became professors. Doubling the number of publications in roughly the same time span meant an increased pressure to publish.
Disappointment Greater output did not result in more job satisfaction or commitment. Quite the contrary. Many scientists were disappointed, even bitter, about the insistence on greater output. Although it makes a difference whether one works at an ‘elite’ institute, which is largely committed to research, or at one that emphasizes (mass) teaching and services, job satisfaction and work attitude seem to be affected negatively by valorization across different cohorts. ‘Professors increasingly understand themselves and their work in terms of free agency, geared to a market, and interested especially if not exclusively in themselves’ (Hermanowicz 2016, p. 324).
Students’ Change of Focus Adding to this picture, Saunders (2007) found a change in the focus of students during the same period. In his findings, the majority of students switched from being intrinsically motivated, having an interest in developing a ‘meaningful philosophy of life’, to being extrinsically motivated, having an interest in being ‘well off.’ Many students now agree with the statement that ‘the chief benefit of a college education is to increase one’s earning power.’ Students today are more competitive, less interested in liberal arts and sciences, and seem to subscribe to a neoliberal agenda.
1.2 Interests at Stake
New forms of collaboration between researchers and third parties bring into play special interests that hitherto did not carry such weight. How do the interests of collaborating parties influence a researcher’s judgement?
Consider the case of a researcher who was commissioned by a political party to investigate a certain social question. Would it matter if the researcher had strong political convictions of their own? Would it make a difference if these convictions aligned with, or differed from, those of the employing political party? Or consider a researcher investigating the effectiveness of certain policies implemented by an organization. Would it matter if the researcher were also being paid to provide training to members of that same organization? Would that affect their assessment of the policies being examined?
Most would agree that these situations invite suspicion. How do we know the researcher in question is not influenced by certain external interests, financial or otherwise?
When there is reason to believe that a researcher’s judgement may be compromised by certain interests related to their ties with other parties, we commonly speak of a conflict of interest (see Davis 2001). Some authors, such as Aubert (1963), and more recently Bero and Grundy (2016) and Wiersma et al. (2017), have suggested that we distinguish ‘conflicts of interest proper’ (having to do with financial motives only) from ‘conflicts of values’ (having to do with morality and belief systems), as the two are structured in fundamentally different ways and require divergent strategies to resolve.
We accept the suggestion to separate financially induced conflicts of interest (COI), which mostly appear in privately funded research, from conflicts of values (COV), which arise out of various types of partnerships with third parties (non-profit organizations, interest groups, civil society organizations, and others). We furthermore propose to highlight two additional categories, namely competing interests, which are more often found in publicly funded research and do not directly impact judgement calls, and conflicts of ownership, which have to do with questions of copyrights and patents.
In the coming sections, we discuss how these various conflicts impact scientific norms in different ways (see Fig. 8.2 for an overview).
2 Conflicts of Interest
2.1 What Is a Conflict of Interest?
The classical situation in which a researcher’s decision-making may be compromised because of certain financial interests is called a conflict of interest (COI). Conflicts of interest are more common in the bio-medical and pharmaceutical sciences, where large financial gains are at stake and the development of new medication is a costly affair. In the social sciences, financial conflicts of interest do exist, but the temptations differ from those of the bio-medical and pharmaceutical sciences.
Let’s start with an example from the pharmaceutical sciences. Resnik (1998) cites a classic case of a scientist who researched the effects of a certain medication (zinc lozenges) on the alleviation of common cold symptoms. The scientist also owned stock in the company that produced the same medication he was researching. When his findings showed a positive result, the company’s stock soared, from which the researcher benefited. This raised a serious question: Was the researcher’s scientific judgement being influenced by the expectation of a financial profit?
In the social sciences, direct financial gains are rarer. Rather, the problem lies in indirect gains, having to do with the formation of dependency on the research itself. Soudijn (2012) quotes the case of a Dutch psychologist who set up a project offering help to clients suffering from phobias. The clients received free treatment (in the form of experimental therapy, given by his students) on the condition that they agreed to participate in the research project. Thus, the clients became reliant on the research as a means of free therapy. These dependency relationships compromised the research to the point that, by today’s standards, the data would no longer be considered valid. And although the participants did not profit from the research in money, a benefit of financial value (free therapy for the client) still posed a COI in this case.
Whether these influences actually impair a researcher’s judgement is not of importance in our understanding of a COI. It is the potential to cloud or impair judgement that defines the problem.
In any conflict of interest, objectivity, one of science’s key values, is at stake:
How do I know your conclusions are not biased?
How can I trust your judgement?
In the coming sections, we discuss cases from within the social sciences where differing financial interests were at stake to differing degrees (Box 8.2). Note that not every situation with financial interests at stake automatically leads to a conflict of interest. Furthermore, it can be difficult to establish whether a researcher acts in bad faith or not.
Box 8.2: Funding Bias
Often regarded as a specific form of COI, the term funding bias denotes the tendency of scientific studies to support the interests of the study’s financial sponsor. Funding bias is a well-documented effect (see Krimsky 2012). A study by Turner and Spilich (1997) illustrates this well. The authors compared 91 papers written about the effects that tobacco and nicotine have on cognitive performance, differentiating between industry-sponsored and independent studies. Both types of studies reported positive effects of tobacco and nicotine use (indications that tobacco enhances cognitive performance). However, all non-sponsored studies also reported negative effects (indications that tobacco does not enhance cognitive functioning), while only one sponsored study did so.
While non-sponsored studies thus present more balanced results, there can be disadvantages to not having external sponsorship too. It can mean that the researchers have not succeeded in developing links to the field of expertise, and that their work is ‘sterile.’ Martens et al. (2016), reporting on the field of counseling research, noted that ‘sponsored research has the potential to advance the professional goals of a field by allowing researchers to conduct studies of a scope and magnitude that they would not be able to do without external funding’ (p. 460).
2.2 Promotion Bonus
In a commercialized academic climate at Tilburg University, a sudden increase in the number of successfully completed PhD dissertations in the humanities and the social sciences was observed a few years ago. These dissertations were carried out under the supervision of a small number of professors. Suspicion arose that this increase had to do with financial interests, as each dissertation yielded revenue for the university, some of which was being paid to the supervisors directly (they received a promotion bonus).
A complaint was issued. The commission that investigated the cases found that while many of the dissertations were rather weak and held very little scientific value, there was no evidence that fraud was being committed. Did this cancel out the possibility of a conflict of interest? Not entirely. The commission strongly recommended that the promotion incentives be abolished to preemptively avoid any possible suggestions of COIs. Tilburg University complied with this advice and no longer pays these incentives out to supervisors (case reported in Universe, September 20th, 2018).
2.3 Financial Ties
In a report on the financial links between the pharmaceutical industry and panel members who were asked to contribute to the development of diagnostic criteria for the Diagnostic and Statistical Manual of Mental Disorders (DSM), Cosgrove et al. (2006) found notable examples of such links. More than half of the panel members queried had financial ties with a pharmaceutical company; 40% of them had consulting income; 29% served on a speakers’ bureau; and 25% received other professional fees (2006, p. 156).
Does the presence of financial ties imply that these panel members cannot be trusted? Not necessarily. Cosgrove et al. (2006) argue that receiving financial support does not automatically disqualify members, but it does pose a conflict of interest. ‘The public and mental health professionals have a right to know about these financial ties, because pharmaceutical companies have a vested interest in what mental disorders are included in the DSM’ (p. 159) (Box 8.3).
Box 8.3: Mutual Favors: A Dilemma
Consider and discuss in groups of three or four students the following case and select the best course of action:
Jacky is a student enrolled in the same course as you. When you are assigned to write a paper on a certain subject together, Jacky makes you the following offer: if you do the work this time and put her name on the paper, she will do the same for you next time. You are both pressed for time, but Jacky has been particularly overloaded and could use some help. The syllabus allows students to team up on papers – but not in this manner.
How do you respond? Choose the option you prefer and present your rationale to your group.
- You allow Jacky co-authorship on the paper but you decline her offer to return the favor.
- You accept the offer, on the condition that you both commit to a critical peer review of each paper.
- You ask for advice from a professor outside the course, who also happens to know Jacky.
- You decline the offer and report the unethical behavior to the professor.
(Case adapted after the Erasmus University Dilemma Game)
2.4 Paid Consultancies
Do financial interests resulting from paid lectures and expert consulting pose a conflict of interest? This question was raised in the case of Jean Twenge, a psychologist at San Diego State University. Twenge studied people born after the mid-1990s, whom she dubbed ‘iGen’ (after the iPhones this generation grew up with). In 2010, she started a business called ‘iGen Consulting.’ As a consultant, she gave paid lectures and advised large commercial corporations, acquiring a considerable side income. In her academic papers, she did not mention this revenue.
Some argue this poses a conflict of interest. Psychology journals require that personal fees be declared, and since Twenge had not done so, they said, her side activities qualify as a COI by default. Others, however, including psychologist Steven Pinker of Harvard University, argue they do not, because ‘these activities do not provide incentives to make certain judgement calls.’ Pinker argues that Twenge’s case does not compare to ‘evaluating a drug produced by a company in which one holds stock’ (case discussed in Chivers 2019). It is true that these financial interests did not relate directly to the researcher’s field of inquiry. However, they do so indirectly, since ‘iGen’ is the basis of both her research and her advisory work.
In the course of any research project, accepting gifts or ‘gratuities’ should be avoided, or gifts should be kept to a bare minimum (such that they represent no substantial value). At the very least, accepting them can leave the impression that favorable opinions can be bought. Accepting something of value without the other party expecting something in return already constitutes a conflict of interest because of the suggestion of favoritism. And if something of value is accepted and the other party does expect something in return, this is even worse: it is called bribery.
A bibliographic study by Volochinsky et al. (2018) concluded that conflicts of interest produced by gift giving result in an ‘undeniable dependence on relationships that implicitly lead to reciprocity and deprive the professionals from the necessary autonomy and impartiality’ (p. 101). But there is another side to this story as well.
Anthropologists, who have long emphasized the importance of gift giving in human societies, argue that a strict view may be too narrow (Mauss 1970; Sherry 1983). Some noted that specifically in Asia, gift giving is part of an ‘exchange system,’ and non-participation will be considered rude and disruptive of social relationships. Respondents in the research of Zhuang and Tsang (2008) were found to have ‘different ethical evaluations of different marketing practices.’
Gift giving in a natural setting poses an ethical dilemma for researchers: either go along with the practice of gift giving and run the risk of a possible conflict of interest, or stay within the strict boundaries of research ethics and accept that certain research goals may not be achieved as a result. (Box 8.4)
Box 8.4: Big Pharma
The expression ‘big pharma’ refers to the ‘pharmaceutical industry,’ the private sector that invests billions into medical research and services. Pharmaceutical companies employ various (often aggressive) means to protect their investments, and for this reason the term ‘big pharma’ is loaded with a fair amount of distaste or even mistrust (see for example: Angell 2004, The Truth About the Drug Companies: How They Deceive Us and What to do About It).
One such critic is David Resnik, whose 2007 book The Price of Truth provides ample evidence of how academic values such as objectivity, truthfulness, and openness have been corrupted by financial interests. Another critic, Ben Goldacre (2013), wrote critically on the naive view that the public has of medical research and doctors, and how they often fail to acknowledge just how much financial interests shape the medical field.
Johann Hari (2018) writes in a similar vein, in his case focusing on the field of clinical psychology, which he argues has become equally dominated by the pharmaceutical industry. Drug companies, he claims, aggressively advertise an image of depression as a ‘chemical imbalance’ in the brain, with the sole intent of selling drugs to the clinically depressed. They fund scores of studies, ‘cherry-pick’ those that corroborate the positive effects of the drugs they produce, and ‘kill’ the ones that challenge their narrative.
3 Conflicts of Values
3.1 What are Conflicts of Values?
Like conflicts of interest (COI), conflicts of values (COV) compromise the researcher’s judgement, or decision-making. The main difference is that the source of commitment in a conflict of values is not financial in nature.
Thus, if a researcher has close working relationships with research affiliates, is on personal terms with them, or is in a dependent relationship with them, conflicts of values may arise. For example, if a researcher is asked to assess the work of one of their personal associates, it is difficult to avoid the suspicion that their judgement is influenced by this relationship.
In a straightforward example of a conflict of values, a researcher from Shanghai University, acting as the handling editor of a research journal, was asked to process a recently received manuscript. After receiving a positive review from an anonymous reviewer, the paper was published. Not long thereafter, it was discovered that the handling editor had co-authored several papers with the author of the recently published paper. It was also revealed that the author was a former PhD student of the handling editor. The paper was subsequently retracted.
Although it is entirely possible that the relationship between the author and the handling editor played no part in the matter, for the sake of scientific disinterestedness, journals must avoid any suggestion of ‘favoritism.’ And indeed, the journal changed its review policy after the fact, such that no future editor would be allowed to handle manuscripts submitted by colleagues or ‘research alliances’ (partners involved in the same research). (case derived from Retraction Watch, ‘We would now catch’ this conflict of interest, entry September 8th, 2017)
In our understanding of conflicts of values, the key value at stake is reliability:
How do I know your assessment is fair?
How can I trust your valuation is not prejudiced?
Below we discuss a few cases in which conflicting values factored into a researcher’s ability to weigh the situation fairly or justly.
3.2 Allegiance Effect
Maj (2008) discusses a phenomenon known in clinical research as the allegiance effect, the propensity of researchers to favor the school of thought they belong to. This effect plays out in ways similar to how financial conflicts of interest impact drug trials. Maj (2008, p. 91) found that the allegiance effect occurs through the ‘selection of a less effective intervention to compare with the researcher’s favorite treatment; unskillful use of the comparison treatment; focusing on data favoring the preferred treatment in study reports; and failure to publish negative data.’ In short, researchers purposefully restrict themselves to a specific, desirable body of knowledge.
3.3 Disciplinary Bias?
Scientists often feel that grant application review processes are skewed. Some believe that certain disciplines are favored over others, or that certain approaches are more likely to be selected than others. They attribute this to rivalries and disciplinary prejudices in peer reviewing. Should this be the case, then conflicts of values, rather than conflicts of interest, are at play, because no direct monetary gains are to be expected by the reviewers.
A team of French researchers interviewed 98 scientists involved in grant review processes (either as a reviewer or as an applicant) and found that many did in fact believe that disciplinary bias played a role in the process, although they admitted that this was impossible to prove. Nonetheless, many still felt that reviewers strongly support research topics from their own discipline, and that favoritism was almost self-evident. One reviewer even admitted to being ‘much more lenient […] with the people we know’ (Abdoul et al. 2012, e35247).
4 Competing Interests
4.1 What are Competing Interests?
Any situation where collaborating researchers have unaligned intentions or desired outcomes is a situation of competing interests. Throughout the research process, different perspectives and sometimes incompatible expectations are brought into play.
This can be seen, for example, in commissioned research, when a researcher is asked to provide advice in a matter that interests multiple stakeholders, such as governmental bodies, individual clients, and industry professionals. These stakeholders may envision contrary approaches to the research and its focus; they may bring in different values, risk assessment strategies, and expectations.
To help differentiate conflicts of interest and conflicts of values from competing interests, note that the former are in play when a researcher’s judgement is compromised, whereas in the latter it is not the researcher’s decision-making that is called into question. Rather, what is at stake is the researcher’s ability to reach consensus or agreement with multiple stakeholders. Will the research fulfill the needs of all parties? Will the researcher be able to assess the situation justly?
In our understanding of competing interests, the key value at stake is carefulness:
How do I know that the parties involved are represented fairly?
How do I know that the stakeholders’ interests are weighed properly?
Below, we discuss cases in which competing interests affected a researcher’s ability to weigh a situation fairly or justly.
4.2 Wicked Problems
Head and Alford (2015) discuss the difficulties inherent in researching so-called wicked problems in public policy and management research. Wicked problems are unpredictable and open-ended questions that involve a great number of people and present a major challenge to researchers (for example, climate change or pandemic influenza). Problems become ‘wicked’ when policy interventions elicit unforeseen consequences.
Most policy researchers acknowledge that wicked problems cannot be solved through the traditional ‘engineering approaches.’ Instead, researchers must pay attention to the deep-rooted disagreements between stakeholders about the nature, significance, goals, and solutions to these problems. This requires that researchers address ‘value perspectives’ that allow for certain forms of ‘bargaining’ in the political marketplace, aimed at reaching a shared understanding, agreed purpose, and mutual trust. The researcher’s ethical challenge here is to negotiate a balanced perspective that fairly addresses differing perspectives while remaining open-ended at the same time.
4.3 Ideological Interests
An ideologically informed bias, as explored by MacCoun (2015), serves as a basis for competing interests. MacCoun argues that in public policy, the ultimate goals (for example, income equality, reproductive rights, welfare entitlements) are not self-evident and in fact are often contested. This can lead to suspicion that a researcher is biased in their assessment of the outcomes. However, as MacCoun found, such politically tinged biases can also be harmful in another way: adversaries who dislike a researcher’s findings are given a greater opportunity to question the researcher’s motives when an apparent bias is present (Box 8.5).
Box 8.5: Sources of Conflicts
- Conflicts of Interest (COI)
- Conflicts of Values (COV)
- Competing Interests (CI)
- Conflicts of Ownership (COO)
Examples of sources include:
- Employment in a commercial firm
- In-kind support of materials
- Research funding or grant support
5 Conflicts of Ownership
5.1 What Is a Conflict of Ownership?
There are situations in which the use of findings, data, or other aspects of research is restricted by parties who claim ownership over them. They may be external financers, sponsors, commissioners, commercial parties, or any other entity that has secured rights to intellectual property (the right to own an idea or discovery). These parties may demand that certain restrictions apply when the research is shared or utilized. If such restrictions contradict scientific norms (for example, by refusing open access), we propose to speak of conflicts of ownership (COO).
In our understanding of conflicts of ownership, there are two key values at stake, namely autonomy and accountability:
What level of freedom do researchers have to make their own decisions?
What restrictions imposed by others apply to their work?
The difference between a conflict of interest and a conflict of ownership lies in the fact that in a COO, it is not the judgement of the researcher that is called into question, but the restrictions imposed by the parties with whom the researcher is working. Conflicts of ownership differ from competing interests in that the researcher is not at liberty to decide how or what to research, or how to use any subsequent findings.
Intellectual property is the right to own an idea; it gives the holder control over how that idea or discovery is used. Patents challenge the communal ideal that science belongs to everybody and that property rights should be kept to a minimum (see Chap. 2 for a discussion of these ideals). Though patents are mostly associated with the natural sciences, the life sciences, and pharmacology, they are issued in the social sciences as well. Social science patents can be found in, for example, psychological testing protocols; biofeedback methods (devices that determine the psychological state of a subject); and educational games, methods, and instruments (see Box 8.6).
Box 8.6: Patenting Psychological Tests: Costs and Benefits
Since the beginning of the twentieth century, social scientists have been searching for instruments that allow them to measure ‘psychological constructs,’ such as intelligence, dimensions of personality, emotions, and other ‘qualities of the mind.’ From the beginning, economic considerations were the primary impetus for psychological assessment. The driving thought was that cheap and cost-effective testing should replace inefficient and costly interviews, while providing the same information (see Yates and Taub 2003).
As a result of this mission, a large number of psychological tests have been developed and are now being used in clinical and developmental psychology, industrial and organizational psychology, forensic psychology, education, and in other applied fields of the social sciences. In these areas, psychological tests serve a variety of purposes, such as in the selection of candidates, assessment of professional abilities, prediction of developmental trajectories, or planning of treatment.
As psychological testing has become standard practice in various occupational areas of Western society, the amount of money invested in these practices has increased accordingly. Psychological assessment has become an extremely profitable business, and consequently the copyright holders of psychological tests (often the publishers) have guarded access to and use of these instruments, which they consider ‘trade secrets,’ similar to how pharmaceutical companies patent their products.
As a prime example, the publisher Pearson, which owns a number of psychological tests, asserts that ‘test questions and answers, manuals and other materials divulging test questions or answers constitute highly confidential, proprietary testing information which Pearson takes every precaution to protect from disclosure beyond what is absolutely necessary for the purpose of administering the test.’
Their products are sold ‘only to qualified individuals who are bound by the ethical standards of their profession to protect the integrity of the materials by maintaining the confidentiality of the questions and answers’ (www.pearsonclinical.co.uk/information/legal-policies.aspx).
Owners of intellectual property have the right to reproduce or disseminate the research data they possess. The rights afforded to them may include the right to withhold publication or to frame research in certain ways. Copyrights very much challenge the communal ideal of science when they are abused.
When sponsors of a study are interested in certain findings only, and they own the copyrights, they can decide to insist on incomplete or partial reporting. The result is referred to as reporting bias (under-reporting of unexpected or undesirable results).
Song et al. (2010) found ample evidence of reporting bias in their comparative review of 300 empirical studies, identifying pressure from research sponsors, instruction from journal editors, and the intricacies of the research award system (who is eligible to receive which grants?) as substantial influences on how researchers work and report their findings. ‘Clearly, commercial and other competing interests of research sponsors and investigators may influence the profile of dissemination of research findings’ (Song et al. 2010, p. 81).
6 Resolving Conflicts
6.1 Resolving Conflicting Situations
Identifying the various types of conflicts in research is just a first step in dealing with them. The act of resolving those conflicts comes next and requires different strategies, depending in part on the type of conflict involved (see Resnik 2014, for further discussion).
On the one hand, competing interests (CI) and conflicts of ownership (COO) often demand strategies that rely more heavily on judicial solutions. Conflicts of interest (COI) and conflicts of values (COV), on the other hand, are resolved in ways that rely on regulation, instruction, and even mediation.
6.2 Disclosure of Financial or Other Conflicts of Interest
To the parties involved in research, including institutions, government agencies, journals, and research subjects, disclosure is paramount. Disclosure makes explicit and transparent the important details related to the interpretation, credibility, and value of the information presented. It can be used as a simple tool to counteract bias and restore trust.
Most journals require that authors report potential conflicts of interest, both during the online submission process and explicitly within the article itself. Many further print a ‘declaration of interest’ statement at the end of each article.
Some researchers argue, however, that such disclosure policies are inadequate. For example, with regard to the development of the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition), Cosgrove and Wheeler (2013) maintain that current approaches, particularly with regard to the transparency of conflicts of interest, are insufficient to loosen industry’s grip on organized psychiatry, which is simply too powerful to be countered by ‘disclosure measure(s).’ They argue that a more binding policy should be put in place.
6.3 Regulation and Management of Competing Interests
To regulate and manage these issues, special committees or independent bodies have been established to oversee projects involving conflicts of interest, with the authority to approve or deny applications. Institutional Review Boards (IRBs) can fulfil this function, as they often do, and they too will demand the disclosure of any potential COIs that researchers may have. Agencies that award research grants may require that applications meet certain non-COI criteria, specifying, for example, the source or amount of payment or service that is admissible. Finally, the establishment of complaint offices or trust committees may have a further preventive effect.
6.4 Prohibition and Penalizing Researchers Who Violate Disclosure Policies
Some argue that the role of institutions as the primary means for managing conflicts of interest should be reduced. This is because institutions have become players in the field and run the risk of harboring conflicts of interest themselves. Instead, as Resnik (2007) argues, strict rules should be imposed that limit potential COIs, such as those arising from research funding or stock ownership. Further, these rules must carry penalties for those who break them.
On the other hand, a complete prohibition of any possible conflict of interest could deprive science and society of important benefits. Indeed, ‘[forbidding] universities from taking funds from industry would stifle collaboration between academia and industry’ (Resnik 2007, p. 124).
6.5 Resolution in Court
Sometimes conflicts can only be settled in court. Regrettable as that may seem, litigation can also shed light on certain questions, because ownership rights come in the form of copyrights and patents, which carry legal weight. In cases where the owners (sponsors) of research determine what may or may not be disseminated, challenging their decisions in court may be a last resort to settle differences of opinion, though doing so is expensive for individual researchers or research institutes. It can be expected that these types of conflicts will increase as academics collaborate more with third parties and disputes over the rights to the resulting knowledge ramp up concurrently.
6.6 What to Make of All This?
Conflicting interests cast doubt on researchers’ motives and undermine overall trust in science. However, not all conflicting interests result in COIs. The question is how special interests impact a scientist’s research, and more specifically, how open scientists are about that impact. By disclosing a researcher’s affiliations, external positions, and (financial) ties, conflicts of interest and value can be avoided. That being said, additional regulation and legislation will also be necessary in the future.
In this chapter, we’ve looked at science from the perspective of competing markets. We highlighted how different parties (in particular, industry and government) and their differing (commercial) interests influence research agendas. We examined the valorization of scientific knowledge (the focus on the economic value of science) and the emergence of academic capitalism over the past 30 years.
We differentiated between conflicts of interest (COI) and conflicts of values (COV), and discussed their potential to cloud a researcher’s judgement, but we also noted that not all interests lead to conflict situations. Disclosure of a researcher’s affiliations, external positions, and (financial) ties, as well as further regulation may help resolve future COIs.
In addition, we proposed to use the term competing interests (CI) when looking into situations where researchers have to strike a balance between the interests of collaborators. Finally, we explored considerations within conflicts of ownership (COO), where researchers are not at liberty to report on their research as they see fit, because they don’t own it themselves. Copyright and patenting restrictions lie at the base of these conflicts.
While we investigated the reality of conflicts of interest in research, we neglected to examine the forces that drive the functioning of science itself. Under what conditions do researchers actually do their work? How are they affected by political considerations, or by administrative decisions, in practice? These questions will be highlighted in the next chapter.
Abdoul, H., Perrey, C., Tubach, F., Amiel, P., Durand-Zaleski, I., & Alberti, C. (2012). Non-financial conflicts of interest in academic grant evaluation: A qualitative study of multiple stakeholders in France. PLoS One, 7(4), e35247. https://doi.org/10.1371/journal.pone.0035247.
Angell, M. (2004). The truth about drug companies: How they deceive us and what to do about it. New York: Random House.
Aubert, V. (1963). Competition and dissensus: Two types of conflict and of conflict resolution. Journal of Conflict Resolution, 7(1), 26–42. https://doi.org/10.1177/002200276300700105.
Bartelson, J. (2000). Three concepts of globalization. International Sociology, 15(2), 180–196.
Bero, L. A., & Grundy, Q. (2016). Why having a (Nonfinancial) interest is not a conflict of interest. PLoS Biology, 14(12), e2001221. https://doi.org/10.1371/journal.pbio.2001221.
Bok, D. (2003). Universities in the marketplace. The commercialization of higher education. Princeton/Oxford: Princeton University Press.
Chivers, T. (2019, July 4). Does psychology have a conflict-of-interest problem? Nature, 571, 21–23.
Cosgrove, L., & Wheeler, E. E. (2013). Industry’s colonization of psychiatry: Ethical and practical implications of financial conflicts of interest in the DSM-5. Feminism & Psychology, 23(1), 93–106. https://doi.org/10.1177/0959353512467972.
Cosgrove, L., Krimsky, S., Vijayaraghavan, M., & Schneider, L. (2006). Financial ties between DSM-IV panel members and the pharmaceutical industry. Psychotherapy and Psychosomatics, 75(3), 154–160. https://doi.org/10.1159/000091772.
Davis, M. (2001). Introduction. In M. Davis & A. Stark (Eds.), Conflict of interest in the professions (pp. 3–19). Oxford: Oxford University Press.
De Jonge, B., & Louwaars, N. (2009). Valorizing science: Whose values? EMBO Reports, 10(6), 535–539. https://doi.org/10.1038/embor.2009.113.
Fine, B., & Saad-Filho, A. (2017). Thirteen things you need to know about neoliberalism. Critical Sociology, 43(4–5), 685–706. https://doi.org/10.1177/0896920516655387.
Goldacre, B. (2013). Bad pharma: How drug companies mislead doctors and harm patients. London: Faber & Faber.
Greenberg, D. S. (2009). Science for sale: The perils, rewards and delusions of campus capitalism. Chicago: University of Chicago Press.
Head, B. W., & Alford, J. (2015). Wicked problems: Implications for public policy and management. Administration & Society, 47(6), 711–739. https://doi.org/10.1177/0095399713481601.
Healy, D. (2002). The creation of psychopharmacology. Cambridge, Mass: Harvard University Press.
Hermanowicz, J. C. (2016). Universities, academic careers, and the valorization of “shiny things”. In The university under pressure (research in the sociology of organizations, Vol. 46) (pp. 303–328). Emerald Group Publishing Limited.
Hoekman, J. (2012). Science in an age of globalisation: The geography of research collaboration and its effect on scientific publishing (Dissertation). Eindhoven: Technische Universiteit Eindhoven.
Hari, J. (2018, Jan 7). Is everything you think you know about depression wrong? The Observer.
Krimsky, S. (2012). Do financial conflicts of interest bias research? An inquiry into the ‘funding effect’ hypothesis. Science, Technology, & Human Values, 38(4), 566–587. https://doi.org/10.1177/0162243912456271.
MacCoun, R. J. (2015). The epistemic contract: Fostering an appropriate level of public trust in experts. In B. H. Bornstein & A. J. Tomkins (Eds.), Motivating cooperation and compliance with authority. The role of institutional trust (pp. 191–214). London: Springer. https://doi.org/10.1007/978-3-319-16151-8_9.
Maj, M. (2008). Non-financial conflicts of interests in psychiatric research and practice. British Journal of Psychiatry, 193(2), 91–92. https://doi.org/10.1192/bjp.bp.108.049361.
Martens, M. P., Herman, K. C., Takamatsu, S. K., Schmidt, L. R., Herring, T. E., Labuschagne, Z., & McAfee, N. W. (2016). An update on the status of sponsored research in counseling psychology. The Counseling Psychologist, 44(4), 450–478. https://doi.org/10.1177/0011000015626271.
Mauss, M. (1970). The gift. Forms and functions of exchange in archaic societies. London: Cohen & West Ltd.
Mosbah-Natanson, S., & Gingras, Y. (2013). The globalization of social sciences? Evidence from a quantitative analysis of 30 years of production, collaboration and citations in the social sciences (1980–2009). Current Sociology, 62(5), 626–646. https://doi.org/10.1177/0011392113498866.
Resnik, D. B. (1998). The ethics of science. An introduction. London: Routledge.
Resnik, D. B. (2007). The price of truth. How money affects the norms of science. Oxford: Oxford University Press.
Resnik, D. B. (2014). Science and money: Problems and solutions. Journal of Microbiology and Biology Education, 15(2), 159–161. https://doi.org/10.1128/jmbe.v15i2.792.
Saunders, D. (2007). The impact of neoliberalism on college students. Journal of College and Character, 8(5), 1–9. https://doi.org/10.2202/1940-1639.1620.
Sherry, J. F. (1983). Gift giving in anthropological perspective. Journal of Consumer Research, 10(2), 157–168. https://doi.org/10.1086/208956.
Slaughter, S., & Rhoades, G. (2004). Academic capitalism and the new economy: Markets, state and higher education. Baltimore: Johns Hopkins University Press.
Song, F., Parekh, S., Hooper, L., Loke, Y. K., Ryder, J., Sutton, A. J., Hing, C., Kwok, C. S., Pang, C., & Harvey, I. (2010). Dissemination and publication of research findings: An updated review of related biases. Health Technology Assessment, 14(8), 1–193. https://doi.org/10.3310/hta14080.
Soudijn, K. (2012). Ethische codes voor psychologen [Ethical code for psychologists] (3rd ed.). Amsterdam: Nieuwezijds BV.
Turner, C., & Spilich, G. J. (1997). Research into smoking or nicotine and human cognitive performances: Does the source of funding make a difference? Addiction, 2(11), 1423–1426. https://doi.org/10.1111/j.1360-0443.1997.tb02863.x.
Volochinsky, D. P., Soto, R. V., & Winkler, M. I. (2018). Gifts and conflict of interest: In shades of gray. Acta Bioethica, 24(1), 95–104. Retrieved from https://revistaidiem.uchile.cl/index.php/AB/article/view/49382/57559.
Washburn, J. (2005). University, Inc.: The corporate corruption of higher education. New York: Perseus Books.
Wiersma, M., Kerridge, I., & Lipworth, W. (2017). The dangers of neglecting non-financial conflicts of interest in health and medicine. Journal of Medical Ethics, 44(5), 319–322. https://doi.org/10.1136/medethics-2017-104530.
Yates, B. T., & Taub, J. (2003). Assessing the costs, benefits, cost-effectiveness, and cost-benefit of psychological assessment: We should, we can, and Here’s how. Psychological Assessment, 15(4), 478–495. https://doi.org/10.1037/1040-3518.104.22.1688.
Zhuang, G., & Tsang, A. S. (2008). A study on ethically problematic selling methods in China with a broaden concept of gray-marketing. Journal of Business Ethics, 79(1–2), 85–101. https://doi.org/10.1007/s10551-007-9397-1.
References for Case Study: Complicit to Torture?
Fink, S. (2009, May 5). Tortured profession: Psychologists warned of abusive interrogation, then helped craft them. https://www.propublica.org/article/tortured-profession-psychologists-warned-of-abusive-interrogations-505
Zagrin, A. (2005, June 20) Inside the interrogation of detainee 063. Time Magazine.
1 Electronic Supplementary Materials
1 Case Study: Complicit to Torture?
Near the end of World War Two, the US government felt the need to set up a program to help teach soldiers how to cope with torture, were they to find themselves captured. This program, known as SERE (Survival, Evasion, Resistance, and Escape), was extended through the Vietnam War era, and became relevant again after the 9/11 attacks and the ensuing wars in Iraq and Afghanistan.
Decades after its implementation, it was realized that SERE techniques could be reverse-engineered and used to interrogate detainees. Psychiatrist Paul Burney and psychologist John Leso (with assistance from an unnamed technician) were instructed to form a Behavioral Science Consultation Team (BSCT) and ramp up intelligence collection. In June of 2002, they were sent (under highly contested conditions) to Guantánamo Bay, the US detainment camp on the shores of Cuba that held over 700 prisoners suspected of terrorism.
As explored by Sheri Fink in her 2009 exposé ‘Tortured Profession’ (from which we draw the following details, including the quotations from several of the people involved), Burney and Leso seemed to have transgressed ethical boundaries when they devised this reverse-engineered interrogation method for the US Army. Burney and Leso had no formal training in interrogation and there were no standard operating procedures in place to guide their work. They confessed that they did not know what they were supposed to do, and therefore contacted psychologist Larry James for guidance. James had served as an Army psychiatrist at Abu Ghraib (the infamous prison in Iraq where gruesome violations of human rights by US military personnel later took place against detainees). Apparently, James too felt unsure of how to proceed, and turned to Louis Banks for guidance. Banks was experienced with preparing soldiers to deal with mild ‘forms of torture,’ including slapping, pushing (into walls for example), forced placement into uncomfortable positions, and waterboarding (simulating the feeling of drowning).
Francisco de Goya: Why? Etching, from the series ‘The disasters of war’ (1810–1820)
They furthermore offered three approaches for different categories of prisoners, with the third reserved for ‘high-priority detainees’ who showed ‘advanced resistance.’ These prisoners could be subjected to food restriction for 24 h, once a week. They could also be led to believe they would be subjected to experiences with ‘painful or fatal outcomes.’ Furthermore, they could be exposed to cold weather or water ‘until such time as the detainee began to shiver.’
However, the authors warned in their memo that ‘physical and/or emotional harm from the above techniques may emerge months or even years after their use,’ and that ‘the most effective interrogation strategy is based on building rapport.’ In fact, they continued, ‘interrogation techniques that rely on physical or adverse consequences are likely to garner inaccurate information and create an increased level of resistance.’ They had fewer reservations regarding the use of psychological pressure, such as sleep deprivation, withholding food, and isolation, which they deemed ‘extremely effective.’ They further noted that it was ‘vital’ to disrupt normal camp operations through the creation of ‘controlled chaos’ (quoted in Fink 2009).
Banks (the experienced Army psychiatrist) who received a copy of the memo, expressed his own misgivings about the use of physical pressures, which he believed could bring about ‘a large number of potential negative side effects.’ For one, when ‘individuals are gradually exposed to increasing levels of discomfort, it is more common for them to resist harder.’ Further, ‘it usually decreases the reliability of the information.’
Detainee at the Abu Ghraib prison, with bag over their head, standing on a box with wires attached, Nov. 4, 2003. (Source: public domain)
In June 2005, a log of el-Khatani’s interrogation was published by Time Magazine, causing a good deal of public indignation (Zagrin 2005). Bioethicists condemned the procedures as a clear violation of the ‘golden rule’ in bioethics (do not harm, contribute to beneficence). A week later, the American Psychological Association (APA) set up a ten-member committee (with six members having worked or consulted for the military or the CIA) to investigate the ethical aspects of the case.
The APA ruled that it was ‘consistent with the APA Ethics Code for psychologists to consult with interrogators in the interests of national security.’ However, psychologists should themselves not participate in torture, and ‘have a responsibility to report it.’ Practicing psychologists are ‘committed to the APA ethics code whenever they encounter conflicts between ethics and law,’ and if a conflict cannot be resolved, ‘psychologists may adhere to the requirements of the law.’
Banks and James (both members of the APA) also defended the Burney-Leso team (who drafted the memo), arguing that these psychologists had helped fix the problems rather than cause them. In 2005, James wrote: ‘the fact of the matter is that since Jan 2003, wherever we have had psychologists no abuses have been reported’ (quoted in Fink 2009).
The APA’s lenient attitude ultimately backfired. Two years later, a petition by APA members was accepted that banned psychologists from working in detention settings where international law or the U.S. Constitution was violated.
Of course, since this case broke and caught public attention, many more details about the conditions at Guantánamo have come to light. Legal battles have been fought in court over the legality of detaining prisoners under such poor conditions, with little access to counsel and most of them having never been tried. Attempts by President Obama to close Guantánamo were unsuccessful. As of early 2020, some 40 detainees remain.
Consider this case from the perspective of conflicts of interest, conflicts of values, and competing interests.
Identify any COI, COV and/or CI. Which values, norms, or interests are at stake in this case?
Consider whether you find Burney and Leso guilty of professional misconduct. Are they responsible for facilitating unethical (irresponsible) behavior towards detainees, or should their expressed doubts about the use of such harsh interrogation procedures be regarded as a protection against liability?
What recommendations would you give to a psychologist who is asked to engage in a task they feel to be unethical?
1 Suggested Reading
For a critical appraisal of the corrupting effects of financial conflicts of interest on scientific norms, we recommend Derek Bok, Universities in the Marketplace (2003), Jennifer Washburn, University, Inc. (2005), David Resnik, The Price of Truth (2007), and Daniel Greenberg, Science for Sale (2009), though these sources mostly target the pharmaceutical and biomedical sciences. A similar study for the social sciences is still needed; we hope a future scholar will address this gap. On the subject of ‘valorizing science,’ we direct readers to the short and very informative piece by De Jonge and Louwaars (2009). For an illuminating perspective on conflicts of interest, we refer to Resnik (1998).
© 2020 The Author(s)
Bos, J. (2020). Conflicts of Interest. In: Research Ethics for Students in the Social Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-48415-6_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-48414-9
Online ISBN: 978-3-030-48415-6