Those Who Have the Gold Make the Evidence: How the Pharmaceutical Industry Biases the Outcomes of Clinical Trials of Medications
Lexchin, J. Sci Eng Ethics (2012) 18: 247. doi:10.1007/s11948-011-9265-3
Pharmaceutical companies fund the bulk of clinical research that is carried out on medications. Poor outcomes from these studies can have negative effects on sales of medicines. Previous research has shown that company funded research is much more likely to yield positive outcomes than research with any other sponsorship. The aim of this article is to investigate the possible ways in which bias can be introduced into research outcomes by drawing on concrete examples from the published literature. Poorer methodology in industry-funded research is not likely to account for the biases seen. Biases are introduced through a variety of measures including the choice of comparator agents, multiple publication of positive trials and non-publication of negative trials, reinterpreting data submitted to regulatory agencies, discordance between results and conclusions, conflict-of-interest leading to more positive conclusions, ghostwriting and the use of “seeding” trials. Thus far, efforts to contain bias have largely focused on more stringent rules regarding conflict-of-interest (COI) and clinical trial registries. There is no evidence that any measures taken so far have stopped the biasing of clinical research, and it is not clear that they have even slowed the process. Economic theory predicts that firms will try to bias the evidence base wherever the benefits of doing so exceed the costs. The examples given here confirm what theory predicts. What will be needed to curb and ultimately stop this bias is a paradigm change in the way that we treat the relationship between pharmaceutical companies and the conduct and reporting of clinical trials.
Keywords: Bias · Clinical trials · Conflict-of-interest · Ghostwriting · Pharmaceutical industry
For the past couple of decades the pharmaceutical industry has operated on a blockbuster model, relying on drugs that generate $1 billion or more in worldwide sales to provide the rate of return that shareholders have come to demand. Clinical trials, trials that test drugs in humans, form the basis for the evidence used in the practice of medicine (Wyatt 1991) and trials that fail to demonstrate effectiveness or that raise significant safety concerns can dramatically affect the sale of products. Witness what happened following the July 2002 publication of the results of the Women’s Health Initiative trial that found that the estrogen/progestin combination caused an increased risk of cardiovascular disease and breast cancer in postmenopausal women (Writing Group for the Women’s Health Initiative Investigators 2002). By June 2003 prescriptions for Prempro®, the most widely sold estrogen/progestin combination, had declined by 66% in the United States (US) (Hersh et al. 2004) and sales of estrogen replacement therapy were off by a third in Ontario (Austin et al. 2003).
In the US pharmaceutical and biotechnology firms contributed almost half of the $94 billion spent on biomedical research in 2004 with the bulk of industry spending going towards clinical research, that is research aimed at testing medications in humans (Moses et al. 2005). (While the pharmaceutical industry pays for the great majority of the clinical studies on drugs, 84% of the funding for the basic research that produces these drugs comes from the public sector and only 12% from companies (Light 2006)). In Norway most of the clinical trials approved by five regional medical research ethics committees were conducted by industry (Hole et al. 2001). Over the period 1994–2003 the vast majority of the most cited randomized controlled trials (RCT) received funding from industry, and the proportion increased significantly over time. Eighteen of the 32 most cited trials in the medical literature that were published after 1999 were funded by industry alone (Patsopoulos et al. 2006).
Given that pharmaceutical companies fund most clinical research and how critical the results of trials are to industry, there is a strong financial pressure to ensure that results from the research are favourable to the product being studied. In 2003 Lexchin and colleagues found that industry funded research was 4 times more likely to produce positive outcomes than research with any other source of sponsorship (Lexchin et al. 2003). Their results applied across a wide range of disease states, drugs and drug classes and were consistent over a period spanning at least two decades. Since that publication there have been multiple others looking at the same issue using different methodologies and examining different classes of drugs. A recent qualitative systematic review examined the evidence subsequent to the Lexchin article and found 17 additional articles that supported this conclusion, with only 2 dissenting (Sismondo 2008b).
The knowledge that pharmaceutical companies try to ensure that research generates the outcomes that are commercially favourable to them is becoming more widespread both among the medical profession and the general public. Beyond recognizing that this phenomenon exists, it is important to understand specifically the various ways that bias can be introduced. This knowledge can help ensure that the medical literature is properly interpreted by alerting people to biases in what they read and also by pointing the way to how the system of generating clinical knowledge can be reformed. Other authors have summarized the literature on this topic but those articles are dated (Bero and Rennie 1996) or draw examples from trials on just one particular class of drugs (Safer 2002). Sismondo’s article (2008a) is more recent but his focus is on general mechanisms that are used. The additional value that this article brings is that it uses published concrete examples of commonly used techniques. Providing concrete examples takes the discussion out of the theoretical and grounds it in reality, making clear that there are real consequences to what the pharmaceutical industry is doing. In addition, the many examples cited here help to counter the argument that instances of industry malfeasance are relatively innocuous and infrequent (Stossel 2005).
After first looking at one commonly cited explanation for the positive outcome of industry studies, I turn to an examination of issues related to the design of trials, the difference in how positive and negative trials are published, how data is reinterpreted between the time it is submitted to regulatory agencies and when it is published, the discordance between results and conclusions, how conflict-of-interest on the part of investigators affects conclusions, the effect of ghostwriting on how trials are presented and the use of seeding trials.
The second main contribution that this article makes to the topic of industry-induced bias is the discussion in the penultimate section of how to combat this bias. Here I critically examine the most commonly suggested remedies and then go on to consider more radical proposals.
Better Power and Use of Preliminary Data Do Not Explain Why Industry Trials Turn Out Better
One major defense offered as to why industry funded trials are more likely to be positive is that drug companies have the resources to mount trials with large numbers of patients that are powered to find statistically significant differences. Another, as articulated by Fries and Krishnan (2004), is that extensive use of preliminary data allows industry to design studies with a high likelihood of being positive. Of course, large numbers and preliminary data may mask the other biases (to be discussed below) that are the real predictors of a positive trial. Moreover, the former argument raises the question of whether a statistical difference translates into a clinical difference while the latter ignores the issue of whether laboratory and animal data is good enough to predict the performance of new drugs in humans. In this regard, the success rate for new drugs entering clinical trials is only 1 in 5 (DiMasi et al. 2010).
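The power argument can be made concrete. Under the usual normal approximation, the number of patients needed per arm to detect a given difference in means shrinks with the square of that difference, so a sufficiently large trial can make a clinically trivial difference statistically significant. A minimal sketch using the standard textbook sample-size formula (the function name is mine):

```python
from math import ceil

def n_per_arm(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Patients per arm to detect a true mean difference `delta`
    (outcome standard deviation `sigma`) with 80% power at a
    two-sided alpha of 0.05, via the normal-approximation formula
    n = 2 * (z_alpha + z_beta)**2 * sigma**2 / delta**2."""
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Half a standard deviation -- a clinically meaningful difference:
print(n_per_arm(0.5, 1.0))   # 63 patients per arm
# A tenth of a standard deviation -- arguably clinically trivial:
print(n_per_arm(0.1, 1.0))   # well over 1,500 patients per arm
```

Only a sponsor with deep pockets can routinely mount trials of the second size, which is why statistical significance in a very large trial says nothing by itself about clinical importance.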
Leaving aside these questions, these explanations are still inadequate to explain the superiority associated with industry sponsorship in situations where head-to-head trials of two different medications have different outcomes depending on which company is sponsoring the trial. In an examination of trials of second generation drugs used in the treatment of diseases such as schizophrenia (usually referred to as “atypical antipsychotics”), Heres et al. (2006) looked at different trials that examined the effectiveness of the same two drugs. They found that different trials led to contradictory overall conclusions, depending on who sponsored the study. For example, there were 9 studies comparing olanzapine and risperidone. Five of these were sponsored by Eli Lilly, makers of olanzapine, and all favoured that drug, whereas 3 of the 4 sponsored by Janssen, makers of risperidone, favoured that medicine. Similarly, RCTs of head-to-head comparisons of statins were more likely to report results and conclusions favouring the sponsor’s product compared to the comparator drug (Bero et al. 2007). While the funding and outcomes of meta-analyses and pharmaco-economic (cost:benefit) studies are outside the scope of this article, both also show the same relationship with funding as clinical trials do (Bell et al. 2006; Franco et al. 2005; Hartmann et al. 2003; Miners et al. 2005) and explanations related to trial size and power are not applicable in either case.
Explanations for Bias
There are likely to be multiple different reasons to explain the superior outcome of industry funded research but lower quality methodology (that is, poorer design) as it is usually measured has not been a consistent finding. While some studies do report that industry funding is associated with poorer methodology scores (Jørgensen et al. 2008; Montgomery et al. 2004) others have reported that there is no difference in methodologic quality between industry and non-industry funded research or even that industry sponsorship is associated with higher quality (Heres et al. 2006; Lexchin et al. 2003; Perlis et al. 2005a).
One reason why industry funded trials may appear to be of high methodologic quality is the way that quality is typically determined, e.g., calculating the Jadad score that incorporates items such as method of randomization, blinding and accounting for dropouts and withdrawals. While trials may score well on the Jadad (or other) scale, exclusive use of the items in these scales may not be sensitive enough to pick up other more subtle forms of bias such as the ones explored below.
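The Jadad scale reduces to a handful of yes/no and three-way judgements, which makes its blind spots easy to see. A minimal sketch (the function name and the tri-state labels are mine; the scoring rules are the standard ones):

```python
def jadad_score(randomized, randomization_method,
                double_blind, blinding_method,
                withdrawals_described):
    """Standard Jadad score, 0-5.

    `randomization_method` and `blinding_method` each take one of
    "appropriate", "inappropriate" or "not described"; an appropriate
    method earns a bonus point, an inappropriate one loses a point.
    """
    bonus = {"appropriate": 1, "inappropriate": -1, "not described": 0}
    score = 0
    if randomized:
        score += 1 + bonus[randomization_method]
    if double_blind:
        score += 1 + bonus[blinding_method]
    if withdrawals_described:
        score += 1
    return max(score, 0)

# A trial can earn the maximum score of 5 ...
assert jadad_score(True, "appropriate", True, "appropriate", True) == 5
# ... while still using an unequal comparator dose or a biased
# analysis population, which the scale never asks about.
```

None of the scale’s items touches the choice of comparator, dose or analysis population, so a trial engineered around those features can still earn a perfect score.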
Inappropriate Choice of Doses, Dosing Intervals and Comparators
In head-to-head trials, companies can use low doses of a comparator agent to make their drug seem more effective or use high doses of the comparator to make their drug appear to have fewer side effects. Using unequal doses violates the scientific principle of equipoise, the principle that it is only ethical to enroll patients in clinical trials when there is substantial uncertainty as to which of the trial treatments would most likely benefit them (Djulbegovic et al. 2003).
Safer (2002) notes that companies comparing their new atypical antipsychotic medications to the older drug haloperidol have frequently used a fixed high dose of haloperidol to virtually ensure that their product will have fewer extrapyramidal side effects (side effects involving involuntary body movements). In head-to-head trials, dose ranges of clozapine and olanzapine, both atypical antipsychotics, are often too strictly limited, resulting in low mean daily doses and the conclusion that they are not as efficacious as products made by the sponsoring company (Heres et al. 2006). Commercially sponsored studies comparing two antidepressant drugs often schedule an unusually rapid and substantial dose increase in the one not manufactured by the sponsoring company (Safer 2002), making that product appear to have more side effects.
In 13 comparative trials of the antifungal fluconazole versus amphotericin B in cancer patients with low white blood cell counts, who were therefore susceptible to fungal infections, nearly 80% of patients were given the poorly absorbed oral formulation of amphotericin B instead of the intravenous form. Three antifungal trials in the same type of patients grouped amphotericin B with nystatin, thereby creating a bias in favour of fluconazole as nystatin is ineffective in this group of patients (Johansen and Gøtzsche 1999). Not only does the conduct of these trials lead to misleading information, but the trials are probably unethical insofar as they have the potential to expose patients to harm or to prolong suffering because of a lack of benefit from inappropriate doses.
Multiple Publication of Positive Trials and Non-publication of Negative Trials
Evidence of significant biasing of the published literature is widespread and systematic. It has increasingly come to light that drug companies keep unfavourable results from being published while publishing favourable ones prominently, often more than once. The net result of this practice is to introduce a bias into the assessment of the effectiveness of a product.
Melander and colleagues compared published versions of trials for 5 selective serotonin reuptake inhibitor (SSRI) antidepressants with the versions of these studies submitted to the Swedish regulatory authority in order to get marketing approval (Melander et al. 2003). They demonstrated that studies showing positive effects from these drugs were published as stand-alone publications more often than studies with non-significant results; many publications used a form of statistical analysis more likely to yield favourable results (per-protocol rather than intention-to-treat analyses); and 21 studies, out of 42, contributed to at least two publications each, and three studies contributed to five publications each. The latter point echoes what Gøtzsche (1989) and Huston and Moher (1996) found for publications about nonsteroidal anti-inflammatories (NSAIDs) and risperidone, respectively—favourable trials are frequently published more than once. Inclusion of duplicate data in a meta-analysis of ondansetron led to a 23% overestimate of its antiemetic efficacy (Tramèr et al. 1997). Finally, Spielmans et al. (2010) found that 6 clinical trials on duloxetine had their data utilized as part of 20 or more separately published pooled analyses. The vast majority of the analyses had at least one author employed by the manufacturer.
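The arithmetic behind an overestimate like the ondansetron one is easy to reproduce in miniature. Under standard fixed-effect inverse-variance pooling, counting a favourable trial twice pulls the pooled estimate towards it; the numbers below are invented purely for illustration:

```python
def pooled_estimate(effects, variances):
    """Fixed-effect (inverse-variance weighted) pooled effect size."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical trials of equal precision:
effects = [0.60, 0.20, 0.10]
variances = [0.04, 0.04, 0.04]

honest = pooled_estimate(effects, variances)               # 0.30
# The favourable first trial published twice and counted twice:
inflated = pooled_estimate(effects + [0.60], variances + [0.04])
print(round(inflated / honest - 1, 2))                     # 0.25, a 25% overestimate
```

Duplicate publication has exactly this effect on any meta-analyst who cannot tell that two reports describe the same patients, which is why multiple publication of the same positive trial is more than a cosmetic problem.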
Although the SSRI class of antidepressants was never approved for the treatment of depression in children or adolescents, drugs in this class were frequently prescribed off-label to these groups of patients. A meta-analysis of the published literature indicated that there was a favourable benefit:harm profile for some SSRIs. However, the equation changed when the unpublished studies were added into the meta-analysis. When all of the studies, published and unpublished, were combined the conclusion was that, except for Prozac® (fluoxetine), the risks could outweigh the benefits (Whittington et al. 2004).
Reinterpreting Data Submitted to Regulatory Agencies
Out of 74 studies registered with the Food and Drug Administration (FDA) dealing with 12 antidepressants “a total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published.” An additional 11 studies that produced either negative or questionable results were published in a way that conveyed a positive outcome. According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive (Turner et al. 2008). Just as disturbing as this selective publication was the fact that for all 12 drugs the effect size in the published trials was greater than the effect size reported to the FDA by a mean of 32%, meaning that the drugs appeared much more effective to clinicians reading the medical literature than they likely were. Wyeth attempted to dismiss its failure to publish two negative Effexor® (venlafaxine) studies by claiming that they were ‘failed studies’ instead of studies showing that the drug didn’t work (Ninan et al. 2008).
What Turner and colleagues found about antidepressant studies also occurs more generally. Another study looked at 164 efficacy trials submitted to the FDA in the 2001–2002 period in support of 33 new drug applications (NDA). Many trials remained unpublished 5 years after FDA approval of the drug, and those that were published were much more likely to have positive results. In the 164 trials there were 43 primary outcomes that did not favour the drug and 20 of these 43 were excluded from the publications. The statistical significance of 5 of the remaining 23 outcomes was changed in the published literature and 4 of the 5 changes favoured the drug in question. In total there were 99 conclusions that were present in both the NDA and publications. Nine of these were changed from the former (NDA) to the latter (publications) and all favoured the companies’ products (Rising et al. 2008).
Discordance Between Results and Conclusions
Although the results that are reported in clinical trials may be accurate, authors may distort the meaning of the results and present conclusions that are more favorable than are warranted from the data. In head-to-head trials of NSAIDs, all paid for by pharmaceutical companies, 86% (19/22) concluded that the drug made by the sponsor of the trial was less toxic than the comparator. However, this conclusion was only justified by the data presented in 12 of the trials (Rochon et al. 1994).
The results of industry-supported meta-analyses also appear to be “spun” to yield favourable conclusions. Yank et al. analyzed meta-analyses of antihypertensives looking at clinical outcomes in adults. Although financial ties with industry were not associated with favourable results the same was not true of the conclusions that these meta-analyses reached. Even when controlling for other characteristics of the meta-analyses, the only factor associated with positive conclusions was if there was a relationship to industry (Yank et al. 2007). Finally, findings about safety information are also subject to misinterpretation. If studies of inhaled corticosteroids found a statistically significant increase in adverse effects associated with the study drug, the authors of industry-funded trials were still more likely to conclude that the medication was “safe” than were authors of trials without industry funding (Nieto et al. 2007).
Conflict-of-Interest and Conclusions
Conflict-of-interest (COI) was not explicitly looked at as a factor in distorting results in the articles cited in the last section but there is a body of literature that demonstrates that when authors have COI they are more likely to make a favourable recommendation about a product. As Sismondo (2008a) points out, COI probably does not operate on a conscious level but rather the act of accepting funding from a pharmaceutical company creates a gift relationship between the investigator and the sponsor wherein the person receiving the “gift” feels an obligation to repay the present in some manner. “When a gift or gesture of any size is bestowed, it imposes on the recipient a sense of indebtedness. The obligation to directly reciprocate, whether or not the recipient is conscious of it, tends to influence behavior” (Katz et al. 2003). In this light, researchers need not have any material interest in the outcome of their research but subconsciously they create conditions that yield the results most favourable to the company providing the resources to undertake the study.
Kjaergard and Als-Nielsen (2002) looked at all original clinical RCTs published in the BMJ between 1997 and mid-June 2001. In those publications where authors declared a financial COI, the conclusions that they reached were significantly more likely to be positive towards the experimental intervention than if COI was not present. The association between financial COI and authors’ conclusions was not explained by methodological quality, statistical power, type of experimental intervention, type of control intervention or medical specialty. These results from articles in the BMJ were replicated using material from the New England Journal of Medicine and JAMA. Once more, there was a strong association between those studies whose authors had COI and positive findings and that association persisted after controlling for sample size, study design, and country of primary authors (Friedman and Richter 2004).
What applies to RCTs in general also applies to individual clinical areas. Compared to studies where authors did not report a COI, RCTs in dermatology where there was a COI were significantly more likely to report a positive outcome (Perlis et al. 2005a). Once statistical adjustment was made for three factors: industry funding, the Jadad score and the number of participants in the trial, the relationship between COI and positive outcomes was no longer significant. In pharmaceutical company funded RCTs comparing psychiatric drugs to placebo the chance that the study would report a positive outcome was 8.4 times greater if one of the authors had a COI. In the absence of industry funding there was no association between author COI and positive outcomes (Perlis et al. 2005b). Finally, amongst randomized oncology trials that looked at overall survival, those with COI were more likely to have positive findings (Jagsi et al. 2009).
Ghostwriting
Ghostwriters are men and women specifically recruited to take data from clinical trials and write an article with a “spin” favourable to the drug. The company making the drug, or someone working on its behalf, then recruits a well-known academic or doctor to sign the write-up and masquerade as the author. When the article eventually appears in print there is no acknowledgement of the role played by the ghostwriter in its production.
In the course of legal proceedings, a document outlining the involvement of the medical information company Current Medical Directions (CMD) in the preparation of 85 studies about Zoloft® (sertraline), an SSRI made by Pfizer, was made available to Healy and Cattell (Healy and Cattell 2003). There were a number of manuscripts that the document suggested originated within communication agencies, with the first draft of articles already written and the authors’ names listed as ‘to be determined’. Using a search of the medical literature, Healy and Cattell looked for evidence of publication of these papers. Out of 55 publications that they found, only 13 did not appear to have had any involvement with CMD. When articles with CMD involvement were compared to ones on Zoloft that were independently produced, the former were cited more often and published in more prestigious medical journals. Since these articles were almost uniformly favourable to Zoloft there is a high likelihood that the overall literature regarding this drug was biased towards presenting a more positive view of the drug than was warranted.
Court documents that became public revealed how Merck used ghostwriting to ensure that articles about Vioxx® (rofecoxib) would present a positive picture about the safety and effectiveness of the drug. “Merck employees work[ed] either independently or in collaboration with medical publishing companies to prepare manuscripts and subsequently [recruit] external, academically affiliated investigators to be authors. Recruited authors were frequently placed in the first and second positions of the authorship list. For the publication of scientific review papers, documents were found describing Merck marketing employees developing plans for manuscripts, contracting with medical publishing companies to ghostwrite manuscripts, and recruiting external, academically affiliated investigators to be authors. Recruited authors were commonly the sole author on the manuscript and offered honoraria for their participation. Among 96 relevant published articles … 92% (22 of 24) of clinical trial articles published a disclosure of Merck’s financial support, but only 50% (36 of 72) of review articles published either a disclosure of Merck sponsorship or a disclosure of whether the author had received any financial compensation from the company” (Ross et al. 2008).
Ghostwriting is not only used to ensure that clinical trials report an outcome favourable to the sponsoring pharmaceutical company but is also utilized to sow doubt about unfavourable research. The Heart and Estrogen/progestin Replacement Study (HERS) trial found that hormone therapy offered no benefit in preventing cardiovascular events in women with cardiovascular disease (Hulley et al. 1998). After the publication of this trial Wyeth commissioned ghostwritten articles questioning the results and maintaining that hormone therapy had a protective effect (Fugh-Berman 2010).
Seeding Trials
Finally, the increasing use of postmarketing studies, studies done after a drug is already on the market (Dembner 2002), offers another avenue for introducing bias into the research process. Doctors who participate in clinical trials involving medicines are known to increase their use of trial drugs (Andersen et al. 2006). Companies take advantage of this knowledge to sponsor these trials, referred to as “seeding” trials, whose sole purpose is to get doctors to start using a product and establish it as a regular part of their prescribing. One executive at a contract research organization, a company hired by a pharmaceutical firm to conduct clinical trials, is quoted as saying that “We have been approached by several pharmaceutical manufacturers to conduct ‘seed’ studies. These studies are usually intended to increase the use of the manufacturer’s product and sometimes lack scientific integrity …. The intent is to influence physician and patient behavior” (Psaty and Rennie 2006).
The most widely publicized seeding trial has been the ADVANTAGE study undertaken by Merck to promote the use of Vioxx®. Based on an analysis of Merck internal and external correspondence, reports, and presentations Hill and colleagues showed that “the trial was designed by Merck’s marketing division to fulfill a marketing objective; Merck’s marketing division handled both the scientific and the marketing data, including collection, analysis, and dissemination; and Merck hid the marketing nature of the trial from participants, physician investigators, and institutional review board members” (Hill et al. 2008).
Protecting the Public Interest
The Reformist Package
Thus far the task of separating pharmaceutical companies from control of the data originating out of clinical trials has focused on more stringent rules regarding COI, the use of clinical registries and giving more responsibility to the researchers who actually carry out the trials. Declaration of COI in medical journals has been progressively tightened (Drazen et al. 2010), but as recently as 2004, even in journals with detailed COI policies, 8% of articles had relevant author COI that was not reported to readers (Goozner 2004). Furthermore, author policies are variable, depending on the type of manuscript submitted, and the information collected is often not published (Cooper et al. 2006). Surveys of academic health science institutions in Canada and the US have shown considerable variation and laxity with regard to COI regulations for individual investigators (Lexchin et al. 2008; van McCrary et al. 2000) and at the institutional level (Rochon et al. 2010; Campbell et al. 2007). While there may have been improvements since these surveys were undertaken, the latest update of the annual COI survey of US medical schools undertaken by the American Medical Student Association gave only 9 of 149 institutions an A, with 35 receiving a grade of F (AMSA PharmFree Scorecard 2009). One reflection of the overall weakness of COI policies is that only 13 of the top 50 US academic institutions expressly prohibit ghostwriting while 26 had no published policies on the subject (Lacasse and Leo 2010).
The initial idea behind clinical trial registries was to create a public database that provides basic information about clinical trial protocols in an effort to identify the existence of trials that pharmaceutical companies (and others) choose not to submit for publication (or that are rejected for publication). The first large-scale registry came out of the FDA Modernization Act, section 113, which led to the establishment of ClinicalTrials.gov in 2000 by the National Library of Medicine on behalf of the National Institutes of Health (NIH) (Zarin et al. 2005). In September 2005, the International Committee of Medical Journal Editors, with membership from many of the world’s leading journals, announced that registration of trials was a prerequisite for publication. This announcement led to a significant increase in registration, although entries of industry sponsored trials varied markedly in their degree of specificity, and some fields, e.g., primary outcome, were often left blank (Zarin et al. 2005). Trials funded solely by industry were also significantly less likely to identify the individual responsible for scientific leadership and to provide a contact email address than were trials with non-industry or partial industry sponsorship (Sekeres et al. 2008). Crucially, a study of oncology drugs revealed that registration before publication did not appear to reduce a bias towards results and conclusions favouring new drugs in the clinical trials literature (Rasmussen et al. 2009).
Further FDA legislation now requires that, effective September 2008, the results of all clinical trials of drugs except phase I drug trials (preliminary safety studies on new products) must be reported on ClinicalTrials.gov within a year of the completion of the trial. Starting in September 2009 this reporting requirement was extended to adverse effects observed in the trials. These requirements apply to all drug trials that were ongoing on or after September 27, 2007 if the products under study have already been approved by the FDA. The results for drugs not yet approved by the FDA do not have to be posted until the drug receives approval. These reporting requirements are backed up by substantial fines for those who fail to comply. However, the results of trials of drugs approved before September 2007 do not need to be posted and these constitute the vast majority of all drugs in use. Furthermore, there is no requirement to post results for drugs that were never approved (Wood 2009).
Bero and Rennie (1996) were early advocates for a clinical registry to reduce bias in publications and in addition proposed measures to reduce bias in study design and in the conduct of drug studies. To achieve the former they suggested that pharmaceutical companies “should support investigator-initiated research that focuses on questions that are shaped by broad scientific interests rather than narrow commercial” ones. They also advocated reform of the drug approval process to require data comparing new drugs with available alternatives. In an effort to reduce bias in study design, they called for researchers to think carefully about possible design biases that would favour the sponsor’s product. They felt that bias could be reduced in the way that studies are conducted if the industry left the planning and monitoring of the research design entirely in the hands of the researchers.
In a commentary on the issue of biomedical COI, Schafer (2004) looked at these proposals, which he termed the “reformist package”. While he accepted that the measures encompassed by this term would, if rigorously implemented, “almost certainly improve the quality and scientific integrity of published biomedical research” he was not at all optimistic about this package being realized in any meaningful way. In particular, he felt that the recommendations from Bero and Rennie required an unshakable optimism in the willingness of drug companies to act in opposition to the best interests of their shareholders.
The Sequestration Thesis
As an alternative to the reformist package Schafer proposed what he called the “sequestration thesis”, or the separation of researchers from the process of commercialization, which would include the complete isolation of industry from clinical trial data (Schafer 2004). There are what I term “weak” and “strong” variations of this thesis. The weak model is exemplified by the proposal from Finkelstein and Temin (2008). Although they are primarily concerned with drug prices, they suggest the creation of an independent, public, nonprofit Drug Development Corporation (DDC) that would act as an intermediary to acquire new drugs that emerge from private sector R&D and then transfer the rights to sell the drugs to a different set of firms. In addition to its role in helping to reduce drug prices, the organization would be mandated to submit “the results of all basic scientific studies and clinical trials … for publication in peer-reviewed journals as soon as patent and other intellectual property considerations permit.” The DDC would also make all negative trials public. Although this function of the DDC would certainly be helpful in increasing the availability of information, it would still leave the research design and conduct of trials in the hands of the pharmaceutical industry.
The stronger version of this model would see an institution such as the NIH organize and manage clinical trials and the data that come out of them, with funding coming from taxes collected from the pharmaceutical industry and/or general tax revenue (Lewis et al. 2007; Angell 2004). “Drug companies would no longer directly compensate scientists for evaluating their own products; instead, scientists would work for the testing agency” (Lewis et al. 2007). In both cases, the authors argue that the companies should continue to fund a significant portion of the research agenda “in order to discourage the wholesale testing of marginal drugs with little therapeutic value, or candidate medicines with little chance of clinical adoption” (Lewis et al. 2007). While companies would continue to develop and market their products, they would be separated from the process of generating and interpreting the clinical data about them.
Baker goes even further in arguing for a system whereby all clinical trials would be publicly financed with the cost of the trials in the US being covered through lower drug prices under the Medicare drug program and other public health care programs (Baker 2008).
In an unpublished paper the British economist Alan Maynard notes: “Economic theory predicts that firms will invest in corruption of the evidence base wherever its benefits exceed its costs. If detection is costly for regulators, corruption of the evidence base can be expected to be extensive. Investment in biasing the evidence base, both clinical and economic, in pharmaceuticals is likely to be detailed and comprehensive, covering all aspects of the appraisal process. Such investment is likely to be extensive as the scientific and policy discourses are technical and esoteric, making detection difficult and expensive.” This article has shown that Maynard’s prediction has a factual basis: pharmaceutical companies have used techniques that bias the content of clinical research at every stage of its production.
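Maynard’s cost–benefit argument can be restated as a simple expected-value condition. The notation below is an illustrative formalization, not taken from Maynard or from this article: let G denote the expected gain in revenue from a biased evidence base, C the direct cost of producing the bias, p the probability that regulators detect it, and F the penalty if detected. A profit-maximizing firm is then predicted to invest in bias whenever

```latex
G \;>\; C + p \cdot F
```

On this sketch, when detection is costly for regulators (small \(p\)) or penalties are modest (small \(F\)), the right-hand side stays low and the inequality is easily satisfied, which is precisely the regime of extensive bias that Maynard describes.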
Defenders of the pharmaceutical industry have tried to minimize its role in biasing clinical research by pointing out that the pursuit of profits is not the only motivation for trying to influence the outcome and use of clinical research and that individuals, government and medical journals are equally guilty (Hirsch 2009). Hirsch is correct that bias can come from many sources, but no individual or organization has the resources and the ability to influence the entire process the way that the pharmaceutical industry can. In this respect, the industry is in a class of its own.
We can reasonably ask that pharmaceutical companies not break the law in their pursuit of profits, but anything beyond that is not realistic. There is no evidence that any measures that have been taken so far have stopped the biasing of clinical research, and it is not clear that they have even slowed down the process. What will be needed to curb and ultimately stop the bias is a paradigm change in the relationship between pharmaceutical companies and the conduct and reporting of clinical trials.