According to John Ioannidis, one of the foremost experts on the credibility of medical research, as much as 90% of published medical information may be flawed [1], and Richard Smith, former editor-in-chief of the BMJ, has claimed that “most of what is published in journals is just plain wrong or nonsense.” The poor quality of medical research is not a new criticism [2]; however, concern has now been voiced across a broad range of specialties, in parallel with reports that studies are fraught with problems, including poor reproducibility [3].

Publishing research is complex: it involves a wide range of steps and the interaction of many collaborators (coauthors, sponsors, editors, reviewers and peers), and this complexity may contribute to problems including scientific misconduct, methodological errors and bias (Table 1).

Table 1 Possible causes of quality issues in published research

Scientific misconduct

The extent of scientific misconduct is essentially unknown. The three major types of misconduct (falsification, fabrication and plagiarism) are categorized as “scientific dishonesty”. Minor breaches of “responsible conduct of research” (bad practice) include concealed duplicate publication, dubious attribution of authorship, “salami” publication, etc. Retraction of scientific articles because of fraud has increased tenfold since 1975 [4], but such articles still make up less than 1‰ of the huge number of papers published annually in scholarly journals [5]. Of retracted scientific publications, a large proportion are withdrawn because of misconduct and another large proportion because of unintentional errors or untrustworthy data or interpretation [6]. The Lancet recently questioned the credibility of medical research, especially research from China, because of numerous retractions [7].

A remarkable study reported in Nature in 2005 indicated that manipulation of data (“gray zone” behavior) is common among scientists; survey data suggest that on average 2% of scientists admit to having falsified research at least once and 34% admit to other questionable research practices [8]. This is a very serious challenge to the integrity and reputation of science in general [9]. Although science tends to be “self-correcting” over the long term, flawed papers often live on unnoticed, which is inappropriate and potentially damaging. In this context, it is unacceptable that scientific misconduct discovered by the US Food and Drug Administration is not reported to the public [10]. Fundamentally, research should be founded on the sound principles of “responsible conduct of research” [11].

Methodological issues

Much scientific research is poorly planned, poorly executed, or both [12]. Methodological flaws are many, spanning design, conduct, analysis (including statistical issues such as type I/II errors, lack of power and multiplicity), interpretation and reporting. Many websites are available to help researchers improve the design, conduct and reporting of clinical trials; for example, SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials; http://www.spirit-statement.org/) and EQUATOR (Enhancing the QUAlity and Transparency Of health Research; http://www.equator-network.org/). Major sources of distorted results are drug companies and researchers who want particular results, whether to make a drug look good or to prove an eccentric idea. In such cases all the fundamental steps of a study may be profoundly biased. It is well known that a false idea can be erroneously “proved” through a combination of error, fluke (e.g. type I error) and clever selection of data [1]. Bias in any form will almost always contribute to overestimation of the treatment effect. The list of biases is long, but includes selection, performance, attrition, detection and reporting biases. The efficacy of pharmacotherapy for depression is a good example of the overestimation of drug effectiveness due to selective reporting of positive results [13]. Shoddy methodology may of course be unintentional, but poor studies lead to interpretation problems and may be misleading.
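To see how easily chance and selective reporting can combine to “prove” a false idea, consider a purely illustrative calculation (the figures are hypothetical, not drawn from any study cited here). If a trial of a treatment with no real effect tests $k = 20$ independent outcomes, each at the conventional significance level $\alpha = 0.05$, the probability that at least one comes out “significant” is

$$P(\text{at least one false positive}) = 1 - (1 - \alpha)^{k} = 1 - 0.95^{20} \approx 0.64.$$

Reporting only the single outcome that happened to reach significance, and staying silent about the other nineteen, turns an ordinary type I error into an apparently solid finding.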

Unpublished or withheld research results (even those withheld with the support of a professional society) are not uncommon, and negative results are often not submitted for publication. A recent Cochrane review found that only 52.6% of studies presented as abstracts at scientific meetings had been published in full 9 years later [14], and only 22% of studies registered with ClinicalTrials.gov, which mandates reporting, had been reported within 1 year of completion [15]. This skews the body of evidence and may lead to a waste of money and effort exploring ideas already investigated by others. Financial conflicts of interest have been found in 29–69% of published clinical research studies [16]. Lack of transparency decreases the value of published research: a recent study of a random sample of 441 biomedical journal articles published during the period 2000–2014 showed that the majority said nothing about funding or conflicts of interest. This is worrisome, since stakeholders can operate in stealth mode and exert a significant influence on the design, conduct and analysis of biomedical studies. It is encouraging, however, that between 2000 and 2014 the percentage of articles without any conflict-of-interest statement decreased while the percentage carrying such a statement increased [16].

Publication process

Journal editors and scientists are more interested in new results than in refutations of old ones. This attitude inevitably leads to publication bias, since trials with a statistically significant result are more likely to be published than those with a nonsignificant result. Sources of publication bias are many, but include failure to report “negative results” and withholding of data (owing to sponsor priorities, the investigator’s personal opinion, carelessness, lack of resources, etc.). The corrupting force of financial conflicts of interest via reprint orders, especially for articles sponsored by the pharmaceutical industry, may also lead to publication bias; reprint orders represent a large source of income for major journals. Rofecoxib (Vioxx) was a widely prescribed drug found to be safe and effective in large randomized controlled trials but later removed from the market as unsafe and ineffective. More than 950,000 reprints were ordered from the New England Journal of Medicine, representing revenue of at least $697,000; Merck bought most of the reprints [17].
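A purely illustrative calculation (not drawn from reference [17]) shows how strongly such selectivity can distort the literature. Suppose the true treatment effect is zero, so each trial’s standardized effect estimate $Z$ follows a standard normal distribution, and suppose only “significant” results ($Z > 1.96$) are published. The average published estimate is then the mean of a truncated normal distribution,

$$E[Z \mid Z > 1.96] = \frac{\varphi(1.96)}{1 - \Phi(1.96)} \approx \frac{0.058}{0.025} \approx 2.3,$$

where $\varphi$ and $\Phi$ are the standard normal density and distribution function. The published record would thus show an apparent effect of about 2.3 standard errors even though the true effect is zero.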

A publisher’s strategy for a journal may include manipulation of the impact factor by promoting or suppressing particular articles, again leading to publication bias. Although many believe that peer review is the best review process we have, it is not evidence-based. Whether peer review should be blinded or unblinded remains controversial [18]. Assessors nominated by grant applicants themselves appear to introduce systematic bias, and the same may hold for medical papers [19]. There is overwhelming support for peer review, and a large majority of researchers claim that it has improved the quality of their papers. However, the peer review process is essentially subjective and may thus be influenced by the reviewer’s or editor’s personal opinions, ties (which may be undisclosed), especially to medical companies, and other, scientific, conflicts of interest. Conflicts of interest during peer review may lead to delay and even plagiarism. Some studies have shown that authors may be discriminated against if they are women, come from a minor institution, have few publications, or present a new or controversial hypothesis or theory [20]. Scientific misconduct is difficult to detect in peer review, as has been clearly shown by sending articles containing deliberate mistakes to reviewers [21].

Improving the quality of research

Innovation is important for the development of medicine. However, the present state, in which published articles cannot be trusted because of fraud, methodological flaws, conflicts of interest or inadequate peer review, is untenable. Research results should be reliable, transparent and easily interpretable for decision makers, researchers and patients. Because research results may be misleading, exaggerated or plain wrong, new findings (e.g. new high-priced cancer drugs) should be applied in the clinic only with great caution, especially if the evidence is not based on rigorous, robust, high-quality, transparent studies. The current situation inevitably wastes resources and may ultimately put patients’ lives at risk.

During the period 2000–2010, roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties [22]. An estimated US$240 billion is spent globally every year on health research. The output is documented in about 3 million articles, of which around half are published by 6,000 publishers in 25,000 journals (with a much larger number of editors) [23]. The status quo is “unprofessional, arguably unethical and certainly unacceptable” [2]. However, since medical research is complex, there is no simple solution, and many initiatives have been suggested to improve the quality of research.

One fundamental issue is that authors need training in writing articles based on the principles of “responsible conduct of research”. At a minimum, publicly funded studies should have their design and methodology approved before they begin; a “publication office” to approve the science before funding has been proposed [23]. A second issue is improvement of the peer review process. A large body of evidence shows that editors and reviewers need education, and the transparency of the review process also needs to improve: conflicts of interest (especially ties to industry), the expertise of the assessment team, and whether editorial decision making is independent of economic considerations all need to be made clear. A third aspect is to secure open access to research data, so that studies can be scrutinized after publication by other researchers, eventually leading to retraction of poor studies. There is growing agreement on this issue, but how it should be achieved in practice is still a matter of debate. It is encouraging that the European Medicines Agency (EMA) has started to publish all data on clinical trials. Verification of reported research data is essential to ensure reliability; although there is no consensus on how repeatability should be defined and achieved, it is crucial that repeat studies are founded on robust and transparent designs.

Conclusions

The present status of research that is “misleading, exaggerated or plain wrong” is reminiscent of the news media. The attitude that scientists are always right should be abandoned; they are most often wrong [1]! Instead of making cosmetic changes to their results, scientists should openly and frankly acknowledge their weaknesses. Researchers need to move from “butterfly behavior” to a more altruistic approach, so that an issue (the “flower”) can be fully explored in search of a breakthrough before they move on to the next flower [24]. As Douglas G. Altman pointed out in 1994, we still need “less research, better research and research done for the right reasons” [2].