Background

Educated in the application of epidemiological methods, epidemiologists study where diseases occur, what causes them, and how to prevent them. According to A Dictionary of Epidemiology [1], the knowledge derived from epidemiological inquiry is not used solely for discovery purposes. It is also applied to control and prevent health problems and is used to restore, promote, and protect population health across all levels of society. Hence, by virtue of their focus on protecting the public’s health, epidemiologists, as a profession, are expected to serve the public, with the public interest trumping all others [2].

As an applied interventionist science, epidemiology is used not only to study health problems, but also to provide evidence to inform rational policy debate among interested stakeholders [3]. This evidence provides the scientific basis for correcting and, ideally, preventing health problems through government-driven health and social policy. Aside from informing policy, epidemiological data also provide the basis for individuals’ choices about lifestyle, diet, and other critical factors that influence health. Whether epidemiologists work as scholars, researchers, public health or non-governmental agency professionals, consultants, or even expert witnesses in legal proceedings, their work product and ultimate goal should be to promote and protect the public’s health, at both the population and the individual level.

Yet, in a world of conflicting interests, some parties may use the methods and language of epidemiology for personal gain or for corporate profit. They do so by manufacturing and casting doubt [4, 5] to confuse both policymakers and the public, to the detriment of the public’s health. Goldberg and Vandenberg [6] have recently identified commonly applied tactics used to misrepresent scientific discovery: spinning the facts to manufacture doubt and generating or perpetuating falsehoods. They point out that such deceit can create confusion that delays action by calling into question the scientific basis for concern.

Documents presenting best practices and ethics guidelines have been developed and adopted by the major epidemiology professional organizations to support the discipline and protect its integrity [7,8,9]. These provide the moral basis for epidemiology’s mission and guide the normative practices of the discipline. While professionals who do not adhere to the guidelines can be called to account, there is no formal mechanism to ensure implementation; moral suasion through peer pressure is the only means of enforcement.

In this commentary, our focus is on the discipline of epidemiology. The problems resulting from conflicting interests, and the strategies that can be used to protect public health from them, however, also apply to other public health disciplines, including risk assessment, toxicology, and exposure assessment.

The role of undue influence in increasing uncertainty

Policy decisions are influenced by inputs beyond health risk assessments based on epidemiological data; they are also shaped by economic, political, and social values, as well as by special-interest stakeholder considerations [10]. When policies informed by epidemiological evidence are debated in government, the preference is to make decisions with the greatest possible certainty. However, absolute certainty is not possible in science, given the inherent uncertainty that accompanies scientific inquiry. Consequently, epidemiologists are usually cautious and provide caveats for their findings. This creates an entry point for those bent on manipulating policy to promote confusion and engage in disinformation [11].

Poorly or inappropriately designed and executed epidemiological research that makes its way into the scientific literature serves to increase uncertainty. This renders policymakers less likely to vote in favor of a policy change in support of public health. If the science can be muddied to foment uncertainty, or perhaps to mislead, a policy could ensue that leads to even greater population health risks.

Aware of this, a well-developed strategy among those with a vested self-interest in influencing and undermining policy, in a manner that is not consistent with the health of the public, is to find ways to increase scientific uncertainty, or to outright mislead. Science can be misused, either intentionally, through error, or from bias. In epidemiology, bias is defined as “an error in the conception and design of a study—or in the collection, analysis, interpretation, reporting, publication, or review of data—leading to results or conclusions that are systematically (as opposed to randomly) different from truth [1].”

Financial conflict-of-interest (COI), including author financial ties, review sponsorship, and journal funding, introduces a bias at all levels of the research and publication process [12]. Contrary to what many scholars might believe, this bias is not prevented by the peer review process [12]. Distortion and disinformation practices regarding scientific methods and evidence were intentionally employed by the lead industry in the early 1900s [13] and, since the 1950s, by the tobacco industry [14], and have since been honed by the asbestos industry [15,16,17,18]. The methods have become more sophisticated over time as played out from one industry to the next [19]. The goal is to pollute the scientific literature with studies designed to serve the interests of powerful sponsors and special interests. While scientists routinely disagree, the most intransigent disagreements arise when financial [20, 21], as well as political [22] interests are at play. When people become ill, die prematurely, and the health of future generations is placed in jeopardy, then transparency about the stakes involved becomes even more pressing.

Most recently, COVID-19 has taught us the importance of following epidemiological evidence in policy and health decision-making, especially in a global pandemic [3]. It has starkly revealed science’s politicization, corruption, and suppression [22]. Indeed, the pandemic has exposed relationships that need to be confronted if profiteering is to be contained, calling on values that support the public interest rather than self-serving relationships with industry [23]. Health harms are likely when the public is exposed to misinformation. Confusion ensues, which in turn creates a space for the mistrust of science, and the amplification of conspiracy theories through social media—resulting in aberrant behaviors that hamper vital public health measures [24].

Recognizing the range of factors affecting the policy process, and how they compete with one another, would help public health scientists appreciate their discipline’s vulnerability to being perverted to manipulate science, misguide policy development, and support special interests. By following the money, one can identify the role that influence has played, and how this has encouraged the misuse of epidemiology [25]. The generation of “evidence” through invalid science conducted by researchers financially supported by special interests (e.g., [26,27,28,29]) is a common and worrisome practice.

Methods

Forces with direct or indirect financial stakes in policy interventions, especially those with a short-term focus on reported profits or personal gain to stakeholders, have been shown to be the most active in effectively working against the public’s health. In particular, there has been a steep increase in the corporate funding of epidemiological research and an ever-growing reliance of academic institutions on such sources of funding. This has resulted in increasing instances of conflicting interests [30], which were brought to attention in the 2020 International Network for Epidemiology in Policy (INEP) Position Statement on Conflict-of-Interest and Disclosure in Epidemiology [31].

INEP is the major global network of epidemiologists focused on providing a bridge between epidemiological research and evidence-based, rational, government-formulated health policy that serves the public interest. It thus provides a unique forum to protect and promote public health, and works to ensure scientific integrity, promote ethical conduct in research, and support evidence-based research findings that are both independent and transparent. Its tagline states: “Integrity, Equity, and Evidence in Policies Impacting Health.”

Thanks to investigative journalism, exposés of corporate and political influence in the United States (U.S.) in the period 2017–2021 reveal how the Environmental Protection Agency (EPA) under the Trump administration, the American Chemistry Council (ACC), and industry law firms colluded to weaken the EPA’s new chemical safety reviews [32]. The exposés also reveal how the fossil fuel industry has persisted over decades in influencing policy by obfuscating and denying negative impacts on human and planetary health [33,34,35,36]. Two seminal volumes, rich in well-established examples, were produced by the European Environment Agency [37, 38]. Furthermore, the harmful impacts of powerful influence exerted through research sponsorship have recently been recognized, such that research and related professional sponsorship by Big Oil and Tobacco are being strongly discouraged [39].

On June 10, 2013, a few years prior to the aforementioned exposés, Dr. Margaret Chan, World Health Organization (WHO) Director-General, made the following statement to the 8th Global Conference on Health Promotion, held in Helsinki, Finland: “… In the view of WHO, the formulation of health policies must be protected from distortion by commercial or vested interests [40].” INEP began to develop its Position Statement in 2014, soon after the WHO Director-General’s pronouncement. The distortion Dr. Chan noted is indeed an ongoing phenomenon.

With INEP working at the interface of research and policy, its mission includes recognizing and highlighting the misuse of data and potential corruption of the science practiced by epidemiologists. INEP comprises 24 national and international volunteer member associations and societies of epidemiology across five continents. It is registered as a 501(c)(3) public charity in the U.S. It is thus well positioned internationally to develop strategies to combat the misuse of epidemiological science. The INEP Position Statement [31] addresses two questions:

  a) How is it that public health policy remains under siege?

  b) Could public health be better protected through the improved management of Conflict-of-Interest and Disclosure in Epidemiology?

To address these questions, the INEP Position Statement [31] equips scientists with a set of tools to expose and root out so-called science that is designed to mislead and deceive. The hope is that those who draw on the methods exposed in the toolkit (Table 1, presented in the Results section below) to distort science will become less influential, because reviewers of epidemiological studies, be they peer reviewers or otherwise, should, by virtue of the toolkit, be more effective in identifying invalid science introduced to delay policy actions that protect public health.

The toolkit should also be helpful in the litigation process, from deposition to cross-examination in court proceedings, both in the pursuit of truth and in ensuring social justice. While the toolkit has the potential to bolster the integrity of the discipline, we recognize that many journals have no or ineffective peer review, and that some industries have bought their own journals, limiting the extent to which the literature can be freed of corrupted science. The once-revered peer-review process is at risk, especially in journals controlled by vested interests.

Consolidation of the toolkit was thus included in the INEP Position Statement [31]. It is now made accessible as a standalone and expanded commentary. The compendium of tools provided in this commentary brings together work initially identified by Cranor [41, 42], and subsequently expanded upon by Soskolne [43,44,45] who saw the importance of expanding and consolidating this work to better arm epidemiologists, policymakers, and the scientific community with a greater appreciation for how epidemiological methods can be misused, abused, and perverted, counter to the advancement of knowledge and the public’s health.

This commentary has a role to play in not only bringing attention to, but also shining a light on, mechanisms of demonstrated influence and their harmful impacts on the advancement of science and the protection of public health. It therefore should be used as a teaching and training resource in graduate programs in epidemiology and other related public health disciplines. Every student emerging from any such program should be prepared to confront the world of malfeasance. Ideally, reviewers of manuscripts will be better positioned to separate public interest science from inappropriately designed studies that infiltrate the literature, and hence the policy debate, specifically to mislead in the service of special interests.

Results

As noted above, application of the epidemiological method can be influenced by interests that manipulate it in ways to produce findings that cast doubt, foment uncertainty, and seek to mislead decision makers. Unfortunately, some epidemiologists are susceptible to incentives that induce unprofessional conduct, thereby undermining the integrity of science [46,47,48,49]. The increasing reliance of public health institutions and epidemiologists on corporate funding, as well as the influence of politics on public health research, further exacerbate harms resulting from misusing the methods of the discipline and/or misinterpreting research findings.

To counter the types of forces noted above, a listing of key methods/techniques, arguments, and tactics has been assembled in Table 1. It is provided to help identify how epidemiologists, usually financially supported by or under the influence of vested interests, manipulate, misuse, or inappropriately apply the methods of epidemiology, or misinterpret findings, to skew results and produce invalid science. Table 1 is a toolkit that can be used as follows (a minimal, purely illustrative code sketch of such a checklist follows the list):

  • By peer reviewers as a checklist of, or guide to, key methodological parameters;

  • To train epidemiologists and other healthcare professionals on the ways in which epidemiology can be distorted;

  • To review the literature for invalid science or uninformative studies (e.g., underpowered studies, or misleading samples); and

  • To identify who is misusing epidemiology.
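To make the first of these uses concrete, the sketch below encodes a few checklist items in a machine-readable form that a reviewer could work through, and that could eventually feed an application of the kind mentioned in the Conclusions. It is a minimal illustration only: the item wordings and the helper function are hypothetical placeholders, not the actual entries of Table 1.

```python
# Minimal, purely illustrative sketch of the toolkit as a reviewer's checklist.
# The items below are hypothetical placeholders, not the actual entries of Table 1.
from dataclasses import dataclass

@dataclass
class ToolkitItem:
    part: str    # "A" (design/measurement), "B" (arguments), or "C" (tactics)
    prompt: str  # question the reviewer answers for the study under review

CHECKLIST = [
    ToolkitItem("A", "Is the study powered to detect an effect of plausible size?"),
    ToolkitItem("A", "Could exposure misclassification bias results toward the null?"),
    ToolkitItem("B", "Are inappropriate standards of proof being demanded?"),
    ToolkitItem("C", "Are all funding sources and competing interests disclosed?"),
]

def flag_concerns(answers: dict[int, bool]) -> list[str]:
    """Return 'Part X: prompt' for every item the reviewer answered 'No' (False)."""
    return [f"Part {item.part}: {item.prompt}"
            for i, item in enumerate(CHECKLIST) if not answers.get(i, True)]

# Example: a reviewer flags the first and last items as concerns.
print(flag_concerns({0: False, 1: True, 2: True, 3: False}))
```

Keeping each item tied to its part of the table means flagged concerns can be reported by category (design/measurement, argument, or tactic).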

The benchmark against which the toolkit can be compared is assembled from a selection of 12 foundational epidemiological textbooks, developed since the 1970s, with more recent editions cited here and used in epidemiology training programs [50,51,52,53,54,55,56,57,58,59,60,61]. This selection is somewhat arbitrary; any well-established textbook should suffice to gain understanding about the correct use of epidemiological methods.

Biostatistical methods are relied on for the design of specific epidemiological studies. As such, statistical methods are a critical component of the epidemiologist’s toolkit. Statistics has been an established discipline for longer than epidemiology. It is not surprising, therefore, that the statistical literature contains articles, spanning a longer period, that draw attention to the statistical mistakes both researchers and practitioners can make in their work [62, 63].

Inappropriate techniques applied in epidemiology, including those that manipulate findings in ways that bias them toward the null, are assembled in Table 1. These techniques may apply to the full realm of epidemiologic inquiry, including descriptive and analytical study designs. They include unbalanced discussion that emphasizes findings not supported by the data, selective disclosure of competing interests, and publication in ‘pay-to-play’ journals that lack appropriate peer review and involve undisclosed conflicting interests.

Table 1 Toolkit of inappropriate applications of the epidemiological method

In brief, Table 1, which constitutes the toolkit, is organized in three parts:

Part A of Table 1 reflects on how the findings from epidemiological inquiry are affected by the design of studies, as well as by how and what is being measured. We have compiled epidemiology-specific methods/techniques used to foment uncertainty and cast doubt about cause-and-effect through biased study designs and measurements that produce invalid science.

Part B of Table 1 reveals arguments that impose inappropriate standards and methods of suppression, counter to the principles of openness and transparency. These arguments are used to delay action, maintain the status quo, and create divisions among scientists.

Part C of Table 1 identifies tactics imposed by those serving special interests to upset the very foundation of reason as it pertains to the core values and methods of the discipline. These tactics are invoked to misdirect policy priorities by imposing, through influence, the undisclosed values of special interests.

Discussion

Since the compilation of this toolkit, the literature has, over the past year, seen many more examples of conflicting interests and failures to disclose them. Each example exposes the inappropriate role of influence-wielding at all levels of scientific inquiry and knowledge advancement.

In this commentary, we focus on the toolkit aspect of the INEP Position Statement [31], and thus limit ourselves in this discussion to one recent contribution to the topic of bias assessment because of its focus on methods. It appears in a 2020 commentary by Steenland et al. [102] in which they consider risk of bias (RoB) assessments and evidence syntheses for observational epidemiological studies of environmental and occupational exposures. RoB tools are used to evaluate epidemiological studies as part of evidence synthesis, the latter requiring a broader approach than simply evaluating RoB in individual studies. Those authors recognize the need to include classical considerations for judging causality in human studies, “as well as triangulation and integration of animal and mechanistic data.”

As with the INEP Position Statement [31], Steenland et al. [102] recognize that conflict-of-interest can create the potential for bias, a bias that is not always assessed in RoB tools. They point to strong evidence that “studies authored by those with vested interest are generally favorable to those interests, hence the need to disclose potential conflict of interests.” In the view of Steenland et al. [102], if specific biases are present, reviewers should be able to detect them in evaluating studies. However, “generally not included in current risk of bias tools is potential bias because of problems in statistical methods. Concerns include choice of an inappropriate and badly fitting model, failure to model exposure–response or to evaluate different exposure–response models, incorrect use of mixed models, incorrect use of Bayesian techniques, violation of statistical assumptions (e.g., normal residuals in linear regression), overadjustment for covariates related to exposure but not to outcome, adjusting for causal intermediates, etc.”
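To illustrate just one of the statistical problems listed by Steenland et al. [102], the short simulation below shows how over-adjustment for a causal intermediate drives an estimated exposure effect toward the null. The data-generating values and variable names are hypothetical and chosen purely for illustration; this is a sketch, not a prescription for any particular analysis.

```python
# Illustrative simulation (hypothetical values): adjusting for a causal intermediate
# blocks the exposure-outcome pathway and biases the estimated effect toward the null.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 10_000

exposure = rng.normal(size=n)                    # E
mediator = 0.8 * exposure + rng.normal(size=n)   # M lies on the causal path E -> M -> Y
outcome = 0.5 * mediator + rng.normal(size=n)    # total effect of E on Y is 0.8 * 0.5 = 0.4

# Appropriate model: outcome regressed on exposure alone recovers the total effect (~0.4).
correct = sm.OLS(outcome, sm.add_constant(exposure)).fit()

# Over-adjusted model: also conditioning on the mediator drives the coefficient toward 0.
overadjusted = sm.OLS(outcome, sm.add_constant(np.column_stack([exposure, mediator]))).fit()

print(f"Exposure effect, appropriate model:   {correct.params[1]:.3f}")
print(f"Exposure effect, over-adjusted model: {overadjusted.params[1]:.3f}")
```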

We note that statistical models and methods are quite complex. As such, many epidemiologists and peer reviewers, as well as the general reader, may not be able to evaluate their appropriateness. Yet, bias due to COI has been increasingly considered and assessed in systematic review methodologies and RoB tools of epidemiological studies, including the Navigation Guide [103], and the WHO/ILO Joint Estimates of the Work-related Burden of Disease and Injury [104].

Another domain of evidence synthesis that does not entail bias per se is “informativeness.” Considerations in this domain include whether the study has a large enough sample size, sufficient latency, and sufficient exposure contrast to detect an effect of exposure on outcomes, and whether results have been reported selectively. This domain is called sensitivity in some evidence syntheses.
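As a concrete illustration of the sample-size aspect of informativeness, the sketch below estimates the statistical power of a hypothetical comparison of disease risk between exposed and unexposed groups. All numbers (baseline risk, relative risk, group size) are assumptions chosen for illustration, not values drawn from any study discussed here.

```python
# Illustrative power check (hypothetical numbers): is a "null" study large enough
# to have detected the effect it is claimed to rule out?
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_risk = 0.05    # assumed risk in the unexposed group
exposed_risk = 0.065    # assumed risk under a relative risk of 1.3
n_per_group = 400       # sample size of the hypothetical study

effect = proportion_effectsize(exposed_risk, baseline_risk)
power = NormalIndPower().power(effect_size=effect, nobs1=n_per_group, alpha=0.05, ratio=1.0)
print(f"Power to detect RR = 1.3 with n = {n_per_group} per group: {power:.2f}")

# A null finding from such an underpowered study cannot support a claim of "no effect";
# the per-group sample size needed for 80% power is far larger.
needed = NormalIndPower().solve_power(effect_size=effect, power=0.8, alpha=0.05, ratio=1.0)
print(f"Per-group sample size needed for 80% power: {needed:.0f}")
```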

There is considerable overlap between the strategies identified in the toolkit of Goldberg and Vandenberg [6] and those independently identified in our Table 1 (above). This overlap lends credence to our respective approaches for addressing the challenge of manufactured doubt, and adds a degree of validation to each of our tables of strategies, arguments, and tactics used in doubt mongering. In the clinical realm, regarding disclosure as a mechanism for mitigating the effects of COI, Rimmer [105] notes that, until the introduction of a mandatory register of doctors’ interests, patients would have no idea who was funding their doctor’s voice, or who might be biased towards certain treatments. Related health professional bodies are thus calling out the biases to health and science induced by commercial interests.

In practice, broad opportunity exists to publish invalid science owing to: (1) the existence of predatory pay-to-play journals; (2) open access journals with little peer review; and (3) editors/peer reviewers who themselves have a COI and/or little-to-no knowledge of the topic under review. Given this, those who rely on the published literature, whether in government, in the media, or among the public, should be aware that misleading research strategies exist. Above all, since professional epidemiologists are the gatekeepers of the discipline, they have the moral responsibility to execute its mission. It therefore behooves them, along with other healthcare professionals, to be familiar with this toolkit as but one mechanism for better ensuring the maintenance of professional standards of integrity [43] throughout the public health sciences.

Recommendations

Common practices to distort and misapply epidemiological science should be recognized and called out professionally when they occur. INEP member organizations, academic institutions, and other public health professionals can adopt INEP recommendations and strategies for COI management that include identification, avoidance, disclosure, and recusal [31]. It would be of added benefit to incorporate this commentary into the curriculum of graduate training programs in the health sciences and in medical schools to equip entry-level professionals to better serve as gatekeepers of the discipline.

The toolkit can be used as a guide to what to look for, to train epidemiologists and others on how epidemiology can be distorted, to evaluate the literature for invalid science or uninformative studies (e.g., underpowered studies), and to identify who is misusing epidemiology and what their motivations are. It can also be used as a checklist for critically appraising descriptive or analytical studies, both pre- and post-publication, as well as policies and arguments in legal proceedings.

In summary, techniques to manufacture and cast doubt (i.e., irrational skepticism), targeted at policymakers and consumers through the misapplication of the epidemiological method, claim that:

  • The science is unclear

  • There is dissent (where the evidence is clear)

  • The data are inconclusive

  • Scientists are biased—“You can’t trust scientists”

  • Regulation is unjustified—“It’s a slippery slope.”

This is achieved through:

  • Delaying action

  • Influencing policy decisions—risk factors for bias

    ◦ Pulls: Vested interest (stand to gain personally)

    ◦ Pushes: Lobbying.

Defenses that work against epidemiology being misapplied include:

  • Correctly applying and clarifying the methods of statistical inference

  • Exposing undisclosed COI

  • Recognizing erroneous and misleading interpretations of underpowered studies

  • Acknowledging the scientific assessment of uncertainty (a brief worked illustration follows this list)

    ◦ Bias; statistical (aleatoric) uncertainty; epistemic uncertainty

    ◦ Model uncertainty; parameter uncertainty

    ◦ Expected value = (value of outcome) × (probability of outcome)

    ◦ Uncertainty intervals

  • Highlighting when the logic of an argument is invalid

    ◦ False premises

    ◦ Invalid arguments

    ◦ Misapplied conclusions

  • Exposing the motives of researchers, journal editors, peer reviewers, decision makers and other stakeholders in the policy process

  • Critically appraising the evidence as presented

  • Publishing standards for good practice, e.g., the INEP Position Statement

  • Calling out malpractice.
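To make the uncertainty-related defenses above more concrete, the short sketch below works through the expected-value formula and a simple uncertainty interval. Every number in it is hypothetical and chosen purely for illustration; the normal approximation used for the interval is one conventional choice among several.

```python
# Purely illustrative sketch (hypothetical numbers): expressing a policy-relevant
# finding as an expected value with an uncertainty interval, rather than as a
# false dichotomy of "certain" versus "uncertain".
import math

excess_cases_if_hazard_real = 1_200   # hypothetical annual excess cases if the hazard is real
probability_hazard_is_real = 0.70     # hypothetical probability given current evidence

expected_excess_cases = excess_cases_if_hazard_real * probability_hazard_is_real
print(f"Expected annual excess cases: {expected_excess_cases:.0f}")

# Statistical (aleatoric) uncertainty: a 95% interval around a rate estimate,
# using a normal approximation to the Poisson error of the observed case count.
observed_cases, person_years = 48, 100_000
rate = observed_cases / person_years
se = math.sqrt(observed_cases) / person_years
lower, upper = rate - 1.96 * se, rate + 1.96 * se
print(f"Rate: {rate * 1e5:.1f} per 100,000 (95% CI {lower * 1e5:.1f} to {upper * 1e5:.1f})")
```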

While the Committee on Publication Ethics (COPE) has guidelines designed to keep the literature free of corrupted or poor science, they are known to be inadequately enforced and are insufficient to stop the manipulation of the literature [16, 106, 107]. Actions on the part of the epidemiology community, as well as the broader health sciences, could help to change this as the problems are recognized and addressed. The scientific community should engage by recognizing and professionally calling out common practices used to distort and misapply epidemiological and other health-related sciences.

To demonstrate the seriousness of serving as gatekeepers with the moral responsibility to uphold professional standards, epidemiologists could use the INEP Position Statement as a launching pad for other documents (e.g., position statements, policy briefs, commentaries, letters, case studies, and editorials) that extend its reach. Ultimately, exposing the public and policymakers to the INEP Position Statement will reassure them of the seriousness with which the profession takes its role in protecting the public’s health. It is possible that, in return, with the profession’s enhanced credibility, funding could be made available to support organizations like INEP as valued counterweights to the manipulation of this key public health science, whose mission is to serve the public interest above any other.

Epidemiologists and other health professionals must not be naïve. They need to remain vigilant to the variety of forces at play that influence both science and policy. In addition to vigilance, personal integrity is required to counter the influence of economically powerful entities and corrupt and/or morally bankrupt governments whose focus is not on protecting public health, but rather on protecting narrow, special interests.

This said, there are frailties in both human beings and governmental structures. Sensitive to this reality, we provide specific short-term objectives that each epidemiologist could implement immediately: recognize our professional obligation to be vigilant, and especially careful in peer review, so as to avoid contaminating the literature with invalid or poor science; and support added oversight, as through Human Research Ethics Boards (HREBs) or Institutional Review Boards (IRBs), to keep ourselves aware of, and compliant with, our profession’s ethics guidelines.

We recommend accepting that uncertainty is inherent in science. In our role as scientists, we strive to be value-neutral or value-free, but the human instrument is, in fact, incapable of achieving this point of neutrality or impartiality. Consequently, we need to look first to ourselves, because causal inference is a function of who is making the inference, which, in turn, is a function of how we apply our scientific methods. We ought to engage with and embrace anything we can do to build protections into the system of self-governance that is expected of professions like epidemiology.

Conclusions

This novel toolkit exposes the negative impacts of the misuses of epidemiology. As such, it provides an essential foundation for expanding the science and methods of argumentation (i.e., disagreement) through formal logic and dialectics. While beyond the scope of this commentary, the challenge to develop an application (i.e., an app) based on Table 1—to review the literature more efficiently and to root out invalid science and misleading conclusions—warrants further exploration in this philosophical context.

The toolkit, consistent with INEP’s mission, is made available to protect the public. It is provided to assist public health professionals whose mission includes protecting, maintaining, and improving the public’s health. Its utility lies in our more specific roles as educators, reviewers, and researchers. It is to be used to detect and professionally expose misuses and distortions of epidemiology that result in misinformation contaminating the literature, a domain on which the advancement of science and public policy rely.