It is well recognized that weak exposure estimates may hamper the identification of effects in epidemiological investigations (Checkoway et al. 2004; Rothman et al. 2008). However, toxicologists and regulators should also be aware of recent discussions about false positive findings generated by epidemiological studies, and of the fact that these biased results were reported rather uncritically in peer-reviewed journals. Boffetta and co-workers showed in their commentary that publication bias was at work in studies on dioxin exposure and non-Hodgkin lymphoma; they also identified overstated relative risk estimates in several epidemiological studies on DDE and breast cancer, acrylonitrile and lung cancer, coffee and pancreatic cancer, induced abortion and breast cancer, and herpes simplex/chlamydia exposure and cervical cancer (Boffetta et al. 2008). The often reported epidemiological association between passive smoking and breast cancer was shown to be an artefact, probably caused by reporting bias in studies with retrospective exposure assessment (Pirie et al. 2008). In addition, empirical evidence has accumulated that most newly discovered true associations are inflated (Ioannidis 2008a). Such distortions were shown for studies in occupational epidemiology and for some highly cited epidemiological papers. Both reviews (Boffetta et al. 2008; Ioannidis 2008a) demonstrated convincingly that incorrect risk estimates were communicated in papers published in peer-reviewed journals because of uncontrolled upward biases in the underlying epidemiological studies.

Furthermore, epidemiological risk estimates suffer from a bias away from the null even if the studies are unbiased in the usual sense (Senn 2008). This indirect bias is due to the “regression scissor” phenomenon: the regression line of y on x differs from the regression line of x on y whenever the correlation is not perfect. If a study is unbiased in the usual sense, the expectation of the model-predicted response is identical to the true response. However, because the predicted response almost always suffers from limited precision, the converse is almost never true. It follows that the association measured in a study is in fact inflated. This indirect bias is unavoidable in the usual situation of imprecise exposure estimates, imprecise covariate data and imprecise response information, and it will operate in epidemiological studies even if these errors are not systematic.
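To make the scissor concrete, consider a minimal sketch in standard bivariate regression notation (my own illustration of the phenomenon, not a formula taken from Senn 2008): the two least-squares lines coincide only when the correlation is perfect.

```latex
% Regression scissor: for jointly distributed x and y with correlation rho,
% the slope of y on x and the slope of x on y are not reciprocal; their
% product is rho^2 < 1 unless |rho| = 1, so the two lines open like a scissor.
\[
  \beta_{y \mid x} \;=\; \rho \,\frac{\sigma_y}{\sigma_x},
  \qquad
  \beta_{x \mid y} \;=\; \rho \,\frac{\sigma_x}{\sigma_y},
  \qquad
  \beta_{y \mid x}\,\beta_{x \mid y} \;=\; \rho^{2} \;<\; 1
  \quad \text{for } \lvert \rho \rvert < 1 .
\]
```

The imperfect correlation produced by measurement error in exposure, covariates and response is exactly what keeps |ρ| below 1 in practice, which is why the inflation operates even without any systematic error.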

However, over and above these scientific problems generated by direct and indirect biases, a discussion has started about distorted analyses and presentations of epidemiological studies in peer-reviewed journals (Ioannidis 2008b). In the following I would like to discuss a recent example from environmental epidemiology (Slama et al. 2007). This study was published in Environmental Health Perspectives (EHP), a journal that presents itself as a leader in the field “by publishing in a balanced and objective manner the best peer-reviewed research and most current and credible news of the field” (http://www.ehponline.org/docs/admin/mission.html). Unfortunately, EHP declined to accept a Letter to the Editor because the journal's rules do not allow publication of letters submitted more than 6 months after the paper appeared. I am therefore grateful to the Editor of Archives of Toxicology for offering me the opportunity to discuss the example in this Editorial.

Slama et al. (2007) investigated the possible link between traffic-related atmospheric pollutants and birth weight in offspring. Because this topic is of major relevance for public health, I would like to comment on a methodological drawback of the study and to point out a biased presentation of the study results in this article.

The outcome studied (birth weight) is a quantitative (continuous) variable. Thus, the most appropriate analysis should model it as such; indeed, the authors themselves correctly mentioned the continuous outcome approach as their a priori choice of analysis (Slama et al. 2007, p. 1284). However, they dichotomized birth weight at 3,000 g and analyzed the impact of pollutants on this derived binary variable. Slama and co-workers summarized their finding in the abstract as an association of PM2.5 levels and of PM2.5 absorbance with the variable “birth weight < 3,000 g”: odds ratios were significantly elevated at the 5% level. It is well known that unnecessary categorization of variables may lead to distortions and loss of information (Rothman et al. 2008). The reason the authors gave for departing from the usual recommendations, and from their own a priori choice of analyzing birth weight as a continuous variable, is somewhat perplexing: the continuous outcome “…did not turn out to be associated with air pollution (data not shown)” (Slama et al. 2007, p. 1284). It is even more confusing that the authors stayed with this decision although the standard analysis of the binary outcome showed numerical instability: “…log-binomial models failed to converge” (p. 1284). That alone speaks for a continuous outcome analysis.
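The cost of dichotomization is easy to demonstrate by simulation. The following is a minimal sketch with invented numbers (the sample size, effect size, variability and cut-off are hypothetical values chosen only to mimic a birth weight setting; they are not taken from Slama et al. 2007): a weak pollutant effect is detected far more often by the continuous analysis than by the analysis of the dichotomized outcome.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_sim, n = 1000, 500          # hypothetical: 1000 simulated studies of 500 births
p_cont, p_bin = [], []

for _ in range(n_sim):
    # Invented data: standardized exposure and a -30 g true effect per SD
    exposure = rng.normal(size=n)
    weight = 3400 - 30 * exposure + rng.normal(0, 450, size=n)
    X = sm.add_constant(exposure)

    # Continuous analysis: linear regression of birth weight on exposure
    p_cont.append(sm.OLS(weight, X).fit().pvalues[1])

    # Dichotomized analysis: logistic regression on "birth weight < 3,000 g"
    low = (weight < 3000).astype(int)
    p_bin.append(sm.Logit(low, X).fit(disp=0).pvalues[1])

print("power, continuous outcome:  ", np.mean(np.array(p_cont) < 0.05))
print("power, dichotomized outcome:", np.mean(np.array(p_bin) < 0.05))
```

Under these assumptions the continuous analysis rejects the null noticeably more often than the dichotomized one, which is the standard statistical argument against cutting a continuous outcome at an arbitrary threshold.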

Moreover, Table 3 in Slama et al. (2007) summarized findings of the binary analysis while adjusting for other pollutants simultaneously: odds ratios decreased and were no longer significant at the 5% level compared with the single-pollutant models. However, the authors presented only the higher and “more significant” odds ratios from the single-pollutant models in the abstract. This matches the sobering observation of Boffetta and co-workers that “there is an unfortunate tendency to highlight ‘positive’ and ‘statistically significant’ findings in the abstract of both observational studies and randomized trials, even when the results are doubtful or open to criticism” (Boffetta et al. 2008; Gøtzsche 2006).
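Such attenuation is exactly what one would expect when co-pollutants are correlated. A hypothetical sketch illustrates this (simulated data; the variable names pm25 and no2, the correlation and the coefficients are all my assumptions, not values from the study):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000

# Invented correlated pollutant measures
pm25 = rng.normal(size=n)
no2 = 0.8 * pm25 + 0.6 * rng.normal(size=n)

# Binary outcome ("low birth weight") influenced by both pollutants
logit = -1.5 + 0.20 * pm25 + 0.15 * no2
low_weight = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Single-pollutant model: PM2.5 alone
single = sm.Logit(low_weight, sm.add_constant(pm25)).fit(disp=0)

# Multi-pollutant model: PM2.5 adjusted for the correlated co-pollutant
multi = sm.Logit(low_weight,
                 sm.add_constant(np.column_stack([pm25, no2]))).fit(disp=0)

print("OR per SD of PM2.5, single-pollutant model:",
      round(float(np.exp(single.params[1])), 2))
print("OR per SD of PM2.5, multi-pollutant model: ",
      round(float(np.exp(multi.params[1])), 2))
```

In this setting the single-pollutant odds ratio partly absorbs the effect of the correlated co-pollutant and is therefore larger than the mutually adjusted estimate; an abstract that reports only the single-pollutant numbers thus presents the most favourable figures.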

Ioannidis argued that most published research findings are false (Ioannidis 2005). In a recent discussion in Epidemiology he pointed out tricks for publishing results in an unbalanced manner (Ioannidis 2008b). He emphasized that “empirical evidence shows that it is seasoned epidemiologists that use these tricks par excellence”. The article by Slama and co-workers adds to Ioannidis’ evidence of biased analyses and presentations published in peer-reviewed journals like EHP.

There is a need for rigorous and honest science. Biased analyses and distorted presentations of results undermine the credibility of epidemiology and add to the scientific problems caused by direct and indirect biases. I agree with Boffetta and co-workers, who pleaded for epistemological modesty (Boffetta et al. 2008). Distorted, overstated and oversimplified interpretations endanger the reputation of epidemiology. Editors and reviewers should be more alert to these problems.