Findings

Attention to statistical quality in medical research has increased in recent years owing to the greater complexity of statistics in medicine and the focus on evidence-based practice. The editors and statistical reviewers of medical journals are charged with evaluating the scientific merit of submitted manuscripts, often requiring authors to conduct further analysis or content revisions to ensure the transparency and appropriate interpretation of results. Still, many manuscripts are rejected because of irreparable design flaws or inappropriate analytical strategies. As a result, researchers undertake the long and arduous process of submitting to decreasingly selective journals until the manuscript is eventually published. Aside from padding the authors' résumés, publishing results of dubious validity benefits few and makes development of clinical practice guidelines more time-consuming [1, 2]. This undesirable state of affairs might often be prevented by seeking statistical and methodological expertise [3] during the design and conduct of research and during data analysis and manuscript preparation.

To assist educators and medical researchers in improving the quality of medical research, we conducted a survey of the editors and statistical reviewers of high-impact medical journals to identify the most frequent and critical statistical and design-related errors in submitted manuscripts. Methods experts have documented the use and misuse of quantitative methods in medical research, including statistical errors in published works and how authors use analytical expertise in manuscript preparation [3–11]. However, this is the first multi-journal survey of medical journal editors regarding the problems they see most often and what they would like to communicate to researchers. Scientists may be able to use the results of this study as a springboard to improve the impact of their research, their teaching of medical statistics, and their publication record.

Sample and Procedure

We identified the 20 medical journals with the highest impact factors in the "Medicine, General & Internal" and "Biomedical" categories of each of the 2007 Science Journal Citation Report and the 2007 Social Science Journal Citation Report. Journals that do not publish results with statistical analysis were excluded, yielding 38 high-impact journals. Twelve of these journals endorse the CONSORT criteria for randomized controlled trials, 6 endorse the STROBE guidelines for observational studies, and 5 endorse the PRISMA criteria for systematic reviews. These journals are listed in Additional file 1 [12].

The Editors-in-Chief and identifiable statistical reviewers of these journals were mailed a letter informing them of the online survey and describing the forthcoming email invitation that contained an electronic link to the survey instrument (sent within the week). We sent one email reminder a week after the initial email invitation in spring of 2008. We also requested that the Editors-in-Chief forward the invitation to their statistically-oriented editors or reviewers in addition to or instead of completing the survey themselves. An electronic consent form with the principal investigator's contact information was provided to potential respondents emphasizing the voluntary and confidential nature of participation. The Stanford University Panel on Human Subjects approved the protocol. This is one in a series of five studies surveying the editors and reviewers of high-impact journals in health and social science disciplines (medicine, public health, psychology, psychiatry, and health services) [13, 14].

Survey Content

The survey contained three parts: (1) Short-answer questions about the journals for which the respondents served, how many manuscripts they handled in a typical month, and their areas of statistical and/or research design expertise; (2) The main, open-ended question which asked: "As an editor-in-chief or a statistically-oriented reviewer, you provide important statistical guidance to many researchers on a manuscript-by-manuscript basis. If you could communicate en masse to researchers in your field, what would you say are the most important (common and high impact) statistical issues you encounter in reviewing manuscripts? Please describe the issues as well as what you consider to be adequate and inadequate strategies for addressing them."; and (3) One to four follow-up questions based on the respondents' self-identified primary area of statistical expertise. These questions were developed by polling 69 researchers regarding what statistical questions they would want to ask the editors or statistical reviewers of major journals.

Analysis

Responses to the open-ended questions were analyzed qualitatively using content analysis to identify dominant themes. We coded the responses to the main question on the most common and high-impact (per the wording of the question) statistical issues and the respondents' proposed solutions to those issues. In the analysis phase, two of the authors reconciled the coding criteria and sorted the responses into the two major categories that emerged from the data:

A. Statistical and sampling issues

B. Inadequate reporting clarity or completeness

Within each category, the results are presented from most to least frequently mentioned.

Respondent Characteristics

The respondents comprised 25 editors and statistical reviewers (of 60 solicited) who manage manuscripts for 20 of the 38 journals in the sampling frame. Respondents indicated reviewing or consulting on a mean of 47 (range: 0.5 to 250) manuscripts per month. The most frequently reported areas of expertise (multiple responses possible) were general statistics (n = 14), the design and analysis of clinical trials (n = 12), quasi-experimental/observational studies (n = 12), and epidemiology (n = 11).

Respondents' Suggestions for Statistical and Sampling Issues

Respondents often noted problems that are fundamental to research design and quantitative methods, including analytical strategies that are incomplete or mismatched with the data structure or scientific questions, failure to address missing data, and low power. Below, we describe the specific issues mentioned by respondents and provide accessible references for more detailed discussion.

(1) Inappropriate or incomplete analysis: In addition to minor arithmetic and calculation errors, respondents expressed concern over researchers' choice of statistical tests. Statistical tests are frequently ill-suited to the questions of interest or to the data structure; examples include using parametric tests when the sample size is small or when assumptions are obviously violated [15] (see the sketch below). Researchers may also fail to account for the sampling framework in survey-based studies by appropriately weighting observations [16, 17]. Other errors include confusing the exposure and outcome variables in the analysis phase; in laboratory data, for example, the exposure of interest is mistakenly analyzed as the outcome. In a similar vein, researchers sometimes mistakenly report the discrimination of a clinical prediction rule or internal validation method (e.g., bootstrap) using the training dataset rather than the test set [18, 19]. Other concerns included creating dichotomous variables out of continuous ones without legitimate justification, thereby discarding information, and the use of stepwise regression analysis, which, among other problems, introduces bias into parameter estimates and tends to over-fit the data. See Malek et al. [20] for a pithy discussion of the pitfalls of stepwise regression and additional references.
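To make the first of these concerns concrete, the following minimal sketch (our illustration, not drawn from any respondent; the data and variable names are hypothetical) contrasts a parametric test with a rank-based alternative on small, skewed samples, where the t-test's normality assumption is hard to defend.

```python
# Illustrative sketch with hypothetical data: a parametric test versus a
# rank-based alternative when the parametric assumptions are doubtful.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Small, right-skewed samples: a poor setting for a t-test.
treated = rng.exponential(scale=2.0, size=12)
control = rng.exponential(scale=1.2, size=12)

# Parametric: Welch's t-test assumes approximately normal sampling distributions.
t_stat, t_p = stats.ttest_ind(treated, control, equal_var=False)

# Rank-based: the Mann-Whitney U test makes no normality assumption.
u_stat, u_p = stats.mannwhitneyu(treated, control, alternative="two-sided")

print(f"Welch t-test:   p = {t_p:.3f}")
print(f"Mann-Whitney U: p = {u_p:.3f}")
```

When the two tests disagree, the analyst should ask which set of assumptions the data can actually support rather than reporting whichever p-value is smaller.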

(2) Failure to account for clustered data: The substantive area of analysis that received the most attention from respondents was the failure to account for clustered data through hierarchical or mixed linear models. The reviewers often observed that authors fail to account for clustering when it is present. Examples include data collected on patients over time, where successive observations depend on those in the previous time period(s), and multiple observations nested in larger units (e.g., patients within hospitals). In these situations, reviewers prefer to see an analytical approach that does not carry an independence assumption and properly accounts for clustering, such as time series analysis, generalized linear mixed models, or, where the population-averaged effect is of interest, generalized estimating equations [21–24]. A brief sketch follows.
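As one concrete and entirely hypothetical illustration of these alternatives, the sketch below simulates patients clustered within hospitals and fits both a random-intercept mixed model and a GEE with an exchangeable working correlation; the statsmodels calls and all variable names are our assumptions, not a method endorsed by any respondent.

```python
# Minimal sketch, hypothetical data: handling patients clustered within
# hospitals with (a) a random-intercept mixed model and (b) GEE, which
# targets the population-averaged effect.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_hosp, n_per = 20, 30
hospital = np.repeat(np.arange(n_hosp), n_per)
hosp_effect = rng.normal(0, 1, n_hosp)[hospital]     # shared within each cluster
treat = rng.integers(0, 2, n_hosp * n_per)
outcome = 1.0 + 0.5 * treat + hosp_effect + rng.normal(0, 1, n_hosp * n_per)
df = pd.DataFrame({"outcome": outcome, "treat": treat, "hospital": hospital})

# (a) Random-intercept model: clustering modeled explicitly.
mixed = smf.mixedlm("outcome ~ treat", df, groups=df["hospital"]).fit()

# (b) GEE with an exchangeable working correlation: population-averaged effect.
gee = smf.gee("outcome ~ treat", groups="hospital", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()

print(mixed.params["treat"], gee.params["treat"])
```

An ordinary least squares fit to the same data would understate the standard error of the treatment effect because it treats the 600 observations as independent.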

(3) Addressing missing data: Frequently, researchers fail to mention missing data in their sample or to describe its extent. Low response rates are often unaddressed or inadequately discussed, and longitudinal studies may fail to address differential dropout between groups that could affect the outcome. Researchers who do discuss missing data often do not describe their imputation methods or their evaluation of whether missingness is related to any observed variables. Those who explicitly address missing data regularly use suboptimal approaches; for example, investigators with longitudinal data often employ complete case analysis, last observation carried forward (LOCF), or other single imputation methods, which can bias estimates and understate the sample variance. Preferably, researchers would evaluate the missing at random (MAR) assumption and conduct additional sensitivity analyses if the MAR assumption is suspect [25, 26] (see the sketch below). A detailed qualitative description of the loss process is also essential, including the likelihood of MAR and the likely direction of any bias.
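The sketch below (again with fabricated data; the MICE workflow shown is one reasonable choice, not the only acceptable one) illustrates multiple imputation by chained equations in statsmodels as an alternative to complete case analysis or LOCF.

```python
# Sketch, hypothetical data: multiple imputation via chained equations
# (MICE), refitting the analysis model on each imputed dataset and pooling.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 2.0 + 0.8 * x + rng.normal(size=n)
df = pd.DataFrame({"x": x, "y": y})
df.loc[rng.random(n) < 0.2, "x"] = np.nan    # ~20% of x missing (completely at random here)

imp = mice.MICEData(df)                      # chained-equation imputer
model = mice.MICE("y ~ x", sm.OLS, imp)      # analysis model refit per imputation
result = model.fit(n_burnin=10, n_imputations=10)
print(result.summary())                      # estimates pooled across imputations
```

A complete case analysis here would discard roughly a fifth of the sample; when missingness depends on observed values, single-imputation shortcuts such as LOCF can additionally bias estimates and understate variance.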

(4) Power and sample size issues: Respondents noted that power calculations are often not done at all, or are done post hoc rather than being incorporated into the design and sampling framework [27]; a minimal a priori calculation is sketched below. In novel studies where no basis for a power calculation exists, this should be explicitly noted.
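For instance, an a priori calculation for a two-arm comparison of means might look like the following; the standardized effect size of 0.4 and the other inputs are hypothetical design choices, not values from any study discussed here.

```python
# Sketch, assumed design inputs: an a priori sample-size calculation for a
# two-arm comparison of means, performed before data collection.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.4,        # standardized difference (assumed)
                                 power=0.80,             # target power
                                 alpha=0.05,             # two-sided type I error
                                 alternative="two-sided")
print(f"Required sample size per arm: {n_per_arm:.0f}")  # ~100 per arm
```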

(5) Propensity scores: Propensity scores are the probabilities of the individuals in a study being assigned to a particular condition given a set of known covariates, and they are used to reduce confounding in observational studies. Researchers often use propensity scores without recognizing the potential bias caused by unmeasured confounding [28–30]. The problem arises when an essential confounder is not measured; using propensity scores in this situation can exacerbate the bias already present in an analysis (see the sketch below).
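A minimal sketch of the estimation step follows (hypothetical data and covariates; scikit-learn's logistic regression is our choice of estimator). The point to notice is that the model can only condition on the covariates it is given: an unmeasured confounder is untouched.

```python
# Sketch, hypothetical data: estimating propensity scores with logistic
# regression. The scores adjust only for *measured* covariates; any
# unmeasured confounder remains, which is the bias respondents warned about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000
age = rng.normal(60, 10, n)
severity = rng.normal(0, 1, n)                # suppose this is measured
X = np.column_stack([age, severity])

# Treatment assignment depends on the measured covariates.
p_treat = 1 / (1 + np.exp(-(0.03 * (age - 60) + 0.8 * severity)))
treat = rng.binomial(1, p_treat)

ps_model = LogisticRegression().fit(X, treat)
propensity = ps_model.predict_proba(X)[:, 1]  # P(treatment | measured covariates)
# These scores can then feed matching, stratification, or inverse-probability
# weighting, but none of those steps recovers an omitted confounder.
```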

Respondents' Suggestions for Inadequate Reporting Clarity or Completeness

In addition to specific analytical concerns, respondents also reported common errors in the text of methods and results sections. Although some of these problems are semantic, others reflect a misinterpretation or misunderstanding of the methods employed.

(1) Inadequate description of methods and analysis: Respondents observed that manuscripts often do not contain a clear description of the analysis. Authors should provide as much methodological detail as possible, including targeted references and a statistical appendix if appropriate. One respondent offered a rule of thumb: an independent reader should be able to perform the same analysis based solely on the paper. Other issues included inadequate description of the study cohort, recruitment, and response rate, and the presentation of relative differences (e.g., odds ratio = 1.30) in the absence of absolute differences (e.g., 2.6% versus 2%); the sketch below works through that example. As one respondent wrote, "Since basic errors that are easily identified remain common, there is real concern of the presentation of analyses for more complex methods where the errors will not be testable by the reviewer."
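Using the example rates from the text, the short calculation below shows how a relative measure can look impressive while the absolute difference remains modest; the event rates are the illustrative ones above, not study data.

```python
# The example rates from the text: a relative versus an absolute difference.
p_exposed, p_control = 0.026, 0.020          # 2.6% versus 2.0% event rates

odds_ratio = (p_exposed / (1 - p_exposed)) / (p_control / (1 - p_control))
risk_difference = p_exposed - p_control

print(f"Odds ratio:      {odds_ratio:.2f}")                 # ~1.31
print(f"Risk difference: {risk_difference:.3f} (0.6 percentage points)")
```

Reporting both quantities lets readers judge clinical relevance, not just statistical direction.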

(2) Miscommunication of results: Researchers frequently report likelihood ratios for diagnostic tests (the probability of a given test result among individuals with a particular condition relative to the probability of that result among individuals without the condition) without the associated sensitivity and specificity. Although likelihood ratios are very useful for learning how much a given test result shifts the probability of disease [31, 32], editors also appreciate the inclusion of the rates of true positives and true negatives to give the reader a complete picture of the analysis; the sketch below shows how these quantities relate.
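The relationships are simple enough to show in a few lines; the 2x2 counts below are invented for illustration.

```python
# Sketch, hypothetical 2x2 table: likelihood ratios reported alongside the
# sensitivity and specificity they are derived from.
tp, fn = 90, 10     # individuals with the condition: test positive / negative
fp, tn = 30, 170    # individuals without the condition: test positive / negative

sensitivity = tp / (tp + fn)              # true positive rate
specificity = tn / (tn + fp)              # true negative rate
lr_positive = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity

print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
print(f"LR+: {lr_positive:.2f}, LR-: {lr_negative:.2f}")   # 6.00 and 0.12
```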

Respondents also noted an undue emphasis on p-values and an excessive focus on significant results. For example, authors often highlight the significance of a single dummy variable when the multi-category predictor it belongs to is not significant overall; the overall significance of a multi-category predictor should be assessed with an appropriate joint test of significance [33], as sketched below. In turn, non-significant results are seldom presented in manuscripts: authors leave out indeterminate test results when describing diagnostic test performance and fail to report confidence intervals along with p-values. An analogous problem is the "unthinking acceptance" of p < 0.05 as significant. Researchers can fall prey to alpha errors, taking the customary but curious position of touting significance just below the 0.05 threshold and non-significance just above it. In addition, authors may trumpet a significant result in a large study when the size of the difference is clinically unimportant; in this situation, a focus on the effect size may be more appropriate [34].
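As a sketch of the recommended joint test (hypothetical data; the statsmodels ANOVA call is one of several equivalent ways to do this), consider a three-level predictor where only one dummy looks significant.

```python
# Sketch, hypothetical data: testing a multi-category predictor with a joint
# F-test instead of reading significance off a single dummy variable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
n = 300
group = rng.choice(["A", "B", "C"], size=n)
y = rng.normal(size=n) + np.where(group == "C", 0.4, 0.0)
df = pd.DataFrame({"y": y, "group": group})

fit = smf.ols("y ~ C(group)", data=df).fit()
print(fit.summary())          # per-dummy p-values: tempting but misleading alone
print(anova_lm(fit, typ=2))   # one joint F-test for 'C(group)' as a whole
```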

Discussion

Journal editors and statistical reviewers of high-impact medical journals identified several common problems that significantly and frequently affect the quality of submitted manuscripts. The majority of respondents underscored fundamentals of research methods that should be familiar to all scientists, including rigorous descriptions of sampling and analytic strategies, recognition of the strengths and drawbacks of a particular analytical approach, and the appropriate handling of missing data. Respondents also discussed concerns about more advanced methods in the medical research toolkit; specifically, authors may fail to understand or report the limitations of their analytic strategies, or to hedge them with sensitivity analyses and more tempered interpretations. Finally, respondents emphasized the importance of the clear and accurate presentation of methods and results.

Although this study was not intended as a systematic or comprehensive catalog of all statistical problems in medical research, it does shed some light on common issues that delay or preclude the publication of research that might otherwise be sound and important. Moreover, the references included in this paper may provide useful analytical guidance for researchers and educators. Accordingly, this work serves to inform medical education and research, to improve the overall quality of manuscripts and published research, and to increase the likelihood of publication.

In addition, these data provide evidence for the importance of soup-to-nuts methodological guidance in the research process. Statisticians and methodological experts should be consulted during the study design, analysis, and manuscript writing phases to improve the quality of research and to ensure the clear and appropriate application of quantitative methods. Although this may seem obvious, previous work by Altman and his colleagues demonstrates that this is rarely the case in medical research [3]. Rather, statistical experts are often consulted only during the analysis phase, if at all, and even then may not be credited with authorship [35]. In addition to statistical guidance, researchers should consult reporting guidelines associated with their intended research design, such as CONSORT for randomized, controlled trials, STROBE for observational studies, and PRISMA for systematic reviews. Adherence to such guidelines helps to ensure a common standard for reporting and a critical level of transparency in medical research. Professional organizations and prominent journals, including the Cochrane Collaboration and The Lancet, peer-review research protocols, which also helps to create a standard for research design and methods.

This work should be interpreted in light of several important limitations. We did not collect data on the professional position (e.g., academic department, industry) of the respondents and consequently do not know the composition of the sample or how it may have shaped our findings. Although the response rate was similar to that of other surveys of journal editors, and we have no reason to suspect significant response bias, the possibility of response bias remains. In addition, the size of our sample may limit the generalizability of our findings.

Overall, this work is intended to inform researchers and educators about the most common pitfalls in quantitative medical research, pitfalls that journal editors note as problematic. Given the recent clinical research priorities of health care agenda-setting organizations, such as comparative effectiveness research and evidence-based practice, medical research is expected to meet a higher bar for valid and transparent inquiry [36–39]. Improving the application and presentation of quantitative methods in scholarly manuscripts is essential to meeting the current and future goals of medical research.