Background

I am a physician by training, and a journal editor by chance. Both of these professions require discipline and the use of aids, such as checklists, to compensate for the limits of human memory and attention and to prevent errors (in patient treatment or manuscript management). This is why I read with enthusiasm the recent article by Barnes et al. in BMC Medicine [1], which reports a randomized controlled trial (RCT) testing an online tool for writing the methods section of RCT articles. Primarily based on the 2010 CONSORT reporting guideline [2] and its explanatory document [3], the tool was tested in a sample of master's and doctoral students in public health and significantly increased the completeness of reporting for most of the methodological domains in an RCT report compared with a classical writing exercise.

While previous studies demonstrated that CONSORT endorsement by journals was successful in increasing the completeness of trial reporting in medical journals [4], the study of Barnes et al. [1] is the first to test such a tool where and when it should be used – by researchers at the time of manuscript writing.

Considering that the first version of the CONSORT guideline was published almost 20 years ago [5] and that CONSORT is currently endorsed by more than 600 journals and the most influential editorial organizations [6], why did it take so long to translate it into the actual practice of writing health research?

The long road to checklist implementation in healthcare

Checklists made their way into medicine from industry, where they have been used for quality and safety assurance of processes and products, especially those carrying high risk [7]. The most popular and globally relevant example of a medical checklist is the WHO Surgical Safety Checklist, which was created in 2008 to reduce the rate of major surgical complications. It was tested simultaneously in eight hospitals around the world and demonstrated a highly significant reduction in rates of complications and death after surgery [8]. In 2014, a systematic review of seven studies testing the WHO Surgical Safety Checklist demonstrated its consistent effect on the reduction of postoperative complications and mortality [9]. In experimental settings, checklists have also been proven to be an effective tool for improving adherence to best practices during operating-room crises [10].

In the case of the WHO Surgical Safety Checklist, the evidence base for implementation built up quickly, but implementation itself is still burdened by a number of barriers at the organizational, system, team, and checklist-specific levels, as demonstrated by a qualitative evaluation of its nationwide implementation in UK hospitals [11]. A recently published systematic review of qualitative evidence on barriers to and facilitators of surgical safety checklists showed that the complex reality of healthcare practice requires approaches that go beyond addressing barriers and facilitators, toward fostering teamwork, mutual understanding, and communication [12].

The complex world of reporting guidelines

Currently, the most comprehensive source of information about reporting guidelines – the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network – lists 281 different reporting guidelines [13]. While only a few cover the most common study designs in health research, such as CONSORT for RCTs, STROBE for observational studies, and PRISMA for systematic reviews, most are guidelines for specific study types or variations of standard methodology. Nevertheless, they all aim to improve the completeness and clarity of published research and thus reduce waste in research informing health practice [14]. Guidance is also available for developers of reporting guidelines [15].

However, the primary users of reporting guidelines should be researchers and authors, who may find their use unavoidable and horrifying at the same time. On the one hand, they have to meet the expectations of journals regarding reporting guidelines. On the other, they may not be sure which reporting guideline to choose (CONSORT, for example, has 10 current official extensions) or how to follow it: a checklist may have over 20 items [2], many of which are difficult to understand for an average clinical researcher without a good knowledge of clinical epidemiology, while the "Explanation and Elaboration" documents sometimes run to over 30 pages [3].

Nevertheless, the main problem is that reporting guidelines are used too late in the research process, when the study has already been performed or sometimes even after provisional acceptance of the manuscript. By then, it may be too late to discover that important details were missed or that something could have been done better to increase the quality of the publication. My experience as a journal editor and teacher of research methodology to graduate and postgraduate medical students, residents, and physicians is that knowledge about reporting guidelines should be acquired at the graduate level, during the medical curriculum [16]. This is in line with observations from other seasoned clinical trialists, such as Dr. Thomas Chalmers, a physician with a pivotal role in the scientific development of the RCT and meta-analysis in the USA, who stated, "[i]n medical school, I think we have to just hammer away at evidence and probability theory and general statistics" [17], as well as with recommendations from the International Society for Evidence-Based Health Care [18]. When medical or healthcare students learn critical appraisal and understanding of evidence early in the curriculum, and take it as seriously as any other medical course, they will be better practitioners, making better decisions with their patients as well as in performing and publishing research.

Will writing tools for reporting checklists work?

My answer to the above question is – yes, writing tools will work. Examples of good practice are already there for healthcare researchers. Researchers working on Cochrane systematic reviews use Review Manager (RevMan) – the tool that guides authors in preparing the text of the review, building tables, performing meta-analyses, and graphically presenting the results. The most recent development is RevMan HAL, a text-editor extension for RevMan developed by the Cochrane Schizophrenia Group, which helps authors generate parts of the review automatically [19]. It has already been used to construct a first draft of review sections [20].

In clinical practice, natural language generation systems did not seem like a ready solution for automatically generating clinical reports in 2003 [21], but by 2013, computer-generated patient history summaries seemed to be at least as accurate as records produced by clinicians, while requiring less time to produce [22]. Of course, there is always a possibility of misuse of technology, as demonstrated by examples of computer-generated gibberish papers accepted at conferences [23], but this is a more complex problem of research and publication integrity [24].

Conclusions

It is good to see that efforts to increase the clarity and transparency of reporting health research have moved from journals to authors. The effectiveness of the writing tool needs to be tested in the real world – when and where research occurs. The tool should be further developed to be easy to use in all research settings, in both the developed and the developing world. Finally, it should not replace original thinking and the excitement of communicating original discoveries, but ensure that all relevant data are in the manuscript so that research results can be understood, critically evaluated, and used in practice.