Introduction

Robotic-assisted surgery has the potential to overcome some of the limitations of laparoscopy, offering an immersive three-dimensional view, articulated instruments and a stable camera platform [1]. The rapid adoption of the robotic surgical platform suggests that it may become the preferred technique for many complex abdominal and pelvic operations. Nonetheless, the surgical robotic system introduces further layers of complexity into the operating theatre, including a change in the conventional surgeon and trainee relationship, new advanced technology, different motor and visual skills, and challenges in communication, thus necessitating new training models [2]. Training in new procedures, including robotic surgery, is characterised by changes in performance over time, known as the proficiency curve [3], which has been recognised as one of the main barriers to surgeons adopting robotic surgery, alongside costs and a lack of support from hospitals [4].

Each robotic system is costly and is likely to be in high demand for clinical use; as a consequence, simulation training may need to take place outside clinical working time to access this resource, which can inhibit its use. Further challenges, especially for trainees, include competition for trainees' time with other prescribed educational activities, clinical commitments and working hours restrictions [5]. Observing live operating and attending educational workshops and seminars are all valuable resources but require surgeons to interrupt their clinical activity to attend dedicated training sessions [6]. Instructional videos with relevant commentary could be ideal for early training in robotic surgery [7] and can be developed even with only basic prior video-editing experience [8]. Video has the advantage that illustrative operations can be chosen in advance and the educational content outlined beforehand [9]. Surgical trainers acknowledge online videos as a valuable teaching aid [10] that maximises trainees' learning and skill development against the backdrop of time constraints and productivity requirements [11], but the reliability of a significant proportion of highly viewed, freely available content remains debatable, as not all videos are trustworthy and some may not demonstrate procedures based on strong evidence [12].

On the basis of these premises, the aim of this study was to develop consensus guidelines on how to report a robotic surgery video for educational purposes, in order to achieve high-quality educational video outputs that could enhance surgical training.

Methods

The guidelines were developed according to the Appraisal of Guidelines for Research and Evaluation Instrument II (AGREE II, https://www.agreetrust.org/agree-ii). A steering committee was assembled to incorporate surgical trainers as contributors across several specialties, including general surgery, gynaecology, urology and upper and lower gastrointestinal surgery. Committee members were selected on the basis of previously published experience in guideline development [13], distance learning in surgery [12], minimally invasive surgery training programme development [14] and dissemination of online surgical videos [15]. The committee comprised 18 experts.

A steering subcommittee comprising 10 members from 5 countries and 4 surgical specialties defined the consensus report. The steering committee was responsible for selecting the survey items, and statements were agreed upon through teleconferences, e-mails and face-to-face meetings. After 46 statements had been prepared by the consensus committee, an electronic survey tool (Enalyzer, Denmark, www.enalyzer.com) was used for the Delphi voting rounds.

The Delphi methodology is a widely adopted process involving systematic, repeated rounds of voting to reach agreement among a panel of experts [16]. Experts vote anonymously in a minimum of two survey rounds; participants voting against a survey item are required to propose a revised statement with an explanation for their choice [17]. During the final round of the survey, participants no longer have the opportunity to amend the items, and therefore only a binary accept or reject option is available. The threshold required for acceptance of a survey item into the consensus statements was ≥ 80% agreement [18, 19]. Feedback on items not reaching 80% agreement was reviewed by the consensus guidelines members after the first round, and the statements were amended and resubmitted for a second round of the survey. The finalised consensus guidelines were disseminated together with the draft article to all members of the committee.

Results

All 18 members of the consensus committee answered both the first and the second round of the Delphi survey. The first Delphi round comprised 46 items. Statements not achieving the minimum required 80% agreement in the first round were revised and circulated for a second vote. A total of 36 consensus statements were finally agreed; these are summarised in seven categories, with the rates of agreement shown in Table 1. Rejected survey items are presented in Table 2.

Table 1 36 consensus statements approved by the committee, with rate of agreement
Table 2 Statements that did not reach consensus agreement

Discussion

Before undertaking clinical training in robotic surgery, operating on real patients under expert supervision, novice surgeons must first become familiar with the robotic interface [20] by attending dedicated courses and using online educational material and simulators within a structured approach. Intraoperative mentorship and structured feedback from colleagues are beneficial even beyond completion of residency training, but time constraints and hierarchy can significantly limit their implementation [21]. Surgeon video review leads to improved technique and outcomes [22], and postoperative video debriefing has been shown to be an effective educational tool that reduces adverse events [23]. Video-based peer feedback through social networking allows surgeons to receive mentorship at a convenient time and beyond geographical limitations [24], and holds promise to become an essential part of continuing professional development, provided patient privacy and consent are maintained. E-modules and video training are extremely valuable educational methods; however, they are not sufficient on their own and are only effective when integrated within a structured training programme that includes simulation training and dry and wet laboratory activity. Moreover, proctoring plays an essential role in guaranteeing patient safety when operations are performed during the initial part of the surgeon's learning and proficiency curve.

One of the main strengths of our research is that we have collated the expertise of several international committee members across different surgical specialties to establish consensus agreement on how to present a robotic surgery video developed for surgical education. The main aim is to enhance the educational content of videos by introducing a reference standard that reduces variability in the quality, trustworthiness and educational accuracy of online robotic surgery videos, as we have previously published for laparoscopic surgery [13]. Consensus guidelines, generally presented as a checklist, flow diagram or explicit text, specify the set of information required for a complete and clear account of what was done and found during a study, highlighting factors potentially responsible for introducing bias into the project [25]. We acknowledge the lack of previously published guidelines for the reporting of robotic surgery videos and, as such, the quality of these video outputs is very heterogeneous. To enhance the educational quality of published robotic surgery videos, especially when intended for training, the logical progression is to set a reference standard by introducing consensus agreement. Technical competence is a prerequisite for independent practice and encompasses understanding of the pertinent anatomy, the natural history of the disease, and the indications, steps and possible complications of the surgical technique [26], which is why additional educational content should be included in training videos.

Procedural competency can depend on the number of cases performed under supervision [27], which is consistent with the theory of deliberate practice, implying that proficiency is associated not only with the volume of cases but also with the time spent practising with constructive feedback [28]. As a consequence, objective assessments must be applied to evaluate procedural competence, focusing on the safety of the performance rather than the number of cases completed, and distance learning in surgery should not be confined to observing a video of another surgeon operating, but should also incorporate examination of the trainee's own performance by reviewing the video with peers and trainers. It has been demonstrated that constructive feedback can enhance performance [29], and it must therefore be an essential component of training in robotic surgery, despite representing a shift from more traditional methods of surgical training [30]. Appropriate training in new technologies is essential for their safe introduction into the wider surgical community. Credentialing aims to assure patient safety and gives hospitals confidence that adequate training has been achieved, which is why it is a requirement in many institutions [31]. Peer review of surgical videos submitted according to standardised criteria could provide an effective tool for maintaining credentialing for robotic surgeons.

The proficiency curve in robotic surgery concerns the whole team [32], not just the surgeon, and we must acknowledge this as a limitation of these guidelines, which may provide limited benefit to the anaesthetists, nursing staff and operating department practitioners who all require training as part of the robotic surgery team [33]. Teamwork and communication are paramount for safe and effective performance, particularly in robotic surgery [34], which introduces physical distance between the surgical team members and the patient and changes the spatial configuration of the operating room [35]. How the lack of face-to-face interaction can affect team communication has not been explored in these guidelines, which focus on the surgeon's technical skills [36]. Surgical trainees value highly informative videos that report patient data and procedure outcomes and are integrated with supplementary educational material, such as screenshots and diagrams, to aid identification of anatomical structures [37]. We must recognise that another limitation of these guidelines is the time needed to produce such high-quality video outputs, with several gigabytes required for the storage and sharing of high-definition robotic videos, although these can be produced and uploaded with minimal technical skills [38]. It is important to acknowledge that there are minimal data available in the published literature on which to base this consensus statement as high-quality evidence, which may explain why almost none of the accepted statements reached 100% agreement. This lack of unanimous endorsement for some statements is not uncommon when using Delphi methodology; however, a threshold for approval of 80% was selected and transparency was ensured by publishing both the accepted and rejected statements with the corresponding rates of agreement. Nevertheless, the Delphi process with pre-set objectives is an accepted methodology for reducing the risk of individual opinions prevailing, and the invited co-authors of these practice guidelines have previously reported on surgical video availability [39], quality [12], content standardisation [13] and use by surgeons in training [37].

There is currently no standard accreditation or regulation for medical videos as training tools [40]. The HONcode [41] is a code of conduct for online medical and health platforms, but it applies to all web content and is not specific to audio-visual material. We propose that following these guidelines could help improve video quality and offer a standardised tool for evaluating the quality of video material submitted for publication or presentation at conferences, although we appreciate that the guidelines were not developed for this purpose and further validation research would be needed.

Conclusions

Consensus guidelines on how to report robotic surgery videos for educational purposes have been developed using the Delphi methodology. Adherence to the presented guidelines could help enhance the educational value of video outputs used for surgical training.