Let us begin by asking this seemingly simple question: What is minimally invasive surgery? While minimally invasive surgery (MIS) is typically defined as using smaller incisions to accomplish the same operative task as conventional surgery, it must also be appreciated that this methodology has been an important step towards the creation of the digital operating theatre. Why?

Fundamentally, both laparoscopic and robotic MIS rely on video captured by a charge-coupled device (CCD) camera, which converts the optical image into digital signals that are relayed at near-instant speed (roughly 0.7 times the speed of light through cabling). These signals are then processed and displayed on a high-definition liquid crystal display (LCD) monitor. In the simplest sense, the MIS surgeon performs digital surgery, because the operation is carried out based on the surgeon's interpretation of a real-time digitized cine displayed on the LCD monitor rather than on direct visualization of the patient's anatomy.

This concept, essential to all forms of MIS, represents an important departure from conventional surgery and a crucial segue towards true digital surgery. The MIS surgeon operates not by looking at the patient but by looking at digitized information that represents the patient: a sequential stream of 1s and 0s converted into 'picture elements' (pixels) on the LCD monitor, with the image rendered visible by red–green–blue colour modelling. With robotic master–slave platforms, the surgeon is removed from the patient altogether. Nevertheless, despite the absence of haptic feedback, very precise surgical dissection is possible with robotic MIS, and the prospect of telemedicine becomes real, because human anatomy can be converted into digital data points and streamed to a remote site in real time.
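To put this in concrete terms, the sketch below (Python, with assumed full-HD frame dimensions and an assumed three-bytes-per-pixel layout) shows how a flat stream of bytes, the 1s and 0s described above, might be reshaped into the red–green–blue pixel array that an LCD monitor renders. It is a conceptual example only, not the pipeline of any particular camera or display system.

```python
import numpy as np

# Hypothetical frame dimensions; a real endoscopic video pipeline defines
# these (full HD, 1920 x 1080, is assumed here for illustration).
HEIGHT, WIDTH = 1080, 1920
BYTES_PER_PIXEL = 3  # one byte each for red, green and blue

def bytes_to_rgb_frame(raw: bytes) -> np.ndarray:
    """Reshape a flat stream of bytes into an (H, W, 3) RGB image.

    Each pixel ('picture element') is three 8-bit values; the display mixes
    the red, green and blue intensities to render the visible colour.
    """
    expected = HEIGHT * WIDTH * BYTES_PER_PIXEL
    if len(raw) != expected:
        raise ValueError(f"expected {expected} bytes, got {len(raw)}")
    flat = np.frombuffer(raw, dtype=np.uint8)
    return flat.reshape(HEIGHT, WIDTH, BYTES_PER_PIXEL)

# Example: a synthetic byte stream stands in for one video frame.
raw_stream = np.random.randint(
    0, 256, HEIGHT * WIDTH * BYTES_PER_PIXEL, dtype=np.uint8
).tobytes()
frame = bytes_to_rgb_frame(raw_stream)
print(frame.shape)  # (1080, 1920, 3)
```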

In the field of surgery, parallels are often drawn to the aviation industry because of the rigorous, safety-oriented protocols it mandates [1]. Another important aviation paradigm relates to navigation, encompassing the digitization of information and the processes used to interpret those data via advanced avionics. For instance, a pilot uses sophisticated navigational aids and fly-by-wire technology that allow an aircraft to travel safely along precise flight paths at high velocity without reliance on visual references. In essence, a pilot can determine craft position and trajectory using digital navigation inputs from various computer-centric instruments, including the satellite-based global positioning system. More complex vehicle navigation requires systems informatics and computing centres, such as those utilized for the manned space flight programme. Such systems are employed because they are accurate and reliable, and because a digitally based, augmented reality provides more meaningful information than visual cues alone. If a pilot is able to navigate an aircraft by relying on digital information alone, can a surgeon navigate through surgical planes in the same fashion? For select neurosurgical and orthopaedic procedures, navigation-assisted surgery is already the norm, but other surgical specialties have been slow to adopt this modality.

The modern digital OR remains in early evolution but will likely mirror the advances made in aviation. In addition to high-definition digital 2D and 3D stereoscopic video, the digital operating theatre will incorporate information from image-based systems such as the Picture Archiving and Communication System (PACS). It will also use newer video-to-radiographic image management systems that allow multi-planar rendering of both computed tomography and magnetic resonance imaging, which in turn can be used as a digital map to direct surgical dissection. Stereotactic navigation, a necessary core of the digitization of surgery, continues to improve, and its applications are predicted to expand. Ideally, the digital surgeon will be able to interpret real-time multi-planar images and navigate through them to define very precise surgical planes, using radiographic images and digitized navigation maps, rather than the more traditional optical images, to direct surgery.
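As a hedged illustration of the core computation behind such navigation, the sketch below maps a tracked instrument-tip position into the voxel coordinates of a pre-operative CT or MRI volume using a rigid registration transform. The transform matrix, voxel spacing, and coordinates are invented placeholders; a real navigation platform derives these from fiducial or surface registration.

```python
import numpy as np

# Illustrative 4x4 rigid transform (rotation + translation) representing a
# patient-to-image registration; the values below are placeholders, not the
# output of any real registration procedure.
TRACKER_TO_IMAGE = np.array([
    [1.0, 0.0, 0.0, 12.5],
    [0.0, 1.0, 0.0, 40.0],
    [0.0, 0.0, 1.0, 75.0],
    [0.0, 0.0, 0.0,  1.0],
])

VOXEL_SPACING_MM = np.array([0.8, 0.8, 1.0])  # hypothetical MRI voxel size

def instrument_tip_to_voxel(tip_mm: np.ndarray) -> np.ndarray:
    """Map a tracked tip position (mm, tracker frame) to image voxel indices."""
    tip_h = np.append(tip_mm, 1.0)        # homogeneous coordinates
    image_mm = TRACKER_TO_IMAGE @ tip_h   # transform into image (patient) space
    return np.round(image_mm[:3] / VOXEL_SPACING_MM).astype(int)

# Example: a tip position reported by the optical/infrared tracker.
print(instrument_tip_to_voxel(np.array([10.0, 20.0, 30.0])))
```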

Future data management systems are expected to be less bulky, imparting a smaller hardware footprint in the digitized operating theatre of tomorrow. This may include hand-held, application-driven devices similar to current smartphones and tablets that rely on cloud computing rather than space-intensive mainframe computing. The digital OR and the digital surgeon will also be able to utilize source data from a variety of integrated systems via secure cloud-based networks, provided the information technology (IT) infrastructure complies with the privacy standards for shared information set forth by the Health Insurance Portability and Accountability Act (HIPAA). Ultimately, optical data, as displayed on LCD monitors, can be merged with PACS imaging in real time via the cloud. Together with surgical navigational tools, including frameless stereotaxy, this digitized information will provide the surgeon with an enriched understanding of surgical anatomy [2].

Operating theatres will become increasingly digitized through the integration of information from different sources. For example, a surgeon in the digital OR will rely not only on digital video from a CCD camera output but also on other data sets that provide a rich array of practical, real-time information. Ideally, this information can be superimposed so that the surgeon can synthesize the data, render logical decisions about the anatomic approach to surgical dissection, and discern more complex anatomy in real time.
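One simple way such superimposition might be realized, sketched below with synthetic arrays standing in for a live video frame and a navigation overlay, is per-pixel alpha blending of graphical cues onto the camera image. The frame size, overlay content, and blending weight are assumptions chosen purely for illustration.

```python
import numpy as np

def blend_overlay(frame: np.ndarray, overlay: np.ndarray,
                  alpha: float = 0.35) -> np.ndarray:
    """Alpha-blend a graphical overlay (e.g. a navigation cue) onto a video frame.

    Both inputs are (H, W, 3) uint8 RGB arrays of the same shape. Pixels where
    the overlay is blank are left untouched; elsewhere the overlay is mixed
    into the live image with weight `alpha`.
    """
    f = frame.astype(np.float32)
    o = overlay.astype(np.float32)
    mask = overlay.any(axis=-1, keepdims=True)
    blended = np.where(mask, (1.0 - alpha) * f + alpha * o, f)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Example with synthetic data: a grey frame and a single green guidance line.
frame = np.full((1080, 1920, 3), 128, dtype=np.uint8)
overlay = np.zeros_like(frame)
overlay[540, :, 1] = 255  # a horizontal green line standing in for a cue
composited = blend_overlay(frame, overlay)
```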

These data sets can be drawn from a wide spectrum of analogue and digital sources. In colorectal surgery, they can include (a) ureteral and urethral position, identified by lighted stents and the optical feedback they provide [3], cueing the surgeon to critical structures and their proximity to the dissection plane; in the aviation paradigm, this is analogous to a ground-proximity warning system designed to alert pilots to approaching terrain. Other applications include (b) intra-operative ultrasound imaging to localize anatomic targets (such as with liver metastasectomy); (c) fluorescence angiography to quantify tissue perfusion [4]; and (d) PACS images overlaid with navigational markers to determine the 3D position of the surgeon's instrument in space, so that correct anatomic planes are realized and maintained during complex surgery, as has been demonstrated with transanal total mesorectal excision [5, 6] (Fig. 1a, b). Collectively, each of these data sets provides critical information to the surgeon, and it is possible to envision mathematical models and computerized algorithms that integrate this information from different source data so that surgeons are best assisted with complex operative tasks and decision-making.

Fig. 1

a TAMIS-TME with real-time stereotactic navigation is possible due to an infrared tracking system that allows for millimetre accuracy of the surgeon's dissector, which can be referenced, in this case, against a rectal protocol MRI with multi-planar rendering. Thus, the surgeon is able to conduct the operation by relying on radiographic images rather than video. b A surgeon views side-by-side LCD monitors that relay optical, digitized data, and on a separate monitor (centre) an MRI 'map'. The green line moves in real time, and the tip of the line correlates with the tip of the surgeon's dissection instrument, thereby providing the surgeon with an augmented understanding of the dissection plane in real time. The 'white' of the mesorectal envelope can be easily appreciated on MRI and correlates precisely with the surgeon's dissector tip on the optical camera feed.
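To make the notion of algorithmically integrating these source data concrete, the following minimal sketch implements a proximity alert in the spirit of the ground-proximity analogy above: the tracked dissector-tip position is compared against landmark coordinates of a critical structure (here a hypothetical ureter) in a shared, registered coordinate frame, and a warning is raised within a chosen distance threshold. All coordinates and the threshold are invented for illustration and carry no clinical meaning.

```python
import numpy as np

# Hypothetical landmark positions (mm) of a critical structure, expressed in
# the same registered coordinate frame as the tracked instrument tip.
URETER_LANDMARKS_MM = np.array([
    [42.0, 10.0,  95.0],
    [44.0, 18.0, 102.0],
    [47.0, 26.0, 110.0],
])

WARN_DISTANCE_MM = 5.0  # illustrative threshold, not a clinical standard

def proximity_alert(tip_mm: np.ndarray) -> tuple[bool, float]:
    """Return (alert, distance) for the closest critical-structure landmark.

    Analogous to an aviation ground-proximity warning: when the dissector tip
    comes within WARN_DISTANCE_MM of any landmark, the system should alert.
    """
    distances = np.linalg.norm(URETER_LANDMARKS_MM - tip_mm, axis=1)
    nearest = float(distances.min())
    return nearest < WARN_DISTANCE_MM, nearest

# Example: a tip position that has strayed close to the structure.
alert, d = proximity_alert(np.array([45.0, 20.0, 101.0]))
print(alert, round(d, 1))
```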

Ultimately, this will shift the surgeon's focus onto the management of digital information. In other words, the digital operating theatre will represent an integration of information from various digital sources so that the surgeon is not only navigating through anatomic planes but also, more globally speaking, navigating through information itself to perform safe and accurate surgery. These concepts are likely to represent the future direction of surgery and how it will evolve over the next several decades.