Content-based processing and analysis of endoscopic images and videos: A survey
In recent years, digital endoscopy has established itself as a key technology for medical screenings and minimally invasive surgery. Since then, various research communities with manifold backgrounds have picked up the idea of processing and automatically analyzing the inherently available video signal produced by the endoscopic camera. Proposed works mainly include image processing techniques, pattern recognition, machine learning methods and Computer Vision algorithms. While most contributions deal with real-time assistance at procedure time, the post-procedural processing of recorded videos is still in its infancy. Many post-processing problems are based on typical Multimedia methods like indexing, retrieval, summarization and video interaction, but have only been sparsely addressed so far for this domain. The goals of this survey are (1) to introduce this research field to a broader audience in the Multimedia community to stimulate further research, (2) to describe domain-specific characteristics of endoscopic videos that need to be addressed in a pre-processing step, and (3) to systematically bring together the very diverse research results for the first time, providing a broader overview of related research that is currently not perceived as belonging together.
Keywords: Medical imaging · Endoscopic videos · Content-based video analysis · Medical multimedia
In the last decades, medical endoscopy has emerged as a key technology for minimally invasive examinations in numerous body regions and for minimally invasive surgery in the abdomen, joints and further body regions. The term “endoscopy” derives from Greek and refers to methods to “look inside” the human body in a minimally invasive way. This is accomplished by inserting a medical device called an endoscope into the interior of a hollow organ or body cavity. Depending on the respective body region, the insertion is performed through a natural body orifice (e.g., for examination of the esophagus or the bowel) or through a small incision that serves as an artificial entrance. For surgical procedures (e.g., removal of the gallbladder), additional incisions are required to insert various surgical instruments. Compared to open surgery, this still causes much less trauma, which is one of the main advantages of minimally invasive surgery (also known as buttonhole or keyhole surgery). Many medical procedures were revolutionized by the introduction of endoscopy; some were even enabled by this technology in the first place. For a more detailed insight into the history of endoscopy, please refer to  and .
Endoscopic procedures can be distinguished by several criteria:
body region (e.g., abdomen, joints, gastrointestinal tract, lungs, chest, nose)
medical speciality (e.g., general surgery, gastroenterology, orthopedic surgery)
diagnostic vs. therapeutic focus
flexible vs. rigid construction form
Unfortunately, the usage of the term “endoscopy” is consistent neither in common parlance nor in the literature. It is often used as a synonym for gastrointestinal endoscopy (of the digestive system), which mainly includes colonoscopy and gastroscopy (examination of the colon and stomach, respectively). A further special type is Wireless Capsule Endoscopy (WCE). The patient has to swallow a small capsule that contains a tiny camera and transmits a large number of images to an external receiver while it travels through the digestive tract for several hours. The images are then assessed by the physician after the end of this process. WCE is especially important for examinations of the small intestine because neither gastroscopy nor colonoscopy can access this part of the gastrointestinal tract.
The major therapeutic sub-types are laparoscopy (procedures in the abdominal cavity) and arthroscopy (orthopedic procedures on joints, mainly knee and shoulder). They are often subsumed under the term “minimally invasive surgery”. Laparoscopic operations span different medical specialities, particularly general surgery, pediatric surgery, gynecology and urology. Examples of common laparoscopic operations are cholecystectomy (removal of the gall bladder), nephrectomy (removal of the kidney), prostatectomy (removal of the prostate gland) and the diagnosis and treatment of endometriosis. Further important endoscopy types are thoracoscopy (thorax/chest), bronchoscopy (airways), cystoscopy (bladder), hysteroscopy (uterus) and further special procedures in the field of ENT (ear, nose, throat) and neurosurgery (brain).
In the course of an endoscopic procedure, a video signal is produced by the endoscopic camera and visualized to the surgical team to guide their actions. This inherently available video signal is predestined for automatic content analysis in order to assist the physician. Hence, numerous research communities have proposed methods to process and analyze it, either in real-time or for post-procedural usage. In both cases, image processing techniques are often used to pre-process individual video frames, be it to improve the performance of subsequent processing steps or to simply improve their visual quality. Pattern recognition and machine learning methods are used to detect lesions, polyps, tumors etc. in order to aid physicians in the diagnostic analysis. The robotics community applies Computer Vision algorithms for 3D reconstruction of the inner anatomical structure in combination with detection and tracking of operation instruments to enable robot-assisted surgery. In the context of Augmented Reality, endoscopic images are registered to pre-operative CT or MRI scans to provide guidance and additional context-specific information to the surgeon during the operation. For readers who want to learn more about the workflow in this field, we recommend the tutorial papers [151, 152].
In recent years, we can observe a growing trend to record and store videos of endoscopic procedures, mainly for medical documentation and research. This new paradigm of video documentation has many advantages: it enables physicians to revisit the procedure at any time, it facilitates detailed discussions with colleagues as well as explanations to patients, it allows for better planning of follow-up operations, and it is a great source of information for research, training, education and quality assurance. The benefits of video documentation have been confirmed in numerous studies, e.g., [61, 100, 101, 137]. However, physicians can only benefit from endoscopic videos if they are easily accessible. This is where research in the Multimedia field comes in. Well-researched methods like content-based video retrieval, video segmentation, summarization, efficient storage and archiving concepts as well as efficient video interaction and browsing interfaces can be used to organize an endoscopic video archive and make it accessible for physicians. Because of their post-processing nature, these techniques are not constrained by immediate OR requirements and therefore can be applied in real-world scenarios much more easily than real-time assistance features. Nevertheless, they have to be adapted to the peculiarities of this very specific domain. Their practical relevance is steadily growing, considering that video documentation has been on the rise in recent years. Once comprehensive video documentation is established as best practice and maybe even becomes mandatory, these techniques will be essential cornerstones in Endoscopic Multimedia Information Systems.
On the top level, two application scenarios can be distinguished:
real-time support at procedure time
post-procedural use of recorded videos
Figure 1 illustrates the resulting categorization of research topics in the field of endoscopic image/video processing and analysis, representing the structure of the following sections as well. This classification should not be understood as the ultimate truth because many of the presented techniques and concepts have significant overlaps and cannot be sharply delimited. For example, the traditionally post-procedural application of surgical quality assessment is currently being ported to real-time systems and in this context could as well be regarded as an application of Augmented Reality. Nevertheless, this categorization enables a structured and clear overview of the many topics that are covered in this review.
2 Pre-processing methods
Endoscopic videos have various domain-specific characteristics that need to be addressed when dealing with this special kind of video. This section describes the most distinctive aspects and gives an overview of corresponding methods that are applied as a preparatory step prior to other analysis techniques and/or enhance the image quality for the surgeon.
2.1 Image enhancement
Automatic adjustment of contrast with the help of clustering and histogram modification .
Removal of temporal noise, i.e., small flying particles or fast-moving smoke appearing only for a short moment at one position, by using a temporal median filter of color values .
Color normalization using an affine transformation in order to get rid of a reddish tinge caused by blood during therapeutic interventions and to obtain a more natural color .
Correction of color misalignment: Most endoscopes do not use a color chipset camera but a monochrome chipset that only captures luminance information. To get a color video, red, green and blue color filters have to be applied sequentially. In case of rapid movements - which occur frequently in endoscopic procedures - the color channels become misaligned. This is not only annoying when watching the video but also hinders further automatic analysis. Dahyot et al.  propose to use color channel equalization, camera motion estimation and motion compensation to correct the misalignments.
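The temporal noise removal mentioned above can be sketched as a per-pixel median over a sliding window of frames; the window size and the array layout are illustrative assumptions of ours, not details from the cited work:

```python
import numpy as np

def temporal_median_filter(frames, radius=2):
    """Suppress short-lived temporal noise (e.g., small flying particles)
    by replacing each pixel with the median over a temporal window.

    frames: array of shape (T, H, W, C), uint8
    radius: number of neighboring frames on each side of the window
    """
    T = frames.shape[0]
    filtered = np.empty_like(frames)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        # The temporal median discards values that appear only for a
        # short moment at one position.
        filtered[t] = np.median(frames[lo:hi], axis=0).astype(frames.dtype)
    return filtered
```

A transient bright particle visible in a single frame is removed, while static image content is preserved.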
2.1.1 Camera calibration and distortion correction
2.1.2 Specular reflection removal
Endoscopic images often contain specular light reflections, also called highlights, on the wet tissue surface. They are caused by the inherent frontal illumination and are very distracting for the observer. A study conducted in  shows that physicians prefer images in which reflections are corrected. Even worse, reflections severely impair analysis algorithms because they introduce wrong pixel values and additional edges. This also impairs image feature extraction, which is an essential technique for reconstruction, tracking etc. Hence, a number of approaches for correction have been proposed as a supporting component for other analysis methods, e.g., detection of non-informative frames , segmentation and detection of surgical instruments [34, 201], tracking of natural landmarks for cardiac motion estimation , reconstruction of 3D structures  or correction of color channel misalignment .
Most approaches consist of two phases. First, the highlights are detected in each frame. This is rather straightforward and in most cases uses basic histogram analysis, thresholding and morphological operations. Pixels with an intensity above a threshold are regarded as highlights. Some authors additionally propose to check for low saturation as a further strong indication of specular highlights ([169, 274]). In this context, the usage of various color spaces has been proposed, e.g., RGB , YUV , HSV , HSI , CIE-xyY . In a second phase, the pixels identified as reflections are “corrected”, i.e., modified in a way that the resulting image looks as realistic as possible. An example of a corrected image can be seen in Fig. 2b. An important aspect is that the user should be informed about this image enhancement, because one cannot rule out the possibility that wrong information is introduced, e.g., a modified pit pattern on a polyp that can adversely affect the diagnosis. For this second phase, the following two different approaches can be distinguished:
Spatial interpolation: Only the current frame is considered and the pixels that correspond to specular highlights are replaced by pixels interpolated from the surrounding area. This technique is also called inpainting and has its origins in image restoration (for an overview of general inpainting techniques refer to ). For the interpolation, different methods have been proposed, e.g., spectral deconvolution , anisotropic confidence-based filling-in [70, 71] or a two-level inpainting where first the highlight pixels are replaced with the centroid of their neighborhood and finally a Gaussian blur is applied to smooth the contour of the interpolated region .
Temporal interpolation: With inpainting techniques, an actually correct reconstruction is not possible because the real color information for highlight pixels cannot be determined from the current frame. Hence, several approaches have been proposed that consider the temporal context [226, 253], i.e., try to find the corresponding position in preceding and subsequent frames and reuse the information from this position. This approach has a higher complexity than inpainting but it can be used to reconstruct the real information. However, this is not always possible, especially if there is too little or very abrupt motion or if the lighting conditions change too much. Moreover, it is not applicable to single frames or WCE (Wireless Capsule Endoscopy) frames.
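A minimal sketch of the two-phase scheme, assuming a saturation/intensity detection rule and a naive neighborhood-mean variant of spatial interpolation; the thresholds and the neighborhood radius are illustrative assumptions, not values from the cited works:

```python
import numpy as np

def detect_highlights(rgb, v_thresh=230, s_thresh=0.15):
    """Phase 1: flag pixels with high intensity and low saturation
    as specular highlights (HSV-style criteria computed from RGB)."""
    rgb = rgb.astype(np.float32)
    v = rgb.max(axis=2)
    s = np.where(v > 0, (v - rgb.min(axis=2)) / np.maximum(v, 1e-6), 0)
    return (v >= v_thresh) & (s <= s_thresh)

def inpaint_mean(rgb, mask, radius=3):
    """Phase 2: naive spatial interpolation that replaces each masked
    pixel with the mean of the non-masked pixels in its neighborhood."""
    out = rgb.copy()
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = rgb[y0:y1, x0:x1]
        valid = ~mask[y0:y1, x0:x1]
        if valid.any():
            out[y, x] = patch[valid].mean(axis=0)
    return out
```

Real inpainting methods (spectral deconvolution, confidence-based filling-in) are considerably more sophisticated; this sketch only illustrates the detect-then-correct structure.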
2.1.3 Image rectification
In surgical practice, a commonly used type of endoscope is the oblique-viewing endoscope (e.g., 30°). The advantage of this design is the possibility to easily change the viewing direction by rotating the endoscope around its axis. This enables a larger field of view. The problem is that the image rotates as well, resulting in a non-intuitive orientation of the body anatomy. The surgeon has to mentally unrotate the image in order not to lose orientation. The missing information about the image orientation is especially a problem in Natural Orifice Translumenal Endoscopic Surgery (NOTES), where a flexible endoscope is used (as opposed to rigid endoscopes like in laparoscopy). Some approaches have been proposed that use modified equipment to tackle this problem, e.g., an inertial sensor mounted on the endoscope tip , but hardware modifications always limit the practical applicability. Koppel et al.  propose an early vision-based solution. They track 2D image features to estimate the camera motion. Based on this estimation, the image is rectified, i.e., rotated such that the natural “up” direction is restored. Moll et al.  improve this approach by using the SURF descriptor (Speeded Up Robust Features), RANSAC (Random Sample Consensus) and a bag-of-visual-words approach based on Integrated Region Matching (IRM). A different approach  exploits the fact that endoscopic images often feature a “wedge mark”, a small spike outside the image circle that visually indicates the rotation. By detecting the position of this mark, the rotation angle can easily be computed.
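The wedge-mark approach reduces to simple geometry: once the mark and the center of the image circle have been detected, the rotation angle follows from the mark's polar angle relative to its known reference position. A sketch (the reference position at the top of the circle is an assumption of ours):

```python
import math

def rotation_angle_from_mark(mark_xy, center_xy, reference_angle_deg=90.0):
    """Estimate the image rotation from the detected wedge mark.

    The mark's polar angle around the image circle center is compared
    to its reference position (assumed here at the top, i.e., 90 deg).
    Returns the rotation in degrees, in [0, 360).
    """
    dx = mark_xy[0] - center_xy[0]
    dy = center_xy[1] - mark_xy[1]  # flip y: image rows grow downwards
    current = math.degrees(math.atan2(dy, dx))
    return (current - reference_angle_deg) % 360.0
```

Rectification then amounts to rotating the frame by the negative of this angle.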
2.1.4 Super resolution and comb structure removal
In diagnostic endoscopic procedures like colonoscopy, it is important to visualize very fine details - e.g., patterns on the colonic mucosa surface - to make the right diagnosis. Super-resolution  has been proposed as a means to increase the level of detail of HD videos and enable a better diagnosis - both manual and automatic [75, 76]. The idea of super resolution is to combine the high-frequency information of several successive low-resolution images into an image with higher resolution and more details. However, the authors come to the conclusion that their approach has a significant impact neither on the visual quality nor on the classification accuracy. Duda et al. propose to apply super-resolution  to WCE images. Their approach is very fast because it simply computes a weighted average of the upsampled and registered frames, and it is shown to perform better than bilinear interpolation.
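The weighted-average idea can be sketched as follows; the nearest-neighbor upsampling and uniform default weights are simplifications of ours, and the frames are assumed to have been registered beforehand:

```python
import numpy as np

def sr_weighted_average(frames, weights=None, scale=2):
    """Combine already registered low-resolution frames into one
    higher-resolution image by upsampling and weighted averaging.

    frames: list of 2D grayscale arrays of identical shape
    """
    if weights is None:
        weights = np.ones(len(frames))
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    acc = None
    for w, f in zip(weights, frames):
        # Nearest-neighbor upsampling stands in for a proper interpolation.
        up = np.kron(f.astype(np.float64), np.ones((scale, scale)))
        acc = w * up if acc is None else acc + w * up
    return acc
```

With real, slightly shifted frames the averaging accumulates complementary high-frequency information; with identical inputs it simply reproduces the upsampled frame.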
Rupp et al.  use super-resolution for a different task, namely to improve the calibration accuracy of flexible endoscopes (also called fiberscopes). This type of endoscope uses a bundle of coated glass fibers for light transmission, which produces image artifacts in the form of a comb-like pattern (see Fig. 2c). This comb structure hampers an exact calibration, but also many other analysis tasks like feature detection. Several methods for comb structure removal have been proposed, e.g., low pass filtering , adaptive reduction via spectral masks , or spatial barycentric or nearest neighbor interpolation between pixels containing fiberscopic content . These methods typically involve some kind of low pass filtering, meaning that edges and contours are blurred. The high-frequency components lost in this way can be restored by applying super-resolution algorithms.
2.2 Information filtering
Endoscopic videos typically contain a considerable number of frames that do not carry any relevant information and therefore are useless for content-based analysis. Hence, it is desirable to automatically detect such frames and sort them out, i.e., perform a temporal filtering. This can be regarded as a different kind of pre-processing, with the difference that not the pixels of individual frames are modified but the video as such is modified to the effect that frames are removed. This idea is closely related to video summarization (see Section 4.2.4), which can be seen as an intensification of frame filtering. In video summarization, the goal is to select especially informative frames or sequences and reduce the video to an even greater extent. Moreover, it is often the case that only parts of an image are non-informative, while other regions are indeed relevant for the analysis. To concentrate analysis on such selected regions, several image segmentation techniques have been proposed to perform a spatial filtering.
2.2.1 Frame filtering
In the literature, different criteria can be found for a frame to be considered informative or non-informative. The most important criterion is blurriness. According to , about 25 % of the frames of a typical colonoscopy video are blurry. Oh et al.  propose to use edge detection and compute the ratio of isolated pixels to connected pixels in the edge image to determine the blurriness. As this method depends very much on the selection of thresholds and further parameters, they propose a second approach using the discrete Fourier transform (DFT). Seven texture features are extracted from the gray-level co-occurrence matrix (GLCM) of the resulting frequency spectrum image and used for k-means clustering to differentiate between blurry and clear images. A similar approach by  uses the 2D discrete wavelet transform with a Haar wavelet kernel to obtain a set of approximation and detail coefficients. The L2 norm of the detail coefficients of the wavelet decomposition is used as feature vector for a Bayesian classifier. This method is nearly ten times faster than the DFT-based method and also has a higher accuracy. Rangseekajee and Phongsuphap [189] and Rungseekajee et al. [199], on the other hand, took up the edge-based approach for the domain of thoracoscopy and added adaptive thresholding as a pre-processing step to reduce the effect of lighting conditions. Besides, they claim that the Sobel edge detector is more appropriate for this task than the Canny edge detector because it detects fewer edges caused by irrelevant details due to noise. Another approach  uses inter-frame similarities and the concept of manifold learning for dimensionality reduction to cluster indistinct frames. Grega et al.  compared the different approaches for the domain of bronchoscopy and reported results for F-measure, sensitivity, specificity and accuracy of at least 87 %. According to them, the best-scoring alternative is a transformation-based approach using the discrete cosine transform (DCT).
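The wavelet-based blur feature can be sketched with a one-level Haar decomposition implemented directly in NumPy; the classifier itself (Bayesian in the cited work) is omitted here, and the normalization is our own simplification:

```python
import numpy as np

def haar_detail_norm(gray):
    """Blur feature: L2 norm of the one-level 2D Haar detail coefficients.

    Sharp images contain strong high-frequency content and yield a large
    norm; blurry images yield a small one.
    """
    g = gray.astype(np.float64)
    h, w = g.shape[0] // 2 * 2, g.shape[1] // 2 * 2
    g = g[:h, :w]  # crop to even dimensions
    a = g[0::2, 0::2]; b = g[0::2, 1::2]
    c = g[1::2, 0::2]; d = g[1::2, 1::2]
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return float(np.sqrt((lh ** 2 + hl ** 2 + hh ** 2).sum()))
```

In a frame-filtering pipeline, frames whose norm falls below a learned threshold would be discarded as blurry.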
Especially in the context of WCE (Wireless Capsule Endoscopy), the presence of intestinal juices is another criterion for non-informative images. Such images are characterized by bubbles that occlude the visualization field. Vilarino et al.  use Gabor filters to detect them. According to their studies, 23 % of all images can be discarded, meaning that the visualization time for the manual diagnostic assessment as well as the processing time for automatic diagnostic support can be considerably reduced. In , a similar approach is proposed that uses a Gauss Laguerre transform (GLT)-based multiresolution texture feature and introduces a second step that uses spatial segmentation of the bubble region to classify ambiguous frames.
A further type of non-informative frames are out-of-patient frames, i.e., frames from scenes that are recorded outside the patient's body. They often occur at the beginning or end of a procedure because it is not always possible to start and stop the recording exactly at the right time. The need for manual recording triggering in general deters many endoscopists from recording videos at all. To address this issue,  propose a system that automatically detects when a colonoscopic examination begins and ends. Every time a new procedure is detected, the system starts recording and writes a video file to disk until the end of the procedure is detected. The proposed approach uses simple color features that work well for the domain of colonoscopy. In , the authors extend their approach by various temporal features that take into account the amount of motion to avoid false positives.
2.2.2 Image segmentation
Instead of discarding complete frames, some authors try to identify non-informative regions in endoscopic images. In further processing steps, only the informative regions have to be considered, which speeds up processing and improves accuracy. Such a spatial filtering can also be used as a basis for temporal filtering by defining a threshold ratio between the size of informative and non-informative regions. A typical irrelevant region is the border area outside the characteristic circular content area of endoscopic images. It contains no useful information but only noise that impairs analysis as well as compression. In , an efficient domain-specific algorithm is proposed to detect the exact circle parameters. Bernal et al.  propose a model of appearance of non-informative lumen regions that can be discarded in a subsequent CAD (Computer Aided Diagnosis) component. Prasath et al.  also use image segmentation to differentiate between lumen and mucosa, but they use the result as a basis for 3D reconstruction. For WCE images,  apply morphological operations, fuzzy k-means clustering, a sigmoid function, statistical features, Gabor filters, the Fisher test, a neural network, and discriminators in the HSV color space to differentiate between informative and non-informative regions. In the context of CAD, image segmentation is also used as a basis for shape-based features. Here, the goal is to determine the boundaries of polyps, tumors and lesions [21, 83] or other anomalies like bleeding or ulceration in WCE images .
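As an illustration of spatial filtering, the circular content area can be approximated by a crude bounding-box estimate over non-dark pixels; the exact domain-specific circle detection algorithm cited above is considerably more elaborate, and the darkness threshold here is an assumption:

```python
import numpy as np

def estimate_circle(gray, dark_thresh=10):
    """Rough estimate of the circular content area of an endoscopic frame.

    Pixels brighter than a darkness threshold are assumed to belong to
    the content circle; center and radius are derived from their
    bounding box.
    """
    ys, xs = np.nonzero(gray > dark_thresh)
    cy, cx = (ys.min() + ys.max()) / 2.0, (xs.min() + xs.max()) / 2.0
    r = max(ys.max() - ys.min(), xs.max() - xs.min()) / 2.0
    return cy, cx, r

def content_mask(shape, cy, cx, r):
    """Boolean mask that is True inside the circular content area."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
```

Subsequent analysis steps would then only consider pixels where the mask is True, excluding the noisy black border.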
In the case of surgical procedures, the most frequently addressed target of image segmentation are surgical instruments. They can be tracked in order to understand the surgical workflow or assess the skills of the surgeon. For more details on instrument detection and tracking please refer to Section 3.2.6. Few approaches have been proposed for segmentation of anatomical structures. Chhatkuli et al.  show how segmentation of the uterus in gynecological laparoscopy using color and texture features improves the performance of Shape-from-Shading 3D reconstruction and feature matching. Bilodeau et al.  combine graph-based segmentation and multistage region merging to determine the boundary of the operated disc cavity in thoracic discectomy, which is a useful depth cue for 3D reconstruction and very important in this surgery type to correctly estimate the distance to the spinal cord.
3 Real-time support at procedure time
The use case of endoscopic video analysis that has been studied most extensively in the literature is to directly support the physician during the procedure in various ways. The application scenarios can be categorized into (1) Diagnostic Decision Support and (2) Computer Integrated Surgery, which includes Augmented Reality as well as Robot-Assisted Surgery.
3.1 Diagnostic decision support
CAD systems typically use pattern recognition and machine learning algorithms to identify abnormal structures. After various pre-processing steps, visual features are extracted and fed into a classifier, which then delivers the diagnosis in form of a classification result. The classifier has to be trained in advance with a preferably large number of labeled examples. The most frequently used classifiers are Support Vector Machines (SVM) and Neural Networks. Often, dimensionality reduction techniques like Principal Component Analysis (PCA) are used, e.g., in . Numerous alternatives for the selection of features and classifiers have been proposed. Some approaches exploit the characteristic shape of polyps, e.g., . Considering the shape also enables unsupervised methods, which require no training, e.g., the extraction of geometric information from segmented images  or simple counting of regions of a segmented image . Many approaches use texture features, e.g., based on wavelets or local binary patterns (LBP), e.g., [6, 86, 118, 257]. In many cases, color information adds a lot of additional information, so color-texture combinations are also common . Simple color and position features have also been shown to perform reasonably well despite their low complexity compared to more sophisticated approaches .
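The generic pipeline described above (extract features, train a classifier, predict a diagnosis) can be sketched as follows; the toy features and the nearest-centroid classifier are stand-ins of ours for the wavelet/LBP features and the SVM or neural-network classifiers used in practice:

```python
import numpy as np

def color_texture_features(rgb):
    """Toy feature vector: per-channel mean/std (color) plus the variance
    of horizontal gradients (a crude texture cue)."""
    f = rgb.astype(np.float64)
    grads = np.diff(f.mean(axis=2), axis=1)
    return np.concatenate([f.mean(axis=(0, 1)), f.std(axis=(0, 1)),
                           [grads.var()]])

class NearestCentroid:
    """Minimal stand-in for an SVM: assigns each sample to the class
    with the closest mean feature vector."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

Training data would consist of labeled image patches (e.g., normal mucosa vs. polyp); the trained model then classifies patches of unseen frames.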
Most publications concentrate on one specific feature, but there are also attempts to use a mix of features. As an example, Zheng et al. propose a Bayesian fusion algorithm  to combine various feature extraction methods, which provide different cues for abnormality. A very recent contribution by Riegler et al.  proposes to employ Multimedia methods for disease detection and shows promising preliminary results. A detailed survey of gastro-intestinal CAD systems can be found in .
3.2 Computer integrated surgery
In the case of surgical endoscopy, we can differentiate between “passive” support in the form of Augmented Reality and “active” support in the form of robotic assistance.
In the former case, supplemental information from other imaging modalities (MRI, CT, PET etc.) is displayed to improve navigation, enhance the viewing conditions or provide context-aware assistance. In the latter case, surgical robots are used to improve surgical precision for complex and delicate tasks. While early systems acted as direct extenders of the surgeon's movements, recent research strives for more and more actions being carried out by the robot autonomously. Both cases pose a number of typical Computer Vision problems (object detection and tracking, reconstruction, registration etc. in order to “understand” the surgical scene), hence video analysis is an essential component.
All these ideas and techniques can be subsumed under the concept of Computer Integrated Surgery (CIS), sometimes also referred to as surgical CAD (due to the popularity of the term CAD) . The underlying idea is to integrate all phases of treatment with the support of computer systems, and in particular medical imaging. This includes intra-operative endoscopic imaging as well as pre-operative diagnostic imaging modalities like CT (X-ray Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography) or sonography (“ultrasound”). These modalities are used for diagnosis and planning of the actual procedure and often are essential for the precise navigation to the surgical site . This is especially the case for surgeries that require a very high accuracy due to the risk of damaging healthy tissue (e.g., endonasal skull base surgery ). Navigation support is also important for diagnostic procedures like bronchoscopy (examination of the lung airways), where the flexible endoscope has to traverse a complex tree structure with many branches to find the biopsy site that has been identified prior to the examination [48, 79]. These pre-operative images or volumetric models are aligned with general information about human anatomy (anatomy atlases) in order to create a patient-specific model that enables a comprehensive and detailed procedure planning. This pre-operative model is then registered to the intra-operative video images in real-time to guide the surgeon by overlaying additional information, performing certain tasks autonomously or increasing surgical safety by imposing constraints on surgical actions that could harm the patient. For such assistance, the system has to monitor the progress of the procedure and, in case of complications, automatically adapt/update the surgical plan.
3.2.1 Augmented reality
AR is especially helpful in the field of surgical oncology , i.e., the surgical management of cancer. The exact position and size of the tumor is often not directly visible in the endoscopic images, hence an accurate visualization helps to choose an optimal dissection plane that minimizes damage of healthy tissue. Such systems are often called “Surgical Navigation Systems” because they support the navigation to the surgical site. They have been proposed for various procedures, e.g., prostatectomy (removal of the prostate gland) [82, 210]. Mirota et al.  provide a comprehensive overview of vision-based navigation in image-guided interventions for various procedure types.
A further possible application of AR is to improve the viewing conditions of the surgeon. This can be done by expanding the restricted viewport and visualizing the surrounding area using image stitching methods [19, 153, 241], potentially even in 3D . Similar techniques have also been investigated for the purpose of video summarization, e.g., to obtain a condensed representation of an examination for a medical record (see Section 4.2.4). Other approaches even provide an alternative point of view with improved visualization. In , a “Virtual Mirror” is proposed that enables the surgeon to inspect the virtual components (e.g., a volumetric model of the liver from a pre-operative CT scan) from different perspectives in order to understand complex structures (e.g., blood vessel trees) and improve depth perception as well as navigational tasks. Fuchs et al.  propose a prototypical system that restores the physician's natural viewpoint and visualizes it via a See-Through HMD. The underlying idea is to free the surgeon from the inherent technical limitations of the imaging system and enable a more “direct” view on the patient, similar to that in traditional open surgery. The surgeon can change the viewing perspective by moving their head instead of moving the laparoscope.
3.2.2 Context awareness
Several medical studies prove the clinical applicability of Augmented Reality in various endoscopic operation types, e.g., nephrectomy , prostatectomy , laparoscopic gastrointestinal procedures  or splenectomy . However - despite all the potential benefits - too much additional information may distract surgeons from the actual task . The goal should be to automatically select the appropriate assistance for the current state of the procedure in a context-aware manner, providing hints or situation-specific additional information. Such assistance can also go beyond visualizations of the pre-operative model, e.g., it can support decision making by finding situations similar to the current one and showing how other surgeons handled a similar exceptional situation . Another use case is to simulate the effect of a surgical step without actually executing it . Speidel et al.  and  propose an AR system for warnings in case of risk situations. The system alerts the surgeon if an instrument comes too close to a risk structure (ductus cysticus or arteria cystica in the case of cholecystectomy).
The basis for such context-aware assistance is the semantic understanding of the current situation. The field of Surgical Process Modeling (SPM) is concerned with the definition of appropriate workflow models [110, 166]. The main challenge is to formalize the existing expert knowledge, be it formal knowledge from textbooks or experience-based knowledge that also considers variations and deviations from theory. First, typical surgical situations or tasks have to be defined. Some approaches focus on fine-granular gestures such as “insert a needle”, “grab a needle”, “position a needle” , or more generic actions like tool-tissue interaction in general . The detection of such “low-level” tasks can also be used as a basis for the assessment of surgical skills (see Section 4.1.1). On a higher abstraction level, a procedure is subdivided into pre-defined operation phases that describe the typical sequence of a surgery. Existing approaches focus on well-standardized procedures, in most cases cholecystectomy (removal of the gallbladder) [25, 95, 99, 109, 174], which can be broken down into distinct phases very well. Surgical workflow understanding is also of particular interest for post-procedural applications, especially for temporal video segmentation and summarization (see Section 4.2.3).
A very discriminative feature to distinguish between phases is the presence of operation instruments, which can be detected by video analysis (see Section 3.2.6 for more information). However, metadata obtained from video analysis is only one of many possible inputs for the surgical situation understanding systems proposed in the literature. Often, various additional sensor data are used, e.g., the weight of the irrigation and suction bags, the intra-abdominal CO2 pressure and the inclination of the surgical table  or a coagulation audio signal . The focus in this research area is not on how to obtain the required information from the video, but on how to map the available signals (e.g., binary information about instrument presence) to the corresponding surgical phase. Therefore, instrument detection is often achieved by hardware modifications like RFID tags or color markers or even by manual annotations from an observer .
Several authors propose to use statistical methods and machine learning to recognize the current situation. The most popular method is the Hidden Markov Model (HMM) [25, 111, 174, 196]. An HMM is a directed graph that defines possible states and transition probabilities and is built from training data. Another frequently used method is Dynamic Time Warping (DTW) , which can also be used without explicit pre-defined models by temporally aligning surgeries of the same type . Besides HMM and DTW, alternative methods like Random Forests have also been proposed . A fundamentally different approach is to use formal knowledge representation methods like ontologies that use rules and logical reasoning to derive the current state. For example, Katic et al. use Description Logic in the OWL standard (Web Ontology Language) [94, 95, 214].
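To make the HMM idea concrete, the following sketch decodes a toy phase sequence from a binary instrument-presence signal using the Viterbi algorithm. All phases, probabilities and observations are invented for illustration; none are taken from the cited systems.

```python
import numpy as np

# Toy HMM for surgical phase recognition: hidden states are phases,
# observations are a binary instrument-presence signal (hypothetical values).
phases = ["preparation", "dissection", "closing"]
# Transition matrix: phases mostly persist, occasionally advance.
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
pi = np.array([1.0, 0.0, 0.0])          # a surgery always starts in preparation
# Emission probabilities P(observation | phase) for two observation symbols:
# 0 = "no instrument visible", 1 = "instrument visible".
B = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.6, 0.4]])

def viterbi(obs):
    """Most likely phase sequence for an observation sequence (log domain)."""
    logA, logB, logpi = np.log(A + 1e-12), np.log(B + 1e-12), np.log(pi + 1e-12)
    T, N = len(obs), len(phases)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA   # scores[i, j]: i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return [phases[i] for i in reversed(path)]

# The instrument appears in the middle of the sequence, then disappears.
print(viterbi([0, 0, 1, 1, 1, 0, 0]))
```

Real systems train these matrices from annotated procedures and use much richer observation vectors than a single binary signal.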
3.2.3 Robot-assisted surgery
The majority of publications dealing with endoscopic video analysis have their roots in the robotics community and aim at integrating robotic components into the surgical workflow. Medical robots are not intended to replace human surgeons, but to extend their capabilities and improve efficiency and effectiveness by overcoming certain limitations of traditional laparoscopic surgery. They considerably enhance dexterity, precision and repeatability by using mechanical wrists that are controlled by a microprocessor. Motion can be stabilized by filtering hand tremor and scaled for micro-scale tasks which are not possible manually . Robotic surgery systems have been in practical use for several years, especially for very delicate procedures like prostatectomy . However, current systems like the daVinci system  are pure “surgeon extenders”, i.e., the surgeon directly controls the slave robot via a master console. In this telemanipulation scenario, the robot has no autonomy and only “enhances” the surgeon's movements, e.g., by hand tremor filtering and motion scaling. An overview of popular surgical robot systems can be found in  and . State-of-the-art research tries to extend the robot's autonomy, which requires numerous image/video analysis and Computer Vision techniques. Robotic systems inherently provide additional data that can be used to facilitate video analysis, e.g., kinematic motion data, information about instrument usage and stereoscopic images that allow for an easier 3D reconstruction of the scene.
An important application with a moderate degree of autonomy is the automation of endoscope holding [36, 254, 256, 278]. This task is usually carried out by an assistant, but during lengthy procedures humans suffer from fatigue, hand tremor etc., so automation of this task is greatly appreciated by surgeons. The endoscope should always point at the current area of interest, which is typically characterized by the presence of the instrument tips. Hence, instrument positions have to be detected and the robot arm has to be moved such that the endoscope is adequately positioned without colliding with tissue, and the right zoom level is chosen . Another application is automatic safety checks, e.g., in the form of active constraints, also known as virtual fixtures. A virtual fixture [140, 177] is a constraint that reduces the precision requirements. It can be used to define forbidden regions or a safety margin around critical anatomical structures, which must not be damaged, in order to prevent erroneous moves, or to simplify the execution of a task by “guiding” the instrument motion along a safe corridor . The long-term vision is to enable commonly occurring tasks like suturing to be executed autonomously upon a high-level command of the surgeon (e.g., pointing at the target position with a laser pointer) [105, 175, 220]. The main challenge is to safely move the instrument to the desired 3D position without harming the patient, a process referred to as visual servoing. Such assistance requires a very detailed understanding of the surgical scene, including a precise 3D model of the anatomy, registered to the pre-operative model and also considering tissue deformations, as well as the exact location of relevant anatomical objects and instruments. Surgical task models as discussed above are also of great importance for this scenario. A survey of recent advances in the field of autonomous and semi-autonomous actions carried out by robotic systems is given in .
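Once instrument and risk structure are localized in a common coordinate frame, a forbidden-region virtual fixture reduces to a simple geometric check. The sketch below assumes a spherical safety margin; the structure position, margin and tip coordinates are hypothetical values, not data from any cited system.

```python
import numpy as np

# Minimal sketch of a "forbidden region" virtual fixture: a spherical safety
# margin around a critical structure. All coordinates and radii are made up
# (millimetres in the camera frame).
RISK_CENTER = np.array([10.0, -5.0, 80.0])   # e.g., a localized risk structure
SAFETY_MARGIN = 15.0                          # no-go radius around it

def check_fixture(tip_position):
    """Return (violated, distance) for a tracked instrument tip position."""
    d = float(np.linalg.norm(np.asarray(tip_position, float) - RISK_CENTER))
    return d < SAFETY_MARGIN, d

violated, d = check_fixture([12.0, -4.0, 90.0])
print(violated, round(d, 1))   # the tip is inside the safety margin
```

A real system would run this check continuously against tracked 3D positions and either alert the surgeon or constrain the robot's admissible motion.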
3.2.4 3D reconstruction
The reconstruction of the local geometry of the surgical site is an essential requirement for Robot-Assisted Surgery. It produces a three-dimensional model of the anatomy in which the instruments are positioned. Many Augmented Reality applications also require a 3D model for registration with volumetric pre-operative models (e.g., from CT scans). For diagnostic procedures, the analysis of the 3D shape of suspicious objects can be more expressive than the 2D shape. The fundamental challenge of 3D reconstruction is to map the available 2D image coordinates to 3D world coordinates.
In the context of Robot-Assisted Surgery, usually stereoscopic endoscopes are used to improve the depth perception of the surgeon. The stereo images also facilitate correspondence-based 3D reconstruction [22, 194, 225]. The challenge is to identify matching image primitives (e.g., feature points) between the left and right image, which can then be used to calculate the depth information by triangulation. However, this task is still far from trivial because of various aggravating factors like homogeneous surfaces with few distinct visual features, occlusions, specular reflections, image perturbations (smoke, blood etc.) and tissue deformations. In traditional laparoscopy, which is not supported by a robot, the endoscopes used are usually monoscopic. In this case, Structure-from-Motion (SfM) methods can be applied [44, 81, 138]. For SfM, the different views are obtained by moving the camera to a different position. Camera motion estimation is required to determine the displacement of the camera position, which is necessary for the triangulation; in the stereoscopic case, the displacement is inherently known. A related method that is often used in the robotics domain is SLAM (Simultaneous Localization And Mapping), which iteratively constructs a map of the unknown environment and at the same time keeps track of the camera location. Traditional SLAM assumes a rigid environment, which does not hold for the endoscopic case. Therefore, attempts have been made to extend SLAM with respect to tissue deformations [67, 154, 242]. A further common problem for both SfM and SLAM is the often scarce camera motion. An alternative approach that deals with single images and therefore does not depend on the problem of finding correspondences is Shape-from-Shading (SfS), where the depth information is derived from the shading of the surface of anatomic objects.
However, again some basic assumptions of generic SfS do not hold for endoscopic images, hence adaptations are necessary to obtain acceptable results [43, 171, 269]. The generalization of SfS to multiple light sources is referred to as photometric stereo and has also been proposed for 3D reconstruction of endoscopic videos [42, 178]. This technique requires hardware modifications to obtain different illumination conditions. Other active methods requiring hardware modifications that have been proposed are Structured Light  and Time-of-Flight . The former projects a known light pattern on the surface and reconstructs depth information from the deformation of the pattern. The latter uses non-visible near-infrared light and measures the time until it is reflected back.
All these approaches have their drawbacks, hence several attempts have been made to improve the performance by fusing multiple depth cues, e.g., stereoscopic video and SfS [127, 249], SfM and SfS [139, 239], or by incorporating patient specific shape priors extracted from pre-operative images . However, 3D reconstruction still remains a very challenging task in the endoscopic domain. Several recent surveys about this topic are available [63, 69, 136, 176].
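Common to the correspondence-based methods above is that, once a match between views is established, depth follows from triangulation. For a rectified stereo endoscope this reduces to the classic disparity relation Z = f·B/d. The sketch below uses illustrative numbers, not real endoscope calibration data.

```python
# Depth from a rectified stereo pair by triangulation: for a matched point
# with horizontal disparity d (in pixels), depth Z = f * B / d, where f is
# the focal length in pixels and B the baseline between the two lenses.
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    if disparity_px <= 0:
        raise ValueError("point at infinity or bad correspondence")
    return f_px * baseline_mm / disparity_px

# Hypothetical values: 4 mm stereo baseline, 700 px focal length,
# 28 px disparity for a matched feature point.
print(depth_from_disparity(700.0, 4.0, 28.0))   # -> 100.0 (mm)
```

The hard part in practice is not this formula but finding reliable correspondences under the aggravating factors listed above.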
3.2.5 Image registration and tissue deformation tracking
The process of bringing two images of the same scene together into one common coordinate system is referred to as image registration. This includes the computation of a transformation model that describes how one image has to be modified to obtain the second image. This is a typical optimization problem that can be solved with algorithms like Gauss-Newton. The classical application of medical image registration is to align images from different modalities, e.g., pre-operative 3D CT images and intra-operative 2D X-ray projection images . We can distinguish between different types of registration, depending on the dimensionality of the underlying images (2D slice, 3D volumetric, video), as reviewed in .
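As a minimal illustration of computing such a transformation model, the sketch below solves the rigid 2D case in closed form (Kabsch algorithm) from known point correspondences; iterative schemes like the Gauss-Newton method mentioned above generalize this to more complex transformation models and unknown correspondences.

```python
import numpy as np

# Least-squares rigid registration (Kabsch algorithm) between two 2D point
# sets with known correspondences: find R, t minimizing
# sum_i || R @ p_i + t - q_i ||^2.
def register_rigid(P, Q):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Determinant correction avoids returning a reflection.
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate a triangle by 30 degrees and shift it.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
Q = P @ R_true.T + np.array([3.0, -1.0])
R, t = register_rigid(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [3.0, -1.0]))
```

Non-rigid tissue deformation, discussed below, is exactly what breaks this closed-form model and forces more elaborate deformable registration schemes.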
For Augmented Reality scenarios where the endoscopic video stream is used as intra-operative modality, various use cases have been addressed in the literature, e.g., registering 3D CT models of the lungs with bronchoscopic videos [79, 131]. Similar examples relying on registration are sinus surgery (nose)  or skull base surgery . In terms of laparoscopic surgery, interesting contributions are 3D-to-3D registration of the liver during laparoscopic surgery  and coating of the pre-operative 3D model with texture information from the live view . An important prerequisite for registration is an accurate camera calibration (see Section 2.1.1) to obtain correct geometric correspondences.
A topic closely related to image registration is object tracking, i.e., following the motion of a region of interest over time, either in two- or three-dimensional space. It can be seen as intra-modality (as opposed to inter-modality) registration, i.e., successive frames of a video are registered. In case of camera motion between frames, the transformation can be described by translation, rotation and scaling. Estimating the camera motion is an important technique for 3D reconstruction (especially SfM and SLAM) and for generating panoramic images. However, in the endoscopic video domain, the transformation is usually much more complex. The main reason is the fact that soft tissue is not rigid but deforms non-linearly, requiring adaptations of many established methods which assume a rigid environment (e.g., SLAM). Tissue deformation occurs for three main reasons: (1) organ shift due to insufflation (mainly relevant for inter-modality registration), (2) periodic motion caused by respiratory and cardiac cycles as well as muscular contraction and (3) tool-tissue interaction. Tracking the tissue deformation is a challenging research topic that has gained considerable attention in recent years. It is particularly important for updating the reconstructed 3D model that has to be registered with the static pre-operative model for Augmented Reality applications and Robot-Assisted Surgery, but also for tracking anatomical modifications for surgical workflow understanding. Periodic motion can be well described by a model, e.g., a Fourier series . Estimating the periodic motion of the heart is of particular interest for robot-assisted motion compensation and virtual stabilization .
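As an illustration of modeling periodic motion with a Fourier series, the sketch below fits a truncated series to a synthetic breathing-like displacement signal by linear least squares. The frequency is assumed known here; a real system would have to estimate it from the data as well.

```python
import numpy as np

# Fit a truncated Fourier series (DC term plus n harmonics) to a periodic
# displacement signal by linear least squares.
def fit_fourier(t, y, freq, n_harmonics=2):
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * freq * t),
                 np.sin(2 * np.pi * k * freq * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A, coeffs

# Synthetic respiration-like signal at 0.25 Hz with a second harmonic.
t = np.linspace(0, 12, 600)
y = 2.0 + 1.5 * np.sin(2 * np.pi * 0.25 * t) + 0.4 * np.cos(2 * np.pi * 0.5 * t)
A, c = fit_fourier(t, y, freq=0.25)
residual = float(np.max(np.abs(A @ c - y)))
print(residual < 1e-8)   # the model space contains the signal exactly
```

Such a fitted model can then be extrapolated one cycle ahead, which is the basis for predictive motion compensation.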
The matching strategy typically used by feature-based approaches is “tracking-by-detection”, as opposed to recursive tracking methods such as Lucas-Kanade, which are based on optical flow. Recursive methods search locally for the best match of an image patch, while tracking-by-detection extracts features from each frame and then compares them to find the best matches. While recursive methods work well on small deformations, they have problems with illumination changes and reflections and suffer from error propagation. In contrast, tracking-by-detection in combination with a feature-based region representation is fairly robust against large deformations and occlusions due to the abstracted feature space. A comparative evaluation of state-of-the-art feature-matching algorithms for endoscopic images has been carried out in .
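The recursive, optical-flow-based side of this comparison can be illustrated with a single-patch Lucas-Kanade step, which solves the 2x2 normal equations built from image gradients for a pure-translation displacement. The data below is synthetic; real endoscopic frames would need pyramidal and robustified extensions.

```python
import numpy as np

# Single-patch Lucas-Kanade step: solve
#   [sum Ix^2   sum IxIy] [u]   [-sum IxIt]
#   [sum IxIy   sum Iy^2] [v] = [-sum IyIt]
# for the displacement (u, v) between two frames (translation-only model).
def lk_flow(prev, curr):
    Ix = np.gradient(prev, axis=1)
    Iy = np.gradient(prev, axis=0)
    It = curr - prev
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)

# Smooth synthetic patch shifted by a sub-pixel amount of +0.3 px in x.
x, y = np.meshgrid(np.arange(32, dtype=float), np.arange(32, dtype=float))
prev = np.sin(x / 5.0) + np.cos(y / 7.0)
curr = np.sin((x - 0.3) / 5.0) + np.cos(y / 7.0)
v = lk_flow(prev, curr)
print(bool(np.all(np.abs(v - np.array([0.3, 0.0])) < 0.05)))
```

The error-propagation problem mentioned above arises because each frame's estimate starts from the previous one, so small errors like the residual here accumulate over long sequences.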
However, although promising advances have been made recently [155, 187, 188, 205, 231, 268], deforming tissue tracking is a very hard research challenge that still requires a lot of further work. Endoscopic videos feature many domain-induced problems like the scarcity of distinctive landmarks: homogeneous surfaces and indistinct texture make it hard to find good points to track. Moreover, occlusions, specular reflections, cauterization smoke, blood and fluids lead to tracking points being lost. Hence, one of the main problems is robust long-term tracking. Last but not least, the real-time requirement poses a demanding challenge. A survey about three-dimensional tissue deformation recovery and tracking is available in .
3.2.6 Instrument detection and tracking
Besides anatomical objects, surgical instruments are the most important objects of interest in the surgical site. Therefore, a key requirement for scene understanding, Robot-Assisted Surgery and many other use cases is to detect their presence and track their position as well as their motion trajectories. The precision requirements differ with the application scenario. For surgical phase recognition, it is often sufficient to know which instruments are present at all. In this case, equipping the instruments with a cheap RFID tag to detect insertion and withdrawal already suffices . In terms of visual analysis, a classification of the full frame can be carried out to detect the presence of instruments . The next level is to determine the position of the instrument in the two-dimensional image, or more specifically the position of the instrument tip, which is the main characteristic for differentiating between types of instruments . For instrument tracking too, the position of the tip is usually used as the reference point.
Many approaches proposed in the literature use modified equipment for localizing instruments. The most common modification is color markers on the shaft that can easily be detected and segmented [240, 263]. To distinguish between multiple instruments, different colors can be used . Nageotte et al.  use a pattern of 12 black spots on a white surface. This modification also enables pose estimation to some extent. Krupa et al.  use a laser pointing device on the tip of the endoscope to derive the relative orientation of the instrument with respect to the organ. This approach even works if the instrument is outside the current field of view. Besides the fact that these methods have limited applicability to arbitrary videos from an archive, another disadvantage of such modifications is that biocompatibility and sterilizability have to be ensured, as they have direct contact with human tissue. Internal kinematic data from a robot can also be used to estimate the position of instruments, but is generally not accurate enough, especially when force is applied to a surface . However, kinematics can be useful as a supplementary source of information to get a coarse estimate that is refined by visual analysis .
Purely vision-based approaches without any hardware modification have also been proposed. Doignon et al.  perform a color segmentation mainly using the saturation attribute to differentiate the achromatic instrument from the background. Voros et al.  define a cylindrical shape model and use geometric information to detect the edges of the tool shaft and the symmetry axis. A similar approach using the Hough transform to detect the straight lines of the instrument shaft is presented in . These approaches face a number of challenges like homogeneous color distribution, indistinct edges, occlusions, blurriness, specular reflections and artifacts like smoke, blood or liquids. Moreover, methods have to deal with multiple instruments, as surgical procedures rarely rely on one single instrument.
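A minimal sketch of the Hough-transform idea for shaft detection: each edge point votes for all lines (rho, theta) passing through it, and the dominant straight line emerges as the accumulator peak. The edge points below are synthetic; a real pipeline would first run edge detection on the endoscopic frame.

```python
import numpy as np

# Minimal Hough transform for locating the dominant straight line (e.g., an
# instrument shaft edge) in a set of binary edge points, without OpenCV.
def hough_peak(edge_points, thetas_deg):
    thetas = np.deg2rad(np.array(list(thetas_deg), dtype=float))
    acc = {}
    for x, y in edge_points:
        for t_deg, t in zip(thetas_deg, thetas):
            rho = int(round(x * np.cos(t) + y * np.sin(t)))
            acc[(rho, t_deg)] = acc.get((rho, t_deg), 0) + 1
    return max(acc, key=acc.get)            # (rho, theta) with the most votes

# Synthetic shaft edge: the vertical line x = 20 (theta = 0, rho = 20),
# plus a few scattered noise points that should be outvoted.
pts = [(20, y) for y in range(50)] + [(3, 7), (40, 12), (11, 33)]
rho, theta = hough_peak(pts, thetas_deg=range(0, 180, 5))
print(rho, theta)
```

The robustness against scattered noise points seen here is exactly why Hough-style voting is attractive in cluttered surgical scenes, although specular reflections can still produce spurious collinear edge responses.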
The final stage of instrument identification and one of the current research challenges is to determine the exact position of the tip in three-dimensional space and track its motion trajectories [5, 33]. In this context, the estimation of the instrument pose is also of particular importance . This knowledge is necessary for use cases like visual servoing, context-aware alerting and skills assessment. Geometric constraints can be exploited to facilitate tracking, particularly the motion constraint imposed by the stationary incision point. Knowledge about this point restricts the search area for region seeds for color-based segmentation  and enables modeling of possible instrument motion . Recently, a number of advanced and very sophisticated approaches for 3D instrument tracking and pose estimation have been proposed, e.g., training appearance models of individual parts of an instrument (shaft, wrist and finger) using color and texture features , learning fine-scaled natural features in the form of particular 3D landmarks on instrument tips using Randomized Trees , learning the shape of instruments using HOG descriptors and a Latent SVM to probabilistically track them , and approaches to determine semantic attributes like “open/closed, stained with blood or not, state of cauterizing tools” etc. .
4 Post-procedural applications
In recent years, it has become increasingly common to capture videos of endoscopic procedures. We experience a trend towards comprehensive documentation, where entire procedures are recorded, stored and archived for documentation, retrospective analysis, quality assurance, education, training and other purposes. The question arises of how to handle this huge corpus of data. The emerging video archives pose a challenge to management and retrieval systems. The Multimedia community has proposed several methods to enable summarization, retrieval and management of such data, but this research topic is clearly understudied compared to the real-time assistance scenario. This section gives an overview of these methods, which can be regarded as a kind of post-processing. They do not have the requirement to work in real-time because they operate on captured video data and can be executed offline. Nevertheless, performance is important in order to keep up with the constantly growing data volume.
4.1 Quality assessment
An important application for post-procedural usage of endoscopic videos that has been studied intensively is quality assessment of individual surgeon skills and of entire procedures. Quality control of endoscopic procedures is a very important, but also very difficult issue. In current practice, an experienced surgeon has to review videos of surgeries to subjectively assess the skills of the surgeon and the quality of the procedure. This is a very time-consuming and cumbersome task that cannot be carried out extensively due to the high effort. Hence, it is desirable to provide automatic methods for an objective quality assessment that can be applied to each and every procedure and has the potential to strongly improve surgical quality by pointing out weak points and suggesting potentials for improvement. Recent works even assess quality in real-time during the procedure to provide immediate feedback to the physician and thus improve their performance .
4.1.1 Surgical skills assessment
Several attempts have been made to assess the psychomotor skills of surgeons by analyzing how they perform individual surgical tasks like cutting, grasping, clipping, drilling or suturing. This requires a decomposition of a procedure into individual atomic surgical gestures (often called surgemes), which can also be seen as a kind of temporal segmentation. Similar to surgical workflow understanding (see Section 3.2.2), Hidden Markov Models (HMM) are typically used to model and detect the different tasks. A reference model is trained by an expert and serves as the basis for the skills assessment. The main parameter for the assessment is the motion trajectory of the instruments. Various metrics like path length, motion smoothness, average acceleration etc. have been proposed as quality indicators.
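Such trajectory metrics are straightforward to compute once the tip positions are available. The sketch below derives path length, mean speed and a jerk-based smoothness measure from a sampled 3D trajectory using finite differences; the exact metric definitions vary across the cited works.

```python
import numpy as np

# Simple trajectory-based quality metrics for a sampled 3D instrument-tip
# path: path length, mean speed, and mean squared jerk as a (rough)
# smoothness indicator. dt is the sampling interval in seconds.
def motion_metrics(traj, dt):
    traj = np.asarray(traj, float)
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    vel = np.diff(traj, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    jerk = np.diff(acc, axis=0) / dt
    return {
        "path_length": float(steps.sum()),
        "mean_speed": float(np.linalg.norm(vel, axis=1).mean()),
        "mean_squared_jerk": float((np.linalg.norm(jerk, axis=1) ** 2).mean()),
    }

# A straight, constant-velocity move: shortest possible path, zero jerk.
smooth = [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0]]
m = motion_metrics(smooth, dt=0.1)
print(m["path_length"], m["mean_squared_jerk"])
```

A jittery trajectory covering the same endpoints would show a longer path and a much higher jerk value, which is what separates expert from novice motion profiles in these assessment schemes.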
Most classic approaches use non-standard equipment to obtain this motion data in a straightforward manner, e.g., inherently available kinematic data from a simulator or surgical robot , trajectory data from an external optical tracker  and/or haptic data from an additional three-axis force/torque sensor . However, such simple but expensive data acquisition is not suitable for a comprehensive quality assessment on a daily basis, but rather for special applications like training, simulation and Robot-Assisted Surgery.
The more practical alternative for an extensive evaluation of surgeon skills is to extract the motion data directly from the video data with content-based analysis methods . The advantage of this approach is that it can be applied to any video without any hardware modification. Furthermore, videos can provide contextual information with regard to the anatomical structures and instruments involved that can act as additional hints. To obtain the required motion data, instruments have to be detected and tracked (see Section 3.2.6), preferably in the three-dimensional space. Some contributions concerning this matter have recently been published, e.g., [89, 172, 276].
4.1.2 Assessment of screening quality
The skills assessment methods discussed above are mainly applied to surgeries and focus on the instrument handling. In diagnostic procedures, usually no instruments are used, so other quality criteria have to be defined. Hwang et al. [84, 168] define objective metrics that characterize the quality of a colonoscopy screening, e.g., the duration of the withdrawal phase (based on temporal video segmentation), the ratio of non-informative frames, the number of camera motion changes and the ratio of frames showing close inspections of the colon wall to frames showing a global lumen view. This framework is extended in several further publications. In , a “quadrant coverage histogram” is introduced that determines to what extent all sides of the mucosa have been inspected. A similar approach is proposed in  for the domain of cystoscopy. Here the idea is to determine to what extent the inner surface of the bladder has been inspected and whether parts have been missed. A further quality criterion for colonoscopies is the presence of stool that occludes the mucosa and thus may lead to missed polyps. Color features are used to measure the ratio of “stool images” and consequently the quality of the preceding bowel cleansing [85, 164]. On the other hand, the presence of frames showing the appendiceal orifice indicates a high-quality examination, because it means that the examination has been performed thoroughly . Also the occurrence of retroflexion, which is a special endoscope maneuver to discover polyps that are hidden behind deep folds in the peri-anal mucosa, is a quality indicator and can be detected with a method proposed in . In , therapeutic actions are detected and also considered for quality assessment metrics.
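Two of these indicators can be sketched in a few lines, assuming upstream classifiers already label frames as informative or blurry and assign the inspected mucosa quadrants; all labels below are made up for illustration.

```python
# Toy versions of two colonoscopy quality indicators: the ratio of
# non-informative (e.g., blurry) frames, and a quadrant coverage score that
# checks whether all four sides of the mucosa were inspected.
def noninformative_ratio(frame_labels):
    return frame_labels.count("blurry") / len(frame_labels)

def quadrant_coverage(inspected_quadrants):
    return len(set(inspected_quadrants)) / 4.0

frames = ["clear", "blurry", "clear", "clear", "blurry", "clear"]
quads = ["up", "left", "up", "down"]        # "right" was never inspected
print(round(noninformative_ratio(frames), 3), quadrant_coverage(quads))
```

The published systems aggregate such per-segment scores over the whole withdrawal phase rather than over a handful of frames, but the principle is the same.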
The assessment of intervention quality is mainly applied post-operatively on recorded videos, but first attempts have also been made to integrate these techniques into the clinical workflow and directly notify the physician about quality deficiencies [163, 170].
4.2 Management and retrieval
The goal of video management and retrieval systems is to enable users to efficiently find exactly the information they are looking for, either within one specific video or within an archive. They have to provide means to articulate some kind of query describing the information need, or special interaction mechanisms for efficient content browsing and exploration. The latter aspect in particular has rarely been addressed for this specific domain so far and therefore offers great potential for future work.
4.2.1 Compression and storage
If an endoscopic video management system is to be deployed in a realistic scenario, domain-specific concepts for compression, storage organization and dissemination of videos are required. As for compression of endoscopic videos, a literature search revealed only very few contributions. For the field of bronchoscopic examinations, we found a statement that “it is possible to use lossy compressed images and video sequences for diagnostic purposes” [56, 185]. In terms of storage organization,  propose a system with a distributed architecture that uses a NoSQL database to provide access to videos within a hospital and across different health care institutions . They also present a device for video acquisition and an annotation system that enables content querying . Münzer et al.  show that the circular content area of endoscopic videos can be exploited to considerably improve encoding efficiency, and that discarding irrelevant segments (blurry, out-of-patient or dark and noisy) can save up to 20 % of the storage space . In , a subjective quality assessment study is conducted which shows that it is not necessary to archive videos in the original HD resolution; lower-quality representations still provide sufficient semantic quality. This study also contains the first set of encoding recommendations for the domain of endoscopy. A follow-up study in  evaluates the effective savings in storage space achieved by domain-specific video compression on an authentic real-world data set. It comes to the conclusion that by using these encoding presets together with circle detection, relevance segmentation and a long-term archiving strategy, the overall storage capacity can be reduced by more than 90 % in the long term without losing relevant information.
One possibility to retrieve specific information from endoscopic videos is to annotate them manually. Lux et al. present a mobile tool for efficient manual annotation of surgery videos [73, 133]. It provides intuitive direct interaction mechanisms on a tablet device that make the annotation task less tedious. For colonoscopy videos, an annotation tool called Arthemis  has been proposed that supports the Minimal Standard Terminology (MST) of the European Society of Gastrointestinal Endoscopy (ESGE), a standardized terminology for diagnostic findings in GI endoscopy. However, manual annotation and tagging cannot be carried out for each and every video of an archive due to time restrictions. A more promising use of manual annotation is to capture expert knowledge from a limited number of representative examples and exploit this knowledge in automatic retrieval techniques like content-based image retrieval (CBIR).
The typical use case of CBIR is to find images that are visually similar to a query image. Up to now, research in image retrieval in the medical domain has rather focused on other image modalities (CT, MRI etc.) [106, 157], but some publications also deal with endoscopic images. An early approach  uses simple color histograms in HSV color space to determine the similarity between colonoscopy images. A more recent approach  for gastroscopic images uses image segmentation in the CIE L*a*b* color space to extract a “color change feature” and dominant color information and compares images using the Kullback-Leibler distance. Tai et al.  also incorporate texture information by using a color-texture correlogram and the Generalized Tversky Index as similarity measure. Furthermore, they employ interactive relevance feedback to refine the search result. Xia et al.  propose to use multi-feature fusion to combine color, texture and shape information. A very recent and more sophisticated approach  that obtains very promising results uses Multiscale Geometric Analysis (MGA) of the Nonsubsampled Contourlet Transform (NSCT) and a statistical framework based on the Generalized Gaussian Density (GGD) model and the Kullback-Leibler Distance (KLD). Another task relying on similarity search is to link a single still image captured by the surgeon during the procedure to the corresponding video segment in an archive . The latest approach for this task uses Feature Signatures and the Signature Matching Distance and achieves reasonable results . For laparoscopic surgery videos, a technique has been proposed to find scenes showing a specific instrument that is depicted in the query image . In terms of video retrieval,  use the HOG (Histogram of Oriented Gradients) descriptor and a Fisher kernel based similarity to find similar video sequences in other videos based on a video snippet query.
This technique can be used on the one hand to compare similar situations, and on the other hand to automatically assign semantic tags based on the existing annotations of similar sequences in an (annotated) video archive. The same authors also propose a method for surgery type classification that automatically differentiates between 8 types of laparoscopic surgery . The method uses RGB and HSV color histograms, SIFT (Scale Invariant Feature Transform) and HOG features together with an SVM classifier and obtains promising results.
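The histogram comparison at the core of the early color-based CBIR approaches can be sketched as follows, here with hue histograms and a symmetrized Kullback-Leibler distance; synthetic hue data stand in for HSV-converted endoscopic frames.

```python
import numpy as np

# Histogram-based similarity as in early endoscopic CBIR: compare hue
# histograms with a symmetrized Kullback-Leibler distance (smaller = more
# similar). Inputs are arrays of hue values in [0, 1); a real system would
# first convert RGB frames to HSV.
def hue_histogram(hues, bins=16):
    h, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    h = h.astype(float) + 1e-6               # smoothing to avoid log(0)
    return h / h.sum()

def kl_distance(p, q):
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

rng = np.random.default_rng(0)
reddish_a = rng.normal(0.02, 0.01, 1000) % 1.0   # two similar reddish scenes
reddish_b = rng.normal(0.03, 0.01, 1000) % 1.0
greenish = rng.normal(0.40, 0.01, 1000) % 1.0    # a dissimilar scene
pa, pb, pg = map(hue_histogram, (reddish_a, reddish_b, greenish))
print(kl_distance(pa, pb) < kl_distance(pa, pg))
```

As the surrounding discussion notes, such low-level similarity is useful for diagnostic imagery but quickly hits the semantic gap for surgical video.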
4.2.3 Temporal segmentation
While CBIR seems to work quite well for diagnostic endoscopy, it is not well studied for surgical endoscopy types like laparoscopy. In this case, the “query by example” paradigm is not very expedient. It is based on the naive assumption that visual similarity correlates with semantic similarity. However, this assumption does not necessarily hold, because the semantics of a laparoscopic image or video sequence depend on a very complex context that cannot be thoroughly represented with simple low-level features like color, texture and shape. This discrepancy between low-level representation and high-level semantics is referred to as the semantic gap. The key to closing this gap is to additionally take into account the dynamic aspects of videos that images do not have. One of the key techniques for general video retrieval is temporal segmentation, i.e., the subdivision of a video into shots and semantic scenes. This abstraction can greatly help to better understand a video and, e.g., find the position of a certain surgical step. Unfortunately, established generic techniques cannot be applied because they typically assume that a video is composed of shots. Endoscopic videos usually have exactly one shot and hence only one scene according to commonly accepted definitions. Therefore, conventional shot detection cannot be used, and a new definition of shots and scenes is necessary for endoscopic videos. Some authors have tried to introduce such a domain-specific notion of shots. Primus et al.  define “shot-like” segments based on significant motion changes. Their idea is to differentiate between typical motion types, namely no motion, camera motion and instrument motion. This approach produces a very fine-grained segmentation that can be used as a basis for a more coarse-grained semantic segmentation. Cao et al.  define operation shots in colonoscopy videos.
They are detected by the presence of instruments, which are only used in special situations in this diagnostic endoscopy type.
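A simplified version of such motion-based segmentation thresholds a per-frame motion magnitude signal (e.g., the mean optical-flow length) and cuts wherever the motion state changes. The signal values and threshold below are invented; Primus et al. additionally distinguish camera motion from instrument motion.

```python
# Segment a per-frame motion magnitude signal into "static" and "motion"
# runs; each change of state opens a new shot-like segment.
def motion_segments(motion, thresh=0.5):
    segs, start = [], 0
    state = motion[0] > thresh
    for i in range(1, len(motion)):
        s = motion[i] > thresh
        if s != state:
            segs.append((start, i - 1, "motion" if state else "static"))
            start, state = i, s
    segs.append((start, len(motion) - 1, "motion" if state else "static"))
    return segs

# Hypothetical motion magnitudes for seven frames.
sig = [0.1, 0.2, 0.9, 1.1, 0.8, 0.1, 0.05]
print(motion_segments(sig))
```

On real footage, hysteresis or a minimum segment length would be needed to avoid oversegmentation from noisy motion estimates.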
A more promising approach for endoscopic video segmentation is to define a model of the progress of the procedure and associate video segments with the individual phases of this model. This idea is closely related to surgical workflow understanding as discussed above (see Section 3.2.2). However, here the purpose is not to provide context-aware assistance, but to structure a video file to facilitate efficient review of procedures and retrieval of specific scenes. The fact that complete information about the whole procedure is available at processing time makes this task easier as compared to the intra-operative real-time scenario. For diagnostic procedures like colonoscopy, where the endoscope has to follow a predetermined path through several anatomic regions, it is straightforward to consider these regions as phases. Cao et al.  observed that transitions from one section of the colon to the next feature a certain pattern of sharp and blurry frames. This is because the physician has to steer the endoscope around anatomic “corners”. This pattern can be exploited to segment a video into semantic phases. Similarly, WCE videos can also be segmented into subvideos showing specific topographic areas like esophagus, stomach, small intestine and large intestine [46, 114, 134, 206].
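The sharp/blurry transition pattern described above can be sketched as a simple run detection over per-frame sharpness scores (e.g., the variance of the Laplacian). This is an illustrative simplification, not the detector of Cao et al.; the threshold and minimum run length are arbitrary example values.

```python
# Hypothetical sketch: a run of consecutive blurry frames (e.g., while
# steering the endoscope around an anatomic "corner") marks a candidate
# phase boundary. Sharpness scores are assumed to be precomputed.

def blurry_runs(sharpness, blur_thr=50.0, min_len=3):
    """Return (start, end) index pairs of blurry runs of at least min_len frames."""
    runs, start = [], None
    for i, s in enumerate(sharpness):
        if s < blur_thr and start is None:
            start = i                      # a blurry run begins
        elif s >= blur_thr and start is not None:
            if i - start >= min_len:       # keep only sufficiently long runs
                runs.append((start, i - 1))
            start = None
    if start is not None and len(sharpness) - start >= min_len:
        runs.append((start, len(sharpness) - 1))
    return runs

# Sharp colon section, blurry "corner", next sharp section:
print(blurry_runs([120, 110, 20, 15, 18, 130, 125]))
# → [(2, 4)]
```

Each detected run would then be treated as a candidate boundary between two anatomic phases.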
In the case of laparoscopy, the presence of certain instruments can be used to distinguish between surgical phases . This approach works very well for standardized procedures like cholecystectomy; for more individualized procedures, additional cues need to be incorporated. Some alternative approaches that are not based on surgical process modeling have been proposed in the literature, e.g., probabilistic tissue tracking to generate motion patterns that are used to identify surgical episodes , smoke detection as an indication of electrocautery based on ad hoc kinematic features from optical flow analysis of a grid of particles , and an unsupervised approach that extracts and analyzes multiple time-series data sets based on instrument occurrence and motion . Another interesting contribution  does not incorporate temporal information, but classifies individual frames according to their surgical phase. The features used are generated automatically by genetic programming in order to overcome the challenge of choosing the right features a priori. The drawback of this approach is the long processing time for feature evolution.
Endoscopic videos often have a duration of several hours, but surgeons usually do not have the time to review the whole footage. Therefore, summarization of endoscopic videos is a crucial feature of a video management system. Dynamic summaries reduce the duration by extracting the most relevant parts of the video. Static summaries are useful to visualize the essence of an endoscopic procedure at a glance and can easily be archived in the patient’s medical record or even handed over to the patient. Many general approaches for video summarization have been proposed in the literature, but only a few consider the specific domain characteristics of endoscopic videos.
Lux et al.  present a static summarization algorithm for the domain of arthroscopy (an endoscopy type that is hardly ever addressed in the literature). It generates a single result image composed of a predefined number of most representative frames. The representativeness is determined based on k-medoid clustering of color and texture features. In this context, such frames are often referred to as keyframes. Another method for keyframe extraction in endoscopic videos, which can also be used as a basis for temporal segmentation, is presented in . It uses the ORB (Oriented FAST and Rotated BRIEF) keypoint descriptor  and an adaptive threshold to detect significant differences between two frames. Lokoč et al.  present an interactive tool for dynamic browsing of such keyframes. The keyframes are clustered and presented in a hierarchical manner to give a quick overview of a procedure. A similar interactive presentation of keyframes from hysteroscopy videos in the form of a video segment tree is proposed in  and further refined in , together with a summarization technique that estimates the clinical relevance of segments based on the attention they attracted during video acquisition. These interactive techniques bridge video summarization and temporal segmentation. However, video summaries do not have to be static, but can also consist of a shortened version of the original video, as proposed for the domain of bronchoscopy in . This is achieved by discarding a large number of completely non-informative frames and keeping frames that are representative or clinically especially relevant (e.g., showing the branching of airways or pathological lesions).
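A minimal sketch of keyframe selection via k-medoid clustering, in the spirit of (but not identical to) the approach of Lux et al.: per-frame feature vectors (e.g., color histograms) are assumed to be given, and the medoids, being actual frames, directly serve as keyframes, which is the reason to prefer k-medoids over k-means here.

```python
# Minimal k-medoid keyframe selection sketch. Assumptions: per-frame
# feature vectors are given, Euclidean distance, naive initialization,
# a few alternation steps suffice. Real systems use richer features
# and more careful initialization.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def k_medoids(features, k, iters=10):
    medoids = list(range(k))  # naive init: first k frames
    for _ in range(iters):
        # assign each frame to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i, f in enumerate(features):
            nearest = min(medoids, key=lambda m: dist(f, features[m]))
            clusters[nearest].append(i)
        # update each medoid to the member minimizing total intra-cluster distance
        new_medoids = []
        for members in clusters.values():
            best = min(members,
                       key=lambda c: sum(dist(features[c], features[j]) for j in members))
            new_medoids.append(best)
        if sorted(new_medoids) == sorted(medoids):
            break  # converged
        medoids = new_medoids
    return sorted(medoids)

# Two visually distinct passages of a toy "video" (1-D features):
feats = [[0.1], [0.2], [0.15], [5.0], [5.1], [4.9]]
print(k_medoids(feats, 2))  # → [2, 3]
```

The returned indices are frame numbers; the corresponding frames would be composed into the static summary image.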
Many WCE-specific techniques can also be considered a kind of summarization [40, 87, 243]. The task is to reduce a very large collection of images (with some temporal relationship, although a weaker one than in a typical video because the frame rate is much lower) to the ones that are diagnostically relevant. This could equally be seen as a pre-processing or filtering step. The WCE scenario differs from the usual post-procedural review scenario because it is not an optional additional task where the physician can watch the video material again, but the mandatory core step of the screening. Thus, the optimization of time efficiency is especially important here. A recent survey of various image analysis techniques for WCE can be found in .
Panoramic images of the surgical site can also be seen as a special kind of summary that gives a visual overview, especially of examinations. Techniques for panorama generation are sometimes also called mosaicing or stitching algorithms. If a panorama is generated during a procedure, the term dynamic view expansion is often used (see Section 3.2.1). The basic idea is to combine different frames of the video, recorded from different perspectives, into one image that extends the restricted field of view. Image stitching is closely related to image registration (see Section 3.2.5). The underlying challenge is to find corresponding points and compute the transformation between two frames in order to convert them to a common coordinate system. Finally, the registered images have to be blended together to create the panorama.
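The registration step of this pipeline can be illustrated with a deliberately tiny example: estimating a pure integer translation between two frames by exhaustive search over candidate shifts. Real endoscopic mosaicing instead matches feature points and estimates a homography (or a spherical mapping for the bladder), but the principle of bringing two frames into a common coordinate system before blending is the same.

```python
# Toy registration sketch (assumptions: pure translation, tiny grayscale
# frames as nested lists, exhaustive search). Finds the integer (dy, dx)
# that best aligns the moving frame onto the reference frame.

def estimate_shift(ref, mov, max_shift=3):
    h, w = len(ref), len(ref[0])
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # compare ref with mov shifted by (dy, dx) over their overlap
            err, n = 0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y - dy, x - dx
                    if 0 <= yy < h and 0 <= xx < w:
                        err += (ref[y][x] - mov[yy][xx]) ** 2
                        n += 1
            if n == 0:
                continue
            # +1 in the numerator prefers larger overlaps among perfect fits
            score = (err + 1) / n
            if best is None or score < best[0]:
                best = (score, dy, dx)
    return best[1], best[2]

# mov is ref shifted right by one pixel (the bright column moves):
ref = [[0, 0, 9, 0],
       [0, 0, 9, 0],
       [0, 0, 9, 0]]
mov = [[0, 9, 0, 0],
       [0, 9, 0, 0],
       [0, 9, 0, 0]]
print(estimate_shift(ref, mov))  # → (0, 1)
```

After alignment, the overlapping pixels would be blended (e.g., averaged or feathered) onto a common panorama canvas.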
Several authors have addressed panoramas in the context of cystoscopy, i.e., the examination of the interior of the bladder [78, 144, 212]. The bladder can be modeled as a sphere, so geometric constraints can be imposed. Behrens et al.  present a graph-based method to identify regions missed during a cystoscopy, which would lead to gaps in a panoramic image, in order to assess the completeness of the examination. Spyrou et al.  propose a stitching technique for WCE frames. In the context of fetoscopy (the examination of the interior of the uterus during pregnancy), a method to create a panorama of the placenta to support intrauterine fetal surgery is proposed in . Liao et al.  extend this idea with a method to map the panorama to a three-dimensional ultrasound image. A review of recent advances in the field of endoscopic image mosaicing and panorama generation can be found in .
The main goal of this extensive literature review was to give a broad overview of research dealing with the processing and analysis of endoscopic videos. A further goal is to draw attention to this research field in the Multimedia community. We hope to stimulate further research, especially in terms of post-processing, which is probably the most relevant topic with regard to common Multimedia methods and offers a broad range of open research questions. Moreover, we give insights into domain-specific characteristics of the endoscopy domain and how to deal with them in a pre-processing phase (e.g., lens distortion, specular reflections, circular content area, etc.).
In the literature research, numerous contributions were found and classified into three categories: (1) pre-processing methods, (2) real-time support at procedure time and (3) post-procedural applications. However, many methods and approaches have been found to be relevant for multiple use-cases, e.g., instrument detection and tracking methods developed for robotic assistance that can also be helpful for post-procedural video indexing. Currently, the respective research communities are often not aware that complementary contributions exist in seemingly unrelated research fields.
The domain-specific peculiarities of the endoscopic video domain require specific pre-processing methods like distortion correction or specular reflection detection. Moreover, pre-processing is often used to enhance the image quality or filter relevant content, both temporally and spatially. These enhancements are important both for improving the viewing conditions for physicians and as a pre-processing step for advanced analysis methods. In this context, it is also important to distinguish between different types of endoscopy, mainly between diagnostic (examinations) and therapeutic (surgeries) types, but also between the subtypes that often have very heterogeneous domain characteristics. Most approaches focus on one specific endoscopy type and cannot be transferred to other types without significant modifications, i.e., they are strongly domain-specific.
Furthermore, we have to distinguish between methods that are applied during the procedure and methods that operate on recorded videos. The former must work in real time and can only use the information available up to the current moment, i.e., they cannot “look into the future”. The latter have this possibility because the entire video is available at analysis time. In terms of diagnostic endoscopy types, the focus is on pattern recognition for Diagnostic Decision Support, mainly in the form of polyp/lesion/tumor detection in colonoscopic screenings. In the special domain of WCE (Wireless Capsule Endoscopy), numerous approaches have been proposed to differentiate between diagnostically relevant and irrelevant content in order to increase time efficiency without impairing diagnostic accuracy.
The largest part of all publications found originates in the robotics community and aims to enable real-time assistance during surgeries by (1) Augmented Reality and (2) (semi-)autonomous actions carried out by surgical robots. These visionary goals require a number of classical Computer Vision techniques like 3D reconstruction, object detection and tracking, registration etc. Existing methods fail in most cases because of the special characteristics of endoscopic images and aggravating factors like the restricted and distorted viewport, scarce camera motion, specular reflections, occlusions, image quality perturbations, textureless surfaces, tissue deformation etc., and therefore have to be adapted. Many of these techniques could also be useful for post-procedural analysis of videos for more efficient management, retrieval and interactive retrospective review.
The post-procedural use case turned out to be extremely understudied in comparison to the real-time scenario. However, it involves a number of very interesting and challenging research questions, including indexing, retrieval, segmentation, summarization and interaction.
One of the reasons might be that recording of endoscopic procedures is only just becoming general practice and is not yet widespread. This can also be explained by the lack of efficient compression and archiving concepts for comprehensive recording. As a consequence, the acquisition of appropriate data sets is the first challenge for new researchers in this field. Data availability in general is a critical issue, since sensitive medical data is involved and hospital policies regarding video recording and transfer are often very strict. Only very few public data sets are available, and they usually target a very specific application (like tumor recognition). In order to draw meaningful comparisons between alternative approaches, it is very important for the future to create and publish more and larger public data sets. The size of a data set is especially important for machine learning methods that require large amounts of training samples. The shortage of a sufficiently large and representative training corpus often hampers the usage of popular techniques like deep learning with Convolutional Neural Networks (CNNs) .
An additional challenge is to find medical experts who are willing to take the time to cooperate and share their medical expertise, which is an essential ingredient of successful research in this highly specific domain. For example, training examples need to be annotated with ground truth labels, a task that cannot be carried out by medical laymen. Moreover, at first glance the demand by physicians seems limited because they might perceive post-procedural usage of endoscopic videos as additional workload, although it in fact has huge potential for quality improvement. Nevertheless, it is a tough job to familiarize physicians with the benefits of post-procedural usage of videos and the need for research in this area. A survey conducted in  showed that physicians often do not have a clear notion of the potential benefits until they actually see them in action: after watching a prototype demonstration of a content-based endoscopic video management system, they stated a significantly higher interest in such a system than before.
However, it should not be expected that all problems in the post-procedural handling of endoscopic videos can be solved by automatic analysis. An extremely important aspect is to combine these methods with easily understandable visualization concepts and intuitive interaction mechanisms for efficient content browsing and exploration. Especially the latter aspect has rarely been addressed in the literature so far and therefore offers great potential for future work.
In the foreseeable future, we expect that video documentation of endoscopic procedures will become required by law. This will lead to huge archives of visually very similar video content, and techniques for video storage, retrieval and interaction will become essential. By that point, research should already have appropriate solutions to these problems.
Open access funding provided by University of Klagenfurt. This work was supported by Universität Klagenfurt and Lakeside Labs GmbH, Klagenfurt, Austria and funding from the European Regional Development Fund and the Carinthian Economic Promotion Fund (KWF) under grant KWF-20214 U. 3520/26336/38165.
- 1.Ackerman JD, Keller K, Fuchs H (2002) Surface reconstruction of abdominal organs using laparoscopic structured light for augmented reality. In: Electronic Imaging 2002, pp 39–46. International Society for Optics and PhotonicsGoogle Scholar
- 2.Agrawal M, Konolige K, Blas MR (2008) CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching. In: Computer Vision – ECCV 2008, no. 5305 in LNCS, pp 102–115. SpringerGoogle Scholar
- 3.Ahmadi SA, Sielhorst T, Stauder R, Horn M, Feussner H, Navab N (2006) Recovery of Surgical Workflow Without Explicit Models. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006, no. 4190 in LNCS, pp 420–428. SpringerGoogle Scholar
- 4.Alexandre L, Nobre N, Casteleiro J (2008) Color and Position versus Texture Features for Endoscopic Polyp Detection. In: Int’l Conf on BioMedical Engineering and Informatics. BMEI 2008, vol 2, pp 38–42Google Scholar
- 6.Ameling S, Wirth S, Paulus D, Lacey G, Vilarino F (2009) Texture-based polyp detection in colonoscopy. In: Bildverarbeitung für die Medizin 2009, pp 346–350. SpringerGoogle Scholar
- 7.Amir-Khalili A, Peyrat JM, Hamarneh G, Abugharbieh R (2013) 3d Surface Reconstruction of Organs Using Patient-Specific Shape Priors in Robot-Assisted Laparoscopic Surgery. In: Abdominal Imaging. Computation and Clinical Appl., no. 8198 in LNCS, pp 184–193. SpringerGoogle Scholar
- 8.Arnold M, Ghosh A, Ameling S, Lacey G (2010) Automatic Segmentation and Inpainting of Specular Highlights for Endoscopic Imaging. EURASIP Journal on Image and Video Processing:1–12Google Scholar
- 9.Arnold M, Ghosh A, Lacey G, Patchett S, Mulcahy H (2009) Indistinct Frame Detection in Colonoscopy Videos. In: Machine Vision and Image Processing Conf, 2009. IMVIP ’09. 13th International, pp 47–52Google Scholar
- 11.Barreto J, Roquette J, Sturm P, Fonseca F (2009) Automatic Camera Calibration Applied to Medical Endoscopy. In: 20th British Machine Vision Conference (BMVC ’09)Google Scholar
- 12.Barreto J, Swaminathan R, Roquette J (2007) Non Parametric Distortion Correction in Endoscopic Medical Images. In: 3DTV Conference, 2007, pp 1–4Google Scholar
- 15.Bay H, Tuytelaars T, Gool LV (2006) SURF: Speeded Up Robust Features. In: Computer Vision – ECCV 2006, no. 3951 in LNCS, pp 404–417. SpringerGoogle Scholar
- 16.Beecks C, Schoeffmann K, Lux M, Uysal MS, Seidl T (2015) Endoscopic Video Retrieval: A Signature-based Approach for Linking Endoscopic Images with Video Segments. In: IEEE Int’l Symposium on Multimedia. ISM, MiamiGoogle Scholar
- 17.Behrens A, Takami M, Gross S, Aach T (2011) Gap detection in endoscopic video sequences using graphs. In: 2011 Annual Int’l Conf of the IEEE Engineering in Medicine and Biology Soc, pp 6635–6638Google Scholar
- 18.Béjar Haro B, Zappella L, Vidal R (2012) Surgical Gesture Classification from Video Data. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2012:34–41Google Scholar
- 19.Bergen T, Wittenberg T (2014) Stitching and Surface Reconstruction from Endoscopic Image Sequences: A Review of Applications and Methods. IEEE Journal of Biomedical and Health Informatics PP(99):1–1Google Scholar
- 20.Bernal J, Gil D, Sánchez C, Sánchez FJ (2014) Discarding Non Informative Regions for Efficient Colonoscopy Image Analysis. In: Computer-Assisted and Robotic Endoscopy, LNCS, pp 1–10. SpringerGoogle Scholar
- 22.Bernhardt S, Abi-Nahed J, Abugharbieh R (2013) Robust dense endoscopic stereo reconstruction for minimally invasive surgery. In: Medical Computer Vision. Recognition Techniques and Applications in Medical Imaging. Springer, pp 254–262Google Scholar
- 28.Braga J, Laranjo I, Rolanda C, Lopes L, Correia-Pinto J, Alves V (2014) A Novel Approach to Endoscopic Exams Archiving. In: New Perspectives in Information Systems and Technologies, Volume 1, no. 275 in Advances in Intelligent Systems and Computing. Springer, pp 239–248Google Scholar
- 29.Buck SD, Cleynenbreugel JV, Geys I, Koninckx T, Koninck PR, Suetens P (2001) A System to Support Laparoscopic Surgery by Augmented Reality Visualization. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2001, no. 2208 in LNCS, pp 691–698. SpringerGoogle Scholar
- 32.Calonder M, Lepetit V, Strecha C, Fua P (2010) BRIEF: Binary Robust Independent Elementary Features. SpringerGoogle Scholar
- 33.Cano A, Gayá F, Lamata P, Sánchez-González P, Gómez E (2008) Laparoscopic tool tracking method for augmented reality surgical applications. Biomedical Simulation:191–196Google Scholar
- 35.Cao Y, Tavanapong W, Li D, Oh J, Groen PCD, Wong J (2004) A Visual Model Approach for Parsing Colonoscopy Videos. In: Image and Video Retrieval, no. 3115 in LNCS, pp 160–169. SpringerGoogle Scholar
- 36.Casals A, Amat J, Laporte E (1996) Automatic guidance of an assistant robot in laparoscopic surgery. In: IEEE International Conference on Robotics and Automation, pp 895–900Google Scholar
- 37.Chattopadhyay T, Chaki A, Bhowmick B, Pal A (2008) An application for retrieval of frames from a laparoscopic surgical video based on image of query instrument. In: TENCON 2008-2008 IEEE Region 10 Conference, pp 1–5Google Scholar
- 38.Chhatkuli A, Bartoli A, Malti A, Collins T (2014) Live image parsing in uterine laparoscopy. In: IEEE International Symposium on Biomedical Imaging (ISBI)Google Scholar
- 39.Chowdhury M, Kundu MK (2015) Endoscopic Image Retrieval System Using Multi-scale Image Features. In: Proceedings of the 2Nd International Conference on Perception and Machine Intelligence, PerMIn ’15. ACM, New York, NY, USA, pp 64-70Google Scholar
- 40.Chu X, Poh C, Li L, Chan K, Yan S, Shen W, Htwe T, Liu J, Lim J, Ong E, Ho K (2010) Epitomized Summarization of Wireless Capsule Endoscopic Videos for Efficient Visualization. In: MICCAI 2010, no. 6362 in LNCS, pp 522–529. SpringerGoogle Scholar
- 42.Collins T, Bartoli A (2012) 3d reconstruction in laparoscopy with close-range photometric stereo. In: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2012. Springer, pp 634–642Google Scholar
- 43.Collins T, Bartoli A (2012) Towards Live Monocular 3d Laparoscopy Using Shading and Specularity Information. In: Inf. Proc. in Computer-Assisted Interventions, no. 7330 in LNCS, pp 11–21. SpringerGoogle Scholar
- 44.Collins T, Compte B, Bartoli A (2011) Deformable shape-from-motion in laparoscopy using a rigid sliding window. In: Medical Image Understanding and Analysis ConferenceGoogle Scholar
- 45.Collins T, Pizarro D, Bartoli A, Canis M, Bourdel N (2014) Computer-Assisted Laparoscopic myomectomy by augmenting the uterus with pre-operative MRI data. In: 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp 243–248Google Scholar
- 49.Dhandra BV, Hegadi R, Hangarge M, Malemath VS (2006) Analysis of abnorMality in endoscopic images using combined hsi color space and watershed segmentation. In: 18th International Conference on Pattern Recognition, 2006, vol 4, ICPR 2006. pp 695–698Google Scholar
- 50.Dickens M, Bornhop DJ, Mitra S (1998) Removal of optical fiber interference in color micro-endoscopic images. In: 11th IEEE Symposium on Computer-Based Medical Systems, pp 246–251Google Scholar
- 52.Doignon C, Nageotte F, de Mathelin M (2006) The role of insertion points in the detection and positioning of instruments in laparoscopy for robotic tasks. MICCAI:527–534Google Scholar
- 53.Doignon C, Nageotte F, de Mathelin M (2007) Segmentation and Guidance of Multiple Rigid Objects for Intra-operative Endoscopic Vision. In: Proceedings of the 2005/2006 International Conference on Dynamical Vision, WDV’05/WDV’06/ICCV’05/ECCV’06, pp 314–327. Springer-Verlag, Berlin, HeidelbergGoogle Scholar
- 55.Duda K, Zielinski T, Duplaga M (2008) Computationally simple Super-Resolution algorithm for video from endoscopic capsule. In: Int’l Conf. on Signals and Electronic Systems, 2008. ICSES ’08, pp 197–200Google Scholar
- 56.Duplaga M, Leszczuk M, Papir Z, Przelaskowski A (2008) Evaluation of Quality Retaining Diagnostic Credibility for Surgery Video Recordings. In: Visual Information Systems. Web-Based Visual Information Search and Management, no. 5188 in LNCS, pp 227–230. SpringerGoogle Scholar
- 57.Elter M, Rupp S, Winter C (2006) Physically Motivated Reconstruction of Fiberscopic Images. In: 18th International Conference on Pattern Recognition, 2006. ICPR 2006, vol 3, pp 599–602Google Scholar
- 59.Fuchs H, Livingston MA, Raskar R, State A, Crawford JR, Rademacher P, Drake SH, Meyer AA (1998) Augmented reality visualization for laparoscopic surgery. In: Proceedings of the 1st Int’l Conf. on Medical Image Computing and Computer-Assisted Intervention. Springer, pp 934–943Google Scholar
- 60.Fukuda N, Chen YW, Nakamoto M, Okada T, Sato Y (2010) A scope cylinder rotation tracking method for oblique-viewing endoscopes without attached sensing device. In: 2010 2nd International Conference on Software Engineering and Data Mining (SEDM), pp 684–687Google Scholar
- 64.Giannarou S, Visentini-Scarzanella M, Yang GZ (2009) Affine-invariant anisotropic detector for soft tissue tracking in minimally invasive surgery. In: From Nano to Macro, 2009. ISBI’09. IEEE International Symposium on Biomedical Imaging, pp 1059–1062Google Scholar
- 65.Giannarou S, Yang GZ (2010) Content-Based Surgical Workflow Representation Using Probabilistic Motion Modeling. In: Med. Imaging and Aug. Reality, no. 6326 in LNCS, pp 314–323. SpringerGoogle Scholar
- 66.Giritharan B, Yuan X, Liu J, Buckles B, Oh J, Tang S (2008) Bleeding detection from capsule endoscopy videos. In: 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2008. EMBS 2008, pp 4780–4783Google Scholar
- 67.Grasa O, Civera J, Montiel JMM (2011) EKF monocular SLAM with relocalization for laparoscopic sequences. In: 2011 IEEE International Conference on Robotics and Automation (ICRA), pp 4816–4821Google Scholar
- 68.Grega M, Leszczuk M, Duplaga M, Fraczek R (2010) Algorithms for Automatic Recognition of Non-informative Frames in Video Recordings of Bronchoscopic Procedures. In: Information Technologies in Biomedicine, Advances in Intelligent and Soft Computing, vol 69. Springer, pp 535–545Google Scholar
- 69.Groch A, Seitel A, Hempel S, Speidel S, Engelbrecht R, Penne J, Höller K, Röhl S, Yung K, Bodenstedt S, Pflaum F, dos Santos TR, Mersmann S, Meinzer HP, Hornegger J, Maier-Hein L. (2011) 3d surface reconstruction for laparoscopic computer-assisted interventions: comparison of state-of-the-art methods. In: Society of Photo-Optical Instrumentation Engineers (SPIE) Conf Series, vol 7964, p 796415Google Scholar
- 70.Gröger M, Ortmaier T, Hirzinger G (2005) Structure Tensor Based Substitution of Specular Reflections for Improved Heart Surface Tracking. In: Bildverarbeitung f. d. Medizin 2005, Informatik aktuell. Springer, pp 242–246Google Scholar
- 71.Gröger M, Sepp W, Ortmaier T, Hirzinger G (2001) Reconstruction of Image Structure in Presence of Specular Reflections. In: Proceedings of the 23rd DAGM-Symp. on Pattern Recognition. Springer, pp 53–60Google Scholar
- 72.Gschwandtner M, Liedlgruber M, Uhl A, Vécsei A (2010) Experimental study on the impact of endoscope distortion correction on computer-assisted celiac disease diagnosis. In: 2010 10th IEEE International Conference on Information Technology and Applications in Biomedicine (ITAB), pp 1–6Google Scholar
- 73.Guggenberger M, Riegler M, Lux M, Halvorsen P (2014) Event Understanding in Endoscopic Surgery Videos. In: Proceedings of the 1st ACM International Workshop on Human Centered Event Understanding from Multimedia, HuEvent ’14, pp 17–22. ACM, New York, NY, USAGoogle Scholar
- 74.Guthart G, Salisbury Jr JK (2000) The IntuitiveTM Telesurgery System: Overview and Application. In: ICRA, pp 618–621Google Scholar
- 75.Hafner M, Liedlgruber M, Uhl A (2013) POCS-based super-resolution for HD endoscopy video frames. In: 2013 IEEE 26th International Symposium on Computer-Based Medical Systems (CBMS), pp 185–190Google Scholar
- 76.Hafner M, Liedlgruber M, Uhl A, Wimmer G (2014) Evaluation of super-resolution methods in the context of colonic polyp classification. In: 12th Int’l Workshop on Content-Based Multimedia IndexingGoogle Scholar
- 80.Holler K, Penne J, Hornegger J, Schneider A, Gillen S, Feussner H, Jahn J, Gutierrez J, Wittenberg T (2009) Clinical evaluation of Endorientation: Gravity related rectification for endoscopic images. In: Proceedings of 6th Int’l Symp. on Image and Signal Processing and Analysis, ISPA 2009, pp 713–717Google Scholar
- 83.Hwang S, Celebi M (2010) Polyp detection in Wireless Capsule Endoscopy videos based on image segmentation and geometric feature. In: 2010 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), pp 678–681Google Scholar
- 84.Hwang S, Oh JH, Lee JK, Cao Y, Tavanapong W, Liu D, Wong J, de Groen PC (2005) Automatic measurement of quality metrics for colonoscopy videos. In: Proceedings of the 13th annual ACM international conference on Multimedia, pp 912–921Google Scholar
- 85.Hwang S, Oh JH, Tavanapong W, Wong J, de Groen PC (2008) Stool detection in colonoscopy videos. In: 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2008. EMBS 2008. pp 3004–3007Google Scholar
- 86.Iakovidis DK, Maroulis DE, Karkanis SA, Brokos A (2005) A comparative study of texture features for the discrimination of gastric polyps in endoscopic video. In: Proceedings of the 18th IEEE Symposium on Computer-Based Medical Systems, pp 575–580Google Scholar
- 88.Ieiri S, Uemura M, Konishi K, Souzaki R, Nagao Y, Tsutsumi N, Akahoshi T, Ohuchida K, Ohdaira T, Tomikawa M, Tanoue K , Hashizume M, Taguchi T (2012) Augmented reality navigation system for laparoscopic splenectomy in children based on preoperative CT image using optical tracking device. Pediatr Surg Int 28(4):341–346CrossRefGoogle Scholar
- 89.Jun SK, Narayanan M, Agarwal P, Eddib A, Singhal P, Garimella S, Krovi V (2012) Robotic Minimally Invasive Surgical skill assessment based on automated video-analysis motion studies. In: 2012 4th IEEE RAS EMBS Int’l Con. on Biomedical Robotics and Biomechatronics (BioRob), pp 25–31Google Scholar
- 90.Kallemeyn NA, Grosland NM, Magnotta VA, Martin JA, Pedersen DR (2007) Arthroscopic Lens Distortion Correction Applied to Dynamic Cartilage Loading. Iowa Orthop J 27:52–57Google Scholar
- 91.Kanaya J, Koh E, Namiki M, Yokawa H, Kimura H, Abe K (2008) A system for detecting locations of oversight in cystoscopy. In: Proceedings of the 9th Asia Pacific Industrial Engineering & Management Systems Conf., pp 2388–2392Google Scholar
- 94.Katić D, Wekerle AL, Gärtner F, Kenngott HG, Müller-Stich BP, Dillmann R., Speidel S. (2014) Knowledge-Driven ForMalization of Laparoscopic Surgeries for Rule-Based Intraoperative Context-Aware Assistance. In: Information Processing in Computer-Assisted Interventions, no. 8498 in LNCS. Springer, pp 158–167Google Scholar
- 97.Khatibi T, Sepehri MM, Shadpour P (2014) A Novel Unsupervised Approach for Minimally-invasive Video Segmentation. Journal of Medical Signals and Sensors 4 (1):53–71Google Scholar
- 100.Kondo W, Zomer MT (2014) Video recording the Laparoscopic Surgery for the Treatment of Endometriosis should be Systematic!. Gynecol Obstet 04(04)Google Scholar
- 102.Koppel D, Wang YF, Lee H (2004) Image-based rendering and modeling in video-endoscopy. In: IEEE International Symposium on Biomedical Imaging: Nano to Macro, 2004, vol 1, pp 269–272Google Scholar
- 104.Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105Google Scholar
- 107.Kumar S, Narayanan M, Singhal P, Corso J, Krovi V (2013) Product of tracking experts for visual tracking of surgical tools. In: 2013 IEEE International Conference on Automation Science and Engineering (CASE), pp 480–485Google Scholar
- 108.Kumar S, Narayanan M, Singhal P, Corso J, Krovi V (2014) Surgical tool attributes from monocular video. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp 4887–4892Google Scholar
- 109.Lahane A, Yesha Y, Grasso MA, Joshi A, Martineau J, Mokashi R, Chapman D, Grasso MA, Brady M, Yesha Y et al (2011) Detection of Unsafe Action from Laparoscopic Cholecystectomy Video. In: Proceedings of ACM SIGHIT International Health Informatics Symposium, vol 47, pp 114–122Google Scholar
- 111.Lalys F, Riffaud L, Morandi X, Jannin P (2011) Surgical phases detection from microscope videos by combining SVM and HMM. In: Proceedings of the 2010 Int’l MICCAI conference on Medical computer vision: recognition techniques and applications in medical imaging, MCV’10. Springer, pp 54–62Google Scholar
- 112.Laranjo I, Braga J, Assunção D, Silva A, Rolanda C, Lopes L, Correia-Pinto J, Alves V (2013) Web-Based Solution for Acquisition, Processing, Archiving and Diffusion of Endoscopy Studies. In: Distributed Computing and Artificial Intelligence, no. 217 in AISC, pp 317–324. SpringerGoogle Scholar
- 114.Lee J, Oh J, Shah SK, Yuan X, Tang SJ (2007) Automatic classification of digestive organs in wireless capsule endoscopy videos. In: Proce. of the 2007 ACM Symp. on Applied Computing, SAC ’07. ACM, pp 1041–1045Google Scholar
- 116.Leong JJ, Nicolaou M, Atallah L, Mylonas GP, Darzi AW, Yang GZ (2006) HMM assessment of quality of movement trajectory in laparoscopic surgery. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006, pp 752–759. Springer
- 120.Liao H, Tsuzuki M, Mochizuki T, Kobayashi E, Chiba T, Sakuma I (2009) Fast image mapping of endoscopic image mosaics with three-dimensional ultrasound image for intrauterine fetal surgery. Minimally invasive therapy & allied technologies: MITAT: official journal of the Society for Minimally Invasive Therapy 18(6):332–340
- 123.Liedlgruber M, Uhl A, Vecsei A (2011) Statistical analysis of the impact of distortion (correction) on an automated classification of celiac disease. In: 2011 17th Int’l Conf. on Digital Signal Processing, pp 1–6
- 126.Liu D, Cao Y, Tavanapong W, Wong J, Oh JH, de Groen PC (2007) Quadrant coverage histogram: a new method for measuring quality of colonoscopic procedures. In: 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007. EMBS 2007, pp 3470–3473
- 127.Lo B, Scarzanella MV, Stoyanov D, Yang GZ (2008) Belief Propagation for Depth Cue Fusion in Minimally Invasive Surgery. In: MICCAI 2008, no. 5242 in LNCS, pp 104–112. Springer
- 128.Lokoč J, Schoeffmann K, del Fabro M (2015) Dynamic Hierarchical Visualization of Keyframes in Endoscopic Video. In: MultiMedia Modeling, no. 8936 in LNCS, pp 291–294. Springer
- 133.Lux M, Riegler M (2013) Annotation of endoscopic videos on mobile devices: a bottom-up approach. In: Proceedings of the 4th ACM Multimedia Systems Conference. ACM, pp 141–145
- 135.Maghsoudi O, Talebpour A, Soltanian-Zadeh H, Alizadeh M, Soleimani H (2014) Informative and Uninformative Regions Detection in WCE Frames. Journal of Advanced Computing
- 138.Malti A (2014) Variational Formulation of the Template-Based Quasi-Conformal Shape-from-Motion from Laparoscopic Images. Int’l Journal of Advanced Computer Science and Applications (IJACSA)
- 140.Marayong P, Li M, Okamura A, Hager G (2003) Spatial motion constraints: theory and demonstrations for robot guidance using virtual fixtures. In: IEEE International Conference on Robotics and Automation, 2003. Proceedings. ICRA ’03, vol 2, pp 1954–1959
- 143.Meslouhi O, Kardouchi M, Allali H, Gadi T, Benkaddour YA (2011) Automatic detection and inpainting of specular reflections for colposcopic images. Central European Journal of Computer Science 1(3):341–354
- 145.Mirota D, Wang H, Taylor R, Ishii M, Hager G (2009) Toward video-based navigation for endoscopic endonasal skull base surgery. MICCAI 2009:91–99
- 147.Mirota DJ, Uneri A, Schafer S, Nithiananthan S, Reh DD, Gallia GL, Taylor RH, Hager GD, Siewerdsen JH (2011) High-accuracy 3d image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery. In: Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, vol 7964, p 79640J
- 148.Münzer B, Schoeffmann K, Böszörmenyi L (2016) Domain-Specific Video Compression for Long-Term Archiving of Endoscopic Surgery Videos. In: 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), pp 312–317. doi: 10.1109/CBMS.2016.28
- 149.Moll M, Koninckx T, Van Gool LJ, Koninckx PR (2009) Unrotating images in laparoscopy with an application for 30° laparoscopes. In: 4th European conference of the international federation for medical and biological engineering. Springer, pp 966–969
- 150.Mountney P, Lo B, Thiemjarus S, Stoyanov D, Yang GZ (2007) A probabilistic framework for tracking deformable soft tissue in minimally invasive surgery. MICCAI 2007:34–41
- 151.Mountney P, Stoyanov D, Yang GZ. Recovering Tissue Deformation and Laparoscope Motion for Minimally Invasive Surgery. Tutorial paper. http://www.mountney.org.uk/publications/SPM%202010.pdf
- 153.Mountney P, Yang GZ (2009) Dynamic view expansion for minimally invasive surgery using simultaneous localization and mapping. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2009, pp 1184–1187
- 158.Münzer B (2011) Requirements and Prototypes for a Content-Based Endoscopic Video Management System. Master’s Thesis. Alpen-Adria Universität Klagenfurt, Klagenfurt
- 159.Münzer B, Schoeffmann K, Böszörmenyi L (2013) Detection of circular content area in endoscopic videos. In: 2013 IEEE 26th Int’l Symp. on Computer-Based Medical Systems (CBMS), pp 534–536
- 160.Münzer B, Schoeffmann K, Böszörmenyi L (2013) Improving encoding efficiency of endoscopic videos by using circle detection based border overlays. In: 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pp 1–4
- 161.Münzer B, Schoeffmann K, Böszörmenyi L (2013) Relevance Segmentation of Laparoscopic Videos. In: 2013 IEEE International Symposium on Multimedia (ISM), pp 84–91
- 162.Münzer B, Schoeffmann K, Böszörmenyi L, Smulders J, Jakimowicz J (2014) Investigation of the Impact of Compression on the Perceptional Quality of Laparoscopic Videos. In: 2014 IEEE 27th International Symposium on Computer-Based Medical Systems (CBMS), pp 153–158
- 163.Muthukudage J, Oh J, Nawarathna R, Tavanapong W, Wong J, de Groen PC (2014) Fast Object Detection Using Color Features for Colonoscopy Quality Measurements. In: Abdomen and Thoracic Imaging, pp 365–388. Springer
- 164.Muthukudage J, Oh J, Tavanapong W, Wong J, de Groen PC (2012) Color Based Stool Region Detection in Colonoscopy Videos for Quality Measurements. In: Advances in Image and Video Technology, no. 7087 in LNCS, pp 61–72. Springer
- 165.Nageotte F, Zanne P, Doignon C, de Mathelin M (2006) Visual servoing-based endoscopic path following for robot-assisted laparoscopic surgery. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp 2364–2369
- 170.Oh JH, Rajbal MA, Muthukudage JK, Tavanapong W, Wong J, de Groen PC (2009) Real-time phase boundary detection in colonoscopy videos. In: ISPA 2009. Proceedings of 6th International Symposium on Image and Signal Processing and Analysis, 2009, pp 724–729
- 172.Oropesa I, Sánchez-González P, Chmarra MK, Lamata P, Fernández A, Sánchez-Margallo JA, Jansen FW, Dankelman J, Sánchez-Margallo FM, Gómez EJ (2013) EVA: Laparoscopic Instrument Tracking Based on Endoscopic Video Analysis for Psychomotor Skills Assessment. Surg Endosc 27(3):1029–1039
- 175.Padoy N, Hager G (2011) Human-Machine Collaborative surgery using learned models. In: 2011 IEEE International Conference on Robotics and Automation (ICRA), pp 5285–5292
- 176.Parchami M, Mariottini GL (2014) A comparative study on 3-D stereo reconstruction from endoscopic images. In: Proceedings 7th Int’l Conf. on Pervasive Technologies Related to Assistive Environments, p 25. ACM
- 177.Park S, Howe RD, Torchiana DF (2001) Virtual Fixtures for Robotic Cardiac Surgery. In: MICCAI 2001, no. 2208 in LNCS, pp 1419–1420. Springer
- 179.Penne J, Höller K, Stürmer M, Schrauder T, Schneider A, Engelbrecht R, Feußner H, Schmauss B, Hornegger J (2009) Time-of-Flight 3-D Endoscopy. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2009, no. 5761 in LNCS, pp 467–474. Springer
- 180.Pezzementi Z, Voros S, Hager G (2009) Articulated object tracking by rendering consistent appearance parts. In: IEEE International Conference on Robotics and Automation, 2009. ICRA ’09, pp 3940–3947
- 181.Prasath V, Figueiredo I, Figueiredo P, Palaniappan K (2012) Mucosal region detection and 3d reconstruction in wireless capsule endoscopy videos using active contours. In: 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp 4014–4017
- 182.Primus MJ, Schoeffmann K, Böszörmenyi L (2013) Segmentation of recorded endoscopic videos by detecting significant motion changes. In: 2013 11th International Workshop on Content-Based Multimedia Indexing (CBMI), pp 223–228
- 183.Primus MJ, Schoeffmann K, Böszörmenyi L (2015) Instrument Classification in Laparoscopic Videos. In: 2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI). Prague
- 184.Primus MJ, Schoeffmann K, Böszörmenyi L (2016) Temporal segmentation of laparoscopic videos into surgical phases. In: 2016 14th International Workshop on Content-Based Multimedia Indexing (CBMI), pp 1–6. doi: 10.1109/CBMI.2016.7500249
- 185.Przelaskowski A, Jozwiak R (2008) Compression of Bronchoscopy Video: Coding Usefulness and Efficiency Assessment. In: Information Technologies in Biomedicine, no. 47 in Advances in Soft Computing, pp 208–216. Springer
- 186.Puerto GA, Mariottini GL (2012) A Comparative Study of Correspondence-Search Algorithms in MIS Images. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2012, no. 7511 in LNCS, pp 625–633. Springer
- 188.Puerto-Souza GA, Mariottini GL (2014) Wide-Baseline Dense Feature Matching for Endoscopic Images. In: Image and Video Technology, no. 8333 in LNCS, pp 48–59. Springer
- 189.Rangseekajee N, Phongsuphap S (2011) Endoscopy video frame classification using edge-based information analysis. In: Computing in Cardiology, 2011, pp 549–552
- 190.Reeff M, Gerhard F, Cattin PC, Székely G (2006) Mosaicing of endoscopic placenta images. GI Jahrestagung 2006(1):467–474
- 193.Riegler M, Lux M, Griwodz C, Spampinato C, de Lange T, Eskeland SL, Pogorelov K, Tavanapong W, Schmidt PT, Gurrin C, Johansen D, Johansen HA, Halvorsen PA (2016) Multimedia and Medicine: Teammates for Better Disease Detection and Survival. In: Proceedings of the 2016 ACM on Multimedia Conference, MM ’16, pp 968–977. ACM, New York, NY, USA. doi: 10.1145/2964284.2976760
- 195.Roldan Carlos J, Lux M, Giro-i Nieto X, Munoz P, Anagnostopoulos N (2015) Visual information retrieval in endoscopic video archives. In: 2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI), pp 1–6
- 198.Rublee E, Rabaud V, Konolige K, Bradski G (2011) ORB: An efficient alternative to SIFT or SURF. In: 2011 IEEE International Conference on Computer Vision (ICCV), pp 2564–2571
- 199.Rungseekajee N, Lohvithee M, Nilkhamhang I (2009) Informative frame classification method for real-time analysis of colonoscopy video. In: 6th Int’l Conf on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, 2009, ECTI-CON 2009, vol 2, pp 1076–1079
- 200.Rupp S, Elter M, Winter C (2007) Improving the Accuracy of Feature Extraction for Flexible Endoscope Calibration by Spatial Super Resolution. In: 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007. EMBS 2007, pp 6565–6571
- 203.Scharcanski J, Gaviao W (2006) Hierarchical summarization of diagnostic hysteroscopy videos. In: 2006 IEEE International Conference on Image Processing, pp 129–132
- 204.Schoeffmann K, del Fabro M, Szkaliczki T, Böszörmenyi L, Keckstein J (2014) Keyframe extraction in endoscopic video. Multimedia Tools and Applications:1–20
- 207.Sheraizin S, Sheraizin V (2006) Endoscopy Imaging Intelligent Contrast Improvement. In: 27th Annual Int’l Conf of the Engineering in Medicine and Biology Society, 2005, IEEE-EMBS 2005. pp 6551–6554
- 208.Shih T, Chang RC (2005) Digital inpainting - survey and multilayer image inpainting algorithms. In: Third Int’l Conf. on Information Technology and Applications, 2005. ICITA 2005, vol 1, pp 15–24
- 211.Song KT, Chen CJ (2012) Autonomous and stable tracking of endoscope instrument tools with monocular camera. In: 2012 IEEE/ASME Int’l Conf. on Advanced Intelligent Mechatronics (AIM), pp 39–44
- 213.Speidel S, Benzko J, Krappe S, Sudra G, Azad P, Müller-Stich BP, Gutt C, Dillmann R (2009) Automatic classification of minimally invasive instruments based on endoscopic image sequences. In: Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, vol 7261, p 72610A
- 214.Speidel S, Sudra G, Senemaud J, Drentschew M, Müller-Stich BP, Gutt C, Dillmann R (2008) Recognition of risk situations based on endoscopic instrument tracking and knowledge based situation modeling. In: Society of Photo-Optical Instrumentation Engineers (SPIE) Conf. Series, vol 6918, p 69180X
- 215.Spyrou E, Diamantis D, Iakovidis D (2013) Panoramic Visual Summaries for Efficient Reading of Capsule Endoscopy Videos. In: 2013 8th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP), pp 41–46
- 218.Stanek SR, Tavanapong W, Wong JS, Oh J, de Groen PC (2008) Automatic real-time capture and segmentation of endoscopy video. In: Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, vol 6919, p 69190X
- 219.Staub C, Lenz C, Panin G, Knoll A, Bauernschmitt R (2010) Contour-based surgical instrument tracking supported by kinematic prediction. In: 2010 3rd IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), pp 746–752
- 220.Staub C, Osa T, Knoll A, Bauernschmitt R (2010) Automation of tissue piercing using circular needles and vision guidance for computer aided laparoscopic surgery. In: IEEE International Conference on Robotics and Automation (ICRA), 2010, pp 4585–4590
- 221.Stauder R, Okur A, Peter L, Schneider A, Kranzfelder M, Feussner H, Navab N (2014) Random Forests for Phase Detection in Surgical Workflow Analysis. In: Information Processing in Computer-Assisted Interventions, no. 8498 in LNCS, pp 148–157. Springer
- 222.Stehle T (2006) Removal of specular reflections in endoscopic images. Acta Polytechnica: Journal of Advanced Engineering 46(4):32–36
- 223.Stehle T, Hennes M, Gross S, Behrens A, Wulff J, Aach T (2009) Dynamic Distortion Correction for Endoscopy Systems with Exchangeable Optics. In: Bildverarbeitung für die Medizin 2009, Informatik aktuell, pp 142–146. Springer
- 224.Stoyanov D, Mylonas GP, Deligianni F, Darzi A, Yang GZ (2005) Soft-Tissue Motion Tracking and Structure Estimation for Robotic Assisted MIS Procedures. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2005, no. 3750 in LNCS, pp 139–146. Springer
- 225.Stoyanov D, Scarzanella MV, Pratt P, Yang GZ (2010) Real-Time Stereo Reconstruction in Robotically Assisted Minimally Invasive Surgery. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2010, no. 6361 in LNCS, pp 275–282. Springer
- 226.Stoyanov D, Yang GZ (2005) Removing specular reflection components for robotic assisted laparoscopic surgery. In: IEEE International Conference on Image Processing, 2005. ICIP 2005, vol 3, pp III–632–5
- 228.Sudra G, Speidel S, Fritz D, Müller-Stich BP, Gutt C, Dillmann R (2007) MEDIASSIST: medical assistance for intraoperative skill transfer in minimally invasive surgery using augmented reality. In: Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, vol 6509, p 65091O
- 229.Sugimoto M, Yasuda H, Koda K, Suzuki M, Yamazaki M, Tezuka T, Kosugi C, Higuchi R, Watayo Y, Yagawa Y, Uemura S, Tsuchiya H, Azuma T (2010) Image overlay navigation by markerless surface registration in gastrointestinal, hepatobiliary and pancreatic surgery. J Hepatobiliary Pancreat Sci 17(5):629–636
- 231.Suwelack S, Röhl S, Dillmann R, Wekerle AL, Kenngott H, Müller-Stich B, Alt C, Speidel S (2012) Quadratic Corotated Finite Elements for Real-Time Soft Tissue Registration. In: Computational Biomechanics for Medicine, pp 39–50. Springer, New York
- 238.Tjoa MP, Krishnan SM et al (2003) Feature extraction for the analysis of colon status from the endoscopic images. BioMedical Engineering OnLine 2(9):1–17
- 239.Tokgozoglu H, Meisner E, Kazhdan M, Hager G (2012) Color-based hybrid reconstruction for endoscopy. In: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp 8–15
- 242.Totz J, Mountney P, Stoyanov D, Yang GZ (2011) Dense Surface Reconstruction for Enhanced Navigation in MIS. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2011, no. 6891 in LNCS, pp 89–96. Springer
- 243.Tsevas S, Iakovidis DK, Maroulis D, Pavlakis E (2008) Automatic frame reduction of Wireless Capsule Endoscopy video. In: 8th IEEE International Conference on BioInformatics and BioEngineering, 2008. BIBE 2008, pp 1–6
- 244.Twinanda AP, Marescaux J, de Mathelin M, Padoy N (2015) Classification approach for automatic laparoscopic video database organization. Int J Comput Assist Radiol Surg:1–12
- 245.Twinanda AP, de Mathelin M, Padoy N (2014) Fisher Kernel Based Task Boundary Retrieval in Laparoscopic Database with Single Video Query. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2014, no. 8675 in LNCS, pp 409–416. Springer
- 248.Vilarino F, Spyridonos P, Pujol O, Vitria J, Radeva P, de Iorio F (2006) Automatic Detection of Intestinal Juices in Wireless Capsule Video Endoscopy. In: 18th International Conference on Pattern Recognition, 2006. ICPR 2006, vol 4, pp 719–722
- 249.Visentini-Scarzanella M, Mylonas GP, Stoyanov D, Yang GZ (2009) i-brush: A gaze-contingent virtual paintbrush for dense 3d reconstruction in robotic assisted surgery. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2009, pp 353–360. Springer
- 251.Vogt F, Krüger S, Niemann H, Schick C (2003) A system for real-time endoscopic image enhancement. Medical Image Computing and Computer-Assisted Intervention-MICCAI 2003:356–363
- 252.Vogt F, Paulus D, Heigl B, Vogelgsang C, Niemann H, Greiner G, Schick C (2002) Making the Invisible Visible: Highlight Substitution by Color Light Fields. Conference on Colour in Graphics Imaging, and Vision 2002(1):352–357
- 253.Vogt F, Paulus D, Niemann H (2002) Highlight substitution in light fields. In: 2002 International Conference on Image Processing. 2002. Proceedings, vol 1, pp I–637–I–640
- 254.Voros S, Haber GP, Menudet JF, Long JA, Cinquin P (2010) ViKY Robotic Scope Holder: Initial Clinical Experience and Preliminary Results Using Instrument Tracking. IEEE/ASME Trans Mechatron 15(6):879–886
- 255.Voros S, Hager GD (2008) Towards real-time tool-tissue interaction detection in robotically assisted laparoscopy. In: BioRob 2008. 2nd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics, 2008, pp 562–567
- 257.Wang P, Krishnan SM, Kugean C, Tjoa MP (2001) Classification of endoscopic images based on texture and neural network. In: Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2001, vol 4, pp 3691–3695
- 262.Weede O, Dittrich F, Worn H, Jensen B, Knoll A, Wilhelm D, Kranzfelder M, Schneider A, Feussner H (2012) Workflow analysis and surgical phase recognition in minimally invasive surgery. In: 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp 1080–1074
- 264.Wengert C, Reeff M, Cattin P, Székely G (2006) Fully Automatic Endoscope Calibration for Intraoperative Use. In: Bildverarbeitung für die Medizin 2006, Informatik aktuell, pp 419–423. Springer
- 265.Wieringa FP, Bouma H, Eendebak PT, van Basten JPA, Beerlage HP, Smits GA, Bos JE (2014) Improved depth perception with three-dimensional auxiliary display and computer generated three-dimensional panoramic overviews in robot-assisted laparoscopy. J Med Imaging 1(1):015001
- 267.Wolf R, Duchateau J, Cinquin P, Voros S (2011) 3d Tracking of Laparoscopic Instruments Using Statistical and Geometric Modeling. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2011, no. 6891 in LNCS, pp 203–210. Springer
- 270.Wu C, Tai X (2009) Application of Color Change Feature in Gastroscopic Image Retrieval. In: Sixth Int’l Conf on Fuzzy Systems and Knowledge Discovery, 2009. FSKD ’09, vol 3, pp 388–392
- 271.Xia S, Ge D, Mo W, Zhang Z (2005) A Content-Based Retrieval System for Endoscopic Images. In: 27th Annual International Conference of the Engineering in Medicine and Biology Society, 2005, IEEE-EMBS 2005. pp 1720–1723
- 272.Xia S, Mo W, Yan Y, Chen X (2003) An endoscopic image retrieval system based on color clustering method. In: Third International Symposium on Multispectral Image Processing and Pattern Recognition, vol 5286, pp 410–413
- 273.Yamaguchi T, Nakamoto M, Sato Y, Konishi K, Hashizume M, Sugano N, Yoshikawa H, Tamura S (2004) Development of a camera model and calibration procedure for oblique-viewing endoscopes. Computer aided surgery: official journal of the Int’l Society for Computer Aided Surgery 9(5):203–214
- 274.Yao R, Wu Y, Yang W, Lin X, Chen S, Zhang S (2010) Specular Reflection Detection on Gastroscopic Images. In: 2010 4th Int’l Conf. on Bioinformatics and Biomedical Engineering (iCBBE), pp 1–4
- 277.Zhang C, Helferty JP, McLennan G, Higgins WE (2000) Nonlinear distortion correction in endoscopic video images. In: Proceedings. 2000 International Conference on Image Processing, 2000, vol 2, pp 439–442
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.