Challenges in Multi-centric Generalization: Phase and Step Recognition in Roux-en-Y Gastric Bypass Surgery

Most studies on surgical activity recognition utilizing artificial intelligence (AI) have focused mainly on recognizing one type of activity from small and mono-centric surgical video datasets. It remains speculative whether those models would generalize to other centers. In this work, we introduce a large multi-centric multi-activity dataset consisting of 140 videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers: the University Hospital of Strasbourg (StrasBypass70) and Inselspital, Bern University Hospital (BernBypass70). The dataset has been fully annotated with phases and steps. Furthermore, we assess the generalizability of different deep learning models and benchmark them in 7 experimental studies: 1) Training and evaluation on BernBypass70; 2) Training and evaluation on StrasBypass70; 3) Training and evaluation on MultiBypass140; 4) Training on BernBypass70, evaluation on StrasBypass70; 5) Training on StrasBypass70, evaluation on BernBypass70; 6) Training on MultiBypass140, evaluation on BernBypass70; and 7) Training on MultiBypass140, evaluation on StrasBypass70. The models' performance is markedly influenced by the training data. The worst results were obtained in experiments 4) and 5), confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data, experiments 6) and 7), improves the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and validation (experiments 1) and 2)). MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers. Accordingly, the generalization experiments demonstrate a remarkable difference in model performance. These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflows.


Introduction
The emerging field of Surgical Data Science (SDS) aims to improve the quality of interventional healthcare by collecting, organizing, analyzing, and modeling surgical data (Maier-Hein et al., 2022). A principal element of SDS is analyzing intraoperative data collected in the operating room (OR) to model surgical workflows, which could eventually improve patient outcomes by providing intraoperative assistance. Recognizing surgical activities from endoscopic videos is central to this effort: phases (Twinanda et al., 2017; Garrow et al., 2020; Ramesh et al., 2021; Demir et al., 2023), steps (Charriere et al., 2014; Ramesh et al., 2021, 2023), modeling surgical actions, for example through the use of action triplets (Nwoye et al., 2020, 2022), and detection and localization of surgical tools (Hajj et al., 2018; Vardazaryan et al., 2018) are some of the popular tasks studied in the community.
Given the data-driven nature of these recent AI methods in SDS, the availability of large labeled surgical video datasets is paramount. Datasets have been curated in the community to study phase recognition across different types of surgeries: Cholec80 (Twinanda et al., 2017) for laparoscopic cholecystectomy, Bypass40 (Ramesh et al., 2021) for laparoscopic Roux-en-Y gastric bypass (LRYGB), laparoscopic sleeve gastrectomy (Hashimoto et al., 2019), transanal total mesorectal excision (Kitaguchi et al., 2021), and laparoscopic inguinal hernia repair (Takeuchi et al., 2022). Nevertheless, datasets to train AI models for other, more fine-grained tasks, such as recognition of surgical tools, action triplets, or safe dissection zones, have only been collected for specific surgeries. For example, Bypass40 (Ramesh et al., 2021) and CATARACTS (https://cataracts2020.grand-challenge.org/) have been annotated with steps alongside phases for LRYGB and cataract surgeries, CholecT50 (Nwoye et al., 2022) contains surgical action triplet labels for laparoscopic cholecystectomy surgeries, and safe dissection zones have been studied for cholecystectomy (Madani et al., 2020). Furthermore, these labeled datasets have been collected from single medical centers. Training on mono-centric datasets limits a model's generalizability to datasets from other centers. To overcome this generalization gap, multi-centric datasets representing different surgical techniques and workflows are warranted (Kitaguchi et al., 2022; Mascagni et al., 2022; Kassem et al., 2023; Wagner et al., 2023). One such attempt is the HeiChole dataset, which consists of 33 videos of laparoscopic cholecystectomy performed in 3 different medical centers (Wagner et al., 2023). However, multi-centric datasets are rare, as they are difficult to acquire and annotate consistently.
Besides, only a few works have explored recognizing activities at different levels of granularity. Ramesh et al. (2021) and Valderrama et al. (2022) have attempted joint phase and step recognition using endoscopic video datasets from a single medical center. The work most closely related to this paper in objectives is HeiChole (Wagner et al., 2023), which created a multi-centric dataset of 33 videos for phase recognition (7 phases). To this end, this study has two objectives: creating a large multi-centric dataset for a complex surgical procedure, LRYGB, and recognizing activities at multiple levels. The contributions of this work are threefold:
1. Introduction of a multi-centric dataset of 140 laparoscopic Roux-en-Y gastric bypass videos from two centers (Strasbourg and Bern).
2. Full annotation of the dataset with an LRYGB ontology of 12 phases and 46 steps, which will be publicly released. The code and evaluation scripts will be made available alongside the dataset.
3. Evaluation of AI models for phase and step recognition and assessment of multi-centric model generalization.

Datasets and Annotations
BernBypass70 is a dataset consisting of 70 surgical videos of LRYGB performed at Inselspital, Bern University Hospital, Switzerland. The surgeries were performed by three surgeons. The videos were recorded at a resolution of 720 × 576 pixels at 25 frames per second (fps).
StrasBypass70, extending the Bypass40 (Ramesh et al., 2021) dataset, is a collection of 70 videos of LRYGB surgeries performed by surgeons at the University Hospital of Strasbourg, France. The videos were recorded at a resolution of 854 × 480 or 1920 × 1080 pixels at 25 fps and were uniformly edited to a resolution of 854 × 480.
MultiBypass140 is the combined dataset of 140 videos from the Bern and Strasbourg medical centers. Sample images of the two datasets are presented in Figure 1. All videos have been anonymized by blacking out potentially identifying frames outside the patient's body. These out-of-body frames were detected using our publicly released OoBNet model (Lavanchy et al., 2023b) and verified by manual review.

Annotations.
Two board-certified surgeons with more than 10 years of clinical practice and extensive experience in surgical video analysis annotated the MultiBypass140 dataset with activities at two levels of granularity, i.e., phases and steps, using the MOSaiC software (Mazellier et al., 2023). The annotation ontology of the LRYGB procedure, as defined in Lavanchy et al. (2023a), consists of 12 phases and 46 finer-grained steps, presented in Table 1. A detailed description of all phases and steps can be found in the supplementary material. The inter-rater reliability (Cohen's kappa) of the ontology between the two surgeons is 96% for phases and 81% for steps (Lavanchy et al., 2023a).
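The inter-rater agreement above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance from each annotator's label marginals. A minimal sketch (not the authors' evaluation code; function name is ours):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two annotators labeling the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where both raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)
```

Applied to the per-frame phase (or step) labels of both surgeons, this yields the 96% and 81% figures reported above.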

Data Statistics.
On average, the surgical duration is 110 and 72 minutes, and the total number of frames at 1 fps amounts to 464,794 and 305,907 in StrasBypass70 and BernBypass70, respectively. Detailed data characteristics of the multi-centric dataset can be found in the supplementary material.
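Since frames are sampled at 1 fps, the frame counts double as total video seconds, so the reported mean durations can be sanity-checked directly:

```python
# At 1 fps, total frames ~ total seconds of video, so the mean duration
# in minutes is total_frames / n_videos / 60.
stras_frames, bern_frames, n_videos = 464_794, 305_907, 70
stras_mean_min = stras_frames / n_videos / 60   # ~ 110.7 minutes
bern_mean_min = bern_frames / n_videos / 60     # ~ 72.8 minutes
```

These values match the per-center averages of 111±33 and 73±20 minutes reported in the workflow analysis below.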

Model architecture
A state-of-the-art deep learning model for surgical activity recognition, MTMS-TCN (Ramesh et al., 2021), was used for the different experiments presented in this paper. The pipeline of MTMS-TCN consists of two stages: first, a multi-task Convolutional Neural Network (CNN) (ResNet-50 (He et al., 2016)) is employed to extract visual features from images; then, a multi-task multi-stage causal Temporal Convolutional Network (TCN) refines these features and extracts temporal information for joint phase and step recognition. A schematic representation of the model architecture is displayed in Figure 2.
Spatial model: ResNet-50 (He et al., 2016) is a popular CNN architecture that has been heavily utilized in the computer vision community for activity recognition. Due to its success, ResNet-50 is used as the visual feature extractor and trained in a multi-task setup for joint phase and step recognition. The model is initialized with weights pre-trained on ImageNet (a public dataset of over 14 million images for object recognition) and trained using the Adam optimizer for 30 epochs.
Temporal model: MTMS-TCN is a two-stage TCN model trained in a multi-task learning setup for 200 epochs on the video features extracted by the CNN. Each stage of the TCN consists of causal convolutions, which utilize only information from past frames, and dilated convolutions with exponentially increasing dilation factors, which facilitate capturing long temporal dependencies. This work utilizes MTMS-TCN with a single-stage temporal model, as a second stage does not improve performance (Ramesh et al., 2021).
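The two key ingredients of the temporal model, causality and dilation, can be illustrated with a single-channel sketch (this is an illustration of the operation, not the MTMS-TCN implementation; the function name is ours):

```python
def causal_dilated_conv1d(x, kernel, dilation=1):
    """Causal 1-D convolution: the output at time t depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... (left-padded with zeros),
    so no future frames leak into the prediction."""
    k = len(kernel)
    pad = (k - 1) * dilation          # left padding keeps the conv causal
    padded = [0.0] * pad + list(x)
    out = []
    for t in range(len(x)):
        # taps at t, t - dilation, t - 2*dilation, ... over the padded signal
        out.append(sum(kernel[i] * padded[t + pad - i * dilation]
                       for i in range(k)))
    return out

# Stacking layers with dilations 1, 2, 4, ... grows the receptive field
# exponentially: with kernel size k and L layers, a frame sees
# 1 + (k - 1) * (2**L - 1) past frames.
```

With kernel `[1, 1]` this is a causal moving sum; increasing the dilation widens the temporal context without adding parameters, which is how the single-stage TCN captures long-range dependencies across a full surgery.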
Seven experimental setups were used to analyze the generalizability of MTMS-TCN:
1. Training and evaluation on BernBypass70
2. Training and evaluation on StrasBypass70
3. Training and evaluation on the joint MultiBypass140 dataset
4. Training on BernBypass70 and evaluation on StrasBypass70
5. Training on StrasBypass70 and evaluation on BernBypass70
6. Training on MultiBypass140 and evaluation on BernBypass70
7. Training on MultiBypass140 and evaluation on StrasBypass70

Fig. 4. Total occurrence of steps in the videos from the two medical centers.
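The seven train/evaluation splits described in the text can be encoded as a simple configuration table (dataset names only; loaders and paths omitted, structure is ours):

```python
# Experiment id -> (training dataset, evaluation dataset)
EXPERIMENTS = {
    1: ("BernBypass70", "BernBypass70"),
    2: ("StrasBypass70", "StrasBypass70"),
    3: ("MultiBypass140", "MultiBypass140"),
    4: ("BernBypass70", "StrasBypass70"),    # cross-center
    5: ("StrasBypass70", "BernBypass70"),    # cross-center
    6: ("MultiBypass140", "BernBypass70"),   # multi-centric training
    7: ("MultiBypass140", "StrasBypass70"),  # multi-centric training
}
```

Experiments 1-3 measure in-distribution performance, 4-5 probe cross-center generalization, and 6-7 test whether multi-centric training closes the gap.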

Model evaluation
Model performance was assessed by comparing human ground truth annotations with model predictions, measuring accuracy, precision, recall, and F1-score. Accuracy is the proportion of correct (true positive and true negative) predictions among all predictions. Precision, also referred to as positive predictive value, is the proportion of true positive predictions among all positive (true and false positive) predictions. Recall, also referred to as sensitivity, is the proportion of true positive predictions among all relevant (true positive and false negative) predictions. F1-score is the harmonic mean of precision and recall. As in previous works, performance metrics were averaged across phases and steps per video and then across videos (Ramesh et al., 2021; Czempiel et al., 2020).
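The class-wise computation and macro-averaging described above can be sketched as follows (an illustrative re-implementation of the standard definitions, not the authors' evaluation scripts; the function name is ours):

```python
def per_class_prf(y_true, y_pred, labels):
    """Precision, recall, and F1 per class, macro-averaged over classes,
    as done here per video before averaging across videos."""
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

Macro-averaging weights every phase or step equally regardless of its duration, which matters here because rare phases would otherwise be drowned out by long ones.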

Results & Discussions
This is the first study to evaluate deep learning models for multi-level activity recognition, i.e., phases and steps, on a large multi-centric video dataset of LRYGB procedures.A common ontology of phases and steps has been designed to capture the surgical workflow followed in the two medical centers.In this section, we present the results on the new multi-centric video dataset of LRYGB procedures and highlight our findings.

Workflow: Strasbourg vs Bern. Differences in surgical workflow between medical centers are natural and common, as different surgeons perform the interventions. StrasBypass70 has an average video duration of 111±33 minutes, and the average workflow includes 10 phases and 33 steps. BernBypass70 has an average video duration of 73±20 minutes, with the average workflow consisting of 8 phases and 27 steps. To understand the differences in the LRYGB workflows followed at the Strasbourg and Bern medical centers, we visualize and compare the phase and step occurrences in Figures 3 & 4. We also visualize the surgical workflows, modeled as phase transition graphs, in Figure 1.
In StrasBypass70, the occurrence of phases and steps is evenly distributed: a given phase or step either occurs in most videos of the dataset or does not occur at all. In contrast, in BernBypass70 only some videos contain all phases and steps; most videos contain a subset of them. These differences in the distribution of phases and steps between StrasBypass70 and BernBypass70 result from differences in surgical technique and workflow. The phase transition graph visualized in Figure 1 further exemplifies these differences across centers. In StrasBypass70 the omentum is routinely divided (P3) and both mesenteric defects are routinely closed (P7 & P9), which is not routinely done in BernBypass70. Given the hierarchical structure of phases and steps, for every missing phase the corresponding steps are missing as well. Hence, the average BernBypass70 video contains 2 phases and 6 steps fewer than the average StrasBypass70 video. This finding is also reflected by the average video duration, which is 38 minutes shorter in BernBypass70 than in StrasBypass70.
To independently analyze the performance of deep learning models on each center/dataset, we train separate models on BernBypass70, StrasBypass70, and MultiBypass140 and evaluate them on the respective test sets. The phase and step recognition results are presented in Tables 2 & 3. All models, both spatial and spatio-temporal, achieve considerably lower performance across all metrics on BernBypass70 than on StrasBypass70. For instance, on the phase recognition task, the CNN (ResNet-50) spatial model shows 8% lower accuracy and a staggering 28% lower F1-score on BernBypass70 compared to StrasBypass70. The spatio-temporal model, MTMS-TCN, performs 5% lower in accuracy and 15-17% lower on all other metrics on BernBypass70 than on StrasBypass70. Similarly, for step recognition, CNN and MTMS-TCN achieve 12% and 8-10% lower scores, respectively, on BernBypass70 than on StrasBypass70 across all metrics. These differences are direct consequences of the different surgical workflows followed in the two centers and are consistent with previous work on laparoscopic cholecystectomy (Kassem et al., 2023). Given that many phases and steps (Figures 3 & 4) are not carried out routinely in Bern compared to the Strasbourg medical center, their class distribution is notably skewed in BernBypass70, which makes recognition of phases and steps particularly challenging for deep learning models on this dataset. This can be witnessed in Figures 5 & 6, where the model performs best on videos following the common workflow (P1→P2→P3→...) in both datasets, and worst when there is an unexpected flow of phases/steps during surgery (P4→P10→P8 or P1→P12→P1→P4).
Lastly, all deep learning models trained and evaluated on the combined MultiBypass140 dataset perform better than on BernBypass70, but worse than on StrasBypass70.
Recognition: Cross-center. To examine the ability of the models to transfer knowledge learnt from the data of one center to the other, we train CNN and MTMS-TCN on one center and evaluate them on the other (experiments 4, 5, 6, & 7). The experimental results are tabulated in Table 4.
The performance of CNN & MTMS-TCN in these experiments is considerably inferior to training and evaluation on the individual mono-centric datasets (experiments 1 & 2). CNN & MTMS-TCN trained on BernBypass70 and evaluated on StrasBypass70 without any fine-tuning achieve 57% & 64% accuracy and 32% & 33% F1-score. This is due to the significant differences in the workflow followed in the Bern center, where many phases and steps are not routinely carried out. Inversely, CNN & MTMS-TCN trained on StrasBypass70 also degrade substantially when evaluated on BernBypass70 (Table 4). Both CNN & MTMS-TCN trained on MultiBypass140 and evaluated on the mono-centric datasets (experiments 6 & 7) achieve performance close to that obtained when training and evaluating on the individual datasets (experiments 1 & 2), for both the phase and step recognition tasks. This shows the capacity of deep learning models to learn the variations existing in the data and domain without degradation in performance.
Challenges. Despite its multi-centric design, this study is limited by the fact that datasets from only two centers are involved. The significant variability in surgical technique and image domain makes the transfer of deep learning models between centers challenging. Further studies adding video datasets from other clinical centers are needed to capture the variability in surgical techniques and dataset distributions; such studies could also facilitate standardizing surgical procedures across the globe. However, labeling large surgical datasets from different centers is difficult and time-consuming. Future studies should therefore focus on developing deep learning models that learn from a large corpus of unlabeled data from multiple centers and evaluate their performance on multi-centric datasets.

Conclusion
This study demonstrates the need to expose deep learning models to variations in surgical technique and workflow to avoid the generalization gap described in the literature (Kitaguchi et al., 2022; Bar et al., 2020). With extensive experiments, the origin of the performance differences between our datasets has been investigated. It has been shown that dataset distribution and size, resulting from the different LRYGB techniques and workflows between centers, have a major impact on model performance. This work highlights the importance of multi-centric datasets for the training and evaluation of AI models in surgical video analysis. The public release of the datasets and code of this work will inspire and foster future research in developing multi-centric, generalizable AI models for the recognition of surgical activities.

Fig. 2 .
Fig. 2. Schematic representation of the model architecture. In stage I, the input images are processed by a ResNet-50 convolutional neural network to extract visual feature vectors, which are passed to stage II. In stage II, the feature vectors of subsequent images of a video are stacked and processed by a multi-stage temporal convolutional network (MS-TCN). This introduces temporal awareness into the model predictions.

Fig. 3 .
Fig. 3. Total occurrence of phases in the videos from the two medical centers.

Fig. 5 .
Fig. 5. Best (upper row) and worst (lower row) pairs of ground truth annotations (top) and model predictions (MTMS-TCN, bottom) for all 3 datasets. Every pair corresponds to one video, and the width of each phase is relative to its duration.

Fig. 6 .
Fig. 6. Best (upper row) and worst (lower row) pairs of ground truth annotations (top) and model predictions (MTMS-TCN, bottom) for all 3 datasets. Every pair corresponds to one video, and the width of each step is relative to its duration.
Ramesh) and French state funds managed by the ANR within the National AI Chair program under Grant ANR-20-CHIA-0029-01 (Nicolas Padoy) and the Investments for the Future program under Grant ANR-10-IAHU-02 (IHU Strasbourg). This work was also supported by French state funds managed within the Investissements d'Avenir program by BPI France (project CONDOR). Access to the HPC resources of IDRIS was granted under allocation AD011012832R1.

Table 1 .
List of phases and steps of LRYGB procedure.