An automated surgical decision-making framework for partial or radical nephrectomy based on 3D-CT multi-level anatomical features in renal cell carcinoma

Objectives To determine whether 3D-CT multi-level anatomical features can provide a more accurate prediction of surgical decision-making for partial or radical nephrectomy in renal cell carcinoma. Methods This retrospective study is based on multi-center cohorts. A total of 473 participants with pathologically proven renal cell carcinoma were split into an internal training set and an external testing set. The training set contains 412 cases from five open-source cohorts and two local hospitals; the external testing set includes 61 participants from another local hospital. The proposed automatic analytic framework comprises the following modules: a 3D kidney and tumor segmentation model built with 3D-UNet, a multi-level feature extractor based on the region of interest, and a partial or radical nephrectomy prediction classifier based on XGBoost. A fivefold cross-validation strategy was used to obtain a robust model, and a quantitative model interpretation method, SHapley Additive exPlanations (SHAP), was applied to explore the contribution of each feature. Results In the prediction of partial versus radical nephrectomy, the combination of multi-level features achieved better performance than any single-level feature. In the internal validation, the AUROCs of the five cross-validation folds were 0.93 ± 0.1, 0.94 ± 0.1, 0.93 ± 0.1, 0.93 ± 0.1, and 0.93 ± 0.1, respectively. The AUROC of the optimal model was 0.82 ± 0.1 in the external testing set. The tumor shape feature Maximum 3D Diameter played the most important role in the model's decisions. Conclusions The automated surgical decision framework for partial or radical nephrectomy based on 3D-CT multi-level anatomical features exhibits robust performance in renal cell carcinoma. The framework points the way towards guiding surgery through medical images and machine learning. Clinical relevance statement We proposed an automated analytic framework that can assist surgeons in decision-making for partial or radical nephrectomy. 
The framework points the way towards guiding surgery through medical images and machine learning. Key Points • 3D-CT multi-level anatomical features provide a more accurate prediction of surgical decision-making for partial or radical nephrectomy in renal cell carcinoma. • Built on multicenter data with a strict fivefold cross-validation strategy, and evaluated on both an internal validation set and an external testing set, the framework can be easily transferred to new datasets and tasks. • A quantitative decomposition of the prediction model was conducted to explore the contribution of each extracted feature. Supplementary Information The online version contains supplementary material available at 10.1007/s00330-023-09812-9.


ELECTRONIC SUPPLEMENTARY MATERIAL
S1 CT sequences and acquisition for patients.

CT scanning protocol
In local hospital I, CT images were acquired with a dual-source spiral CT system (SOMATOM Force, Siemens Medical Systems, Forchheim, Germany). Local hospital II used the same dual-source spiral CT system (SOMATOM Force, Siemens Medical Systems, Forchheim, Germany). Local hospital III performed CT scanning with a 128-slice spiral CT system (Brilliance iCT, Philips Healthcare). CT protocol data for the five open-source datasets were missing. The CT scanning parameters are shown in Table S1.

In this study, we strictly abided by the following inclusion criteria:
1. According to the new system [2], the goal of PN is reached when (1) surgical margins are negative, (2) WIT is <20 min, and (3) no major complications are observed. (For samples from the local hospitals we strictly enforced this criterion; the open-source data were filtered by urologists with reference to it.)
2. For the nephrographic phase, the CT values of blood in the abdominal aorta, the renal cortex, and the renal medulla are similar (the difference in CT value is less than or equal to 20 HU).
3. For the corticomedullary phase, the CT value of blood in the abdominal aorta and the renal artery is greater than or equal to 250 HU (to ensure a sufficient contrast agent concentration); the renal cortex is significantly enhanced, and the renal medulla is slightly or not enhanced.

Participant data
Five cohorts (C4KC-KiTS, CPTAC-CCRCC, TCGA-KIRC, TCGA-KIRP, and TCGA-KICH) from The Cancer Imaging Archive (TCIA) [1] and three local-hospital datasets from the Eighth Affiliated Hospital of Sun Yat-sen University, Sun Yat-sen University Cancer Center, and Shenzhen Luohu People's Hospital were incorporated into this study. C4KC-KiTS includes 210 CT scans with kidney/tumor segmentations. Among the TCGA, CPTAC, and local-hospital data, some images were disordered, low-quality chest or brain scans, or postsurgical scans without kidneys. Two well-trained clinical graduate students and two radiologists performed the data cleaning and filtering, and cases with corticomedullary-phase CT images containing renal regions were kept. The clinical, pathologic, and surgical data of the TCGA cohorts were collected from cBioPortal [3]. In total, we collected 473 cases comprising 473 corticomedullary-phase CT images and related clinicopathologic data.

CT image preparation and segmentation criteria
First, the primary DICOM files were converted to NIfTI with the dcm2niix tool [4]. Second, low-quality, disorganized, postsurgical, or MRI images, as well as CT scans without the abdominal area or without contrast enhancement, were removed by two well-trained clinical graduate students and two radiologists. Third, 141 CT images and segmentation results from the C4KC-KiTS cohort were used to train a kidney/tumor pre-segmentation model based on a modified 2D Mask R-CNN [5, 6]. Fourth, this model was used to predict kidney/tumor segmentations for the 271 CT images from the other datasets. Last, two well-trained clinical graduate students and two radiologists manually checked and revised the segmentation results in ITK-SNAP [7]. After these steps, 473 corticomedullary-phase CT images with reference kidney/tumor segmentations were obtained.

CT image feature extraction
The Pyradiomics module [8] from PyPI was used to extract texture, morphological, and statistical features of the kidney and tumor from the raw CT images and their segmentations. For each corticomedullary-phase CT image, 200 features of the kidney and tumor region were extracted. In addition, we established two feature extractors for stage and grade and performed dimension reduction on the segmented voxels of the CT images (Fig. 3). First, we resampled the raw CT images to a voxel spacing of 2.0 × 1.5 × 1.5 mm. Second, the mean and standard deviation of the kidney and tumor regions were used to normalize all images. Third, we calculated the central coordinate of the joint kidney-tumor region. Fourth, a 64 × 64 × 64 voxel region (12.8 × 9.6 × 9.6 cm³) containing only tumor and kidney was cropped around this center; empty parts of the cropped images were filled with the minimal pixel value. The two stage and grade feature extractors were built on a 3D ResNet-18, an 18-layer CNN composed of 1 cross-layer connection and 17 stacked convolutional layers (Fig. S2) [9]; 64 deep-learning features were extracted from the cross-layer connection of each extractor. Lastly, singular value decomposition (SVD) and principal component analysis (PCA), implemented with scikit-learn, were used to reduce the cropped region to 64 and 256 dimensions, respectively. SVD is a general method of decomposing a matrix into three matrices: A = UΣV^T. After comparing model performance, the best feature combination (Pyradiomics (200) + deep learning (64 + 64) + SVD (64) + PCA (256)) was selected, so a total of 648 features were used for the classification of partial vs radical nephrectomy.

Machine learning model and algorithm
The 473 cases were randomly split into a training set (80%, 378 cases) and a testing set (20%, 95 cases). For image segmentation, the state-of-the-art nnU-Net [10] was used to build the kidney/tumor segmentation model on the training set, with five-fold cross-validation applied during training. Extreme gradient boosting (XGBoost [11], v1.3.3) was used to predict the surgical procedure for the kidney tumor (partial versus radical nephrectomy). For hyperparameter tuning of XGBoost, we used grid search to find the best values of the parameters "min_child_weight", "max_depth", and "n_estimators".

Model explaining with SHAP values
To evaluate the contribution of the imaging features to the model prediction, the SHapley Additive exPlanations (SHAP) [12] value of each feature for each sample was calculated. The SHAP values quantify each feature's contribution to the model prediction. In the summary plot, the shade of a dot indicates the feature value, with redder dots designating larger values and bluer dots lower values; the top-20 features ranked by mean absolute SHAP value are those with the greatest influence on the classification.

Fig. S1(a) The accuracy of the surgical approach prediction model in the internal validation set under five-fold cross-validation.
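The feature-combination step described in the feature-extraction section (Pyradiomics (200) + deep learning (64 + 64) + SVD (64) + PCA (256) = 648 features) can be sketched as follows. This is a minimal illustration with random stand-in arrays, not the study's data; the array names and toy sizes are assumptions, and only the per-block feature counts follow the text.

```python
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the extracted feature tables
# (feature counts follow the text; values and case count are random/toy).
n_cases = 300
radiomics = rng.normal(size=(n_cases, 200))  # Pyradiomics features
dl_stage = rng.normal(size=(n_cases, 64))    # stage extractor (3D ResNet-18)
dl_grade = rng.normal(size=(n_cases, 64))    # grade extractor (3D ResNet-18)
voxels = rng.normal(size=(n_cases, 500))     # flattened cropped voxels (toy size)

# Reduce the voxel block to 64 dimensions by SVD and 256 by PCA.
svd_feats = TruncatedSVD(n_components=64, random_state=0).fit_transform(voxels)
pca_feats = PCA(n_components=256, random_state=0).fit_transform(voxels)

# Concatenate all blocks: 200 + 64 + 64 + 64 + 256 = 648 features per case.
features = np.hstack([radiomics, dl_stage, dl_grade, svd_feats, pca_feats])
print(features.shape)  # (300, 648)
```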
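The grid search over XGBoost hyperparameters can be sketched with scikit-learn's GridSearchCV and five-fold cross-validation. To keep the sketch runnable without the xgboost package, GradientBoostingClassifier is substituted for XGBoost, and "min_samples_leaf" stands in for xgboost's "min_child_weight"; with xgboost installed, xgboost.XGBClassifier and the original parameter names slot in directly. The data here are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=120) > 0).astype(int)  # toy labels

param_grid = {
    "max_depth": [2, 3],
    "n_estimators": [50, 100],
    # stand-in for xgboost's "min_child_weight" regularisation parameter
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid,
    cv=5,                 # five-fold cross-validation, as in the text
    scoring="roc_auc",    # AUROC, the metric reported in the study
)
search.fit(X, y)
print(search.best_params_)
```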
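The top-20 ranking by mean absolute SHAP value described in the SHAP section reduces to a simple computation once the per-sample SHAP matrix is available (in practice from the shap package's TreeExplainer applied to the XGBoost model). The matrix and feature names below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical SHAP value matrix: one row per case, one column per feature.
feature_names = [f"feat_{i}" for i in range(30)]
shap_values = rng.normal(size=(100, 30)) * rng.uniform(0.1, 2.0, size=30)

# Global importance = mean absolute SHAP value per feature,
# the quantity used to rank features in the summary plot.
importance = np.abs(shap_values).mean(axis=0)
order = np.argsort(importance)[::-1]
top20 = [feature_names[i] for i in order[:20]]
print(top20[:5])
```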