Multi-atlas Based Segmentation Editing with Interaction-Guided Constraints
We propose a novel multi-atlas based segmentation method for the editing scenario, in which an incomplete segmentation is given along with a set of training label images. Unlike previous multi-atlas based methods, which rely solely on appearance features, we incorporate interaction-guided constraints to find appropriate training labels and to derive their voting weights. Specifically, we divide the user interactions, provided on erroneous parts, into multiple local interaction combinations, and then locally search for training label patches that match both each interaction combination and the previous segmentation. We then estimate the new segmentation by fusing the selected label patches, with each patch weighted according to its distance to the interactions. Since the label patches are drawn from different interaction combinations, our method can represent various shape changes even with limited training labels and few user interactions. Because it requires neither image information nor expensive learning steps, it can be conveniently applied to most editing problems. To demonstrate its performance, we apply our method to editing segmentations of three challenging data sets: prostate CT, brainstem CT, and hippocampus MR. The results show that our method outperforms existing editing methods on all three data sets.
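The core fusion step described above can be illustrated with a minimal sketch. The function name, the Gaussian form of the distance-based weighting, and the majority-vote threshold below are illustrative assumptions, not the paper's exact formulation; the sketch only shows the general idea of weighting each selected label patch by its distance to the nearest user interaction before fusing.

```python
import numpy as np

def fuse_label_patches(label_patches, patch_centers, interaction_points, sigma=5.0):
    """Illustrative distance-weighted label fusion (hypothetical helper, not
    the paper's implementation).

    Each selected training label patch casts a vote for the new segmentation;
    its weight decays with the distance from the patch center to the nearest
    user interaction point (Gaussian decay assumed here for simplicity).
    """
    fused = np.zeros_like(label_patches[0], dtype=float)
    total_weight = 0.0
    for patch, center in zip(label_patches, patch_centers):
        # Distance from this patch to the closest user interaction.
        d = min(np.linalg.norm(np.asarray(center, dtype=float) - np.asarray(p, dtype=float))
                for p in interaction_points)
        w = np.exp(-d ** 2 / (2.0 * sigma ** 2))
        fused += w * patch
        total_weight += w
    # Weighted majority vote: a voxel is foreground if the weighted
    # average of the patch labels exceeds 0.5.
    return (fused / total_weight) > 0.5
```

A patch placed at an interaction point receives (near-)maximal weight, while distant patches contribute almost nothing, so the edited region is dominated by label patches consistent with the user's corrections.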
Keywords: Vote Weight · Initial Segmentation · Training Label · Active Shape Model · Patch Model