Introduction

Clinical practice has long used medical images for diagnosis and treatment planning. With recent advances in three-dimensional (3D) digital imaging technology, the need for biomedical engineering (BME) students to learn the basics of extracting specific anatomical features from the images, a process called segmentation, has grown significantly. Acquisition of these skills is important to make BME undergraduate students more marketable for a variety of professional development opportunities, including summer internships, graduate school, and industry jobs, and would also prove useful in their curriculum for tasks such as obtaining 3D anatomy for design projects or engineering analysis. In this study, we evaluate the suitability of several commercial packages for use in a learning module intended to introduce BME students to image segmentation, describe the developed module, and present results from classroom implementation in multiple course settings.

The clinical and industry trend toward 3D viewing of reconstructed images also makes possible the creation of 3D printed models from additive manufacturing (AM). The applications and benefits of these patient-specific, printed models are many. For example, printed models are especially useful in orthopedics for understanding the anatomy of bones and joints, manufacturing customized orthotics and implants, surgical planning, customized jigs, addressing deformities, teaching, and research.5,1,2 Commercial enterprises have emerged to offer customized hip, shoulder, and cranio-maxillofacial replacements (e.g., KLS Martin WORLD, Tuttlingen, Germany; LOGEEKs Medical Systems, Novosibirsk, Russia; Materialise, Leuven, Belgium). The applications of 3D printed models in orthopedics are predicted to grow exponentially in the coming years.5 Patient-specific, 3D printed models are also well suited for diagnosis and treatment planning for congenital heart disease, helping to clarify the complicated anatomy of the heart, great vessels, and coronary arteries before intervention in a wide range of defects.19,20,9,16

A typical process of segmenting a 3D image is illustrated in Fig. 1, beginning with importing a stack of 2D images from a CT or MRI scan and ending with a file suitable for rendered viewing, 3D printing, or finite element meshing and analysis. Segmentation is the task of partitioning the image into non-overlapping, constituent regions, usually within a defined pixel intensity range.7,10,13 Image segmentation is not straightforward, being complicated by factors such as low or overlapping contrast of the object of interest with other areas of the scan, irregular boundaries, partial-volume effects, noise, and motion, to name a few. While a wide range of sophisticated methods is available, most segmentation software uses one or a combination of three basic techniques, threshold-based, edge-based, and region-growing, each of which should be understood when assessing the software. In all methods, tightly enclosing a region of interest (ROI) is the first step (Fig. 1b). Segmentation by thresholding involves setting pixel intensity levels to define bins for assigning constituent objects. Levels can be generated by interactive visual assessment or by automatic methods based on the intensity histogram. One problem arises if pixel intensities of the desired object overlap with the background; thus, finding an optimal threshold that minimizes the overall probability of error in pixel classification is key. Another problem of threshold-based segmentation is insensitivity to the spatial position of a pixel. For example, due to imaging noise, a segmentation may contain very small segments in the middle of larger regions that we know cannot physically exist, or jagged boundaries that are known to be smooth.
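For readers who want to see the idea in code, the following is a minimal sketch of histogram-based thresholding in Python (using NumPy and scikit-image, which are not the packages evaluated in this study); the file name, ROI bounds, and choice of Otsu's method are illustrative assumptions rather than part of the module.

```python
# A minimal sketch of histogram-based thresholding on a single CT slice.
# Assumes a 2D NumPy array of CT intensities exported to disk beforehand;
# the file name and ROI bounds are hypothetical.
import numpy as np
from skimage.filters import threshold_otsu

ct_slice = np.load("ct_slice.npy")          # hypothetical pre-exported slice

# Restrict attention to a rectangular region of interest (ROI) around the object.
roi = ct_slice[100:400, 120:420]

# Otsu's method picks a threshold automatically from the intensity histogram.
t = threshold_otsu(roi)
mask = roi > t                              # pixels above t are assigned to the object

print(f"Otsu threshold: {t:.1f}, object pixels: {mask.sum()}")
```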

Figure 1

Illustration of the general process of image segmentation as applied to extracting part of the skull from a set of CT images. Images (a-e) were generated using the software ITK-SNAP, although the process is similar in other segmentation software. The exported STL file shown in (e) was 3D printed to create the part shown in (f), and converted to a finite element analysis (FEA) mesh in COMSOL to create the image in (g).

Edge-based segmentation involves mathematically finding boundaries in an image from the gradient of the intensity field. The norm of this gradient is the “edgeness.” The edge pixels, or “edgels,” are then linked into chains based on the magnitude and direction of the gradient vector. Robust edge-linking is not trivial, particularly in images with smooth transitions and low contrast, and edge-based segmentation almost always requires follow-up processing to deal with spurious or partial edges.
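As an illustration of the gradient-magnitude ("edgeness") computation described above, the short Python sketch below uses SciPy's Sobel operators; the array source and the percentile cutoff are illustrative assumptions, and the difficult edge-linking step is deliberately omitted.

```python
# A small sketch of the "edgeness" computation: the norm of the intensity
# gradient, followed by a simple cutoff to mark candidate edge pixels ("edgels").
import numpy as np
from scipy import ndimage

ct_slice = np.load("ct_slice.npy").astype(float)   # hypothetical pre-exported slice

# Directional derivatives via Sobel operators, then the gradient magnitude.
gx = ndimage.sobel(ct_slice, axis=1)
gy = ndimage.sobel(ct_slice, axis=0)
edgeness = np.hypot(gx, gy)

# Keep the strongest responses as candidate edgels; linking them into closed
# boundaries is the hard part and is not shown here.
edgels = edgeness > np.percentile(edgeness, 95)
```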

Segmentation by region-growing approaches the problem from the opposite direction of edge-based segmentation, extracting a connected image region based on similarity criteria rather than boundaries. To begin, the operator manually selects one or more seed points within the desired region. The algorithm selects all pixels connected to the initial seed based on predefined criteria, such as a similar range of grayscale intensity or the edgeness of the boundary between pixels. Each newly selected pixel becomes a new seed, and the process continues until a user-specified criterion stops the growth. The neighboring pixels with similar properties are merged to form closed regions, as shown in Fig. 1d.
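The sketch below illustrates the region-growing idea with a simple intensity-tolerance flood fill (scikit-image's flood function); the seed location and tolerance are illustrative assumptions, and dedicated medical segmentation tools combine this with the edge and smoothness criteria discussed above.

```python
# A minimal region-growing sketch: starting from a manually chosen seed,
# connected pixels whose intensity lies within +/- tol of the seed's intensity
# are absorbed into the region. Seed coordinates and tolerance are illustrative.
import numpy as np
from skimage.segmentation import flood

ct_slice = np.load("ct_slice.npy").astype(float)

seed = (250, 260)        # (row, col) chosen by the operator inside the target region
tol = 120.0              # grayscale similarity criterion that stops the growth

region = flood(ct_slice, seed, tolerance=tol)   # boolean mask of the grown region
print(f"Region contains {region.sum()} pixels")
```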

It is clear from these descriptions that typical undergraduate students in a BME curriculum do not have the background to develop specialized segmentation techniques, and it is difficult or impossible to add more topics to an already over-packed BME curriculum. However, a guided experience with dedicated medical segmentation software, followed by independent practice, can help students acquire basic skills. Students wanting more experience can then continue to advanced projects or graduate school. The guided experience should include understanding the basic process and applications, the type and quality of scans needed, how to generate files suitable for 3D printing, the difficulties encountered, and the limitations.

To bring the clinical and industry trends combining image segmentation and CAD into the undergraduate BME curriculum, the overarching goal of this work was to develop a stand-alone course module to achieve the following learning objectives:

After completing this activity, students should:

  1. Be familiar with the storage, visualization, and manipulation of 3D medical images;

  2. Be able to segment an anatomic entity from a set of medical images;

  3. Recognize the challenges of medical segmentation.

As such, this work had two aims. The first aim was to evaluate five commercially available software packages for their suitability for use in an undergraduate teaching module. Once the best package was selected, the second aim was to develop and implement a stand-alone module addressing these learning objectives that could be incorporated into existing BME courses to train students in the use of the selected image segmentation tool.

Evaluation of Software for Undergraduate Instruction in Image Segmentation

As a first step in developing an instructional module to achieve the learning objectives, five commercially available software packages suitable for segmentation of a wide range of scans were evaluated to identify the most student-friendly software package based on perceived learning curve, ease of use, tools for segmentation and rendering, special tools, and cost. More details on the software evaluation methods and results are provided below.

Methods for Software Evaluation

For software assessment, we chose soft tissue images of the heart as these were viewed as being more challenging than bone due to added difficulties in gating, contrast, and dichotomy of blood volume versus heart volume reconstruction. IRB approval for a retrospective study of cardiovascular images was obtained from Geisinger Medical Center (Danville, PA). The specific test case chosen for this study was a scan indicated for an ALCAPA heart abnormality (anomalous origin of the left coronary artery from the pulmonary artery). The total dataset of 224 DICOM (Digital Imaging and Communication in Medicine) images obtained by CT had a slice thickness of 0.625 mm with 512x512 resolution at 0.488 mm per pixel in the transverse directions (xy), with contrast medium centered in the left side of the heart and retrospective gating. Sample images are shown in Fig. 2. The goal was a complete segmentation of the heart and great vessels.
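For readers unfamiliar with DICOM handling, the following sketch shows one common way to assemble such a series into a single 3D volume and confirm its geometry, here using SimpleITK; the directory name is hypothetical, since the clinical dataset used in this study is not distributed, and SimpleITK is not one of the packages evaluated.

```python
# A short sketch of loading a DICOM series into a single 3D volume and checking
# the geometry reported in the text (matrix size, pixel size, slice thickness).
import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
files = reader.GetGDCMSeriesFileNames("alcapa_ct_series/")   # hypothetical path
reader.SetFileNames(files)
volume = reader.Execute()

print("Size (x, y, z):", volume.GetSize())        # e.g., (512, 512, 224)
print("Spacing (mm):  ", volume.GetSpacing())     # e.g., (0.488, 0.488, 0.625)
```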

Figure 2

Two sections from the original DICOM image set used to evaluate the five software packages. Both sections are transversely oriented and show the heart with contrast agent filling the blood cavities, with the right image inferior to the left. The goal of the evaluation was a complete segmentation of the tissue volume of the heart and great vessels, with all blood cavities open.

The segmentation and 3D reconstruction tasks were completed using the five different software packages listed in Table 1. All were downloaded from the developers’ websites, and all except OsiriX Lite were installed on a Windows desktop with 8 GB RAM, an i5 processor, and an NVidia graphics card. OsiriX Lite, which runs only on Mac computers (OS X 10.12 through macOS 12), was installed on a Mac laptop.

TABLE 1 Software packages chosen for evaluation.

Other packages were briefly evaluated, such as the image-processing toolbox of MATLAB (Mathworks, Natick, MA) and Fiji, an open-source image-processing package for ImageJ (https://imagej.net/software/fiji/). While both offer extensive tools for analyzing and manipulating images and have large user bases, their suitability for exporting printable files was viewed as too limited for an introductory learning module relative to the chosen software packages.

The operators, a team of three students with no prior experience in segmentation, reconstruction, or any of the software packages, were asked to evaluate the five different software packages listed in Table 1. Initial familiarization was achieved through exploration of folders, tools, options, and the online help documentation. Completion of brief tutorials and videos provided on the developers’ websites complemented this to ensure competency with the pertinent tools and techniques. Seven metrics were defined to assess the relative strengths of the software packages for undergraduate teaching: (1) learning curve, (2) ease of use, (3) automatic segmentation capabilities, (4) 3D rendering, (5) special tools, (6) help resources, and (7) manual manipulation required. The operators found during the evaluation that all programs provided more-than-adequate support, and all required approximately the same amount of manual definition of the right atrium and ventricle, some interior walls, and, to some extent, removal of the chest wall and spine, so comparisons (6) and (7) were removed. Comments provided in categories (1)-(5) were later used to generate comparative scores for a decision matrix. Relevant tools and techniques for each software package are described below, along with evaluator observations during the segmentation of the test dataset.

ITK-SNAP

Of the five packages, competency was obtained most rapidly with ITK-SNAP, as only the most important tools are presented, with helpful popups. ITK-SNAP allows for fully manual segmentation, but the Snake tool, which performs automatic segmentation, was most time-effective. According to the documentation,8 “A snake is a closed curve or surface representing a segmentation. The snake evolves from a very rough estimate of the anatomical structure of interest to a close approximation of the structure, based on user-specified parameters determining slowing down near boundaries (edge-based segmentation) and attracting to regions of similar intensity (region-based segmentation).” The Region of Interest (ROI) and the Thresholding technique were set in 3D within the Snake tool to separate major tissue groups. Five seed points were added within the myocardial wall, with maximum Driving force and minimum Smoothing force. The Snake tool was allowed to run until the seed points had spread throughout the entire heart (Fig. 3).

Figure 3

ITK-SNAP user interface after completion of the segmentation of the test case. Red color indicates the selected label field. Windows clockwise from upper left show transverse, sagittal, coronal, and 3D views.

Numerous features of ITK-SNAP aided the segmentation. The layer system was simple to use, the four-panel view aided spatial understanding of the anatomy and the making of effective modifications, seed-based propagation allowed targeting of very specific regions, the 3D viewer updated continuously in real time during segmentation, and automatic smoothing operations resulted in a cleaner and more realistic rendered model. A downside was that, although powerful, the snake tool encountered some difficulty in very thin-walled sections and was processor-intensive when too many starting seeds were used.

3D Slicer

3D Slicer took the longest amount of time to reach competency. The Volume Rendering module automatically rendered the full model, but only the blood volume, and could not translate this rendering into a segmentation. After Volume Cropping to limit surrounding structures, the Simple Region-Growing Segmentation module produced a basic segmentation of the heart by placing a fiducial marker on a section of heart wall. Iterations with the wide variety of options available (e.g., Smoothing Iterations, Smoothing Parameters, Segmentation Parameters, Timestep, Number of Iterations, Multiplier, Neighborhood) continued until the heart walls were well marked and distinguished from the surrounding regions. A drawback of this variability is the difficulty of recreating the same segmentation twice; otherwise, Simple Region-Growing is a powerful tool that produces good results.

Extensive customization of 3D Slicer is possible through its open-source architecture. Some downsides include a challenging module and file management system. Understanding the purpose of each of the large number of modules and the location of resources was time-consuming, many modules were too specific for general segmentation, and options or tools within the modules were sometimes ambiguous. The wide variety of tools to create ROIs meant the optimal tool was always available, but it could be difficult to identify within the large selection. In the heart tissue segmentation, 3D Slicer had some difficulty with wall definition near thinner sections and left small holes where the wall did not overlap from slice to slice.

OsiriX

OsiriX was the second fastest in time to competency. ROI tools were used to create a region on the first slice that enclosed the heart and excluded non-heart material; this region was adjusted and propagated every five slices. The Grow Region tool then segmented the selected area after a Threshold (Interval) technique was chosen, a seed point placed, and the Interval Value adjusted until the best wall definition was achieved. Before accepting the segmentation, the 3D Segmentation Growth option was selected to cascade the current segmentation settings throughout the entire image series.

The ROI tools in OsiriX allow customization not available in other programs, which can drastically reduce the unwanted regions in a segmentation. The Simple Region-Grower auto-segmentation tool has a variety of options to generate a segmentation and tended to produce a more fully defined segmentation than the auto-segmentation tools of the other programs. Unlimited potential for expansion exists through a plug-in utility. OsiriX has a user-friendly organization and functional labeling of tools, although navigation could be troublesome due to the expansive folder system and somewhat confusing file management system. Switching between modules was required to access vital tools. The software had difficulty when auto-segmenting thin-walled sections and sometimes left small gaps in the walls due to slices not overlapping properly. The automatic 3D rendering tools could generate a 3D image of the blood volume, but could not translate this into a segmentation.

Mimics

Mimics was the third fastest in time to competency. The Thresholding tool with the Soft Tissue (CT) preset was first used to generate a segmentation of the entire image series. Thresholding ranges were modified slightly based on visual feedback. The Crop tool then limited the segmentation to only the heart and immediately surrounding regions. The Multi Slice Edit tool with Interpolation option removed the remainder of the unwanted section outside of the heart and also defined the right atrium and ventricle and patched holes within the heart walls. The mask was then rendered in 3D using the Calculate 3D tool. The Wrap and Smooth Object tools, found in the 3D Tools section, smoothed the surfaces of the rendered 3D model and minimized any remaining artifacts (Fig. 4).

Figure 4

Mimics user interface after completion of the segmentation of the test case. Windows clockwise from upper left show coronal, transverse, 3D, and sagittal views.

The extensive tool library and segmentation techniques in Mimics required time for familiarization, but all tools were cleanly organized and readily accessible, and allowed various paths for every task. The Region-Growing tool can create a new segmentation of only the regions that are physically connected within a segmentation, allowing different regions of interest to be separated without individual manual manipulation. The mask system was powerful but could be confusing, although this was alleviated with consistent mask management. Mimics had some difficulty with wall definition but excellent small-vessel definition. The interpolation tool could be inaccurate if boundaries changed nonlinearly between slices. The Wrap and Smooth Object tools could introduce error if used over numerous iterations. An automated tool renders the blood volume of the heart and can additionally separate each region of the heart into a separate segmentation for viewing.

Amira

Amira was the fourth fastest in time to competency, slightly behind Mimics. A Multi-Thresholding module was added after the data stacks were loaded. After a user-specified initial estimate of threshold limits for the heart-wall pixel intensities, the segmentation was visualized with an Ortho Slice module and the limits adjusted until the best wall-to-blood-volume definition was achieved. The Remove Islands and Smooth Label tools were applied to the entire stack to remove isolated pixels and smooth boundaries. Clarity in some regions had to be sacrificed to obtain the best interior wall definition.

Amira’s module scheme was overwhelming at first but quickly became user-friendly and allowed large projects to be broken down into small parts. Links between modules and datasets could become jumbled, although this could be avoided with careful screen organization. Help resources and visual presentation were very useful. One drawback was the lack of a universal undo button. The segmentation tools in general produced excellent reconstruction of the blood volume. The automatic volume rendering tool could render the heart wall as well as the blood volume of the heart at high resolution. Settings in the segmentation tab make it possible to view the 3D image, along with the other standard 2D views, as the segmentation is being done. Amira’s major benefit is the wide range of options available in each module. Many of the modules are relatively simple and straightforward, yet become unique and versatile when their options are adjusted.

Summary of Software Evaluation

All five software packages ably accomplished the segmentation task. For non-teaching use, therefore, prior familiarity with a particular package may reasonably dictate the package of choice. Although licensing is costly, Mimics did provide value commensurate with its price. With its wide variety of tools, Mimics can easily accomplish nearly any task, from general to advanced, including those beyond the scope of this study such as computer-aided design (CAD) and finite element analysis. ITK-SNAP was a close second to Mimics and is well suited for general reconstruction. Its ease of use and its snake tool allow even a novice user to accomplish a wide variety of tasks. Its main drawback, for those who want additional capabilities, is its focus solely on segmentation.

Amira follows closely behind ITK-SNAP, with segmentation techniques comparable to Mimics but less editing functionality, and can also handle both general and more advanced uses. Initial mastery is slightly more challenging than for the other programs, and the major drawback is reliance on pixel-intensity segmentation methods. It also lacks the power of ITK-SNAP’s snake tool. 3D Slicer has less capability for segmentation and thus was not as well suited for the purpose of this research; however, its open-ended customizability makes it the most attractive option for those interested in adding their own modules that use existing functionalities. In addition, over 100 modules exist with applications such as diffusion tensor imaging, neurosurgical planning, quantifying small volumetric changes in slow-growing tumors, calculating uptake values from PET data, and planning radiation therapy. OsiriX is best suited for image viewing purposes, with more limited segmentation. The different rendering options allow for fast and easy viewing in 3D, but the segmentation tools are not as easily applied. Image series management within OsiriX is exceptional and supports its primary viewer role. Its availability only for Mac computers is another limiting factor.

A decision matrix (Table 2) summarizes the comparative evaluation of the five software packages based on the chosen metrics, scored on a 1-5 scale with 1 the best performing and 5 the worst performing. Mimics scored the best across these categories (total score of 11 out of 25), ITK-SNAP scored second (13 out of 25), and 3D Slicer scored the lowest (25 out of 25). Considering that ITK-SNAP is free while Mimics carries a high licensing fee, and that ITK-SNAP was the easiest to learn and use, ITK-SNAP was identified as the best choice to adopt for an educational package based on the assessed metrics. Its free availability, powerful snake tool, and ease of use provided the best value for an undergraduate introduction to image segmentation. The primary drawback of this software, its focus solely on segmentation, was not a concern given the goals of our learning module.

TABLE 2 Decision matrix summarizing the comparative evaluation of the five software packages, with each category scored on a 1-5 scale, where 1 is the best performing and 5 is the worst performing.

It is important to note that these assessments focused on creating an undergraduate biomedical tutorial in image segmentation; software ranked as less useful for student learning purposes may be best for advanced users and other applications, or for those with previous experience with one of the applications. Also, the accuracy of the segmentations, with “accuracy” defined as “resemblance to the ‘true’ value,” was not assessed in this study. Specialized software, such as the freely available package STAPLE,21,22 exists for this purpose. It creates a probabilistic estimate of the true segmentation based on a range of presented segmentations, which may come from human raters or automated segmentation algorithms, and a prior model accounting for the spatial distribution of structures and homogeneity constraints.

Learning Module Development, Implementation and Assessment

After selecting ITK-SNAP, a stand-alone course module was developed to achieve the learning objectives:

After completing this activity, students should:

  1. Be familiar with the storage, visualization, and manipulation of 3D medical images;

  2. Be able to segment an anatomic entity from a set of medical images;

  3. Recognize the challenges of medical segmentation.

This module has been implemented by five different faculty members in three different engineering courses to students ranging from sophomores to seniors. A detailed description of the module, how it has been implemented in different courses, and preliminary assessment of student attainment of learning goals are provided below.

Development of the Learning Module

The module consists of a classroom introduction, a tutorial that guides students through a step-by-step process to extract a skull from a provided set of CT data (as shown in Fig. 1), and a culminating assignment where students use the same software program to extract a different body part from clinical imaging data.

The introductory lecture and introductory parts of the tutorial together are designed to help students achieve learning goal #1, “Students will be familiar with the storage, visualization and manipulation of 3D medical images.” The instructor provides background on how CT scans work and an explanation of the clinical relevance of image segmentation, often using videos.4 Instructors have found that showing videos of specific patients who were treated using these techniques can be effective in motivating students by appealing to empathy and their desire to positively impact human health and well-being. The initial parts of the tutorial achieve the remainder of the learning goal. Students first examine the file structure of the provided dataset, a DICOM image stack from an angio CT of the head and neck.15 The dataset contains 460 slices of 512x512 pixels each, with resolution of 0.488 x 0.488 x 0.700 mm. Students are led through loading the data and examining parameters of the scan such as number of pixels in each direction, size of pixels (i.e., resolution), registration of sections, orientation of sections (i.e., transverse, sagittal, coronal), and method of sectioning for visualization.
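The header inspection students perform in ITK-SNAP can also be illustrated in a few lines of Python with pydicom; this is only a sketch of the same information, and the file path shown is hypothetical.

```python
# A brief sketch of the kind of header inspection students do in the first part
# of the tutorial, scripted with pydicom for illustration (the tutorial itself
# uses the ITK-SNAP interface).
import pydicom

ds = pydicom.dcmread("head_neck_angio_ct/slice_0001.dcm")    # hypothetical file

print("Matrix size:        ", ds.Rows, "x", ds.Columns)       # e.g., 512 x 512
print("Pixel spacing (mm): ", ds.PixelSpacing)                 # e.g., [0.488, 0.488]
print("Slice thickness (mm):", ds.SliceThickness)              # e.g., 0.700
print("Orientation cosines: ", ds.ImageOrientationPatient)     # section orientation
```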

To help students achieve learning goal #2, “Students will be able to segment an anatomic entity from a set of medical images,” the tutorial guides students step-by-step through the process of extracting a region of the skull with commands and settings in ITK-SNAP, similar to the process shown schematically in Fig. 1. The step-by-step instructions include screen captures along with written instructions on each step. Of the three different methods of segmentation introduced in the tutorial background section (threshold-based, edge-based, and region-growing), two of the three are implemented in different steps of the tutorial (thresholding and region-growing).

The primary steps explained in the tutorial (Fig. 1) are as follows (a scripted analogue of the full pipeline is sketched after Fig. 5):

  1. Load the image series (Fig. 1a).

  2. Adjust the contrast.

  3. Use the brush or polygon tool to manually segment a feature on a slice-by-slice basis. (This step is optional, and was first implemented in 2021 to provide students with a basis for appreciating the benefits of 3D auto-segmentation and to introduce them to a tool that can be used to touch up their final segmentations.)

  4. Perform auto-segmentation with the Snake Tool (Figs. 1b-1d):

     a. Select a region of interest (Fig. 1b)

     b. Apply custom pre-segmentation thresholding (Fig. 1c)

     c. Explore different combinations of seed point quantity and size for region-growing

     d. Propagate seeds for 3D rendering (region-growing) (Fig. 5)

  5. Export files suitable for 3D printing or import into a CAD or modeling program (Figs. 1e–1g)

Figure 5

Snapshots from the seed placement and region propagation part of the student tutorial involving segmentation of a region of the skull in ITK-SNAP.
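For context, the following Python sketch strings the same pipeline together in script form: loading the series, segmenting a bone-intensity region connected to a seed, and exporting an STL surface. It substitutes a simple threshold plus connected-component step for ITK-SNAP's interactive snake tool, and the libraries (SimpleITK, SciPy, scikit-image, numpy-stl), file names, seed location, and intensity cutoff are illustrative assumptions rather than part of the tutorial.

```python
# A rough scripted analogue of the tutorial pipeline (load, segment, export for
# printing), shown only to make the data flow concrete.
import numpy as np
import SimpleITK as sitk
from scipy import ndimage
from skimage import measure
from stl import mesh                      # numpy-stl, assumed installed

# 1. Load the image series into a 3D volume.
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("head_neck_angio_ct/"))  # hypothetical
img = reader.Execute()
vol = sitk.GetArrayFromImage(img)         # (z, y, x) array of intensities
sx, sy, sz = img.GetSpacing()             # mm per voxel

# 2-4. Threshold bone-range intensities, then keep only the connected component
#      containing a seed voxel placed inside the skull (a stand-in for the seeds
#      propagated by the snake tool).
bone = vol > 300                                          # illustrative intensity cutoff
labels, _ = ndimage.label(bone)
seed = (230, 256, 256)                                    # (z, y, x), illustrative
skull = labels == labels[seed]

# 5. Extract a surface and export an STL suitable for 3D printing or CAD import.
verts, faces, _, _ = measure.marching_cubes(skull.astype(np.uint8), level=0.5,
                                            spacing=(sz, sy, sx))
surf = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
surf.vectors[:] = verts[faces]
surf.save("skull_region.stl")
```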

The culminating assignment varies by instructor and course, but in all classes it has included tasking the students with extracting a different body part from clinical imaging data using ITK-SNAP. Students can choose a different body part contained in the provided image stack, such as the brain, eye, ear, nose, or upper vertebrae, or they can choose to use a different set of imaging data from alternatives provided by the instructor or publicly available datasets found on their own. Independently extracting a body part, which often requires some troubleshooting and optimization of parameters to segment the target tissue, helps students address learning goal #3, “Students will recognize the challenges of image segmentation,” while also demonstrating achievement of learning goal #2, “Students will be able to segment an anatomic entity from a set of medical images.”

In addition to this base task, some instructors have included additional components in the culminating assignment, such as:

  • 3D print the extracted body part and evaluate the outcome

  • Place images of the extracted body part into a portfolio to share with potential employers

  • Reflect on the challenges of segmenting and/or 3D printing different anatomical structures

This module could be expanded in many other ways. If time permits, students could export their segmented anatomical structure into a CAD program and design or size a medical device to integrate with a patient-specific body part. For example, the instructor could task students with designing glasses to perfectly fit the subject, or custom hearing aids, or custom dental implants.

Implementation

This module was first implemented in upper-division elective classes in 2016 (Biomechanics, Medical Imaging), and then later in a sophomore-level core BME class (Fabrication and Experimental Design, which already included training in CAD and 3D printing). Since then, the module has been implemented in approximately two courses per year, impacting more than 150 students. Class sizes were 8-18 in the elective classes and 18-25 in the core class. Students completed the assignment individually. The introduction and tutorial were in most cases completed in a two-hour laboratory period, with the culminating assignment completed outside of class. A graded worksheet with questions about the process accompanied the tutorial to ensure completion and understanding. After the assignment, students were required to complete a reflection assignment with the following questions: (1) Would you have been interested in more introductory material covering imaging in general; (2) What were the easiest steps to learn; (3) What were the most challenging steps to learn; (4) What value do you see in this exercise with respect to clinical/industry relevance, helping people, future employment prospects, determining anatomy for design or analysis, or other; (5) Do you think you will use this technique in the future; and (6) What else would you like to see as part of this exercise.

Instructor preparation is important for successful execution of the module. In addition to familiarity with the steps in the student tutorial, our instructors have worked through material available on the ITK-SNAP website prior to presentation of the module and familiarized themselves adequately with imaging techniques to provide the introduction.

The module is typically done in a university computer lab on Windows desktop machines, with characteristic specifications of an i7-7700 processor, 3.6 GHz clock speed, 8 GB RAM, an AMD FirePro W4100 graphics adapter with 2 GB video memory, and 1280x720 monitor graphics. In general, RAM is the most critical requirement for all segmentation programs. For the tutorial example, the 512x512x460 image stack contains about 1.2 × 10^8 voxels and might require 16 bytes per voxel for a segmentation in ITK-SNAP whose ROI encompasses the entire region and which keeps several image layers in memory during the snake operation, thus using almost 2 GB above operating system and other process memory needs. Computer memory of 4 GB is thus a minimum, and 8 GB or more is recommended. Processor speed is less important, as slower processors simply lengthen the processing time. A good stand-alone graphics card also improves performance.
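The memory estimate above can be reproduced with a few lines of arithmetic; the 16 bytes per voxel figure is the working assumption from the text, not a measured value.

```python
# Back-of-the-envelope memory estimate for the tutorial dataset.
nx, ny, nz = 512, 512, 460
voxels = nx * ny * nz                 # ~1.2e8 voxels
bytes_per_voxel = 16                  # assumed: several image/label layers held at once
print(f"{voxels:,} voxels -> {voxels * bytes_per_voxel / 2**30:.1f} GiB")
# 120,586,240 voxels -> 1.8 GiB, hence the 8 GB RAM recommendation
```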

The software installer is made available by the instructor, who has downloaded it from the ITK-SNAP website. Students can copy this and install the software on personal laptops for completion of the culminating assignment and any other follow-up assignments. However, we have observed that the variable quality of student laptops can lead to problems with completing this assignment, in which students segment an anatomical entity of their choice as homework; in those cases we have recommended that students work on university computer lab machines. This problem can be due to RAM limitations or to disk storage, as both the original imaging datasets and the STL files produced by the software can be very large.

Another challenge in implementing this module relates to dataset availability. Although the students are allowed to use any publicly available dataset to complete their culminating assignments, students often struggle to find appropriate human imaging data. To overcome this challenge, the instructors have compiled a variety of datasets that they share with the students as options for completing the culminating assignment.

This module was found to be very robust and easily transferable between courses. Only minor changes were made to the tutorial by different instructors over the years, either to tailor the introduction to a particular course or to clarify a particular instruction that was not clear to students. The only major change, implemented in 2021, was to add step 3 to the tutorial. In step 3, students perform manual segmentation on a few of the images so that they can better appreciate the advantages of automated 3D segmentation when they complete step 4. The tools used in step 3, the brush tool or polygon tool, are also valuable if the students wish to clean up their final segmentation before 3D printing or creating STL files.

Assessment of Learning Module

Learning goal #1, “Students will be familiar with the storage, visualization and manipulation of 3D medical images,” was assessed from the submitted worksheet that questioned students on processes as they executed the tutorial. Outside of small errors that could be attributed to carelessness, 100% of students achieved this objective. Successful achievement of this goal is also embedded in learning goal #2, as it is impossible to successfully complete #2 without familiarity with the storage, visualization, and manipulation of images.

Learning objective #2, “Students will be able to segment an anatomic entity from a set of medical images,” was assessed directly from student completion of the culminating assignment in the module, since all faculty have required students to independently extract a new anatomical feature from clinical imaging data. 100% of students in all classes have successfully achieved this learning objective. As shown in Figs. 6 and 7, students were able to segment a wide variety of mineralized (Fig. 6) and soft tissues (Fig. 7) from the provided or publicly available datasets. The ability of students to segment soft tissues on their own after being trained only to segment a single mineralized body part illustrates that students were able to sufficiently master the image segmentation technique in ITK-SNAP using the provided tutorial. Senior-level students were in general faster in completing the in-class part of the assignment, likely because they had developed better computer and 3D visualization skills over their college career, or had seen the technique introduced in a previous course, but sophomores were more than adequate to the task. A benefit of introducing this material at the sophomore level is the ability to use the material in subsequent courses.

Figure 6

Students were required to complete a segmentation of an anatomical entity of their choice as a culminating assignment, after completing the tutorial involving segmentation of a region of the skull. Shown are examples of student submissions of segmented mineralized body parts from the culminating assignment. Some students chose to focus on one specific bone, while others focused on a region of the body.

Figure 7

Examples of student submissions of segmented soft tissue body parts from the culminating assignment. In general, segmentation of internal organs is more challenging than external body parts, such as feet. The heart segmentation was a challenging task but the resulting quality is quite good. The nose model allowed visualization of internal sinuses.

Student attainment of learning goal #3 is most apparent from comments in student reflections, and these comments also provide further support for the achievement of goals #1 and #2. A sampling of four student responses to each reflection prompt is shown in Tables 3 and 4, illustrating the wide range of comments received. However, in reviewing the student reflections, a few themes do emerge. All students found the software to be easy to learn and user-friendly, even when independently applying the software to segment soft tissue. Students also found it easy to 3D print a model using the exported segmentation file, although some students noted that “small details do not print accurately.” All students also reported great value in the exercise in answering question (4) of the reflection. 75% of students in non-imaging courses mentioned being interested in more background on imaging itself, especially coverage of more imaging modalities (e.g., MRI, US, PET), which instructors may choose to include in the future if time allows.

TABLE 3 Sample responses to questions 1-3 in the student reflection exercise assigned after the image segmentation module.
TABLE 4 Sample responses to questions 4-6 in the student reflection exercise assigned after the image segmentation module.

Most students rated the pre-segmentation thresholding as the most challenging step, as slightly different pre-segmentation threshold settings can lead to vastly different final renderings. Another challenge identified by students was understanding the 3D relationship among the three perpendicular planar views. This might suggest a parallel exercise to help with training for 3D visualization. Students also observed that in seed propagation, if the initial seeds are too close together, too large, or too plentiful, the propagation can be very slow, fail to propagate, or completely fail. Hence, if time is available in class, students could be encouraged to spend more time experimenting with seed placement. Students also came to appreciate the computing demands of storing and manipulating large graphics files.

Discussion of Learning Module

Successful completion of the tutorial and culminating assignment showed that it is possible for students to learn basic image segmentation techniques in ITK-SNAP in a short period of time, although more difficult and higher-risk segmentations (e.g., for patient implants or surgical planning) would present more challenges. Students had very few difficulties executing the step-by-step tutorial to extract a region of the skull. Students were also able to move into the remainder of the assignment, independent segmentation, without further support, and could segment soft tissue structures even though the tutorial dealt with mineralized tissue. All reported great value in the exercise, were interested in segmenting their own choice of images/body parts, and were particularly excited about 3D printing to obtain a physical object of their own creation. In addition, instructors have observed that students often use these skills in later courses, such as senior design and independent research projects. Several of our BME students have also done internships or taken jobs at companies that create patient-specific implants using a combination of image segmentation and CAD tools.

As stated earlier, typical undergraduate students in a BME curriculum do not have the background to write their own image-processing software. At our institution, only a single upper-level computer science course titled “Image Processing & Analysis” is related to this topic, and because that course covers a wide range of topics dealing with the acquisition, processing, and analysis of digital images, it covers only what segmentation is, how one might instruct a machine to do it, and the various methods, but does not include hands-on practice segmenting medical images.

Five different faculty members have utilized this learning module in their classes. Instructors previously unfamiliar with the technique were able to learn the package well enough to demonstrate it to students by completing the tutorial. We recommend that instructors set aside a few hours to practice segmenting different tissues and varying the segmentation parameters so that they are better able to help students troubleshoot problems. ITK-SNAP provides tutorials and set-up chapters that can be used to gain more advanced knowledge, or to share with students interested in further developing their skills.

The monetary investment is insignificant as the software is free, and the module introduction and tutorial require less than 2 hours of classroom time to complete. Hence, this module would be easy to incorporate into a variety of lecture and lab classes. So far, the stand-alone module has been successfully implemented in a biomechanics elective, a medical imaging elective, and a biomedical fabrication core course. It has been taught to sophomores, juniors, and seniors, and would also likely be successful in a first-year course as it requires no pre-requisite material (although an understanding of 3D visualization is advantageous).

All students were able to successfully complete the culminating assignment of independently extracting a body part based only upon the training provided in the skull extraction tutorial. A future study could explore adaptation of the learning module to other software platforms, the application of skills learned in this tutorial to other contexts (e.g., senior capstone design or research projects), and the perceived impact of this module on students’ ability to market themselves for internships, jobs, and other post-graduation opportunities.

Conclusions

In conclusion, this stand-alone module provides a low-cost, flexible way to bring the clinical and industry trends combining medical image segmentation, CAD, and 3D printing into the undergraduate BME curriculum. Because image segmentation is not straightforward, being complicated by factors such as low or overlapping contrast of the object of interest with other areas of the scan, irregular boundaries, noise, and motion, the goal of this work was to create a stand-alone module introducing the application of image segmentation using available software. ITK-SNAP was selected to implement this module following a rigorous evaluation of five commercially available software packages because it is free, was the easiest to learn, and includes a powerful, semi-automated segmentation tool.

The developed module successfully introduces engineering students to the application of the image segmentation process using ITK-SNAP. After a single two-hour class session in which the module is introduced and the tutorial is completed, students attain sufficient mastery of the image segmentation process to independently apply the technique to extract a new body part from clinical imaging data. The module demonstrates to students how to obtain the dimensions and shapes necessary for further engineering analysis, such as finite element analysis, for design work, or for clinical interpretation and communication of imaging data. Instructors have observed that students engage well with this assignment and appreciate its clinical relevance. This exercise also lends itself well to inclusion in student portfolios, and provides them with a concrete skill to market when applying for summer internships, jobs, or graduate school.