A Robust Algorithm for Voxel-to-Polygon Mesh Phantom Conversion

Over the past 20 years, the coupling of computational human phantoms with existing Monte Carlo radiation transport codes has required phantoms to be in a voxelized format. Very recently, however, several popular radiation transport codes, such as MCNP6, GEANT4, and PHITS, have begun to facilitate direct radiation transport in phantoms represented by polygon or tetrahedral mesh structures – surfaces and volumes, respectively, that define both the phantom's body contour and its array of internal organs. While both voxel-based and mesh-based phantoms provide a high degree of anatomic realism compared to first-generation stylized (or mathematical) phantoms, mesh-type phantoms are now considered the state of the art, as they permit re-sculpting of individual organs, body regions, and overall body size and shape. They also allow for extremity articulation and the inclusion of thin tissue layers of radiobiological importance. These are all features that are either not possible or not readily achievable in voxel-based phantoms. Nevertheless, since the late 1980s, a tremendous number of voxel-based computational human phantoms have been developed from image segmentation of patient CT or MR data, and there is consequently a need to convert these existing voxel phantoms to mesh-type formats. The present work describes an efficient and accurate algorithm to convert voxel-based phantoms to mesh-based formats, thus permitting the user to take full advantage of these additional modeling features. For this conversion, a boundary detection algorithm was implemented in conjunction with polygon detection to form high-quality mesh data suitable for radiation transport simulation or finite element analysis. This conversion can result in a significant reduction of required simulation time and allows current voxel data to be used in modern CAD software.


Introduction
Since their early development in the late 1950s, general-purpose Monte Carlo (MC) radiation transport codes have utilized primitive geometric structures to define material interfaces in their transport geometry, e.g., planes, spheres, ellipsoids, and truncated cones. These structures were used from the 1960s to mid-1980s to geometrically represent the human body in both its outer body contour and internal organ structure. Their geometric simplicity was ideal for the limited computer technology at the time, and addressed the need for computational efficiency in particle tracking. These "stylized" phantoms, while at the time fit for purpose, did not provide an anatomically realistic representation of the human body, particularly in regard to organ shape and inter-organ tissue separation.
Beginning in the late 1980s, the need for improved anatomical accuracy, along with concurrent advances in computational memory and processor speed, led to the development and use of voxel-based human computational phantoms. These phantoms are defined by a collection of rectangular parallelepipeds (voxels) of equal or non-equal size defining each tissue material. Voxel phantoms originate from the segmentation of CT or MR image data sets; consequently, all tissue elements within a voxel phantom are generally of uniform size and shape (x, y, z dimensions). The transition from stylized to voxel phantoms necessitated an increase in computational steps during radiation transport, as boundary crossing checks shifted from those associated with entering or leaving an organ or body region to those associated with entering or leaving each voxel defining that organ or body region.
Beginning in the early 2000s, a third generation of human computational phantoms – mesh phantoms – was advanced, in which body regions and internal organ structures were once again represented, not by a collection of voxels, but by surfaces defined by 3D control points (non-uniform rational B-splines or NURBS) or arrays of polygons. These mesh-type phantoms allowed for the scalability and deformability provided by stylized phantoms, yet they retained the anatomical realism of voxel phantoms. In the coupling of mesh phantoms to radiation transport codes, however, a final step of voxelization had to be performed as the particle tracking algorithms employed at that time did not recognize NURBS or polygon mesh surfaces. Mesh phantom voxelization thus entailed filling these surfaces with an array of voxels of user-defined dimensions. A second advantage of mesh phantom voxelization was a resolution of potential surface overlaps and intersections introduced during phantom construction, rescaling, and/or deformation. The voxelization processes, by definition, eliminated these tissue incongruencies. Within the past few years, however, significant advances have been made in particle tracking algorithms so as to now enable the direct use of meshed geometries during MC radiation transport simulation. These developments were initially introduced into the MCNP code in 2009 [1], into the GEANT4 code in 2013 [2], and into the PHITS code in 2015 [3,4]. Thus, there are a tremendous number of existing voxel-based computational phantoms that would now benefit from a conversion to mesh-type format.
This chapter reviews a computational algorithm developed to convert voxel phantoms to polygon mesh phantoms suitable for MC transport and importable into modern CAD software. The method eliminates geometric redundancies, allowing for a minimal and optimized geometric representation of the meshed structures. This feature is beneficial for computational human phantoms as voxel size is typically governed by the smallest anatomical structure to be represented, while a mesh phantom is not limited in this respect. The resulting algorithm allows users to continue to use the significant number of existing voxel phantoms that have been developed over the past 20 years without the need for labor-intensive manual modification. Additionally, the algorithm can be used with segmented image data to form mesh geometries free of intersections and incongruences so as to be used in simulation or CAD software.

Voxel to Mesh Conversion Procedure
The voxel-to-mesh conversion procedure is divided into six main steps: (1) data preparation, (2) gridded surface generation, (3) surface simplification, (4) line simplification, (5) polygon detection, and (6) polygon correction. The details of each step are briefly described in this section, which also includes a discussion of the benchmarking procedures used to evaluate the conversion process.

Data Preparation
First, the phantom voxels are read into a three-dimensional array of specified size ⟨n_x, n_y, n_z⟩. Next, two additional four-dimensional arrays are created of dimensions ⟨n_x, n_y, 3, 12⟩ and ⟨n_x, n_y, 3, 8⟩, which represent sliding windows of temporary data used to ensure the uniqueness of every facet, vertex, and line generated in the newly created mesh phantom. The guarantee of element uniqueness is important for minimizing subsequent memory requirements during the handling of arbitrarily large arrays. It is important to note that the sliding windows advance along the z-axis, as this is typically the phantom's cranial-caudal (and longest) direction; this choice minimizes memory requirements. The windows are 3 units wide in z because only adjacent z-slices of voxels can possibly contain information relevant to the current voxel. Several other arrays are also generated:
• The vertex array – an array of 3D points
• The line array – an array containing two integers representing two connected vertices within the vertex array
• The facet array – an array containing arbitrary numbers of integers representing connected lines within the line array
• The facet tag array – an array containing the ordered materials which separate the facets
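As a minimal sketch of these data structures (not the authors' code; the dimensions and plain-list layout are assumptions for illustration), the arrays above might be initialized as follows:

```python
# Hypothetical phantom dimensions, assumed for this sketch.
nx, ny, nz = 8, 8, 16

# 3D array of voxel material IDs, indexed as phantom[x][y][z].
phantom = [[[0] * nz for _ in range(ny)] for _ in range(nx)]

# Sliding windows spanning only three adjacent z-slices. For each in-window
# voxel they hold the indices of up to 12 lines and 8 vertices already
# generated, so shared elements are reused rather than duplicated (-1 = empty).
line_window = [[[[-1] * 12 for _ in range(3)] for _ in range(ny)] for _ in range(nx)]
vertex_window = [[[[-1] * 8 for _ in range(3)] for _ in range(ny)] for _ in range(nx)]

# Growing output arrays of the mesh itself.
vertices = []    # 3D points (x, y, z)
lines = []       # pairs (v0, v1) of indices into `vertices`
facets = []      # lists of indices into `lines`
facet_tags = []  # ordered material pair separated by each facet
```

The key design point is that only the two windows are bounded by the phantom's cross-section, while the vertex, line, and facet arrays grow with the mesh itself.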

Gridded Surface Generation
After the data structures are initialized, the voxel data are parsed. Each voxel is checked to determine whether any neighboring voxel is of a different material from the current voxel. If the neighboring voxels are of the same material, nothing is generated. If a neighboring voxel is found to be of a different material, the next step is to determine the facets to be produced; at this stage, a facet is simply a rectangle between two voxels of different materials. As shown in Fig. 17.1, up to 6 facets, 8 vertices, and 12 lines can be produced for each voxel, depending on which adjacent voxels are of different materials. Given which neighbors are of different materials, the required lines and vertices are determined, and the sliding window of vertices and lines is then checked to see whether this information already exists. If the data have already been generated, the existing entries are reused at the current voxel position within the sliding window; if not, the data are generated and stored appropriately. The positions of the vertices in 3D space are given by the required facets. At this point, a facet is composed of only four lines forming a rectangle. This process is repeated throughout the phantom array as the window is shifted along the longest axis. At this step, a surface mesh phantom has been generated whose facets lie only at boundaries between different materials (e.g., organ and tissue materials of a given elemental composition and mass density). These boundaries are represented by a gridded surface, which is further simplified and optimized as shown in Fig. 17.2.
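The neighbor test at the heart of this step can be sketched as follows. This is a simplified illustration, not the authors' implementation: it only counts the rectangular interfaces rather than recording their lines and vertices, and the convention of counting each interface once from the side with the larger material ID is an assumption of the sketch.

```python
def boundary_facet_count(phantom):
    """Count unit-square interfaces between voxels of different materials.

    `phantom` is a nested list phantom[x][y][z] of integer material IDs.
    Out-of-bounds neighbors are treated as background (material 0), and
    each interface is counted once, from the side with the larger ID.
    """
    nx, ny, nz = len(phantom), len(phantom[0]), len(phantom[0][0])
    # The six face-adjacent neighbor offsets of a voxel.
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    count = 0
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                m = phantom[x][y][z]
                for dx, dy, dz in offsets:
                    i, j, k = x + dx, y + dy, z + dz
                    inside = 0 <= i < nx and 0 <= j < ny and 0 <= k < nz
                    n = phantom[i][j][k] if inside else 0
                    if m > n:  # materials differ; count from the larger side
                        count += 1
    return count
```

For example, a single voxel of one material surrounded by background yields 6 facets, and two face-adjacent voxels of the same material yield 10, since their shared face generates nothing.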

Surface Simplification
The surface simplification process can begin once all necessary facets, vertices, and lines have been generated. Surfaces are first grouped by three values: (1) separated material, (2) whether or not x, y, or z is constant, and (3) the value of this constant. This grouping results in sets of surfaces which all separate the same material and are co-planar to one another (see Fig. 17.3). The purpose of this grouping is twofold. First, the grouping reduces the required computation time for the surface simplification step as comparisons only need to be made between grouped facets rather than across the entire list. Second, this grouping allows the surface simplification step to be performed in parallel.
The facets are then merged after grouping. A Boolean union operation is attempted for every pair of facets within each group: to determine whether two coplanar facets can be merged, they are checked to see if they share a common line. This process is repeated until no more facets can be incorporated within each facet group. The facets marked for removal are then removed from the array. After this process is completed, unused lines and vertices are removed, and facets and lines are updated to reflect the new vertex and line positions within their respective arrays. At this point in the conversion process, the phantom surface mesh has been reduced to a minimal number of polygons, as shown in Fig. 17.4.
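The Boolean union of two coplanar facets that share a common line can be sketched compactly if each facet is represented by the set of undirected lines on its boundary: the shared lines cancel, and what remains is the boundary of the merged facet. This edge-set representation is an assumption of the sketch, not necessarily the authors' internal format.

```python
def merge_facets(f1, f2):
    """Boolean union of two coplanar facets.

    Each facet is a set of undirected lines, written as sorted (v0, v1)
    vertex-index pairs. Returns the merged boundary, or None if the facets
    share no line and therefore cannot be merged.
    """
    shared = f1 & f2
    if not shared:
        return None
    # Symmetric difference: interior (shared) lines cancel, leaving only
    # the outer boundary of the combined facet.
    return (f1 | f2) - shared
```

For two unit squares side by side, the single shared line disappears and the six remaining lines bound the resulting 2 × 1 rectangle.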

Line Simplification
After a minimal polygon surface representation has been generated, these polygons still contain more lines than are necessary to enclose the required volume (e.g., organ or body region). Prior to simplifying the lines, they are grouped in a similar manner to the facets. First, the lines are scanned iteratively to determine, for every vertex, how many lines use that vertex. Next, the lines are subdivided into collinear groups. This subdivision allows the line simplification process to be performed in parallel and minimizes processing time, since fewer comparisons need to be made. To simplify the lines, each group of collinear lines is scanned iteratively, and line pairs are flagged if both lines share a common vertex. If the lines share a vertex, a Boolean union operation can be performed provided the shared vertex is used by only two lines globally within the phantom. In this case, the two lines and the vertex are not necessary to properly represent the surface, and they can be removed without inducing a mesh overlap. The Boolean union of the two lines is performed in a manner similar to that used for the facets.
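A sketch of this merging rule, under the assumption that collinearity has already been established by the grouping step (so only the vertex-usage condition remains to be checked):

```python
from collections import Counter

def simplify_collinear(lines):
    """Merge collinear line pairs meeting at a vertex used by exactly two lines.

    `lines` is a list of (v0, v1) vertex-index pairs, all assumed collinear.
    A shared vertex used by only two lines globally is redundant, so the two
    lines are replaced by one spanning line and the vertex is dropped.
    """
    use = Counter(v for ln in lines for v in ln)  # global vertex usage
    lines = [tuple(ln) for ln in lines]
    changed = True
    while changed:
        changed = False
        for i in range(len(lines)):
            for j in range(i + 1, len(lines)):
                shared = set(lines[i]) & set(lines[j])
                if len(shared) == 1 and use[next(iter(shared))] == 2:
                    v = shared.pop()
                    a = lines[i][0] if lines[i][1] == v else lines[i][1]
                    b = lines[j][0] if lines[j][1] == v else lines[j][1]
                    lines[i] = (a, b)  # one line now spans both
                    del lines[j]
                    del use[v]
                    changed = True
                    break
            if changed:
                break
    return lines
```

A vertex used by three or more lines is a junction that must be preserved, so such pairs are left untouched.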

Polygon Detection
After these two simplification processes, each facet is composed of an unordered set of lines. By construction, the lines contained within each facet must form at least one closed loop (i.e., a polygon). Within each facet, polygons are formed by simply end-matching lines until all lines are used. If multiple polygons are formed within one facet, one of these polygons must be interior to another, thus forming a hole within the outer polygon; this is easily determined by a bounding-box test, as demonstrated in Fig. 17.6. This process is repeated for each facet.
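The end-matching step can be sketched as follows; this is an illustrative version that assumes a well-formed facet in which every vertex appears in exactly two lines, so each chain is guaranteed to close.

```python
def detect_polygons(lines):
    """Chain an unordered set of lines into closed loops by end-matching.

    Each line is a (v0, v1) vertex-index pair. Every vertex is assumed to
    appear in exactly two lines, as holds for a facet boundary. Returns a
    list of polygons, each an ordered list of vertex indices.
    """
    remaining = [tuple(ln) for ln in lines]
    polygons = []
    while remaining:
        v_start, v = remaining.pop()
        loop = [v_start]
        while v != v_start:
            loop.append(v)
            # Find the unused line whose endpoint matches the loop's free end.
            for i, (a, b) in enumerate(remaining):
                if a == v or b == v:
                    v = b if a == v else a
                    del remaining[i]
                    break
        polygons.append(loop)
    return polygons
```

When a facet yields more than one loop, the hole test described above compares the loops' bounding boxes to decide which polygon is interior to which.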

Polygon Correction and Hole Detection
Even though the technique described herein is computationally efficient, using an end-matching method to construct polygons can create self-intersections within a polygon. These may occur because vertex repetition is not checked as each line is added to the polygon, since doing so would significantly decrease computational efficiency. Instead, after the polygons have been created, they are checked to see whether any vertex other than the start/end vertex has been used multiple times. An "ear-clipping" method is employed in this situation: a new polygon is created from the lines between the two occurrences of the repeated vertex; these lines are then removed from the larger polygon, and the new polygon is added to the facet, as demonstrated in Fig. 17.7. At this point, the mesh is intersection-free and redundancy-free (all vertices, lines, and facets are unique) and is represented by the least number of surfaces. If the surface-mesh phantom is to be converted to a tetrahedral-mesh phantom, as required by the PHITS radiation transport code, the open-source conversion code TETGEN [5] may be utilized. The mesh can also be triangulated and exported in a file format acceptable to most modern CAD software.
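The clipping step above can be sketched as a split of the vertex loop at the first repeated vertex; the loop-of-indices representation and the choice to clip only the first repetition found are assumptions of this sketch.

```python
def split_at_repeated_vertex(polygon):
    """Clip a polygon at its first repeated vertex.

    `polygon` is an ordered list of vertex indices (a closed loop; the
    start/end vertex is listed once). If some other vertex occurs twice,
    the sub-loop between its two occurrences is clipped off and returned
    as a separate polygon. Returns (remaining_polygon, clipped_polygon),
    with clipped_polygon = None if no vertex repeats.
    """
    seen = {}
    for i, v in enumerate(polygon):
        if v in seen:
            j = seen[v]
            clipped = polygon[j:i]            # sub-loop between the repeats
            rest = polygon[:j] + polygon[i:]  # original loop with it removed
            return rest, clipped
        seen[v] = i
    return polygon, None
```

For the loop [0, 1, 2, 3, 1, 4], vertex 1 repeats, so the sub-loop [1, 2, 3] is clipped off and [0, 1, 4] remains; applied repeatedly, this removes all self-intersections introduced by end-matching.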

Conversion Process Benchmarking
In testing the performance of the voxel-to-mesh conversion algorithm, two benchmarking tasks were performed. First, it was important to verify that the algorithm is robust and can handle arbitrary datasets correctly; thus, a series of random cubic binary voxel arrays containing between 10³ and 10⁸ elements were generated and then meshed. One example is shown in Fig. 17.8. Second, it was important that the conversion algorithm perform efficiently in a practical setting; thus, mesh conversions were applied to the UF/NCI reference adult male phantom [6] at voxel resolutions ranging from 1 cm³ to 1 mm³, as shown in Fig. 17.9. Finally, to assess how the conversion algorithm scales across multiple processors, these two benchmarking studies were repeated using 1, 2, 4, 8, and 16 cores. All benchmarking tasks were run on the UF HiPerGator cluster using Intel E5-2698 v3 (2.3 GHz) processors. The code was compiled using Intel's C++ compiler with the -qopenmp and -O3 compiler flags.

Results
For the random array meshing benchmarks on a single core, the time to mesh the highest-resolution dataset (250 × 250 × 250) was 2.5 × 10⁴ seconds, while the conversion time for the highest-resolution head phantom was approximately 350 seconds. Looking at the time breakdown for each step in the meshing algorithm, the majority of the compute time is devoted to the surface simplification step (see Sect. 2.1.3), which is expected, as this step iteratively compares facets to one another, giving this portion of the algorithm O(n²) performance (see Fig. 17.10). For the voxel-to-mesh phantom conversion, more nearly linear performance is seen, likely due to the less randomized nature of the problem (see Fig. 17.11). Across multiple processors, both benchmarks saw performance gains although, as expected, the gains are not linear: the voxel-to-mesh phantom conversion speedup for 16 cores was approximately a factor of 4.1, whereas for 8 cores it was a factor of 3.7. The diminishing returns are likely due to the implementation of OpenMP scheduling, which can be better optimized in future development of this algorithm.

Conclusions
The presented methodology provides a fast and efficient means of converting voxel data to a polygon mesh format containing no degenerate facets and no self-intersections, thus making it useful as input to Monte Carlo transport codes and CAD programs. The algorithm can convert any segmented set of voxelized data to an optimized meshed surface suitable for a variety of applications, such as Monte Carlo radiation transport or finite element simulations of the interactions between electromagnetic fields and the human body, e.g., during MRI.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.