Segmentation of four-dimensional, X-ray computed tomography data
Abstract
The rapidly improving temporal resolution of X-ray computed tomography (CT) imaging methods makes it ever easier to do in-situ, time-resolved (4D) experiments. This work describes a method of segmenting 4D X-ray CT data that works well for extracting information on interfacial properties, such as interfacial curvature and velocity. As an example of this method, a segmentation is performed on data from an isothermal coarsening experiment of an Al-Cu solid/liquid mixture.
Keywords
Segmentation, Registration, 4D data, Computed tomography
Background
The key characteristic of a 4D experiment is having an in-situ measurement method in which the duration of each scan is on the same timescale or faster than the evolution of the sample. X-ray computed tomography (CT) is a common way to do 4D studies because it can be used to make consecutive, nondestructive measurements of a sample in 3D. This technique has been used for the last decade [1], with the continually improving spatial and temporal resolution of synchrotron facilities enabling an ever wider variety of materials and processes to be studied. The recent review article by Rowenhorst [2] covers upwards of a dozen recent 4D studies, and the review articles by Stock [3] and Landis [4] cover X-ray CT in great detail. These articles contain many examples in which 3D or 4D characterization of microstructures has provided valuable information, and also show how such data can be used to validate the microstructural evolution simulations being used in materials-by-design methods. Segmentation methods like the one presented here are necessary to produce the accurate data required for these comparisons.
We have used X-ray CT to study the isothermal coarsening dynamics of solid/liquid mixtures of Al-Cu alloys. The goal of these experiments is to determine the interfacial locations between the liquid and solid phases and how the interfaces evolve over time. This makes it possible to study the interfacial curvature (which sets the chemical potential of the interface and therefore drives coarsening) and how it affects the microstructural evolution of the sample. 4D data is necessary to do this because calculating curvatures requires the interfacial locations in 3D and calculating the interfacial normal velocities requires interfacial locations over time.
The goal of this paper is to share some of our experiences working with 4D X-ray CT and the Al-Cu system and to show some of the analysis tools that we have used and developed to work with these datasets. The methods discussed here were developed for a very specific type of dataset (4D, containing three phases, and created with absorption-contrast X-ray CT), but they are in fact very general and could easily be applied to datasets containing more or fewer phases, to 2D or 3D data, or even to completely different characterization techniques such as optical or scanning electron microscopy.
The segmentation technique presented here has been tailored to determine interfacial locations that are both smooth and accurate. This is in contrast to many other segmentation techniques in which the goal is to produce a segmentation that is as accurate as possible with little regard for the smoothness of the boundaries. To achieve this, an iterative technique is employed in which the goal is to determine smooth interface locations that are still consistent with the noisy data.
Methods
Experimental methods
The samples used in these studies are all hypoeutectic Al-Cu alloys with compositions between 14 wt% Cu and 28 wt% Cu. The samples are prepared by directionally solidifying a 12 mm diameter rod, then cutting out 1 mm diameter cylinders. These samples are held at 553°C, which is 5°C above the eutectic temperature, for the duration of the experiment to create a solid/liquid mixture with constant phase fractions. The furnace used in these experiments holds the temperature to well within a degree of the setpoint, which rules out noteworthy microstructural changes due to temperature variations. Experiments are run for up to 12 hours to develop a significant amount of change in the microstructure.
These experiments were carried out at the TOMCAT beamline at the Swiss Light Source, Paul Scherrer Institut in 2008, using an exposure time of 250 ms and 721 projections per reconstruction. This results in low-noise reconstructions and a total scan time of about 4 minutes, which is well matched to the rate of change of the sample. The reconstructed data is 1024 × 1024 × 1024 voxels and each voxel has an edge length of 1.4 μm.
Registration methods
One advantage of using X-ray CT to perform a 3D characterization is that it produces a rigid 3D dataset that does not require registration of individual 2D sections, as many serial-sectioning methods of collecting 3D data do. That said, registration is still an important step in the analysis of 4D CT data because the sample rotates through 180° during each CT scan and must then return to its initial position. If it does not return to the exact starting position of the previous scan, the two resulting datasets will be slightly rotated relative to one another.
This misalignment, which is typically on the order of fractions of a degree, can be removed by registering the multiple datasets. The registration process is an optimization problem in which a metric that quantifies the alignment of the two datasets is defined and optimized with respect to the degrees of freedom that are allowed for the spatial transformations. Since the 3D datasets can be considered rigid, only affine transformations need be considered.
Because the segmentations are done in 4D, registration must be performed on ρ before segmenting the data. The registration starts with the first 3D dataset as the reference and the second dataset being registered to it. When this is complete, the transformed second dataset is used as the reference and the third dataset is registered to it. In this way, the differences between datasets that are due to sample evolution are negligible compared to the misalignment.
Mutual information [5, 6] is used as the registration metric. Because mutual information is a measure of the similarity between two datasets, it is maximized when the two datasets are properly aligned.
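As a concrete illustration of this metric, the following Python sketch (not the registration code used in this work; the helper name, bin count, and test images are illustrative, and the optimization over rigid transformations is omitted) estimates mutual information from a joint histogram and shows that it drops when one image is shifted out of alignment:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equally-shaped intensity arrays,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
rolled = np.roll(img, 5, axis=0)              # a misaligned copy
# perfect alignment gives the highest mutual information
assert mutual_information(img, img) > mutual_information(img, rolled)
```

In a full registration, this metric would be evaluated inside an optimization loop over the allowed transformation parameters.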
Segmentation methods
The main goal of these segmentations is to determine the interface locations between the liquid and solid phases with sufficient accuracy to be able to calculate interfacial curvature and velocity. Both of these quantities can be significantly affected by even slight amounts of noise in the data. Thus, the goal of the segmentation is to determine interfacial locations that are smooth in both time and space while keeping them consistent with the reconstructed data.
The segmentation method succeeds by separating the image intensity information from the interfacial location information, then iteratively applying diffusive smoothing and accuracy-maintaining steps to the interface location data. The accuracy-maintaining step acts on the larger features more than on the smaller features that are more likely to be noise, thus maintaining the accuracy of the larger features while the smoothing reduces the noise.
To define the interfacial location information, a level-set-based method is used to implicitly define the interface locations using a signed distance function (SDF). An SDF is a dense array of data in which the value at each voxel is the distance to the nearest interface, with negative values considered on one side of the interface and positive values on the other; thus, the interface is defined as where the SDF crosses zero. This implicit representation allows for sub-voxel interface positioning and also lends itself to simple, accurate curvature calculation.
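A minimal Python sketch of constructing an SDF from a binary solid mask follows. The brute-force distance computation is illustrative only; a practical implementation would use a fast marching or fast sweeping reinitialization scheme:

```python
import numpy as np

def signed_distance(solid):
    """Brute-force signed distance to the phase boundary of a small binary
    volume: negative inside `solid`, positive outside, so the interface
    sits at the zero crossing. (Illustrative only; real implementations
    use fast marching/sweeping reinitialization.)"""
    inside = np.argwhere(solid).astype(float)
    outside = np.argwhere(~solid).astype(float)
    # distance from each voxel to the nearest voxel of the opposite phase
    d_in = np.sqrt(((inside[:, None, :] - outside[None, :, :]) ** 2).sum(-1)).min(1)
    d_out = np.sqrt(((outside[:, None, :] - inside[None, :, :]) ** 2).sum(-1)).min(1)
    phi = np.empty(solid.shape)
    phi[solid] = -d_in
    phi[~solid] = d_out
    return phi

# a solid ball of radius 5 voxels in a 16^3 volume
z, y, x = np.ogrid[:16, :16, :16]
ball = (z - 8) ** 2 + (y - 8) ** 2 + (x - 8) ** 2 <= 5 ** 2
phi = signed_distance(ball)
assert phi[8, 8, 8] < 0 and phi[0, 0, 0] > 0   # negative inside, positive outside
```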
All arrays used in this work are defined in 4D; however, there are some instances where operations are performed in 3D on every timestep individually. The reinitialization of an SDF is one of these instances because the definition of an SDF is only a spatial relationship and not explicitly a temporal one.
The segmentation technique is largely based on the piecewise-smooth method for segmenting two phases presented in [7]. In the current work, modifications have been made to account for multiple phases. Similar multiphase modifications have been made in [8, 9]; however, the current implementation is somewhat simpler than these at the expense of some generality.
The piecewise-smooth method is based on having a few distinct regions in which the intensity values are smooth and slowly varying, with abrupt changes in intensity at the boundaries of the phases. A variation of the piecewise-smooth method is used in this work in which the smooth intensities for each phase are replaced with constant values. The piecewise-constant approach is used here because the data contains three distinct regions, each of which has a roughly constant intensity value. The piecewise-smooth method from [7] could easily be incorporated into these methods.
While this method can be used for 2D, 3D or 4D data, there are some distinct advantages to working with the highest dimensionality of data that is available, due to the increasing data density with increasing dimensionality. For example, a single voxel in a 3D dataset has 6 neighbors within a one-voxel radius and 32 neighbors within a two-voxel radius, compared to 4 and 12 neighbors within one- and two-voxel radii for a 2D dataset. In 4D, this effect is compounded by the temporal neighbor data^{a}.
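The neighbor counts quoted above can be checked with a short Python sketch that counts lattice points within a given Euclidean radius of a voxel:

```python
import itertools

def lattice_neighbors(dim, radius):
    """Number of lattice points within Euclidean `radius` of a voxel
    (excluding the voxel itself) on a `dim`-dimensional unit grid."""
    r = int(radius)
    count = 0
    for offset in itertools.product(range(-r, r + 1), repeat=dim):
        # skip the center voxel; keep offsets inside the Euclidean ball
        if any(offset) and sum(c * c for c in offset) <= radius ** 2:
            count += 1
    return count

assert lattice_neighbors(2, 1) == 4 and lattice_neighbors(2, 2) == 12
assert lattice_neighbors(3, 1) == 6 and lattice_neighbors(3, 2) == 32
```

As the endnote observes, the corresponding 4D count is less meaningful when the spatial and temporal voxel spacings differ, so it is not computed here.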
The overall scheme of this segmentation technique comprises an initialization stage and an iterative stage. In the initialization, the following are calculated:

The initial interfacial locations

The intensity values of the individual phases

The piecewise constant approximation of the intensity values
Each iteration contains the following steps:

Compute the difference between the original intensity data and the piecewise constant approximation

Use this difference to update the interfacial locations

Smooth the interfacial location arrays

Recompute intensity values of the individual phases

Recompute the piecewise constant approximation of the intensity data
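The iteration above can be illustrated with a 1D, two-phase Python toy (this is not the Fortran/MPI implementation; the signal, β, smoothing strength, and iteration count are all illustrative, and the SDF reinitialization step is omitted for brevity):

```python
import numpy as np

# A noisy step signal stands in for the reconstructed intensity data rho;
# a signed-distance-like array phi carries the interface location.
rng = np.random.default_rng(1)
n = 200
truth = np.where(np.arange(n) < 120, 0.3, 1.0)     # true phase intensities
rho = truth + 0.05 * rng.standard_normal(n)        # noisy "reconstruction"

phi = np.linspace(-50.0, 50.0, n)                  # initial interface at midpoint
beta = 0.5                                         # illustrative update strength
for _ in range(60):
    inside, outside = phi < 0, phi >= 0
    rho1, rho2 = rho[inside].mean(), rho[outside].mean()   # phase intensities
    approx = np.where(inside, rho1, rho2)          # piecewise-constant approximation
    delta_rho = rho - approx                       # step 1: residual vs. data
    phi += beta * np.sign(rho2 - rho1) * delta_rho # step 2: move the interface
    phi += 0.25 * np.convolve(phi, [1.0, -2.0, 1.0], mode="same")  # step 3: smooth

labels = phi >= 0                                  # final two-phase segmentation
assert np.mean(labels == (truth > 0.5)) > 0.95     # nearly all voxels recovered
```

The loop recovers the step location near voxel 120 even though the initial guess placed it at the midpoint, because the residual term keeps pushing mislabeled voxels across the zero level set while the diffusion term keeps the level set smooth.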
The details of this procedure are given below and an example of its use, along with values used for the parameters, is given in the Results section. The software used to do these segmentations is written in Fortran and parallelized using MPI to take advantage of the large amounts of memory available on a distributed-memory machine.
Initialization
The three previously described regions to be segmented are identified as Ω_{1}, Ω_{2} and Ω_{3}, which represent the sample holder, the solid phase and the liquid phase, respectively. These Ω_{i} variables are sets in ${\mathbb{R}}^{4}$. Voxel locations are described by the 4D vector $\overrightarrow{x}$; therefore, if a voxel is in the solid phase, it is described as $\overrightarrow{x}\in {\Omega}_{2}$. The mathematical definition of these regions will be given later.
The piecewise constant approximation $\widehat{\rho}$ is smoothed by evolving the diffusion equation $\partial \widehat{\rho}/\partial \tau ={\nabla}^{2}\widehat{\rho}$, where τ is an artificial smoothing time. This equation is evolved for a predetermined amount of time that is related to the interfacial width in ρ. This step is only necessary if the edges in the data are blurred; if they are sharp, this step can be omitted. This smoothing is applied in 3D and not 4D because the CT data collection process is a series of sequential 3D measurements, so any blurring due to data collection would be strictly spatial and not temporal.
Iterations
The interface location arrays are smoothed by evolving the diffusion-like equation^{b} $\partial {\phi}_{i}/\partial \tau ={D}_{x}{\phi}_{i,xx}+{D}_{y}{\phi}_{i,yy}+{D}_{z}{\phi}_{i,zz}+{D}_{t}{\phi}_{i,tt}$, where ϕ_{i} is either ϕ_{1} or ϕ_{2}, τ is again an artificial smoothing time, and the prefactors D_{x}, D_{y}, D_{z} and D_{t} are the diffusivity-like terms that are adjusted to account for differing resolutions in space and time in the experiment. After smoothing, an SDF reinitialization procedure is then applied to the ϕ_{1} and ϕ_{2} arrays to ensure they maintain their SDF properties. This is the same reinitialization procedure as used in the initialization of the segmentation method.
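This anisotropic smoothing step can be sketched with an explicit finite-difference update in Python (an illustration, not the Fortran implementation; the axis ordering, parameter values, and periodic boundaries via `np.roll` are simplifications):

```python
import numpy as np

def smooth_4d(phi, d=(1.0, 1.0, 1.0, 0.5), dtau=0.1, steps=5):
    """Explicit finite-difference smoothing of a 4D array with one
    diffusivity-like prefactor per axis (assumed order x, y, z, t).
    Periodic boundaries keep the sketch short."""
    phi = phi.astype(float).copy()
    for _ in range(steps):
        update = np.zeros_like(phi)
        for axis, coeff in enumerate(d):
            # second difference along this axis, scaled by its prefactor
            update += coeff * (np.roll(phi, 1, axis) - 2 * phi + np.roll(phi, -1, axis))
        phi += dtau * update
    return phi

spike = np.zeros((8, 8, 8, 8))
spike[4, 4, 4, 4] = 1.0
out = smooth_4d(spike)
assert np.isclose(out.sum(), 1.0)   # diffusive smoothing conserves the total
assert out[4, 4, 4, 4] < 1.0        # the spike spreads into its neighbors
```

The lower prefactor on the last axis mimics the reduced temporal diffusivity discussed in the Results section.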
where β is an adjustable parameter that controls the magnitude of the accuracy-preserving update. These equations are applied once per overall iteration. Since the smoothing and accuracy-preserving steps work together to ensure the interfaces become neither too rough nor too inaccurate, their coefficients must be selected together. In this work, a value was chosen for β, and the amount of smoothing was selected to work well with that value.
The intensity values of the individual phases are then recomputed using Equations 9, 10 and 11, and the piecewise constant approximation is recalculated using Equation 12.
Analysis tools
The mean curvature is calculated directly from the SDF using the standard formula for implicit surfaces [11], $H=\frac{{\varphi}_{x}^{2}\left({\varphi}_{yy}+{\varphi}_{zz}\right)+{\varphi}_{y}^{2}\left({\varphi}_{xx}+{\varphi}_{zz}\right)+{\varphi}_{z}^{2}\left({\varphi}_{xx}+{\varphi}_{yy}\right)-2{\varphi}_{x}{\varphi}_{y}{\varphi}_{xy}-2{\varphi}_{x}{\varphi}_{z}{\varphi}_{xz}-2{\varphi}_{y}{\varphi}_{z}{\varphi}_{yz}}{2{\left({\varphi}_{x}^{2}+{\varphi}_{y}^{2}+{\varphi}_{z}^{2}\right)}^{3/2}}$, where the subscripts indicate partial derivatives.
The interfacial region is identified using the smoothed delta function of [10], $\delta \left(\varphi \right)=\frac{1}{2\epsilon}\left(1+\mathrm{cos}\left(\frac{\pi \varphi}{\epsilon}\right)\right)$ for $|\varphi |\le \epsilon$ and $\delta \left(\varphi \right)=0$ otherwise, where ε is a parameter that determines the width of the interface; a value of ε=1.5 is used on the recommendation of [10]. The delta function has units of inverse length and can be used as an area-weighting term inside a volume integral or summation; for example, the total area-weighted curvature is $\sum _{\overrightarrow{x}}\delta \left(\varphi \left(\overrightarrow{x}\right)\right)H\left(\overrightarrow{x}\right)\mathrm{\Delta V}$, where $H\left(\overrightarrow{x}\right)$ is the curvature of a voxel at location $\overrightarrow{x}$ and ΔV is the volume per voxel.
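The delta function's use as an area weight can be sketched as follows, assuming the cosine-form smoothed delta from [10], ε=1.5, and unit voxel spacing; for a spherical interface the delta-weighted sum over the volume should approximate the analytic area 4πr²:

```python
import numpy as np

def smoothed_delta(phi, eps=1.5):
    """Smoothed delta function of an SDF (cosine form from Osher and
    Fedkiw): nonzero only within eps of the interface, units 1/length."""
    d = np.zeros_like(phi)
    band = np.abs(phi) <= eps
    d[band] = (1.0 + np.cos(np.pi * phi[band] / eps)) / (2.0 * eps)
    return d

# exact SDF of a sphere of radius 10 voxels on a unit-spacing grid
z, y, x = np.mgrid[:32, :32, :32]
phi = np.sqrt((z - 15.5) ** 2 + (y - 15.5) ** 2 + (x - 15.5) ** 2) - 10.0

area = smoothed_delta(phi).sum() * 1.0   # sum of delta * voxel volume (=1)
exact = 4.0 * np.pi * 10.0 ** 2          # analytic sphere area
assert abs(area - exact) / exact < 0.05  # within a few percent
```

Replacing the summand with `smoothed_delta(phi) * H` would give the total area-weighted curvature described in the text.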
Results
The optimal parameters in this segmentation technique will depend strongly on a variety of features, including the amount of noise in the data, the size scale of the features being segmented and the intensity range of the starting data. For the data used here, the average feature size is approximately 20 voxels at the first timestep and grows to approximately 40 voxels by the end of the experiment. The noise level in the data is relatively low and can be seen in Figure 1. The data in this figure has been scaled such that a value of 0 corresponds to black and a value of 1.5 corresponds to white. Threshold values of T_{1/2}=0.4 and T_{2/3}=1.0 are used, resulting in initial intensity values of $\widehat{{\rho}_{1}}=0.242$, $\widehat{{\rho}_{2}}=0.669$ and $\widehat{{\rho}_{3}}=1.184$. By the end of the segmentation, these values changed slightly to $\widehat{{\rho}_{1}}=0.256$, $\widehat{{\rho}_{2}}=0.724$ and $\widehat{{\rho}_{3}}=1.182$.
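The thresholding that produces the initial per-phase intensity values can be sketched on synthetic data (the normal distributions below are illustrative stand-ins for the real reconstruction; the thresholds are those quoted above):

```python
import numpy as np

# Synthetic stand-in for the scaled intensity data: three regions with
# roughly constant intensities, as in the holder/solid/liquid data.
rng = np.random.default_rng(2)
rho = np.concatenate([
    rng.normal(0.25, 0.05, 1000),   # sample holder
    rng.normal(0.70, 0.05, 1000),   # solid phase
    rng.normal(1.18, 0.05, 1000),   # liquid phase
])

t12, t23 = 0.4, 1.0                 # thresholds T_{1/2} and T_{2/3}
rho1 = rho[rho < t12].mean()                      # initial holder intensity
rho2 = rho[(rho >= t12) & (rho < t23)].mean()     # initial solid intensity
rho3 = rho[rho >= t23].mean()                     # initial liquid intensity
assert rho1 < t12 < rho2 < t23 < rho3             # means bracketed by thresholds
```

In the actual procedure these initial means are then refined at every iteration as the interface locations improve.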
For this data, smoothing was applied to $\widehat{\rho}$ for 1.0 artificial time units to make the intensity profile through the interface similar to the data. This was then used to compute Δ ρ, which was smoothed for 5.0 artificial time units to reduce the influence of noise. This value was chosen because it removes the majority of the noise in Δ ρ without significantly affecting the larger features.
The interface location arrays are updated using a value of β=0.5; this is the largest value that could be used without numerical instabilities occurring. The value of β was chosen early in determining the optimal parameters and fixed, then the amount of smoothing was varied until the desired results were obtained. The arrays ϕ_{1} and ϕ_{2} were then smoothed with the 4D smoothing method for 0.5 time units, using prefactors of D_{x}=1.0, D_{y}=1.0, D_{z}=1.0 and D_{t}=0.5. The lower temporal diffusivity was used because the structure is more finely discretized in time relative to its rate of evolution than it is in space relative to the average feature size. The individual intensity values, $\widehat{{\rho}_{1}}$, $\widehat{{\rho}_{2}}$ and $\widehat{{\rho}_{3}}$, were updated at every iteration. A total of 25 iterations were used for this segmentation.
Discussion
There are advantages to working with 4D data; however, there are also times when it makes more sense to treat time-resolved data as individual 3D datasets. One of these instances is when the temporal resolution is relatively low: if the sample changes significantly between timesteps, smoothing in time can introduce inaccuracies, just as smoothing spatially under-resolved data can. The other reason for not using the full 4D dataset is the computational burden, which stems primarily from storing the full 4D dataset in memory, since a dataset can contain up to 10^{11} or 10^{12} voxels.
Conclusions
Registration and segmentation methods are presented for processing 4D isothermal coarsening data of Al-Cu alloys. These methods work well for determining smooth interface locations from noisy 4D data.
This method has been presented for a particular case; however, it is easy to generalize to many other uses. For example, it could be applied in 2D or 3D instead of 4D, or to two, four, or more phases instead of three. Also, some of the smoothing processes could be replaced with alternatives better suited to different problems, such as motion by mean curvature or nonlinear diffusion smoothing instead of the linear diffusion smoothing presented here.
Endnotes
^{a} The number of neighbors for the 4D data is intentionally not reported because the differing spatial and temporal resolutions make this analysis less straightforward than the uniform 2D and 3D examples.
^{b} This is not an actual diffusion equation due to the presence of the temporal component.
Acknowledgements
This work was funded by the DOE under grant DE-FG02-99ER45782/A012 and by a DOE NNSA Stewardship Science Graduate Fellowship (grant DE-FC52-08NA28752). The authors would like to thank Dr. Julie L. Fife for the use of her data in the development of these methods and Dr. E. Begum Gulsoy for guidance with registration methods.
References
1. Schmidt S, Nielsen S, Gundlach C, Margulies L, Huang X, Jensen DJ: Watching the growth of bulk grains during recrystallization of deformed metals. Science 2004, 305(5681):229–232. doi:10.1126/science.1098627
2. Rowenhorst DJ, Voorhees PW: Measurement of interfacial evolution in three dimensions. Annu Rev Mater Res 2012, 42:105–124. doi:10.1146/annurev-matsci-070511-155028
3. Stock SR: Recent advances in X-ray microtomography applied to materials. Int Mater Rev 2008, 53(3):129–181. doi:10.1179/174328008X277803
4. Landis EN, Keane DT: X-ray microtomography. Mater Charact 2010, 61(12):1305–1316. doi:10.1016/j.matchar.2010.09.012
5. Pluim JPW, Maintz JBA, Viergever MA: Mutual-information-based registration of medical images: a survey. IEEE Trans Med Imaging 2003, 22(8):986–1004. doi:10.1109/TMI.2003.815867
6. Gulsoy E, Simmons J, DeGraef M: Application of joint histogram and mutual information to registration and data fusion problems in serial sectioning microstructure studies. Scr Mater 2009, 60(6):381–384. doi:10.1016/j.scriptamat.2008.11.004
7. Tsai A, Yezzi A, Willsky AS: A curve evolution approach to smoothing and segmentation using the Mumford-Shah functional. In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000). New York, NY: IEEE; 2000:119–124.
8. Yezzi A, Tsai A, Willsky A: A fully global approach to image segmentation via coupled curve evolution equations. J Vis Comm Image Represent 2002, 13(1–2):195–216.
9. Chan TF, Vese LA: A level set algorithm for minimizing the Mumford-Shah functional in image processing. In Proceedings IEEE Workshop on Variational and Level Set Methods in Computer Vision. New York, NY: IEEE; 2001:161–168.
10. Osher S, Fedkiw R: Level Set Methods and Dynamic Implicit Surfaces. Applied Mathematical Sciences, vol. 153. New York: Springer; 2003.
11. Goldman R: Curvature formulas for implicit curves and surfaces. Comput Aided Geomet Des 2005, 22(7):632–658. doi:10.1016/j.cagd.2005.06.005
Copyright information
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.