Segmentation of four-dimensional, X-ray computed tomography data
The rapidly improving temporal resolution of X-ray computed tomography (CT) imaging methods makes it ever easier to perform in-situ, time-resolved (4D) experiments. This work describes a method of segmenting 4D X-ray CT data that is well suited to extracting interfacial properties such as interfacial curvature and velocity. As an example, a segmentation is performed on data from an isothermal coarsening experiment on an Al-Cu solid/liquid mixture.
Keywords: Segmentation, Registration, 4D data, Computed tomography
The key characteristic of a 4D experiment is having an in-situ measurement method in which the duration of each scan is on the same timescale as, or faster than, the evolution of the sample. X-ray computed tomography (CT) is a common way to do 4D studies because it can be used to make consecutive, non-destructive measurements of a sample in 3D. This technique has been used for the last decade, with the continually improving spatial and temporal resolution of synchrotron facilities enabling an ever wider variety of materials and processes to be studied. The recent review article by Rowenhorst covers upwards of a dozen recent 4D studies, and the review articles by Stock and Landis cover X-ray CT in great detail. These articles contain many examples in which 3D or 4D characterization of microstructures has provided valuable information, and also show how such data can be used to validate the microstructural evolution simulations being used in materials-by-design methods. Segmentation methods like the one presented here are necessary to produce the accurate data required for these comparisons.
We have used X-ray CT to study the isothermal coarsening dynamics of solid/liquid mixtures of Al-Cu alloys. The goal of these experiments is to determine the interfacial locations between the liquid and solid phases and how the interfaces evolve over time. This makes it possible to study the interfacial curvature (which sets the chemical potential of the interface and therefore drives coarsening) and how it affects the microstructural evolution of the sample. 4D data is necessary to do this because calculating curvatures requires the interfacial locations in 3D and calculating the interfacial normal velocities requires interfacial locations over time.
The goal of this paper is to share some of our experiences working with 4D X-ray CT and the Al-Cu system and to show some of the analysis tools that we have used and developed to work with these datasets. The methods discussed here were developed for a very specific type of dataset (4D, containing three phases, and created with absorption-contrast X-ray CT), but they are in fact very general and could easily be applied to datasets containing more or fewer phases, to 2D or 3D data, or even to completely different characterization techniques such as optical or scanning electron microscopy.
The segmentation technique presented here has been tailored to determine interfacial locations that are both smooth and accurate. This is in contrast to many other segmentation techniques, in which the goal is to produce a segmentation that is as accurate as possible with little regard for the smoothness of the boundaries. To achieve this, an iterative technique is employed in which the goal is to determine smooth interface locations that are still consistent with the noisy data.
The samples used in these studies are all hypo-eutectic Al-Cu alloys with compositions between 14 wt% Cu and 28 wt% Cu. The samples are prepared by directionally solidifying a 12 mm diameter rod, then cutting out 1 mm diameter cylinders. These samples are held at 553°C, which is 5°C above the eutectic temperature, for the duration of the experiment to create a solid/liquid mixture with constant phase fractions. The furnace used in these experiments holds the temperature to well within a degree of the setpoint, which rules out significant microstructural changes due to temperature variations. Experiments are run for up to 12 hours to develop a significant amount of change in the microstructure.
These experiments were carried out at the TOMCAT beamline of the Swiss Light Source, Paul Scherrer Institut, in 2008, using an exposure time of 250 ms and 721 projections per reconstruction. This results in low-noise reconstructions and a total scan time of about 4 minutes, which is well matched to the rate of change of the sample. The reconstructed data is 1024 × 1024 × 1024 voxels and each voxel has an edge length of 1.4 μm.
One advantage of using X-ray CT for 3D characterization is that it produces a rigid 3D dataset that does not require the registration of individual 2D sections required by many serial-sectioning methods of collecting 3D data. That said, registration is still an important step in the analysis of 4D CT data because the sample rotates through 180° during each CT scan and must then return to its initial position. If it does not return to the exact starting position of the previous scan, the two resulting datasets will be slightly rotated relative to one another.
This misalignment, which is typically on the order of fractions of a degree, can be removed by registering the multiple datasets. The registration process is an optimization problem in which a metric that quantifies the alignment of the two datasets is defined and optimized with respect to the degrees of freedom that are allowed for the spatial transformations. Since the 3D datasets can be considered rigid, only affine transformations need be considered.
Because the segmentations are done in 4D, registration must be performed on the reconstructed intensity data ρ before segmenting. The registration starts with the first 3D dataset as the reference and the second dataset being registered to it. When this is complete, the transformed second dataset is used as the reference and the third dataset is registered to it, and so on. Registering consecutive scans in this way keeps the difference between datasets due to sample evolution negligible compared to the misalignment.
Mutual information is used as the metric for this optimization. Because mutual information is a measure of the similarity between two datasets, it is maximized when the two datasets are properly aligned.
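As an illustration, the mutual information between two intensity volumes can be estimated from their joint histogram. This is a minimal sketch, not the registration code used in this work; the function name, bin count, and use of NumPy are assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Estimate mutual information (in nats) between two equally shaped
    intensity arrays from a joint histogram of their flattened values."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In a registration loop, this value would be evaluated after each trial affine transform and maximized over the transform parameters.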
The main goal of these segmentations is to determine the interface locations between the liquid and solid phases with sufficient accuracy to calculate interfacial curvature and velocity. Both of these quantities can be significantly affected by even slight amounts of noise in the data. Thus, the goal of the segmentation is to determine interfacial locations that are smooth in both time and space while keeping them consistent with the reconstructed data.
The segmentation method succeeds by separating the image intensity information from the interfacial location information, then iteratively applying diffusive smoothing and accuracy-maintaining steps to the interface location data. The accuracy-maintaining step uses smoothing so that it acts more strongly on the larger features than on the smaller features that are more likely to be noise; it thus maintains the accuracy of the larger features while the smoothing reduces the noise.
To define the interfacial location information, a level-set based method is used to implicitly define the interface locations using a signed distance function (SDF). An SDF is a dense array of data in which the value at each voxel is the distance to the nearest interface, with negative values on one side of the interface and positive values on the other; the interface is thus defined by the zero crossings of the SDF. This implicit representation allows for sub-voxel interface positioning and also lends itself to simple, accurate curvature calculation.
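An SDF for a given binary phase mask can be sketched with a Euclidean distance transform. This is an illustrative construction only (it assumes SciPy is available, and it measures distances to the nearest voxel of the other phase, so the zero level sits roughly half a voxel off the true interface):

```python
import numpy as np
from scipy import ndimage

def signed_distance(mask, spacing=1.0):
    """Signed distance function for a boolean mask: negative inside the
    masked phase, positive outside, zero crossing at the interface."""
    inside = ndimage.distance_transform_edt(mask, sampling=spacing)
    outside = ndimage.distance_transform_edt(~mask, sampling=spacing)
    return outside - inside
```

For sub-voxel accuracy, such an initialization would normally be followed by a proper SDF reinitialization step, as is done in the method described here.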
All arrays used in this work are defined in 4D; however, there are some instances where operations are performed in 3D on each timestep individually. The reinitialization of an SDF is one of these instances, because the definition of an SDF is a purely spatial relationship, not a temporal one.
The segmentation technique is largely based on the piecewise-smooth method for segmenting two phases presented in . In the current work, modifications have been made to account for multiple phases. Similar multi-phase modifications have been made in [8, 9]; however, the current implementation is somewhat simpler than these at the expense of some generality.
The piecewise-smooth method that is used is based on having a few distinct regions, in which the intensity values are smooth and slowly varying with abrupt changes in intensity at the boundaries of the phases. A variation of the piecewise-smooth method is used in this work in which the smooth intensities for each phase are replaced with constant values. The piecewise-constant approach is used here because the data used contains three distinct regions, each of which has a roughly constant intensity value. The piecewise-smooth method from  could easily be incorporated into these methods.
While this method can be used for 2D, 3D or 4D data, there are distinct advantages to working with the highest dimensionality of data available due to the increasing data density with increasing dimensionality. For example, a single voxel in a 3D dataset has 6 neighbors within a one-voxel radius and 32 neighbors within a two-voxel radius, compared to 4 and 12 neighbors within one- and two-voxel radii for a 2D dataset. In 4D, this effect is compounded by the temporal neighbor data [a].
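The neighbor counts quoted above can be verified by enumerating integer lattice offsets within a Euclidean radius; this small check (names illustrative) reproduces the 2D and 3D figures:

```python
import itertools

def neighbors_within(radius, dim):
    """Count lattice points (excluding the origin) whose Euclidean
    distance from the origin is at most `radius`."""
    r = int(radius)
    count = 0
    for offset in itertools.product(range(-r, r + 1), repeat=dim):
        if offset != (0,) * dim and sum(c * c for c in offset) <= radius * radius:
            count += 1
    return count
```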
The overall scheme of this segmentation technique is comprised of an initialization stage and an iterative stage. In the initialization, the following are calculated:
The initial interfacial locations
The intensity values of the individual phases
The piecewise constant approximation of the intensity values
Each iteration contains the following steps:
Compute the difference between the original intensity data and the piecewise constant approximation
Use this difference to update the interfacial locations
Smooth the interfacial location arrays
Recompute intensity values of the individual phases
Recompute the piecewise constant approximation of the intensity data
The details of this procedure are given below and an example of its use, along with values used for the parameters, is given in the Results section. The software used to do these segmentations is written in Fortran and parallelized using MPI to take advantage of the large amounts of memory available on a distributed memory machine.
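The overall scheme can be sketched in a few lines. This is a deliberately simplified two-phase, piecewise-constant version in Python (the actual implementation is Fortran/MPI and handles three phases); Gaussian filtering stands in for the diffusive smoothing step, no SDF reinitialization is performed, and all names and parameter values are illustrative:

```python
import numpy as np
from scipy import ndimage

def segment_piecewise_constant(rho, phi0, beta=0.5, smooth_sigma=1.0, iters=25):
    """Simplified two-phase sketch of the iterative scheme described above.

    rho  : intensity data (2D/3D/4D array)
    phi0 : initial signed-distance-like array; phi < 0 marks phase 1
    Each iteration: piecewise-constant approximation -> residual ->
    interface update -> diffusive smoothing -> updated phase intensities.
    """
    phi = phi0.astype(float).copy()
    for _ in range(iters):
        inside = phi < 0
        c1 = rho[inside].mean() if inside.any() else rho.mean()
        c2 = rho[~inside].mean() if (~inside).any() else rho.mean()
        approx = np.where(inside, c1, c2)   # piecewise-constant approximation
        residual = rho - approx             # difference from the original data
        # Where a voxel's intensity disagrees with its current phase,
        # push phi toward the other side of zero.
        phi += beta * np.sign(c2 - c1) * residual
        phi = ndimage.gaussian_filter(phi, smooth_sigma)  # smoothing stand-in
    return phi
```

The phase intensities c1 and c2 are recomputed every iteration, mirroring the intensity-update steps in the list above.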
The three previously described regions to be segmented are identified as Ω1, Ω2 and Ω3, representing the sample holder, the solid phase and the liquid phase, respectively. These Ωi variables are sets of voxel locations in the 4D domain. Voxel locations will be described by the 4D vector x = (x, y, z, t); therefore, if a voxel is in the solid phase, it will be described as x ∈ Ω2. The mathematical definition of these regions will be given later.
where τ is an artificial smoothing time. This equation is evolved for a predetermined amount of time related to the interfacial width in ρ. This step is only necessary if the edges in the data are blurred; if they are sharp, it can be omitted. The smoothing is applied in 3D rather than 4D because the CT data collection process is a series of sequential 3D measurements, so any blurring due to data collection is strictly spatial, not temporal.
The interface arrays are smoothed by evolving the diffusion-like equation [b]

∂ϕi/∂τ = Dx ∂²ϕi/∂x² + Dy ∂²ϕi/∂y² + Dz ∂²ϕi/∂z² + Dt ∂²ϕi/∂t²,

where ϕi is either ϕ1 or ϕ2, τ is again an artificial smoothing time, and the pre-factors Dx, Dy, Dz and Dt are diffusivity-like terms that are adjusted to account for the differing resolutions in space and time in the experiment. After smoothing, an SDF reinitialization procedure is applied to the ϕ1 and ϕ2 arrays to ensure they maintain their SDF properties. This is the same reinitialization procedure as used in the initialization of the segmentation method.
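A minimal explicit finite-difference sketch of this 4D smoothing step follows; it assumes unit grid spacing, edge-replicated (zero-flux) boundaries, and an explicit Euler step, none of which are specified in the text:

```python
import numpy as np

def second_diff(phi, axis):
    """Central second difference along `axis` with edge-replicated
    (Neumann, zero-flux) boundaries."""
    pad = [(1, 1) if a == axis else (0, 0) for a in range(phi.ndim)]
    p = np.pad(phi, pad, mode="edge")
    up = tuple(slice(2, None) if a == axis else slice(None) for a in range(phi.ndim))
    dn = tuple(slice(0, -2) if a == axis else slice(None) for a in range(phi.ndim))
    return p[up] - 2.0 * phi + p[dn]

def smooth_4d(phi, d=(1.0, 1.0, 1.0, 0.5), dtau=0.1, steps=5):
    """Explicit Euler evolution of
    phi_tau = Dx phi_xx + Dy phi_yy + Dz phi_zz + Dt phi_tt
    on a 4D (x, y, z, t) array with unit grid spacing."""
    assert dtau * sum(d) <= 0.5, "explicit step would be unstable"
    phi = phi.astype(float).copy()
    for _ in range(steps):
        phi = phi + dtau * sum(di * second_diff(phi, ax) for ax, di in enumerate(d))
    return phi
```

With zero-flux boundaries the smoothing conserves the array's mean while reducing its variance, and setting Dt = 0 recovers independent 3D smoothing of each timestep.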
where β is an adjustable parameter. These equations are applied once per overall iteration. Since the smoothing and accuracy-preserving steps work together to ensure the interfaces become neither too rough nor too inaccurate, their coefficients must be selected jointly. In this work, a value was chosen for β, and the amount of smoothing was selected to work well with that value.
The interfacial curvature is computed directly from the SDF as the divergence of the interface normal, κ = ∇·(∇ϕ/|∇ϕ|), which in 3D expands to

κ = [ϕxx(ϕy² + ϕz²) + ϕyy(ϕx² + ϕz²) + ϕzz(ϕx² + ϕy²) − 2ϕxϕyϕxy − 2ϕyϕzϕyz − 2ϕxϕzϕxz] / (ϕx² + ϕy² + ϕz²)^(3/2),

where the subscripts indicate partial derivatives.
The interface delta function is approximated by the smoothed form

δ(ϕ) = (1/(2ε))[1 + cos(πϕ/ε)] for |ϕ| < ε, and δ(ϕ) = 0 otherwise,

where ε is a parameter that determines the width of the interface; a value of ε = 1.5 is used on the recommendation of . The delta function has units of inverse length and can be used as an area-weighting term inside a volume integral or summation; for example, the total area-weighted curvature is Σ κ(x) δ(ϕ(x)) ΔV, where κ(x) is the curvature of a voxel at location x and ΔV is the volume per voxel.
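As a concrete check of the area-weighting property, the cosine-smeared delta (a common level-set choice; the exact form used in this work is an assumption, and the function names are illustrative) integrates to the interfacial area when summed over an SDF with unit voxels:

```python
import numpy as np

def smeared_delta(phi, eps=1.5):
    """Cosine-smeared delta function; nonzero only within |phi| < eps.
    Has units of inverse length."""
    d = np.zeros(phi.shape, dtype=float)
    band = np.abs(phi) < eps
    d[band] = (1.0 + np.cos(np.pi * phi[band] / eps)) / (2.0 * eps)
    return d

def interfacial_area(phi, voxel_volume=1.0, eps=1.5):
    """Area estimate: sum of delta(phi) * dV (valid when |grad phi| = 1,
    i.e. when phi is a signed distance function)."""
    return float(smeared_delta(phi, eps).sum() * voxel_volume)
```

Replacing the plain sum with a sum of κ·δ(ϕ)·ΔV gives the total area-weighted curvature described above.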
The optimal parameters for this segmentation technique depend strongly on a variety of factors, including the amount of noise in the data, the size scale of the features being segmented and the intensity range of the starting data. For the data used here, the average feature size is approximately 20 voxels at the first timestep and grows to approximately 40 voxels by the end of the experiment. The noise level in the data is relatively low and can be seen in Figure 1. The data in this figure has been scaled such that a value of 0 corresponds to black and a value of 1.5 corresponds to white. Threshold values of T1/2 = 0.4 and T2/3 = 1.0 are used, resulting in initial intensity values of , and . By the end of the segmentation, these values changed slightly to , and .
For this data, smoothing was applied to the piecewise-constant approximation for 1.0 artificial time units to make the intensity profile through the interface similar to the data. This was then used to compute Δρ, which was smoothed for 5.0 artificial time units to reduce the influence of noise. This amount was chosen because it removes the majority of the noise in Δρ without significantly affecting the larger features.
The interface location arrays are updated using a value of β = 0.5; this is the largest value that could be used without numerical instabilities occurring. The value of β was fixed early in the parameter selection, and the amount of smoothing was then varied until the desired results were obtained. The arrays ϕ1 and ϕ2 were then smoothed with the 4D smoothing method for 0.5 time units, using pre-factors of Dx = 1.0, Dy = 1.0, Dz = 1.0 and Dt = 0.5. The lower temporal diffusivity was used because the structure is more finely discretized in time, relative to its rate of evolution, than it is in space, relative to the average feature size. The individual intensity values were updated at every iteration, and a total of 25 iterations were used for this segmentation.
There are advantages to working with 4D data; however, there are also times when it makes more sense to treat time-resolved data as individual 3D datasets. One such instance is when the temporal resolution is relatively low: if the sample changes significantly between timesteps, smoothing in time can introduce inaccuracies, just as smoothing spatially under-resolved data can. The other reason for not using the full 4D dataset is the computational burden, which comes primarily from storing the full 4D dataset in memory, since a dataset can contain up to 10¹¹ or 10¹² voxels.
Registration and segmentation methods have been presented that are well suited to processing 4D isothermal coarsening data of Al-Cu alloys, determining smooth interface locations from noisy 4D data.
This method has been presented for a particular case; however, it is easy to generalize to many other uses. For example, it could be used in 2D or 3D instead of 4D, or for two, four or more phases instead of three. Some of the smoothing processes could also be replaced with techniques better suited to different problems, such as motion by mean curvature or non-linear diffusion smoothing instead of the linear diffusion smoothing presented here.
[a] The number of neighbors for the 4D data is intentionally not reported because the differing spatial and temporal resolutions make this analysis less straightforward than the uniform 2D and 3D examples.
[b] This is not an actual diffusion equation due to the presence of the temporal component.
This work was funded by the DOE under grant DE-FG02-99ER45782/A012 and by a DOE NNSA Stewardship Science Graduate Fellowship (grant DE-FC52-08NA28752). The authors would like to thank Dr. Julie L. Fife for the use of her data in the development of these methods and Dr. E. Begum Gulsoy for guidance with registration methods.
- 10. Osher S, Fedkiw R: Level Set Methods and Dynamic Implicit Surfaces. Applied Mathematical Sciences, vol. 153. Springer, New York; 2003.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.