
Journal of Visualization, Volume 21, Issue 4, pp 637–647

A novel robust color gradient estimator for photographic volume visualization

  • Bin Zhang
  • Zhiguang Zhou
  • Yubo Tao
  • Hai Lin
Regular Paper

Abstract

Photographic volume visualization has been widely applied in fields ranging from medicine to biology. Unlike scalar volume data, photographic volume data are captured directly by modern cryo-imaging systems. The voxels are recorded as RGB vectors, which makes it difficult to estimate accurate gradients for shading and for the design of transfer functions. In this paper, we propose a robust color gradient estimation method that produces accurate and robust gradients for photographic volumes. First, a robust color morphological gradient (RCMG) operator is employed to estimate the gradient in a dominant direction, and low-pass filters are then applied to reduce the effect of noise. An aggregation operator is then applied to estimate accurate gradient directions and magnitudes. Based on the obtained color gradients, the shading of internal materials is enhanced and features can be better specified in a 2D transfer function space. Finally, the effectiveness of the robust gradient estimation for photographic volumes is demonstrated on a large number of experimental rendering results, especially for noisy photographic volume data sets.


Keywords

Photographic volume · Color gradient · Volume rendering · Transfer function

1 Introduction

As an important branch of volume visualization, photographic volume visualization has been widely used in fields such as medical and biological research. Unlike conventional scalar volume data, photographic volume data are obtained by means of modern cryo-imaging systems, and each voxel is recorded in the form of original color elements such as an RGB vector. A large number of photographic volume data sets have been acquired in this way to help researchers explore the internal structures of their research subjects, such as the whole mouse data set (Roy et al. 2009) and the human data sets from the Visible Human Project at the National Library of Medicine (Spitzer et al. 1996).

Direct volume rendering is an effective way to project the internal features of volume data sets onto 2D images, with the help of transfer functions that define a mapping from voxels to visual elements such as color and opacity. The gradient plays an important role in direct volume rendering and transfer function design. For example, the gradient is employed to calculate surface normals for the generation of lighting effects (Max 1995; Kniss et al. 2003) in direct volume rendering. In addition, the gradient magnitude is usually applied to detect material boundaries in a 2D transfer function space (Kindlmann and Durkin 1998; Pfister et al. 2001; Roettger et al. 2005; Sereda et al. 2006).

The gradient can be easily calculated by means of finite differences for a scalar volume data set, but it is difficult to derive accurate gradients from a photographic volume data set because its voxels are color vectors. A feasible solution is to convert the data set to grayscale through an RGB-to-grayscale conversion. However, this depends heavily on the decolorization and rarely yields accurate gradient directions and magnitudes. Another option is to estimate gradients from colors directly, such as the color distance gradient (Ebert et al. 2002), which replaces the finite-difference calculation with a measurement of color distance. However, it is still difficult to achieve accurate gradient directions for further volume rendering and transfer function design, especially for noisy data sets.
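The color distance gradient mentioned above can be sketched as follows: the finite difference of a scalar volume is replaced by the distance between neighboring colors, with the sign recovered from a grayscale difference since the distance itself is unsigned. This is a minimal illustration, not the authors' implementation; plain Euclidean RGB distance stands in for the CIELUV distance used by Ebert et al. (2002), and the luma weights are the BT.601 coefficients.

```python
import numpy as np

def color_distance_gradient(vol, x, y, z):
    """Color-distance gradient at interior voxel (x, y, z) of an RGB
    volume `vol` of shape (X, Y, Z, 3). Euclidean RGB distance is an
    assumption standing in for the CIELUV distance; the sign of each
    component comes from the grayscale (luminance) difference."""
    g = np.zeros(3)
    gray = vol @ np.array([0.299, 0.587, 0.114])  # luminance volume
    idx = np.array([x, y, z])
    for axis in range(3):
        fwd, bwd = idx.copy(), idx.copy()
        fwd[axis] += 1
        bwd[axis] -= 1
        # central difference: unsigned color distance, signed by luminance
        dist = np.linalg.norm(vol[tuple(fwd)] - vol[tuple(bwd)]) / 2.0
        sign = np.sign(gray[tuple(fwd)] - gray[tuple(bwd)])
        g[axis] = sign * dist
    return g
```

A constant-color volume yields a zero gradient, while any chromatic or luminance change produces a nonzero component along the corresponding axis.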

In this paper, a robust color gradient estimation method for photographic volumes is proposed to achieve accurate gradient directions and magnitudes. The gradient component in a specified direction should be computed from the rate of change of the voxel data in that direction, so the gradient estimator must be applied along it. In the proposed method, a robust color morphological gradient (RCMG) operator (Evans and Liu 2006) is first employed to define a gradient estimator in a dominant direction, and low-pass filters are then applied in the other two orthogonal directions to reduce the effect of noise. By applying the RCMG operator and the low-pass filters in different orders, we obtain a group of different estimates of the same gradient component: feature information that is blurred out by the low-pass filter in one estimate is preserved in the estimates in which the RCMG operator is applied before the low-pass filter. An aggregation operator is then applied to generate the final gradient. With the color gradients derived by our method, the lighting effects in the visualization results of photographic volumes are largely improved, especially for noisy data sets. The more accurate gradient magnitudes also make gradient-magnitude-based transfer functions more effective. We demonstrate the effectiveness of the proposed gradient estimation method in photographic volume visualization with a rich set of experimental results.

2 Related work

Shading effects, usually generated by means of the gradient-based Blinn–Phong shading model, are important visual cues in computer graphics, as they enhance the shape and depth perception of 3D structures. Correa et al. (2011) studied gradient estimation for rendering unstructured-mesh volume data and provided a detailed comparison of different gradient estimation methods. Although finite differences are often used to calculate the gradient for scalar volume data sets, they are not suitable for photographic volume data sets, due to the 3D color representation of the voxels. The simplest way to estimate gradients for photographic volumes is to convert the color values into grayscale values and calculate the grayscale gradient by means of a finite-difference operator. Another class of methods derives the gradient directly from the color vectors. Gradient estimation differs across color spaces, such as RGB, CIELUV, CIELAB, and HSV (Plataniotis and Venetsanopoulos 2000). For example, the CIELUV and CIELAB color spaces are perceptually uniform, so color gradients computed in them correspond more closely to human visual perception, and they are therefore often used for the gradient estimation of photographic volume data sets. Ebert et al. (2002) and Morris and Ebert (2002) calculated the color distance gradient in the CIELUV color space and then designed a gradient-based transfer function for users to specify visual cues for features of interest. Gargesha et al. (2009) extended the color gradient estimation to a new transfer function design and presented meaningful results for feature detection.
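The weakness of the grayscale route can be made concrete with a tiny numeric example, under the assumption of BT.601 luma weights: two colors with exactly equal luminance but very different hue map to the same gray value, so a grayscale finite difference across their boundary vanishes even though a strong chromatic edge exists.

```python
import numpy as np

# ITU-R BT.601 luma weights for RGB-to-grayscale conversion
weights = np.array([0.299, 0.587, 0.114])

# Two hypothetical colors chosen so their luminances coincide exactly:
# 0.299 * 0.587 == 0.587 * 0.299
red_ish = np.array([0.587, 0.0, 0.0])
green_ish = np.array([0.0, 0.299, 0.0])

lum_a = weights @ red_ish
lum_b = weights @ green_ish
grayscale_step = lum_b - lum_a                     # 0: edge invisible in grayscale
color_step = np.linalg.norm(green_ish - red_ish)   # clearly nonzero color edge
```

A finite-difference operator applied to the grayscale volume misses this boundary entirely, while any color-space distance detects it; this is why the green sinus boundaries in Sect. 4.1 are lost by the grayscale Sobel operator.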

Gradients are also important in feature exploration for volume visualization, where they can be used to define transfer functions for users to specify opacities and colors for features of interest. Levoy (1988) first used the gradient vector as the surface normal for direct volume rendering, and used the gradient magnitude as a dimension of transfer functions to help users find boundary features of interest. Inspired by classification based on gradient magnitude, a large number of 2D transfer functions have been designed for further feature exploration. For example, Kindlmann et al. proposed a semi-automatic generation scheme for both 1D and 2D transfer functions (Kindlmann and Durkin 1998; Pfister et al. 2001). Roettger et al. (2005) proposed the spatialized transfer function, which takes spatial information into consideration in the design of transfer functions. Sereda et al. (2006) proposed the LH transfer function, which is generated from a histogram by following the changes of gradient directions. In a previous study (Zhang et al. 2015), we proposed an intuitive color-based transfer function for photographic volumes and extended it to 2D with the gradient magnitude.

Color gradient estimation has been studied extensively in the field of image processing. Several effective operators have been proposed to derive accurate gradients, such as the minimum vector dispersion (MVD) edge detector (Russo and Lazzari 2005), the robust color morphological gradient (RCMG) operator (Evans and Liu 2006), and robust gradient vector estimation schemes. The usefulness of these operators for color images has been demonstrated, which motivates their use in photographic volume visualization. In this paper, we propose a new gradient estimation method to calculate both the gradient direction and the magnitude accurately and robustly, especially for noisy photographic volume data sets.

3 Robust color gradient estimation

As gradients depict the changes of the data in different directions, the gradient magnitude is small within homogeneous materials and large within the boundary regions between different materials. However, gradient estimation for photographic volume data differs from that for scalar volume data: the gradients represent the direction of change of color values, which is not only difficult to estimate effectively but also hard to relate to human visual perception. In the field of image processing, a number of robust gradient estimation schemes for color images have been studied to generate accurate and effective gradients, which can be applied to tasks such as image segmentation and edge detection. For example, the RCMG estimator (Evans and Liu 2006) produces effective gradients for color images in the presence of noise, and the scheme proposed by Nezhadarya and Ward (2011) derives better gradient directions and makes color gradient estimation much more robust. In this paper, we adapt these models from image processing to achieve more accurate gradients for photographic volume data, so as to better depict the normals of structure surfaces and the boundary features of internal materials.

In the field of signal processing, high-pass filters are applied to estimate gradients that highlight edge features, while low-pass filters are employed to smooth signals and reduce noise. As the original data sets are easily affected by noise, it is necessary to use low-pass filters to reduce it. However, simply applying a low-pass filter before gradient estimation is not good practice, because detailed features are also blurred out by the low-pass filter, which leads to inaccurate gradients. In image processing, high-pass and low-pass filters are therefore usually combined to construct a gradient estimator that reduces the influence of noise (Ercan and Whyte 2001). Inspired by these traditional gradient estimation methods, we apply a high-pass filter to calculate the gradient in one direction and low-pass filters to reduce the effect of noise in the orthogonal directions. As a state-of-the-art vector-valued gradient estimator, the RCMG has excellent robustness against noise (Mittal et al. 2012); therefore, we use it in the proposed method.
Fig. 1

Process of calculation of the gradient component \(g_x\) with the proposed method applied in a window of size \(3\times 3\times 3\)

To calculate the gradient of the color vector at \(p(x_{p},y_{p},z_{p})\), a cubic window W of size \(3\times 3\times 3\) centered at \((x_{p},y_{p},z_{p})\) is defined. The three components of the gradient \(\varvec{g}=(g_{x},g_{y},g_{z})\) are calculated separately; the process is illustrated in Fig. 1. Here i, j, and k are natural numbers in the range [1, 3] that index a color vector in the sample window W. The gradient estimation process is composed of three kinds of operations: high-pass filtering, low-pass filtering, and an aggregation operation.

3.1 High-pass filtering

For each row (j, k) along the x-axis, the RCMG operator is applied to perform the high-pass filtering. Assuming that \(\varvec{v}_{1},\varvec{v}_{2}\), and \(\varvec{v}_{3}\) are the input vectors of the RCMG estimator, the vector differences between each pair of color vectors form a \(3\times 3\) matrix, as shown in Eq. 1:
$$\begin{aligned} D = \{\varvec{d}_{i,j}|\varvec{d}_{i,j}=\varvec{v}_{i}-\varvec{v}_{j},\ \forall i,j=1,2,3\}. \end{aligned}$$
(1)
Following the pairwise pixel rejection scheme (Evans and Liu 2006), the pairs of vectors with the maximum Euclidean norm are removed from the matrix \(\varvec{D}\) during gradient estimation. Let \(\varvec{D'}\) be the set of differential vectors that remain after this removal; a row of input vectors \(\varvec{v}_{1},\varvec{v}_{2},\varvec{v}_{3}\) can then be processed by the high-pass filter, as shown in Eq. 2
$$\begin{aligned} \varvec{H}(\varvec{v}_{1},\varvec{v}_{2},\varvec{v}_{3}) = \varvec{d}_{\hat{i},\hat{j}},\quad \forall \varvec{d}_{i,j}\in D',\ \Vert \varvec{d}_{\hat{i},\hat{j}}\Vert \ge \Vert \varvec{d}_{i,j}\Vert . \end{aligned}$$
(2)
The output of Eq. 2 is a differential vector that is selected from \(D'\). Each row of the colors in the sample window W can be filtered as
$$\begin{aligned} \varvec{h}_{j,k} = \varvec{H1}(\varvec{f}_{1,j,k}, \varvec{f}_{2,j,k},\varvec{f}_{3,j,k}). \end{aligned}$$
(3)
A matrix of differential vectors of size \(3\times 3\) is produced by the high-pass operation, as shown in the top left picture of Fig. 1.
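The high-pass filter of Eqs. 1 and 2 can be sketched for a single row of three color vectors. The number of rejected pairs is a tunable parameter of the RCMG operator; `reject=1` below is an assumption made for this sketch.

```python
import numpy as np
from itertools import combinations

def rcmg_highpass(v1, v2, v3, reject=1):
    """RCMG-style high-pass filter over three color vectors (Eqs. 1-2).

    Builds the pairwise difference set D, rejects the `reject` pairs
    with the largest Euclidean norm (pairwise pixel rejection), and
    returns the remaining difference vector with the largest norm."""
    vecs = [np.asarray(v, dtype=float) for v in (v1, v2, v3)]
    diffs = [vecs[i] - vecs[j] for i, j in combinations(range(3), 2)]
    diffs.sort(key=np.linalg.norm)          # ascending by norm
    kept = diffs[:len(diffs) - reject]      # drop the largest pair(s) -> D'
    return max(kept, key=np.linalg.norm)    # maximum-norm element of D' (Eq. 2)
```

With three inputs there are three pairs, so rejecting one pair and taking the maximum of the rest returns the second-largest difference vector, which is what makes the operator robust to a single impulse-noise outlier in the row.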

3.2 Low-pass filtering

To reduce the influence of noises in the original data, we take advantage of a low-pass filter \(\varvec{L}\) to smooth the data in both y and z directions:
$$\begin{aligned} \varvec{L}(\varvec{v}_{1},\varvec{v}_{2},\varvec{v}_{3})=\mathrm{median}(\varvec{v}_{1},\varvec{v}_{2},\varvec{v}_{3}), \end{aligned}$$
(4)
where \(\varvec{v}_{1},\varvec{v}_{2},\varvec{v}_{3}\) are the three input vectors and median stands for the elementwise median operation. The \(3\times 3\) matrix is smoothed by the median filter \(\varvec{L}\) in the vertical direction, producing a \(1\times 3\) vector \(\varvec{v^{L}}\), as shown in the top left picture of Fig. 1.
$$\begin{aligned} \varvec{v^{L}}_{k} = \varvec{L}(\varvec{h}_{1,k}, \varvec{h}_{2,k}, \varvec{h}_{3,k}). \end{aligned}$$
(5)
The low-pass filter is then applied to the vector \(\varvec{v^{L}}\), and the estimate of the gradient component is calculated as the norm of the result:
$$\begin{aligned} g_{1} = \Vert \varvec{L}(\varvec{v^{L}}_{1}, \varvec{v^{L}}_{2},\varvec{v^{L}}_{3})\Vert . \end{aligned}$$
(6)

3.3 Aggregation operation

As the RCMG operator is nonlinear, we obtain five further estimates of the gradient component by changing the order of the filters. For example, we obtain the result \(g_{5}\) shown in Fig. 1 by first applying the low-pass filter in the z direction, then the high-pass filter in the x direction, and finally the low-pass filter in the y direction. Since the inputs and outputs of \(\varvec{H1}\), \(\varvec{H2}\), and \(\varvec{H3}\) in Fig. 1 differ from each other, different symbols are used to distinguish the RCMG operators. For \(\varvec{H1}\), the input is a three-dimensional window and the output is a two-dimensional matrix. The input of \(\varvec{H2}\) is a two-dimensional matrix and the output is a vector. The input of \(\varvec{H3}\) is a vector and the output is a single value, the norm of the result of Eq. 2. The definitions of \(\varvec{L1}\), \(\varvec{L2}\), and \(\varvec{L3}\) are similar to those of \(\varvec{H1}\), \(\varvec{H2}\), and \(\varvec{H3}\). Once the results \(g_{1},g_{2},\ldots ,g_{6}\) are obtained after all possible orders have been traversed, an aggregation operator, the signed mean, is applied to aggregate the six results into the final gradient component, as shown in
$$\begin{aligned} g &= {} A(g_{1},g_{2},\ldots ,g_{6}, \varvec{v^{*}})\nonumber \\ &= {} \sum _{i=1}^{6}{\frac{g_{i}}{6}}\times \mathrm{sign}\left( \Vert \frac{\varvec{v^{*}}_2+\varvec{v^{*}}_3}{2}\Vert - \Vert \frac{\varvec{v^{*}}_1+\varvec{v^{*}}_2}{2}\Vert \right) . \end{aligned}$$
(7)
The sign is computed from the vector output of the second filtering operation. This vector, denoted \(\varvec{v^{*}}\) in Eq. 7, can be \(\varvec{v^{L}}\) from a low-pass filter or \(\varvec{v^{H}}\) from a high-pass filter. Finally, we obtain the gradient component \(g_{x}\) with the aggregation operation, as shown in
$$\begin{aligned} g_{x} = A(g_{1},g_{2},...,g_{6}, \varvec{v}). \end{aligned}$$
(8)
The estimation of the gradient components \(g_{y}\) and \(g_{z}\) proceeds in the same way as for \(g_{x}\); only the directions in which the high-pass and low-pass operators are applied differ.
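The signed-mean aggregation of Eqs. 7 and 8 can be sketched directly from its definition: average the six magnitude estimates, then attach a sign from the forward and backward half-sums of \(v^{*}\).

```python
import numpy as np

def signed_mean(estimates, v_star):
    """Signed-mean aggregation (Eq. 7): average the six component
    estimates and take the sign from the forward/backward half-sums
    of v*, the vector output by the second filtering stage."""
    v1, v2, v3 = (np.asarray(v, dtype=float) for v in v_star)
    # positive if the forward half-sum dominates, negative otherwise
    s = np.sign(np.linalg.norm((v2 + v3) / 2) - np.linalg.norm((v1 + v2) / 2))
    return float(np.mean(estimates) * s)
```

The magnitude thus comes from the robust estimates, while the direction comes from which side of the window carries the larger response, which is what lets the unsigned RCMG magnitudes form a signed gradient component.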

4 Evaluation

In recent years, research on photographic volumes has focused on transfer function design, rendering quality (Lee et al. 2016), and use in specific domains in combination with other types of data sets (Vandenberghe et al. 2016), while little research has addressed gradient estimation and its applications in volume shading or transfer function design. Besides grayscale-volume-based finite differences, the color distance gradient is the most widely used gradient estimation method for photographic volumes. We therefore choose the Sobel operator, a grayscale-volume-based method, for comparison rather than the frequently used finite difference. In addition, the color distance gradient, a vector-valued method, is also used for comparison.

In this section, we first evaluate the results of the proposed method and then discuss its use in different stages of volume visualization, such as shading and transfer function design. The performance of the different methods is analyzed at the end.

4.1 Comparative result

Fig. 2

Gradient magnitudes obtained from the color distance gradient (b), the Sobel operator (c), and the proposed method (d) for the noised image (a)

To evaluate the effectiveness of the proposed robust gradient estimator, a photographic mouse data set (Roy et al. 2009) and a photographic human data set (Spitzer et al. 1996) are visualized and analyzed. The experimental results are compared with those obtained by the traditional Sobel operator and the color distance gradient method. The Sobel operator is performed on grayscale volumes generated from the photographic volume data sets, and the color distance gradient is obtained by measuring the distance between two colors. However, the unsigned distance metric proposed by Ebert et al. (2002) cannot indicate the direction of gradients, so the sign of each gradient component in the color distance gradient method is taken from the difference of the corresponding voxels in the grayscale volume. In practical use of photographic volumes, noise can be introduced at each step, such as data acquisition, data interpretation, and data processing, which brings large uncertainty to their exploration. For example, different lighting conditions, exposure times, and aperture settings result in quite different photographic images, and if any of these conditions is poorly controlled, noise is introduced into the resulting data during acquisition. Therefore, we add common noise to the original data sets to simulate these situations and to better analyze the robustness and practicality of the different gradient estimation methods.

First, to increase the difficulty of gradient estimation and to further evaluate the robustness of the proposed method, we add salt-and-pepper noise to the original data sets with a signal-to-noise ratio of 0.9. The proposed method and the comparison methods are applied to estimate gradients for a part of the noisy photographic human data set. The experimental results are shown in Fig. 2. Figure 2a is an original slice of the photographic volume data set. The gradient magnitudes obtained from the color distance gradient estimation, the Sobel operator, and the proposed method are displayed in (b), (c), and (d), respectively. It is obvious that neither the color distance gradient method nor the Sobel operator can handle the noise well and derive accurate gradient values. The color distance gradient method is heavily affected by the noise, resulting in spurious gradient magnitudes all over the data set. Although the Sobel operator generates better gradient magnitudes inside the human head, the boundaries of the sinuses, which are green in the original image, can hardly be found, and the noise in the empty regions outside the human head is not handled properly either. In contrast, the proposed method achieves more accurate gradients in the presence of noise, which is much more useful for further data analysis.
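The corruption used in this experiment can be sketched as follows: with signal ratio `snr`, a fraction \(1-\mathrm{snr}\) of the voxels is replaced, half by white ("salt") and half by black ("pepper"). The exact noise model used for the figures is not specified beyond the 0.9 ratio, so this is one plausible reading; voxel values are assumed to lie in [0, 1].

```python
import numpy as np

def add_salt_pepper(vol, snr=0.9, rng=None):
    """Corrupt an RGB volume (shape (X, Y, Z, 3), values in [0, 1])
    with salt-and-pepper noise: a fraction (1 - snr) of voxels is
    replaced, half by white (salt) and half by black (pepper)."""
    rng = np.random.default_rng(rng)
    out = vol.copy()
    r = rng.random(vol.shape[:-1])        # one uniform draw per voxel
    out[r < (1 - snr) / 2] = 0.0          # pepper: whole voxel to black
    out[r > 1 - (1 - snr) / 2] = 1.0      # salt: whole voxel to white
    return out
```

Replacing whole voxels (all three channels at once) mimics impulse noise in the captured images; a per-channel variant would be an equally plausible alternative.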
Fig. 3

Gradient magnitudes obtained from the color distance gradient (b), the median filtered color distance gradient (c), the Sobel operator (d), the median filtered Sobel operator (e), and the proposed method (f) for the volume slice (a)

It is common practice to apply a noise reduction filter before gradient estimation when the input data are noisy. We applied a median filter with a window size of \(3\times 3\times 3\) to the noisy human head data set before applying the color distance gradient method and the Sobel operator. The resulting gradient magnitudes of a volume slice are shown in Fig. 3. Although the median filter removes the noise from the input data, some detailed features are also blurred out. The outline of the maxilla and teeth is incomplete in Fig. 3c and can hardly be recognized in Fig. 3e, compared with the result of the proposed method in Fig. 3f. These features are quite helpful in stomatology research, where the maxilla and the whole set of teeth need to be extracted from the data set. Although standard noise reduction filters such as the median filter reduce the noise of the input data set, they cannot preserve detailed features, which are often blurred out during filtering. In the proposed method, we also apply the median filter to reduce the effect of noise; however, by applying the median filter and the RCMG operator in different orders, feature information that is blurred out in one result is preserved in others. We can therefore overcome this problem by aggregating the results produced by the different filter orders, obtaining the result in Fig. 3f.
Fig. 4

Visualization results of the noised human leg data set with gradients obtained from a color distance gradient, b Sobel operator, and c proposed method

Then, we use the gradients for volume shading to evaluate the accuracy of the gradient directions generated by the proposed method. Figure 4 shows the rendered results of the leg part of the human data set, which is also corrupted with salt-and-pepper noise. The internal muscle structures are shaded with little specular lighting because the gradient directions generated by the color distance gradient method are irregular, as shown in (a). The rendered result based on the Sobel operator is shown in (b), in which the shape perception of the muscle structures is enhanced. With the more accurate gradients derived by the proposed gradient estimation, the surfaces of internal structures are rendered more clearly and both the global shape and the local detail structures can be better perceived, as shown in (c).
Fig. 5

Results of the whole mouse data set. a–c are the rendering results based on the gradients estimated from the color distance gradient, the Sobel operator, and our method, respectively. The corresponding gradient magnitudes of one selected slice are shown in d–f. After adding noise to this data set, the rendering results are shown in g–i, and the gradient magnitudes of the selected slice are shown in j–l. It can be seen that our method is more robust for noisy data

To further demonstrate the effectiveness of the proposed robust gradient estimation, we compare the rendered results based on the original mouse data set and the noisy mouse data set, as shown in Fig. 5. Figure 5a–c shows the rendered results based on the gradients generated by the color distance gradient method, the Sobel operator, and our method, respectively. The gradient magnitudes of one slice of the volume are shown correspondingly in Fig. 5d–f. The details in the regions of interest can be better perceived with our method, while they are absent in the results obtained with the other two methods. When noise is added to the original data set, the usefulness of our method can be further demonstrated by comparing the rendered results shown in Fig. 5g–l. Our method obviously obtains more accurate gradient magnitudes, allowing users to better perceive the shape of internal materials, especially for noisy data sets.

4.2 Applications in transfer function design

Fig. 6

Visualization results based on a specified 2D transfer function. a Skin, b muscles, c sinuses, d brain and bone marrow, e vessels, f multiple features rendered result, g the specified 2D transfer function space

Besides the application of gradient directions in volume shading shown above, the gradient magnitude is well known as an important attribute for assisting users in volume classification. To use the robust gradients to enhance the classification of photographic volume data sets, we integrate the color arrangement proposed in our previous study (Zhang et al. 2015) with the gradient magnitude derived by the proposed method to construct a 2D transfer function space. With a rich set of interactions provided, users are able to specify internal features of interest intuitively. Figure 6g shows the 2D transfer function space, which uses a 2D histogram to depict the distributions of the human head data. Boundary features are usually thin, and their gradient magnitudes tend to be large; Fig. 6a presents the human skin features. Solid interior features, such as the muscles, sinuses, brain, and bone marrow, are located in the regions with lower gradient magnitudes. We can specify these features with the convenient user interactions; the results are shown in Fig. 6b–d. To find detailed features with smaller distributions, such as vessels, we first determine their color intervals according to prior knowledge and then filter the regions by traversing different gradient magnitude intervals. Figure 6e shows the vessels obtained with these interactive operations, and the combined rendering of all these features is presented in (f).
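The histogram backing such a 2D transfer function space can be sketched as follows. The color-arrangement index per voxel is assumed to be computed elsewhere (it comes from Zhang et al. 2015 and is not specified here); the sketch only shows how the two per-voxel attributes are binned into the space of Fig. 6g.

```python
import numpy as np

def tf_histogram(color_index, grad_mag, bins=64):
    """2D histogram over (color-arrangement index, gradient magnitude),
    one sample per voxel. Both inputs are flat per-voxel arrays; the
    log scale is a common choice for displaying such histograms."""
    hist, xedges, yedges = np.histogram2d(color_index.ravel(),
                                          grad_mag.ravel(), bins=bins)
    return np.log1p(hist), xedges, yedges
```

Features of interest are then selected by painting regions of this histogram, with boundary features landing in high-magnitude rows and homogeneous materials in low-magnitude rows.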
Fig. 7

Visualization results based on a spatialized transfer function. Highlighted features in the corresponding transfer function space are rendered. a Muscles, b bones and soft tissues, c skin, d mixed result of multiple features. The spatialized transfer functions shown in the second row are generated by the color distance gradient (e), the Sobel operator (f), and the Sobel operator with a median filter applied first (g). The transfer function in h is generated from the result of the Sobel operator by decreasing the classification threshold of the spatialized transfer function

Although the interactive operations provided by the 2D transfer function widget are convenient, identifying structures of interest in the 2D transfer function space remains tedious, especially for features at minor scales. In this paper, a spatialized transfer function model is applied to automate the interactive design of 2D transfer functions. In the evaluation model, besides the two spatial measurements used in the method proposed by Roettger et al. (2005), namely position and shape information, color similarity is employed in our method to compare the similarity of features in adjacent histogram bins. As a result, the transfer function space is automatically separated into several regions. Depending on the classification threshold, one feature may be separated into several different regions by the automatic transfer function design process. We can explore the data set conveniently by clicking on a region and checking the visualization result. Features specified with the spatialized transfer function are shown in Fig. 7; the highlighted regions in the transfer function space are used for rendering.

Figure 7a presents the muscle features, which tend to be red and have low gradient magnitudes. Bones and soft tissues, as shown in Fig. 7b, have similar colors and similar gradient magnitudes, so they are classified into the same region by the 2D transfer function. Skin lies on the boundary between the internal structures and the empty space outside the human body and should therefore have high gradient magnitudes; when exploring the data set, we can thus directly check the regions with high magnitudes, as shown in Fig. 7c. Figure 7d presents a mixed result of both the muscle and the skin features. The transfer functions generated by the comparison methods are also shown in Fig. 7. With the results of the color distance gradient and the Sobel operator, it is hard to obtain meaningful classification results from the spatialized transfer function: as shown in Fig. 7e, f, and h, features such as muscle and skin are not separated. To separate these large areas in the histogram, we decrease the classification threshold; however, the histogram is then separated into too many small regions, which is inconvenient for data exploration.

These applications show that the robust gradients obtained with the proposed method better describe the distributions of features in the 2D transfer function space, and they can also play important roles in other volume rendering and classification applications for photographic volume data sets.

4.3 Performance analysis

As described in Sect. 3, the implementation of the proposed method calculates six estimates to obtain the final value of a gradient component, and each of the six estimates requires one RCMG filtering pass and two median filtering passes. The time cost of the RCMG operator is also much higher than that of the comparison methods. As a result, the proposed method takes about ten times as long as the comparison methods that apply a median filter once before gradient estimation. However, GPU parallel computation brings a significant performance improvement. The time spent on the gradient estimation of several data sets is shown in Table 1, where CDG stands for the color distance gradient method and the prefix MF- means that the method applies a median filter before gradient estimation. We implemented a GPU version of the Sobel method for comparison. Although the proposed method is costly on the CPU, GPU acceleration brings it to the same level as the GPU version of the Sobel operator.
Table 1 Time performance of different gradient estimation methods (milliseconds)

| Data set | Dimensions | CDG | MF-CDG | Sobel | MF-Sobel | Sobel (GPU) | Proposed | Proposed (GPU) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Chest | \(256 \times 128 \times 144\) | 1422 | 10,935 | 4499 | 14,479 | 1203 | 115,485 | 2313 |
| Foot | \(256 \times 512 \times 178\) | 2420 | 20,475 | 10,207 | 29,057 | 1887 | 306,406 | 4435 |
| Head | \(380 \times 256 \times 260\) | 2734 | 23,765 | 11,141 | 39,685 | 2297 | 336,924 | 4939 |
| Leg | \(256 \times 256 \times 282\) | 3078 | 24,940 | 10,055 | 33,641 | 2638 | 266,467 | 4751 |
| Mouse | \(256 \times 256 \times 208\) | 3203 | 23,070 | 9203 | 31,857 | 2845 | 204,015 | 4498 |
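The GPU claim above can be checked directly against the Chest row of Table 1:

```python
# Chest data set timings from Table 1, in milliseconds.
proposed_cpu, proposed_gpu = 115_485, 2313
sobel_gpu = 1203

gpu_speedup = proposed_cpu / proposed_gpu   # roughly 50x from GPU parallelism
gap = proposed_gpu / sobel_gpu              # within about 2x of the GPU Sobel time
```

The same pattern holds for the other rows: GPU acceleration brings the proposed method from minutes down to a few seconds, on the same order as the GPU Sobel operator.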

5 Conclusion

In this paper, we proposed a novel robust color gradient estimation method for photographic volumes, which combines the RCMG operator with low-pass filters for each gradient component in the CIELUV color space. A large number of experimental results demonstrate that the gradients obtained with the proposed method are more accurate and robust than those of the commonly used gradient estimators in photographic volume visualization, especially for noisy photographic volume data sets. An obvious limitation of the proposed method is that the filtering operations take a long time; however, they can be accelerated by means of GPU computing. To further improve the performance, we will focus on optimizing the filtering operations in future work.

Notes

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by NSF of China Project Nos. 61303133, 61472354, the National Statistical Scientific Research Project No. 2015LD03, the China Postdoctoral Science Foundation No. 2015M571846, the Zhejiang Science and Technology Plan of China No. 2014C31057, and the National Key Technology Research and Development Program of the Ministry of Science and Technology of China under Grant 2014BAK14B01.

References

  1. Correa C, Hero R, Ma KL (2011) A comparison of gradient estimation methods for volume rendering on unstructured meshes. IEEE Trans Vis Comput Gr 17(3):305–319. https://doi.org/10.1109/TVCG.2009.105
  2. Ebert DS, Morris CJ, Rheingans P, Yoo TS (2002) Designing effective transfer functions for volume rendering from photographic volumes. IEEE Trans Vis Comput Gr 8(2):183–197
  3. Ercan G, Whyte P (2001) Digital image processing. US Patent 6,240,217
  4. Evans AN, Liu XU (2006) A morphological gradient approach to color edge detection. IEEE Trans Image Process 15(6):1454–1463
  5. Gargesha M, Qutaish M, Roy D, Steyer G, Bartsch H, Wilson DL (2009) Enhanced volume rendering techniques for high-resolution color cryo-imaging data. In: SPIE Medical Imaging, International Society for Optics and Photonics, p 72622V. https://doi.org/10.1117/12.813756
  6. Kindlmann G, Durkin JW (1998) Semi-automatic generation of transfer functions for direct volume rendering. In: Proceedings of the 1998 IEEE symposium on volume visualization, ACM, pp 79–86
  7. Kniss J, Premoze S, Hansen C, Shirley P, McPherson A (2003) A model for volume lighting and modeling. IEEE Trans Vis Comput Gr 9(2):150–162
  8. Lee B, Kwon K, Shin BS (2016) Interactive high-quality visualization of color volume datasets using GPU-based refinements of segmentation data. J X Ray Sci Technol 24(4):537–548. https://doi.org/10.3233/XST-160572
  9. Levoy M (1988) Display of surfaces from volume data. IEEE Comput Gr Appl 8(3):29–37
  10. Max N (1995) Optical models for direct volume rendering. IEEE Trans Vis Comput Gr 1(2):99–108
  11. Mittal A, Sofat S, Hancock E (2012) Detection of edges in color images: a review and evaluative comparison of state-of-the-art techniques. In: Proceedings of the third international conference on autonomous and intelligent systems, AIS'12, pp 250–259. https://doi.org/10.1007/978-3-642-31368-4_30
  12. Morris CJ, Ebert D (2002) Direct volume rendering of photographic volumes using multi-dimensional color-based transfer functions. In: Proceedings of the symposium on data visualisation 2002, Eurographics Association, pp 115-ff
  13. Nezhadarya E, Ward RK (2011) A new scheme for robust gradient vector estimation in color images. IEEE Trans Image Process 20(8):2211–2220
  14. Pfister H, Lorensen B, Bajaj C, Kindlmann G, Schroeder W, Avila LS, Raghu K, Machiraju R, Lee J (2001) The transfer function bake-off. IEEE Comput Gr Appl 21(3):16–22
  15. Plataniotis KN, Venetsanopoulos AN (2000) Color image processing and applications. Springer, Berlin
  16. Roettger S, Bauer M, Stamminger M (2005) Spatialized transfer functions. In: Proceedings of the seventh joint Eurographics/IEEE VGTC conference on visualization, Eurographics Association, pp 271–278
  17. Roy D, Steyer GJ, Gargesha M, Stone ME, Wilson DL (2009) 3D cryo-imaging: a very high-resolution view of the whole mouse. Anat Rec 292(3):342–351
  18. Russo F, Lazzari A (2005) Color edge detection in presence of Gaussian noise using nonlinear prefiltering. IEEE Trans Instrum Meas 54(1):352–358
  19. Sereda P, Bartroli AV, Serlie IW, Gerritsen FA (2006) Visualization of boundaries in volumetric data sets using LH histograms. IEEE Trans Vis Comput Gr 12(2):208–218
  20. Spitzer V, Ackerman MJ, Scherzinger AL, Whitlock D (1996) The visible human male: a technical report. J Am Med Inform Assoc 3(2):118–130
  21. Vandenberghe ME, Hérard AS, Souedet N, Sadouni E, Santin MD, Briet D, Carré D, Schulz J, Hantraye P, Chabrier PE, Rooney T, Debeir T, Blanchard V, Pradier L, Dhenain M, Delzescaux T (2016) High-throughput 3D whole-brain quantitative histopathology in rodents. Sci Rep 6:20958. https://doi.org/10.1038/srep20958
  22. Zhang B, Tao Y, Lin H, Dong F, Clapworthy G (2015) Intuitive transfer function design for photographic volumes. J Vis 18(4):571–580. https://doi.org/10.1007/s12650-014-0267-5

Copyright information

© The Visualization Society of Japan 2018

Authors and Affiliations

  1. State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
  2. School of Information, Zhejiang University of Finance and Economics, Hangzhou, China
