# A novel robust color gradient estimator for photographic volume visualization


### Abstract

Photographic volume visualization has been widely applied in fields ranging from medicine to biology. Different from scalar volume data, photographic volume data are directly captured by modern cryo-imaging systems. The voxels are recorded as RGB vectors, which makes it difficult to estimate accurate gradients for shading and for the design of transfer functions. In this paper, we propose a robust color gradient estimation method that produces accurate and robust gradients for photographic volumes. First, a robust color morphological gradient (RCMG) operator is employed to estimate the gradient in a dominant direction, and low-pass filters are then applied to reduce the effect of noise. Then, an aggregation operator is applied to estimate accurate gradient directions and magnitudes. Based on the obtained color gradients, the shading of internal materials is enhanced and features can be better specified in a 2D transfer function space. Finally, the effectiveness of the robust gradient estimation for photographic volumes is demonstrated on a large number of experimental rendering results, especially for noisy photographic volume data sets.


## Keywords

Photographic volume · Color gradient · Volume rendering · Transfer function

## 1 Introduction

As an important branch of volume visualization, photographic volume visualization has been widely used in fields such as medical and biological research. Different from conventional scalar volume data, photographic volume data are obtained by modern cryo-imaging systems, and each voxel is recorded in the form of original color elements such as an RGB vector. A large number of photographic volume data sets have been acquired in this way to help researchers explore the internal structures of their subjects, such as the whole-mouse data set (Roy et al. 2009) and the human data sets from the Visible Human Project at the National Library of Medicine (Spitzer et al. 1996).

Direct volume rendering is an effective way to project the internal features of volume data sets onto 2D images, with the help of transfer functions that are used to define a mapping from voxels to visual elements such as color and opacity values. The gradient plays an important role in the course of direct volume rendering and transfer function design. For example, the gradient is employed to calculate surface normal for the generation of lighting effects (Max 1995; Kniss et al. 2003) in direct volume rendering. In addition, the gradient magnitude is usually applied to detect material boundaries in a 2D transfer function space (Kindlmann and Durkin 1998; Pfister et al. 2001; Roettger et al. 2005; Sereda et al. 2006).

The gradient can be easily calculated by finite differences for a scalar volume data set, while it is difficult to derive accurate gradients from a photographic volume data set due to the color vector form of the voxels. One feasible solution is to convert the data set to grayscale through an RGB-to-grayscale conversion and estimate the gradient there. However, this relies heavily on the decolorization scheme and rarely yields accurate gradient directions and magnitudes. Another option is to estimate gradients from the colors directly, such as the color distance gradient (Ebert et al. 2002), which replaces the finite-difference calculation with a measurement of color distance. However, it is still difficult to achieve accurate gradient directions for further volume rendering and transfer function design, especially for noisy data sets.
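The two baseline strategies above can be sketched as follows. This is a minimal illustration, assuming a float RGB volume in [0, 1] and BT.601 luminance weights for the grayscale conversion; the color distance is computed in RGB here for brevity, whereas Ebert et al. (2002) compute it in CIELUV, and the function names are ours.

```python
import numpy as np

def grayscale_gradient(vol_rgb):
    """Finite-difference gradient on a decolorized volume.

    vol_rgb: float array of shape (X, Y, Z, 3) in [0, 1].
    Returns an (X, Y, Z, 3) array of central-difference gradients.
    The result depends entirely on the chosen luminance weights
    (here ITU-R BT.601), which is the weakness noted above.
    """
    gray = vol_rgb @ np.array([0.299, 0.587, 0.114])
    return np.stack(np.gradient(gray), axis=-1)

def color_distance_gradient(vol_rgb):
    """Color distance gradient, sketched in RGB space: each component
    is the Euclidean distance between the two neighboring colors
    along that axis."""
    g = np.empty(vol_rgb.shape[:3] + (3,))
    for axis in range(3):
        fwd = np.roll(vol_rgb, -1, axis=axis)
        bwd = np.roll(vol_rgb, 1, axis=axis)
        # Unsigned magnitude of change; a distance metric alone cannot
        # provide a sign, which is the limitation discussed above.
        g[..., axis] = np.linalg.norm(fwd - bwd, axis=-1) / 2.0
    return g
```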

In this paper, a robust color gradient estimation method for photographic volumes is proposed to achieve accurate gradient directions and magnitudes. The gradient component in a specified direction should be computed from the rate of change of the voxel data in that direction, so the gradient estimator must be applied along it. In the proposed method, a robust color morphological gradient (RCMG) operator (Evans and Liu 2006) is first employed as the gradient estimator in the dominant direction, and low-pass filters are then applied in the two orthogonal directions to reduce the effect of noise. By applying the RCMG operator and the low-pass filters in different orders, we obtain a group of different estimations of the same gradient component: feature information that is blurred out by the low-pass filter in one estimation is preserved in the estimations in which the RCMG operator is applied before the low-pass filter. An aggregation operator is then applied to generate the final gradient. With the color gradients derived by our method, the lighting effects in the visualization results of photographic volumes are greatly improved, especially for noisy data sets. The more accurate gradient magnitudes also make gradient-magnitude-based transfer functions more effective. We demonstrate the effectiveness of the proposed gradient estimation method in photographic volume visualization with a rich set of experimental results.

## 2 Related work

Shading effects, usually generated with the gradient-based Blinn–Phong shading model, are important visual cues in computer graphics, as they enhance the shape and depth perception of 3D structures. Correa et al. (2011) studied gradient estimation for rendering unstructured-mesh volume data and provided a detailed comparison of different gradient estimation methods. Although finite differences are often used to calculate gradients for scalar volume data sets, they are not suitable for photographic volume data sets due to the 3D color representation of the voxels. The simplest way to estimate gradients for photographic volumes is to convert the color values into grayscale values and calculate the grayscale gradient with a finite-difference operator. Another class of methods derives the gradient directly from the color vectors. Gradient estimation methods differ across color spaces, such as RGB, CIELUV, CIELAB, and HSV (Plataniotis and Venetsanopoulos 2000). For example, the CIELUV and CIELAB color spaces are perceptually uniform, so color gradients computed in them closely match human visual perception, and they are therefore often used for gradient estimation of photographic volume data sets. Ebert et al. (2002) and Morris and Ebert (2002) calculated the color distance gradient in the CIELUV color space and then designed a gradient-based transfer function that lets users specify visual cues for features of interest. Gargesha et al. (2009) extended color gradient estimation to a new transfer function design and presented meaningful results for feature detection.

The gradient is also important for feature exploration in volume visualization, where it can be used to define transfer functions with which users specify opacities and colors for features of interest. Levoy (1988) first used the gradient vector as the surface normal in direct volume rendering and used the gradient magnitude as a dimension of transfer functions to help users find boundary features of interest. Inspired by classification based on gradient magnitude, a large number of 2D transfer functions have been designed for further feature exploration. For example, Kindlmann et al. proposed a semi-automatic generation scheme for both 1D and 2D transfer functions (Kindlmann and Durkin 1998; Pfister et al. 2001). Roettger et al. (2005) proposed the spatialized transfer function, which takes spatial information into account in the design of transfer functions. Sereda et al. (2006) proposed the LH transfer function, which is generated from a histogram built by following the changes of gradient directions. In a previous study (Zhang et al. 2015), we proposed an intuitive color-based transfer function for photographic volumes and extended it to 2D with the gradient magnitude.

Color gradient estimation has been studied extensively in the field of image processing. Several effective operators have been proposed to derive accurate gradients, such as the minimum vector dispersion (MVD) edge detector (Russo and Lazzari 2005), the robust color morphological gradient (RCMG) operator (Evans and Liu 2006), and the robust gradient vector estimation scheme of Nezhadarya and Ward (2011). The usefulness of these operators has been demonstrated on color images, which inspires their use in photographic volume visualization. In this paper, we propose a new gradient estimation method that calculates both the gradient direction and magnitude accurately and robustly, especially for noisy photographic volume data sets.

## 3 Robust color gradient estimation

As gradients depict the changes of data in different directions, the gradient magnitude is small within homogeneous materials and large within the boundary regions between different materials. However, gradient estimation for photographic volume data differs from that for scalar volume data: the gradients represent the direction of change of color values, which is difficult to estimate effectively, and inaccurate estimates disturb human visual perception. In the field of image processing, a number of robust gradient estimation schemes for color images have been studied to generate accurate and effective gradients for applications such as image segmentation and edge detection. For example, the RCMG estimator (Evans and Liu 2006) produces effective gradients for color images in the presence of noise, and the scheme proposed by Nezhadarya and Ward (2011) derives better gradient directions and makes color gradient estimation much more robust. In this paper, we adopt such models from image processing to achieve more accurate gradients for photographic volume data, so as to better depict the normals of structure surfaces and the boundary features of internal materials.

To calculate the gradient of the color vector at \(p(x_{p},y_{p},z_{p})\), a cubic window *W* of size \(3\times 3\times 3\) centered at \((x_{p},y_{p},z_{p})\) is defined. The three components of the gradient \(\varvec{g}=(g_{x},g_{y},g_{z})\) are calculated separately; the process is illustrated in Fig. 1. *i*, *j* and *k* are natural numbers in the range [1, 3] that index a color vector within the sample window *W*. The gradient estimation process consists of three kinds of operations: high-pass filtering, low-pass filtering, and an aggregation operation.

### 3.1 High-pass filtering

To calculate the gradient component of the color vector at (*i*, *j*, *k*) along the *x*-axis, the RCMG operator is applied to perform the high-pass filtering. Assuming that \(\varvec{v}_{1},\varvec{v}_{2}\) and \(\varvec{v}_{3}\) are the input vectors of the RCMG estimator, the vector differences between each pair of color vectors can be defined as a \(3\times 3\) matrix, as shown in Eq. 1:

$$ D = [d_{mn}], \quad d_{mn} = \Vert \varvec{v}_{m}-\varvec{v}_{n}\Vert, \quad m,n \in \{1,2,3\} \tag{1} $$

Based on these pairwise differences, the color vectors in the sample window *W* can be filtered by the RCMG operator (Eq. 2), which rejects the most extreme vector pairs and returns the difference vector of the remaining pair with the largest distance.
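As a sketch of this operator, the following implements the pair-rejection variant of the RCMG after Evans and Liu (2006) along one direction; the parameter `s` (number of rejected pairs) and the exact rejection rule are assumptions based on that paper, not reproduced from the equations here.

```python
import numpy as np

def rcmg(vectors, s=1):
    """Robust color morphological gradient along one direction, sketched
    after Evans and Liu (2006): iteratively reject the s most extreme
    vector pairs, then return the difference vector of the surviving
    pair with the largest distance (its norm is the scalar response).
    The rejection step is what makes the operator robust to impulse noise.
    """
    v = [np.asarray(x, dtype=float) for x in vectors]
    for _ in range(s):
        if len(v) < 4:      # keep at least one pair after rejection
            break
        pairs = [(np.linalg.norm(a - b), i, j)
                 for i, a in enumerate(v) for j, b in enumerate(v) if i < j]
        _, i, j = max(pairs, key=lambda t: t[0])
        v = [x for k, x in enumerate(v) if k not in (i, j)]  # reject pair
    pairs = [(np.linalg.norm(a - b), a - b)
             for i, a in enumerate(v) for j, b in enumerate(v) if i < j]
    return max(pairs, key=lambda t: t[0])[1]
```

For example, a single outlier voxel paired against a homogeneous neighborhood is discarded by the rejection step, so the response stays near zero inside uniform materials even under impulse noise.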

### 3.2 Low-pass filtering

After the high-pass filtering along the *x*-axis, the low-pass filters are applied in the *y* and *z* directions to suppress the noise orthogonal to the estimated gradient component.

### 3.3 Aggregation operation

Each estimation of a gradient component is obtained by applying the three filters in a particular order; for example, by first applying the low-pass filter in the *z* direction, then the high-pass filter in the *x* direction and the low-pass filter in the *y* direction. Since the inputs and outputs of \(\varvec{H1}\), \(\varvec{H2}\), and \(\varvec{H3}\) in Fig. 1 differ from each other, different symbols are employed to distinguish the RCMG operators at the different stages. For \(\varvec{H1}\), the input is a three-dimensional window and the output is a two-dimensional matrix. The input of \(\varvec{H2}\) is a two-dimensional matrix and the output is a vector. The input of \(\varvec{H3}\) is a vector and the output is a single value, the norm of Eq. 2. The definitions of \(\varvec{L1}\), \(\varvec{L2}\) and \(\varvec{L3}\) are similar to those of \(\varvec{H1}\), \(\varvec{H2}\) and \(\varvec{H3}\). When the results \(g_{1},g_{2},\ldots,g_{6}\) have been obtained after all of the possible orders are traversed, an aggregation operator, the *signed mean*, is applied to aggregate the six results into the final gradient component.
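The six-ordering structure can be sketched on a scalar window as follows. Both simplifications are assumptions made to keep the sketch short: the RCMG high-pass is replaced by a signed central difference, the low-pass by a median (a nonlinear filter, so the six orderings genuinely produce different estimates), and `signed_mean` is one plausible reading of the aggregation, since the paper's equations are not reproduced in this text.

```python
import numpy as np
from itertools import permutations

def signed_mean(gs):
    """Hypothetical reading of the signed-mean aggregation: average the
    magnitudes of the estimates that share the majority sign."""
    gs = np.asarray(gs, dtype=float)
    sign = 1.0 if (gs >= 0).sum() >= (gs < 0).sum() else -1.0
    picked = gs[gs * sign >= 0]
    return sign * float(np.mean(np.abs(picked)))

def estimate_component(window, hp_axis=0):
    """Scalar analogue of the pipeline in Fig. 1: one high-pass filter on
    the dominant axis, low-pass filters on the two orthogonal axes, all
    3! = 6 application orders traversed, then aggregation."""
    ops = {a: ('H' if a == hp_axis else 'L') for a in range(3)}
    estimates = []
    for order in permutations(range(3)):
        w = np.asarray(window, dtype=float)
        remaining = [0, 1, 2]          # original axis ids still present
        for orig_axis in order:
            pos = remaining.index(orig_axis)
            if ops[orig_axis] == 'H':  # signed change along dominant axis
                w = np.take(w, 2, axis=pos) - np.take(w, 0, axis=pos)
            else:                      # nonlinear smoothing
                w = np.median(w, axis=pos)
            remaining.remove(orig_axis)
        estimates.append(float(w))
    return signed_mean(estimates)
```

On a clean linear ramp all six orderings agree, while on noisy data the orderings that high-pass first retain the feature response that median-first orderings smooth away, which is what the aggregation exploits.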

## 4 Evaluation

In recent years, research on photographic volumes has focused on transfer function design, rendering quality (Lee et al. 2016), and usage in specific domains in combination with other types of data sets (Vandenberghe et al. 2016), while little work has addressed gradient estimation and its applications in volume shading or transfer function design. Besides grayscale finite differences, the color distance gradient is the most widely used gradient estimation method for photographic volumes. Therefore, the Sobel operator, a grayscale-volume-based method, is chosen for comparison instead of the more basic finite difference. In addition, the color distance gradient, a vector-valued method, is also used for comparison.

In this section, we first evaluate the results of the proposed method and then discuss its usage in different stages of volume visualization, such as shading and transfer function design. Finally, the performance of the different methods is analyzed.

### 4.1 Comparative result

To evaluate the effectiveness of the proposed robust gradient estimator, a photographic mouse data set (Roy et al. 2009) and a photographic human data set (Spitzer et al. 1996) are visualized and analyzed. The experimental results are compared with those obtained with the traditional Sobel operator and the color distance gradient method. The Sobel operator is performed on grayscale volumes generated from the photographic volume data sets, and the color distance gradient is obtained by measuring the distance between two colors. However, the unsigned distance metric proposed by Ebert et al. (2002) cannot indicate the direction of the gradient; therefore, the sign of each gradient component in the color distance gradient method is taken from the difference of the corresponding voxels in the grayscale volume. In practice, noise can be introduced at each stage of the photographic volume workflow, such as data acquisition, data interpretation, and data processing, which brings large uncertainty to the exploration of photographic volumes. For example, different lighting conditions, exposure times, and aperture settings result in quite different photographic images, and if any of these conditions is poorly controlled, noise is introduced into the resulting data during acquisition. Therefore, we add common types of noise to the original data sets to simulate these situations and to analyze the robustness and practicality of the different gradient estimation methods.
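A generic corruption model of this kind can be sketched as follows; the Gaussian sigma and impulse fraction are illustrative defaults, as the exact noise parameters used in the experiments are not specified here.

```python
import numpy as np

def add_noise(vol_rgb, gauss_sigma=0.02, sp_fraction=0.01, seed=0):
    """Add Gaussian plus salt-and-pepper noise to an RGB volume in [0, 1].

    Gaussian noise models sensor/exposure variation; the salt-and-pepper
    impulses (a voxel forced to pure black or white across all three
    channels) model acquisition dropouts. Parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    noisy = vol_rgb + rng.normal(0.0, gauss_sigma, vol_rgb.shape)
    # Pick impulse voxels, then set each to all-black or all-white.
    impulses = rng.random(vol_rgb.shape[:3]) < sp_fraction
    noisy[impulses] = rng.integers(0, 2, (impulses.sum(), 1)).astype(float)
    return np.clip(noisy, 0.0, 1.0)
```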

To further demonstrate the effectiveness of the proposed robust gradient estimation, we compare the rendered results based on the original mouse data set and the noisy mouse data set, as shown in Fig. 5. Figure 5a–c shows the rendered results based on the gradients generated by the color distance gradient method, the Sobel operator, and our method, respectively. The corresponding gradient magnitudes of a volume slice are shown in Fig. 5d–f. With our method, the details in the regions of interest can be better perceived, whereas they are absent in the results obtained with the other two methods. When noise is added to the original data set, the usefulness of our method is further demonstrated by comparing the rendered results shown in Fig. 5g–l. It is obvious that more accurate gradient magnitudes are obtained with our method, allowing users to better perceive the shapes of internal materials, especially for noisy data sets.

### 4.2 Applications in transfer function design

Although the interactive operations provided by the 2D transfer function widget make it convenient to use, identifying structures of interest in the 2D transfer function space is still tedious, especially for small-scale features. In this paper, a spatialized transfer function model is applied to automate the interactive design of 2D transfer functions. In the evaluation model, besides the two spatial measurements used by Roettger et al. (2005), position and shape information, color similarity is employed in our method to compare features in adjacent histogram bins. As a result, the transfer function space is automatically separated into several regions. Depending on the classification threshold, one feature may be separated into several different regions by the automatic transfer function design process. We can explore the data set conveniently by clicking on a region and checking the visualization result. Features specified with the spatialized transfer function are shown in Fig. 7, where the highlighted regions in the transfer function space are used for rendering.
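The color-similarity criterion can be sketched as a greedy bin-merging pass; `merge_bins` and its threshold are hypothetical stand-ins for illustration, since the actual evaluation model additionally weighs position and shape information.

```python
import numpy as np

def merge_bins(bin_colors, threshold=0.1):
    """Greedy merging of adjacent transfer-function bins by color
    similarity (a sketch of the extra criterion added on top of the
    positional and shape tests of Roettger et al. 2005).

    bin_colors: (n, 3) mean colors of adjacent histogram bins.
    Returns a region label per bin; a new region starts whenever the
    color distance to the previous bin exceeds the threshold.
    """
    labels = np.zeros(len(bin_colors), dtype=int)
    for i in range(1, len(bin_colors)):
        dist = np.linalg.norm(bin_colors[i] - bin_colors[i - 1])
        labels[i] = labels[i - 1] + (1 if dist > threshold else 0)
    return labels
```

Lowering the threshold splits the space into more regions, which mirrors the over-segmentation behavior discussed for the comparison methods below in Sect. 4.2.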

Figure 7a presents the muscle features, which tend to be red and have low gradient magnitude values. Bones and soft tissues, as shown in Fig. 7b, have similar colors and similar gradient magnitude values, so they are classified into the same region by the 2D transfer function. Skin lies on the boundary between the internal structures and the empty space outside the human body and should therefore have high gradient magnitude values; when exploring the data set, we can directly check the regions with high magnitude values, as shown in Fig. 7c. Figure 7d presents a mixed result of both the muscle and the skin features. The transfer functions generated by the comparison methods are also shown in Fig. 7. With the results of the color distance gradient and the Sobel operator, it is hard to obtain meaningful classification results with the spatialized transfer function: as shown in Fig. 7e, f, and h, features such as muscle and skin are not separated. To separate these large areas in the histogram, we decreased the classification threshold; however, the histogram is then separated into too many small regions, which is inconvenient for data exploration.

According to the above applications, it can be concluded that the robust gradients obtained with the proposed method are able to better describe the distributions of features in the 2D transfer function space. It can also play important roles in other volume rendering and classification applications for photographic volume data sets.

### 4.3 Performance analysis

Time performance of different gradient estimation methods (milliseconds)

| Data set | Dimensions | CDG | MF-CDG | Sobel | MF-Sobel | Sobel (GPU) | Proposed | Proposed (GPU) |
|---|---|---|---|---|---|---|---|---|
| Chest | \(256 \times 128 \times 144\) | 1422 | 10,935 | 4499 | 14,479 | 1203 | 115,485 | 2313 |
| Foot | \(256 \times 512 \times 178\) | 2420 | 20,475 | 10,207 | 29,057 | 1887 | 306,406 | 4435 |
| Head | \(380 \times 256 \times 260\) | 2734 | 23,765 | 11,141 | 39,685 | 2297 | 336,924 | 4939 |
| Leg | \(256 \times 256 \times 282\) | 3078 | 24,940 | 10,055 | 33,641 | 2638 | 266,467 | 4751 |
| Mouse | \(256 \times 256 \times 208\) | 3203 | 23,070 | 9203 | 31,857 | 2845 | 204,015 | 4498 |

## 5 Conclusion

In this paper, we proposed a novel robust color gradient estimation method for photographic volumes, which combines the RCMG operator and low-pass filters for each gradient component in the CIELUV color space. A large number of experimental results demonstrate that the gradients obtained with the proposed method are more accurate and robust than those of the commonly used gradient estimators in photographic volume visualization, especially for noisy photographic volume data sets. An obvious limitation of the proposed method is that the filtering operations take more time; however, they can be accelerated by GPU computing. To further improve the performance, we will focus on optimizing the filtering operations in future work.

## Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by NSF of China Project Nos. 61303133, 61472354, the National Statistical Scientific Research Project No. 2015LD03, the China Postdoctoral Science Foundation No. 2015M571846, the Zhejiang Science and Technology Plan of China No. 2014C31057, and the National Key Technology Research and Development Program of the Ministry of Science and Technology of China under Grant 2014BAK14B01.

## References

- Correa C, Hero R, Ma KL (2011) A comparison of gradient estimation methods for volume rendering on unstructured meshes. IEEE Trans Vis Comput Gr 17(3):305–319. https://doi.org/10.1109/TVCG.2009.105
- Ebert DS, Morris CJ, Rheingans P, Yoo TS (2002) Designing effective transfer functions for volume rendering from photographic volumes. IEEE Trans Vis Comput Gr 8(2):183–197
- Ercan G, Whyte P (2001) Digital image processing. US Patent 6,240,217
- Evans AN, Liu XU (2006) A morphological gradient approach to color edge detection. IEEE Trans Image Process 15(6):1454–1463
- Gargesha M, Qutaish M, Roy D, Steyer G, Bartsch H, Wilson DL (2009) Enhanced volume rendering techniques for high-resolution color cryo-imaging data. In: SPIE Medical Imaging, International Society for Optics and Photonics, p 72622V. https://doi.org/10.1117/12.813756
- Kindlmann G, Durkin JW (1998) Semi-automatic generation of transfer functions for direct volume rendering. In: Proceedings of the 1998 IEEE symposium on volume visualization, ACM, pp 79–86
- Kniss J, Premoze S, Hansen C, Shirley P, McPherson A (2003) A model for volume lighting and modeling. IEEE Trans Vis Comput Gr 9(2):150–162
- Lee B, Kwon K, Shin BS (2016) Interactive high-quality visualization of color volume datasets using GPU-based refinements of segmentation data. J X Ray Sci Technol 24(4):537–548. https://doi.org/10.3233/XST-160572
- Levoy M (1988) Display of surfaces from volume data. IEEE Comput Gr Appl 8(3):29–37
- Max N (1995) Optical models for direct volume rendering. IEEE Trans Vis Comput Gr 1(2):99–108
- Mittal A, Sofat S, Hancock E (2012) Detection of edges in color images: a review and evaluative comparison of state-of-the-art techniques. In: Proceedings of the third international conference on autonomous and intelligent systems, AIS'12, pp 250–259. https://doi.org/10.1007/978-3-642-31368-4_30
- Morris CJ, Ebert D (2002) Direct volume rendering of photographic volumes using multi-dimensional color-based transfer functions. In: Proceedings of the symposium on data visualisation 2002, Eurographics Association, pp 115-ff
- Nezhadarya E, Ward RK (2011) A new scheme for robust gradient vector estimation in color images. IEEE Trans Image Process 20(8):2211–2220
- Pfister H, Lorensen B, Bajaj C, Kindlmann G, Schroeder W, Avila LS, Raghu K, Machiraju R, Lee J (2001) The transfer function bake-off. IEEE Comput Gr Appl 21(3):16–22
- Plataniotis KN, Venetsanopoulos AN (2000) Color image processing and applications. Springer, Berlin
- Roettger S, Bauer M, Stamminger M (2005) Spatialized transfer functions. In: Proceedings of the seventh joint eurographics/IEEE VGTC conference on visualization, Eurographics Association, pp 271–278
- Roy D, Steyer GJ, Gargesha M, Stone ME, Wilson DL (2009) 3D cryo-imaging: a very high-resolution view of the whole mouse. Anat Rec 292(3):342–351
- Russo F, Lazzari A (2005) Color edge detection in presence of gaussian noise using nonlinear prefiltering. IEEE Trans Instrum Meas 54(1):352–358
- Sereda P, Bartroli AV, Serlie IW, Gerritsen FA (2006) Visualization of boundaries in volumetric data sets using LH histograms. IEEE Trans Vis Comput Gr 12(2):208–218
- Spitzer V, Ackerman MJ, Scherzinger AL, Whitlock D (1996) The visible human male: a technical report. J Am Med Inform Assoc 3(2):118–130
- Vandenberghe ME, Hrard AS, Souedet N, Sadouni E, Santin MD, Briet D, Carr D, Schulz J, Hantraye P, Chabrier PE, Rooney T, Debeir T, Blanchard V, Pradier L, Dhenain M, Delzescaux T (2016) High-throughput 3D whole-brain quantitative histopathology in rodents. Sci Rep 6:20958. https://doi.org/10.1038/srep20958
- Zhang B, Tao Y, Lin H, Dong F, Clapworthy G (2015) Intuitive transfer function design for photographic volumes. J Vis 18(4):571–580. https://doi.org/10.1007/s12650-014-0267-5