Efficient Multimodality Volume Fusion Using Graphics Hardware
We propose a novel technique for multimodality volume fusion on graphics hardware that resolves the depth-cueing problem at reduced rendering cost. Our method consists of three steps. First, it takes two volumes and generates sample planes orthogonal to the viewing direction, following the 3D texture-mapping approach to volume rendering. Second, it composites the textured slices, each pair drawn from the two modalities, using several compositing operators. Third, it performs alpha blending over all the slices. For efficient volume fusion, the per-slice compositing is implemented as a pixel program written in HLSL (High-Level Shading Language). Experimental results show that our hardware-accelerated method correctly distinguishes depth in the overlapping region of the volumes and renders much faster than conventional software-based methods.
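The three-step pipeline can be illustrated with a minimal CPU-side sketch. NumPy is used here purely for exposition; the paper's actual method runs the per-slice compositing in an HLSL pixel shader on view-aligned textured slices. The function names (`fuse_slices`, `alpha_blend`) and the linear-blend operator and alpha values are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np

def fuse_slices(slice_a, slice_b, weight=0.5):
    # Step 2 (sketch): composite corresponding slices from the two
    # modalities; a linear blend stands in for the paper's operators.
    return weight * slice_a + (1.0 - weight) * slice_b

def alpha_blend(slices, alphas):
    # Step 3 (sketch): back-to-front alpha blending of the fused slices.
    out = np.zeros_like(slices[0])
    for c, a in zip(slices, alphas):  # slices ordered back to front
        out = a * c + (1.0 - a) * out
    return out

# Step 1 (sketch): two toy "volumes", each already resampled into
# view-aligned 4x4 slices (hypothetical data, constant per slice).
vol_a = [np.full((4, 4), 0.8), np.full((4, 4), 0.2)]
vol_b = [np.full((4, 4), 0.4), np.full((4, 4), 0.6)]

fused = [fuse_slices(a, b) for a, b in zip(vol_a, vol_b)]
image = alpha_blend(fused, alphas=[0.5, 0.5])
print(image[0, 0])  # 0.35
```

On the GPU, the same per-pixel arithmetic would execute in the pixel shader for every fragment of every slice, which is what makes the hardware version fast.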