Enhancing accuracy of magnetic resonance image fusion by defining a volume of interest

Diagnostic Neuroradiology

Abstract

We compared the registration accuracy of corresponding anatomical landmarks in two MR images after fusing the complete volume (CV) and a defined volume of interest (VOI) of both MRI data sets. We carried out contrast-enhanced T1-weighted gradient-echo and T2-weighted fast spin-echo MRI (matrix 256×256) in 39 cases. The CV and a defined VOI data set were each fused using prototype software. We measured and analysed the distances between 25 corresponding anatomical landmarks in predefined areas at levels L1–L5, corresponding to defined axial sections. Fusion technique, landmark area and fusion level were then processed by a feed-forward neural network to predict the deviation to be expected given the measurements. We identified 975 landmarks on both T1- and T2-weighted images and found a significant difference in registration accuracy (P<0.01) for all landmarks between CV (1.6±1.2 mm) and VOI (0.7±1.0 mm) fusion. From cranial (L1) to caudal (L5), mean deviations were: L1 CV 1.5 mm, VOI 0.5 mm; L2 CV 1.8 mm, VOI 0.4 mm; L3 CV 1.7 mm, VOI 0.4 mm; L4 CV 1.6 mm, VOI 0.6 mm; and L5 CV 1.6 mm, VOI 1.6 mm. The neural network predicted a higher accuracy for VOI (0.05–0.15 mm) than for CV fusion (0.9–1.6 mm). Deviations due to magnetic susceptibility changes between air and tissue on gradient-echo images can decrease fusion accuracy; our VOI fusion technique improves fusion accuracy to <0.5 mm by excluding areas with marked susceptibility changes.



Author information

Correspondence to B. M. Hoelper.

Appendix

1. The rigid transformation algorithm

A 3D image co-ordinate transformation is termed rigid if it consists only of rotations and translations. One of the images is termed the fixed image, the other the floating image; the subscripts "fix" and "float" refer to quantities of the fixed and the floating image, respectively. Any rigid 3D transformation from the co-ordinate system of the fixed image to that of the floating image can be described by a single matrix-vector equation

$$\begin{pmatrix} x_{float} \\ y_{float} \\ z_{float} \end{pmatrix} = R \cdot \begin{pmatrix} x_{fix} \\ y_{fix} \\ z_{fix} \end{pmatrix} + t$$
(2)

where

$$t = (t_x, t_y, t_z)^T$$
(3)

is a constant arbitrary translation vector and R a constant orthogonal rotation matrix

$$R = \begin{pmatrix} \cos\varphi_y \cos\varphi_z & \cos\varphi_y \sin\varphi_z & -\sin\varphi_y \\ \sin\varphi_x \sin\varphi_y \cos\varphi_z - \cos\varphi_x \sin\varphi_z & \sin\varphi_x \sin\varphi_y \sin\varphi_z + \cos\varphi_x \cos\varphi_z & \sin\varphi_x \cos\varphi_y \\ \cos\varphi_x \sin\varphi_y \cos\varphi_z + \sin\varphi_x \sin\varphi_z & \cos\varphi_x \sin\varphi_y \sin\varphi_z - \sin\varphi_x \cos\varphi_z & \cos\varphi_x \cos\varphi_y \end{pmatrix}$$
(4)

Here, φx, φy and φz (Euler angles) denote the rotation angles around the x, y and z axes, respectively. Altogether, any rigid transformation can be represented by a particular instance of the six transformation parameters $t_x, t_y, t_z, \varphi_x, \varphi_y, \varphi_z$.

This Euler angle representation is just one of several possibilities; alternative representations exist, based for example on quaternions. Usually, fixed-image voxel positions $(x_{fix}, y_{fix}, z_{fix})$ will not transform to integer positions in the floating image, so some interpolation scheme is required to extract the intensity at the transformed position.
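As an illustration (not the authors' prototype implementation), the following Python/NumPy sketch implements Eqs. (2)–(4) and uses SciPy's trilinear interpolation for the non-integer positions; all function and variable names are ours, and co-ordinates are assumed to be in array-index order.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rotation_matrix(phi_x, phi_y, phi_z):
    """Orthogonal rotation matrix R of Eq. (4); angles in radians."""
    cx, sx = np.cos(phi_x), np.sin(phi_x)
    cy, sy = np.cos(phi_y), np.sin(phi_y)
    cz, sz = np.cos(phi_z), np.sin(phi_z)
    return np.array([
        [cy * cz,                cy * sz,               -sy     ],
        [sx * sy * cz - cx * sz, sx * sy * sz + cx * cz, sx * cy],
        [cx * sy * cz + sx * sz, cx * sy * sz - sx * cz, cx * cy],
    ])

def transform_points(points_fix, t, phi):
    """Eq. (2): map fixed-image co-ordinates into the floating image.

    points_fix -- (N, 3) array of (x, y, z) voxel positions
    t          -- translation (t_x, t_y, t_z), Eq. (3)
    phi        -- Euler angles (phi_x, phi_y, phi_z)
    """
    R = rotation_matrix(*phi)
    return points_fix @ R.T + np.asarray(t, dtype=float)

def resample_floating(floating, points_fix, t, phi):
    """Floating-image intensities at the transformed positions.

    The transformed positions are generally non-integer, so order=1
    (trilinear) interpolation is used to extract the intensities.
    """
    coords = transform_points(points_fix, t, phi)
    return map_coordinates(floating, coords.T, order=1)
```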

2. The similarity measure

Mutual information (MI) is based on the shared information (relative entropy) between the overlapping regions of the two images, which should be maximal at registration. The MI between an image A and an image B is given by

$$MI(A,B) = H(A) + H(B) - H(A,B)$$
(5)

Here, H(A) and H(B) denote the entropies of the overlapping portions of the images A and B, respectively, and H(A, B) denotes the joint entropy of the image overlap. Equivalently, the MI between the (fixed) image A and the (transformed floating) image B computes as

$$MI(A,B) = \sum_{i_A} \sum_{i_B} p(i_A, i_B) \cdot \log\left( \frac{p(i_A, i_B)}{p(i_A) \cdot p(i_B)} \right)$$
(6)

where $p(i_A)$, $p(i_B)$ and $p(i_A, i_B)$ denote the probabilities of occurrence of intensities $i_A$ and $i_B$ in the histograms of A, B and the joint histogram, respectively. Voxels which do not lie in the overlapping image portions are excluded from the computation of the histograms. Note that MI is a complicated, non-linear function of the transformation parameters $t_x, t_y, t_z, \varphi_x, \varphi_y, \varphi_z$.
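A minimal NumPy sketch of Eq. (6), estimating the probabilities from a joint intensity histogram; the function name and the bin count of 64 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """MI(A, B) of Eq. (6) from the joint intensity histogram.

    a, b -- 1-D arrays of intensities at corresponding voxels in the
            overlapping portions of the two images (voxels outside the
            overlap are excluded before calling this function).
    bins -- number of histogram bins (an illustrative choice).
    """
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()               # joint probabilities p(i_A, i_B)
    p_a = p_ab.sum(axis=1, keepdims=True)    # marginal p(i_A)
    p_b = p_ab.sum(axis=0, keepdims=True)    # marginal p(i_B)
    nz = p_ab > 0                            # empty bins contribute nothing
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a * p_b)[nz])))
```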

Evaluations of the mutual information measure are computationally expensive, since each evaluation requires a loop over all voxels to compute the histograms.
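Registration then amounts to searching the six transformation parameters for maximal MI. Because MI is non-linear in the parameters and each evaluation is costly, derivative-free optimisers are commonly used for this search; the sketch below applies SciPy's Powell method to the negative MI, combining the two helper sketches above. The paper does not state which optimiser the prototype software uses, so this is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def negative_mi(params, fixed_intensities, floating, points_fix):
    """Cost function: -MI as a function of the six rigid parameters
    (t_x, t_y, t_z, phi_x, phi_y, phi_z).

    Uses resample_floating() and mutual_information() from the sketches
    above; points_fix lists the fixed-image voxel positions in the same
    order as fixed_intensities. For brevity, the exclusion of
    non-overlapping voxels described above is omitted here.
    """
    t, phi = params[:3], params[3:]
    resampled = resample_floating(floating, points_fix, t, phi)
    return -mutual_information(fixed_intensities, resampled)

# Derivative-free search, starting from the identity transformation:
# best = minimize(negative_mi, x0=np.zeros(6),
#                 args=(fixed_intensities, floating_volume, points_fix),
#                 method="Powell")
```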

Rights and permissions

Reprints and permissions


Cite this article

Hoelper, B.M., Soldner, F., Lachner, R. et al. Enhancing accuracy of magnetic resonance image fusion by defining a volume of interest. Neuroradiology 45, 804–809 (2003). https://doi.org/10.1007/s00234-003-1071-4
