Model-Based Segmentation of Multimodal Images

  • Xin Hong
  • Sally McClean
  • Bryan Scotney
  • Philip Morrow
Conference paper

DOI: 10.1007/978-3-540-74272-2_75

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4673)
Cite this paper as:
Hong X., McClean S., Scotney B., Morrow P. (2007) Model-Based Segmentation of Multimodal Images. In: Kropatsch W.G., Kampel M., Hanbury A. (eds) Computer Analysis of Images and Patterns. CAIP 2007. Lecture Notes in Computer Science, vol 4673. Springer, Berlin, Heidelberg

Abstract

This paper proposes a model-based method for intensity-based segmentation of images acquired from multiple modalities. Pixel intensity within each modality image is represented by a univariate Gaussian mixture whose components correspond to different segments. The proposed Multi-Modality Expectation-Maximization (MMEM) algorithm then estimates the probability of each segment, along with the parameters of the Gaussian distributions for each modality, by maximum likelihood using the Expectation-Maximization (EM) algorithm. All modality images are involved simultaneously in the iterative parameter estimation step. Pixel classes are then determined by maximising the a posteriori probability combined from all modality images. Experimental results show that the method exploits and fuses complementary information from the multimodal images, so segmentation can be more precise than with single-modality images.
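The abstract describes the MMEM scheme only at a high level. As a rough illustration, the following is a minimal NumPy sketch of such a scheme, not the authors' implementation: the function name mmem_segment, the shared segment weights across modalities, the assumption of conditional independence of modalities given the segment label, and the quantile-based initialisation are all assumptions not specified in the abstract.

```python
import numpy as np

def mmem_segment(images, n_segments, n_iter=50, eps=1e-8):
    """Sketch of an MMEM-style segmentation: fit one univariate Gaussian
    mixture per modality with shared segment weights via EM, then label
    each pixel by the maximum combined posterior over all modalities."""
    # Stack pixel intensities: shape (n_pixels, n_modalities)
    x = np.stack([img.ravel().astype(float) for img in images], axis=1)
    n_pix, n_mod = x.shape

    # Initialise shared segment weights and per-modality Gaussian parameters
    # (quantile-based means are an arbitrary choice for this sketch)
    pi = np.full(n_segments, 1.0 / n_segments)
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_segments), axis=0)      # (K, M)
    var = np.var(x, axis=0, keepdims=True) * np.ones((n_segments, n_mod))

    for _ in range(n_iter):
        # E-step: log-likelihood of each pixel under each segment, summing
        # modality log-likelihoods (conditional independence assumed here)
        log_lik = -0.5 * (((x[:, None, :] - mu) ** 2) / var
                          + np.log(2 * np.pi * var)).sum(axis=2)        # (N, K)
        log_post = np.log(pi + eps) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)                         # (N, K)

        # M-step: update shared segment weights and per-modality parameters
        nk = resp.sum(axis=0) + eps                                     # (K,)
        pi = nk / n_pix
        mu = (resp.T @ x) / nk[:, None]                                 # (K, M)
        var = (resp.T @ x ** 2) / nk[:, None] - mu ** 2 + eps

    # Classification: maximum a posteriori segment, fused over modalities
    # (posteriors taken from the last E-step)
    labels = resp.argmax(axis=1)
    return labels.reshape(images[0].shape), pi, mu, var

# Hypothetical usage with two co-registered modality images of equal size:
# labels, pi, mu, var = mmem_segment([modality_a, modality_b], n_segments=3)
```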

Keywords

data fusion · multimodal images · model-based segmentation · Gaussian mixture · maximum likelihood · EM algorithm

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Xin Hong (1)
  • Sally McClean (1)
  • Bryan Scotney (1)
  • Philip Morrow (1)

  1. School of Computing and Information Engineering, University of Ulster, Cromore Road, Coleraine, BT52 1SA, Northern Ireland, UK