A Modular Framework for 2D/3D and Multi-modal Segmentation with Joint Super-Resolution

  • Benjamin Langmann
  • Klaus Hartmann
  • Otmar Loffeld
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7584)

Abstract

This paper introduces a versatile multi-image segmentation framework for 2D/3D or multi-modal segmentation, with possible applications in a wide range of machine vision problems. The framework performs joint segmentation and super-resolution to account for images of unequal resolution acquired by different imaging sensors. This makes it possible to combine the high-resolution detail of one modality with the distinctiveness of another. A set of measures is introduced to weight measurements according to their expected reliability; these weights are used in both the segmentation and the super-resolution. The approach is demonstrated with different experimental setups, and the effects of additional modalities as well as of the framework's parameters are shown.
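
The framework itself is not spelled out on this page, but the core idea described above (upsample the lower-resolution modality to the resolution of the higher-resolution one, weight each pixel's measurements by an expected reliability, then segment in the joint feature space) can be illustrated with a minimal sketch. The upsampling scheme, the reliability heuristic (ToF amplitude combined with local depth smoothness), and the weighted k-means clustering below are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the paper's implementation): reliability-weighted
# fusion of a high-resolution color image with a low-resolution depth map,
# followed by a simple weighted k-means segmentation in the joint feature
# space. Names, weights, and the clustering choice are assumptions.
import numpy as np

def upsample_nearest(depth_lr, shape_hr):
    """Nearest-neighbor upsampling of the low-resolution modality."""
    h_lr, w_lr = depth_lr.shape
    h_hr, w_hr = shape_hr
    rows = np.arange(h_hr) * h_lr // h_hr
    cols = np.arange(w_hr) * w_lr // w_hr
    return depth_lr[np.ix_(rows, cols)]

def reliability(depth_hr, amplitude_hr):
    """Heuristic per-pixel reliability: high ToF amplitude and locally
    smooth depth are treated as more trustworthy (an assumed measure)."""
    grad_y, grad_x = np.gradient(depth_hr)
    smoothness = 1.0 / (1.0 + np.hypot(grad_x, grad_y))
    amplitude = amplitude_hr / (amplitude_hr.max() + 1e-9)
    return smoothness * amplitude

def weighted_kmeans(features, weights, k=3, iters=20, seed=0):
    """Weighted k-means over per-pixel feature vectors."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        dists = ((features[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for c in range(k):
            mask = labels == c
            if mask.any():
                w = weights[mask][:, None]
                centers[c] = (w * features[mask]).sum(0) / w.sum()
    return labels

# Toy usage: 64x64 color image, 16x16 depth/amplitude from a ToF sensor.
color = np.random.rand(64, 64, 3)
depth_lr = np.random.rand(16, 16)
amplitude_lr = np.random.rand(16, 16)

depth_hr = upsample_nearest(depth_lr, color.shape[:2])
amp_hr = upsample_nearest(amplitude_lr, color.shape[:2])
weights = reliability(depth_hr, amp_hr).ravel()

# Joint feature space: color plus upsampled depth per pixel.
feats = np.concatenate([color.reshape(-1, 3), depth_hr.reshape(-1, 1)], axis=1)
labels = weighted_kmeans(feats, weights, k=3).reshape(color.shape[:2])
```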

Keywords

Segmentation · Image Processing · Range Imaging · Time-of-Flight (ToF) · Photonic Mixer Device (PMD)


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Benjamin Langmann (1)
  • Klaus Hartmann (1)
  • Otmar Loffeld (1)
  1. ZESS - Center for Sensor Systems, University of Siegen, Siegen, Germany