
Neural Processing Letters, Volume 17, Issue 3, pp 217–238

A Generative Probabilistic Oriented Wavelet Model for Texture Segmentation

  • Inna Stainvas
  • David Lowe

Abstract

This Letter addresses image segmentation via a generative model approach. A Bayesian network (BNT) in the space of dyadic wavelet transform coefficients is introduced to model texture images. The model is similar to a hidden Markov model (HMM), but with non-stationary transition probability distributions. It is composed of discrete hidden variables and observable Gaussian outputs for the wavelet coefficients. In particular, the Gabor wavelet transform is considered. The introduced model is compared with the simplest joint Gaussian probabilistic model for Gabor wavelet coefficients on several textures from the Brodatz album [1]. The comparison is based on cross-validation and uses probabilistic model ensembles rather than single models. In addition, the robustness of the models to additive Gaussian noise is investigated. We further study the feasibility of the introduced generative model for image segmentation in the novelty detection framework [2]. Two examples are considered: (i) sea surface pollution detection from intensity images and (ii) segmentation of still images with varying illumination across the scene.

Keywords: Bayesian networks and ensembles, dyadic wavelet transform, Gabor wavelet transform, generative probabilistic model, image texture segmentation, novelty detection
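The novelty detection framework mentioned in the abstract can be illustrated with the simplest baseline the Letter compares against: fit a joint Gaussian to wavelet coefficient vectors of a reference texture, then flag pixels whose log-likelihood falls below a threshold. The sketch below is a minimal illustration, not the authors' implementation; the synthetic data stands in for Gabor filter responses, and the quantile threshold is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for Gabor wavelet coefficient vectors of a texture:
# rows are pixels, columns are filter responses (orientations/scales).
train = rng.normal(0.0, 1.0, size=(500, 4))

# Fit the simplest joint Gaussian model (the baseline in the Letter).
mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False)
cov_inv = np.linalg.inv(cov)
_, logdet = np.linalg.slogdet(cov)

def log_likelihood(x):
    """Gaussian log-density of coefficient vector(s) x under the fitted model."""
    d = x - mu
    maha = np.einsum('...i,ij,...j->...', d, cov_inv, d)
    k = train.shape[1]
    return -0.5 * (maha + logdet + k * np.log(2 * np.pi))

# Novelty detection: pixels whose likelihood under the texture model is
# below a low quantile of the training likelihoods are flagged as novel.
threshold = np.quantile(log_likelihood(train), 0.01)
novel = rng.normal(5.0, 1.0, size=(10, 4))  # off-texture coefficients
flags = log_likelihood(novel) < threshold
```

In the pollution-detection example of the Letter, "novel" regions are those the texture model assigns low probability; the generative BNT model plays the same role as the Gaussian here, but with hidden states coupling coefficients across scales.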


References

  1. Brodatz, P.: Textures: A Photographic Album for Artists and Designers. Dover, New York, 1966.
  2. Bishop, C. M.: Novelty detection and neural network validation. IEE Proceedings: Vision, Image and Signal Processing 141(6) (1994), 217–222.
  3. Jain, A. K.: Fundamentals of Digital Image Processing. Prentice Hall, London, 1989.
  4. Gonzalez, R. C. and Wintz, P.: Digital Image Processing. Addison-Wesley, 1993.
  5. Tuceryan, M. and Jain, A. K.: Texture analysis. In: C. H. Chen, L. F. Pau and P. S. P. Wang (eds.), The Handbook of Pattern Recognition and Computer Vision, World Scientific, 1988, pp. 207–248.
  6. Duda, R. O. and Hart, P. E.: Pattern Classification and Scene Analysis. John Wiley, New York, 1973.
  7. Papoulis, A.: Probability, Random Variables, and Stochastic Processes, 3rd edn. McGraw-Hill, New York, 1991.
  8. Geman, S. and Geman, D.: Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence 6(6) (1984), 721–741.
  9. Chellappa, R. and Jain, A.: Markov Random Fields: Theory and Applications. Academic Press, 1993.
  10. Romberg, J. K., Choi, H. and Baraniuk, R. G.: Bayesian wavelet domain image modeling using Hidden Markov trees. In: International Conference on Image Processing (ICIP'99), Kobe, Japan, October 1999, pp. 1–5.
  11. De Bonet, J. S. and Viola, P.: Texture recognition using a non-parametric multi-scale statistical model. In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition, 1998.
  12. Wainwright, M. J., Simoncelli, E. P. and Willsky, A. S.: Random cascades on wavelet trees and their use in analyzing and modeling natural images. Applied and Computational Harmonic Analysis 11(1) (2001), 89–123.
  13. Mallat, S. G.: A Wavelet Tour of Signal Processing. Academic Press, San Diego, 1998.
  14. Simoncelli, E. P.: Statistical models for images: Compression, restoration and synthesis. In: 31st Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, November 1997, pp. 2–5.
  15. Portilla, J. and Simoncelli, E. P.: A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision 40 (2000), 49–71.
  16. Adelson, E. H., Simoncelli, E. P. and Hingorani, R.: Orthogonal pyramid transforms for image coding. In: SPIE Visual Communications and Image Processing II, volume 845, October 1987, pp. 50–58.
  17. Daugman, J. G.: Complete discrete 2D Gabor transforms by neural networks for image analysis and compression. IEEE Transactions on ASSP 36(7) (1988), 1169–1179.
  18. Jain, A. K. and Farrokhnia, F.: Unsupervised texture segmentation using Gabor filters. Pattern Recognition 24(12) (1991), 1167–1186.
  19. Nestares, O., Navarro, R., Portilla, J. and Tabernero, A.: Efficient spatial-domain implementation of a multiscale image representation based on Gabor functions. Journal of Electronic Imaging, SPIE 7(1) (1998), 166–173.
  20. Marr, D.: Vision. Freeman, New York, 1982.
  21. Jensen, F. V.: An Introduction to Bayesian Networks. Springer-Verlag, New York, 1996.
  22. Crouse, M. S., Nowak, R. D. and Baraniuk, R. G.: Wavelet-based statistical signal processing using Hidden Markov models. IEEE Transactions on Signal Processing 46 (1998), 886–902.
  23. Jordan, M.: Learning in Graphical Models. The MIT Press, Cambridge, Massachusetts, 1999.
  24. Rabiner, L. R. and Juang, B. H.: An introduction to Hidden Markov models. IEEE ASSP Magazine, January 1986, pp. 4–16.
  25. Dempster, A. P., Laird, N. M. and Rubin, D. B.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B 39 (1977), 1–38.
  26. Frey, B. J.: Graphical Models for Machine Learning and Digital Communication. The MIT Press, Cambridge, Massachusetts, 1998.
  27. Opper, M. and Saad, D.: Advanced Mean Field Methods. The MIT Press, Cambridge, 2001.
  28. Ronen, O., Rohlicek, J. R. and Ostendorf, M.: Parameter estimation of dependence tree models using the EM algorithm. IEEE Signal Processing Letters 2(8) (1995), 157–159.
  29. Kschischang, F. R., Frey, B. J. and Loeliger, H. A.: Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory 47(2) (2001), 498–519.
  30. Efron, B. and Tibshirani, R.: An Introduction to the Bootstrap. Chapman and Hall, New York, 1993.
  31. Bellman, R. E.: Adaptive Control Processes. Princeton University Press, Princeton, NJ, 1961.
  32. Geman, S., Bienenstock, E. and Doursat, R.: Neural networks and the bias/variance dilemma. Neural Computation 4 (1992), 1–58.
  33. Friedman, N. and Russell, S.: Image segmentation in video sequences: A probabilistic approach. In: The Thirteenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, 1997, pp. 175–181.
  34. Stauffer, C. and Grimson, W. E. L.: Adaptive background mixture models for real-time tracking. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Cat. No. PR00149, volume 2, June 23–25, 1999, pp. 22–46.
  35. McLachlan, G. and Peel, D.: Finite Mixture Models. Wiley Series in Probability and Statistics, New York, 2000.
  36. Bilmes, J. A.: A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and Hidden Markov models. Technical report TR-97-021, International Computer Science Institute and Computer Science Division, U. C. Berkeley, 1998.
  37. Stainvas, I. and Lowe, D.: Towards sea surface pollution detection from visible band images. IEICE Transactions on Electronics, Special Issue on New Technologies in Signal Processing for Electromagnetic-wave Sensing and Imaging E84C(12) (2001), 1848–1856.
  38. Stainvas, I. and Lowe, D.: A generative model for separating illumination and reflectance from images. Technical report NCRG/2002/024, Aston University, June 2002.
  39. Heskes, T.: Selecting weighting factors in logarithmic opinion pools. Advances in Neural Information Processing Systems, 1998.
  40. Hinton, G. E.: Products of experts. In: Proceedings of the Ninth International Conference on Artificial Neural Networks (ICANN99), volume 1, Edinburgh, Scotland, 1999, pp. 1–6.
  41. Ghahramani, Z. and Jordan, M.: Factorial Hidden Markov models. Machine Learning 29 (1997), 245–273.

Copyright information

© Kluwer Academic Publishers 2003

Authors and Affiliations

  • Inna Stainvas (1)
  • David Lowe (2)
  1. Orbotech Ltd., Yavne, Israel
  2. Neural Computing Research Group, Information Engineering, Aston University, United Kingdom
