Efficient Spatial Classification Using Decoupled Conditional Random Fields

  • Chi-Hoon Lee
  • Russell Greiner
  • Osmar Zaïane
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4213)

Abstract

We present a discriminative method for classifying data that have interdependencies on a 2-D lattice. Although both Markov Random Fields (MRFs) and Conditional Random Fields (CRFs) are well-known methods for modeling such dependencies, the former are often ineffective and the latter inefficient: many of the simplifying assumptions that underlie the MRF’s efficiency compromise its accuracy, while CRFs, being discriminative, are typically more accurate than generative MRFs but considerably more expensive to train. This paper addresses this trade-off by defining and using “Decoupled Conditional Random Fields (DCRFs)”, a variant of CRFs whose learning process is more efficient because it decouples the tasks of learning the potentials. Although our model is only guaranteed to approximate a CRF, our empirical results on synthetic and real datasets show that DCRF is essentially as accurate as other CRF variants, but is many times faster to train.
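
To make the decoupling idea concrete, consider the standard pairwise conditional random field over a 2-D lattice with sites S and edge set E; the notation below (association potential A, interaction potential I) is generic CRF notation used here for illustration, not drawn from the paper itself:

\[
P(\mathbf{y} \mid \mathbf{x}) \;=\; \frac{1}{Z(\mathbf{x})}\, \exp\!\Big( \sum_{i \in S} A(y_i, \mathbf{x}) \;+\; \sum_{(i,j) \in E} I(y_i, y_j, \mathbf{x}) \Big)
\]

Jointly maximizing the conditional log-likelihood couples the parameters of A and I through the partition function Z(x), which must be (approximately) re-evaluated at every optimization step on a loopy lattice and is what makes standard CRF training expensive. A decoupled scheme in the spirit of the abstract would first fit the association potential with a local discriminative classifier (for instance an SVM whose outputs are converted to probabilities by Platt scaling, as the keywords suggest) and only then estimate the interaction potential with the association term held fixed, avoiding a joint optimization over both potential families. This is an illustrative reading of the abstract, not the authors’ exact training procedure.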

Keywords

Support Vector Machine · Markov Random Field · Conditional Random Field · Sequential Minimal Optimization · Tumor Segmentation

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Chi-Hoon Lee
    • 1
  • Russell Greiner
    • 1
  • Osmar Zaïane
    • 1
  1. 1.Department of Computing ScienceUniversity of AlbertaEdmontonCanada
