A Study on Object Contour Tracking with Large Motion Using Optical Flow and Active Contour Model

Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 253)

Abstract

In this study, an object contour tracking method is proposed for objects with large motion and irregular shapes in video sequences. To track the object contour accurately, an active contour model (snake) is used: the initial snake points for the next frame are set by computing the optical flow of feature points with high curvature change along the contour tracked in the previous frame. Optical-flow vectors misled by irregular shape changes or fast motion are filtered out using an edge-map difference with the previous frame, and, to compensate for the energy shortage on objects with complex contours, snake points are added according to local curvature for better performance. Experiments on real video sequences show that the contours of objects with large motion and irregular shapes are extracted precisely.
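The curvature-based feature selection described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names and the top-fraction selection threshold are our own assumptions, and the selected points would then be propagated to the next frame by optical flow (e.g. Lucas–Kanade).

```python
import numpy as np

def discrete_curvature(contour):
    """Unsigned curvature at each vertex of a closed contour (N x 2 array),
    estimated with central finite differences over the point index."""
    prev_pt = np.roll(contour, 1, axis=0)
    next_pt = np.roll(contour, -1, axis=0)
    d1 = (next_pt - prev_pt) / 2.0            # first derivative (central)
    d2 = next_pt - 2.0 * contour + prev_pt    # second derivative
    num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5
    return num / np.maximum(den, 1e-12)       # kappa = |x'y'' - y'x''| / |r'|^3

def high_curvature_points(contour, frac=0.1):
    """Indices of the top `frac` fraction of contour points by curvature;
    these would seed the optical-flow tracking into the next frame.
    (`frac` is an assumed parameter, not taken from the paper.)"""
    k = discrete_curvature(contour)
    n = max(1, int(len(contour) * frac))
    return np.argsort(k)[-n:]

# Example: on a square contour sampled at 40 points, the four corners
# carry all the curvature, so they are selected as feature points.
square = np.array(
    [(x, 0) for x in range(10)] + [(10, y) for y in range(10)]
    + [(10 - x, 10) for x in range(10)] + [(0, 10 - y) for y in range(10)],
    dtype=float,
)
print(sorted(high_curvature_points(square)))  # → [0, 10, 20, 30]
```

On a smooth contour the same estimator returns a nearly constant curvature (1/r for a circle of radius r), so the top-fraction rule naturally concentrates feature points at the corners and protrusions that matter most for re-initializing the snake.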

Keywords

Object contour tracking · Optical flow · Active contour model

Notes

Acknowledgments

This research was supported by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) under the Culture Technology (CT) Research and Development Program [R2012030006].


Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. CT Research Institute, Seongnam-si, Republic of Korea
  2. Department of Interactive Media, Gachon University, Seongnam-si, Republic of Korea