Object Tracking Based on Position Vectors and Pattern Matching

Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 490)

Abstract

Camera-based object tracking systems have become an essential requirement in today's society. Inexpensive, high-quality video cameras and the growing demand for automated video analysis have generated considerable interest across numerous fields. Most conventional algorithms are built on background subtraction, frame differencing, and the assumption of a static background, and they fail to track under illumination variation, cluttered backgrounds, and occlusion. Image-segmentation-based tracking algorithms, in turn, fail to operate in real time. Feature extraction is an indispensable first step in object tracking applications. In this paper, a novel real-time object tracking method based on position and feature vectors is developed. The proposed algorithm involves two phases. The first phase extracts features for the region-of-interest object in the first frame and for nine candidate positions in the second frame of the video. The second phase estimates the similarity of the extracted features of the two frames using Euclidean distance; the nearest match is the candidate position whose feature vector has the minimum distance to the first-frame feature vector. The proposed algorithm is compared with existing algorithms that use different feature extraction techniques for object tracking in video. The method is simulated and evaluated with statistical features, the discrete wavelet transform, the Radon transform, the scale-invariant feature transform, and features from accelerated segment test. The performance evaluation shows that the proposed algorithm can be used with any feature extraction technique, and that tracking quality in video depends on the accuracy of the chosen features.
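
The following is a minimal Python sketch of the two-phase matching idea described above: features are extracted from the region of interest in the first frame and from nine candidate positions (the previous location and its eight neighbours) in the second frame, and the candidate with the minimum Euclidean distance is taken as the new position. The feature extractor (simple intensity statistics), window size, and search step are illustrative assumptions only; the paper evaluates statistical, DWT, Radon, SIFT, and FAST features in this role.

```python
import numpy as np

def extract_features(patch):
    # Placeholder feature vector: simple intensity statistics (assumed for illustration).
    return np.array([patch.mean(), patch.std(), np.median(patch)])

def track_step(frame1, frame2, top_left, win=(32, 32), step=8):
    """Return the position in frame2 whose features best match the ROI in frame1."""
    y, x = top_left
    h, w = win
    ref = extract_features(frame1[y:y + h, x:x + w])

    # Nine candidate positions: the previous location and its eight neighbours.
    best_pos, best_dist = top_left, np.inf
    for dy in (-step, 0, step):
        for dx in (-step, 0, step):
            ny, nx = y + dy, x + dx
            if ny < 0 or nx < 0 or ny + h > frame2.shape[0] or nx + w > frame2.shape[1]:
                continue  # skip candidates that fall outside the frame
            cand = extract_features(frame2[ny:ny + h, nx:nx + w])
            dist = np.linalg.norm(ref - cand)  # Euclidean distance between feature vectors
            if dist < best_dist:
                best_pos, best_dist = (ny, nx), dist
    return best_pos, best_dist
```

Applied frame by frame (with grayscale frames as 2D NumPy arrays), the returned position becomes the region of interest for the next step.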

Keywords

Object detection · Object tracking · Pattern matching · SIFT · FAST


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. School of Electronics Engineering, Vellore Institute of Technology, Chennai, India