Election Based Pose Estimation of Moving Objects

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 729)

Abstract

In this work, a key-point-based method is presented to track and estimate the pose of rigid objects; the tracked points of the object are used to compute changes in attitude [1]. We propose to select a few points to represent the posture of the object while maintaining efficiency. A standard feature-point tracking algorithm is applied to detect and match feature points. The presented method is able to overcome key-point errors as well as reduce computational complexity. To reduce the error caused by feature-point detection, we use the tracked key-points and their relation to the target center to select the most reliable tracking result. To avoid introducing new errors, the model retains the features generated during initialization. Finally, the most reliable candidates are picked out to compute the pose, and the small number of highly accurate key-points ensures real-time performance.
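The paper itself does not include code, but the election step described above — key-points voting for the target center via their initialization-time offsets, with the most consistent voters elected as reliable — can be sketched as follows. This is a minimal illustration under assumed conventions (2-D points, a pixel tolerance `tol`, median consensus); the function name and parameters are hypothetical, not from the paper.

```python
import numpy as np

def elect_reliable_points(init_pts, tracked_pts, init_center, tol=5.0):
    """Sketch of election-based center voting (hypothetical helper).

    Each key-point stores, at initialization, its offset to the object
    center. After tracking, every point casts a vote for the current
    center by adding its stored offset to its tracked position. Points
    whose vote agrees with the consensus (median vote) within `tol`
    pixels are elected as reliable; the center is refined from them.
    """
    init_pts = np.asarray(init_pts, dtype=float)
    tracked_pts = np.asarray(tracked_pts, dtype=float)

    offsets = np.asarray(init_center, dtype=float) - init_pts  # point -> center
    votes = tracked_pts + offsets            # each point's predicted center
    consensus = np.median(votes, axis=0)     # robust consensus vote
    dists = np.linalg.norm(votes - consensus, axis=1)

    reliable = dists < tol                   # the elected key-points
    center = votes[reliable].mean(axis=0) if reliable.any() else consensus
    return center, reliable
```

Because the offsets are fixed at initialization (mirroring the paper's choice to keep the features generated there), a drifted or mismatched point votes far from the consensus and is simply outvoted rather than corrupting the estimate.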

Keywords

Tracking · Positioning · Key-points · Voting · Online-learning

References

  1. Dementhon, D.F., Davis, L.S.: Model-based object pose in 25 lines of code. Int. J. Comput. Vis. 15(1), 123–141 (1995)
  2. Mikami, D., Otsuka, K., Yamato, J.: Memory-based particle filter for face pose tracking robust under complex dynamics. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 999–1006 (2009)
  3. Subbarao, R., Genc, Y., Meer, P.: Nonlinear mean shift for robust pose estimation. In: Eighth IEEE Workshop on Applications of Computer Vision, p. 6. IEEE Computer Society (2007)
  4. Hare, S.: Efficient online structured output learning for keypoint-based object tracking. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1894–1901 (2012)
  5. Grabner, M., Grabner, H., Bischof, H.: Learning features for tracking, pp. 1–8 (2007)
  6. Grabner, H., Grabner, M., Bischof, H.: Real-time tracking via on-line boosting. In: Proceedings 17th British Machine Vision Conference, vol. 1, pp. 47–56 (2006)
  7. Kumar, S., Hebert, M.: Multiclass discriminative fields for parts-based object detection. In: Snowbird Learning Workshop (2004)
  8. Bergtholdt, M., Kappes, J., Schmidt, S., et al.: A study of parts-based object class detection using complete graphs. Int. J. Comput. Vis. 87(1), 93–117 (2010)
  9. Bay, H., Tuytelaars, T., Gool, L.V.: SURF: speeded up robust features. Comput. Vis. Image Underst. 110(3), 404–417 (2006)
  10. Rublee, E., Rabaud, V., Konolige, K., et al.: ORB: an efficient alternative to SIFT or SURF. In: IEEE International Conference on Computer Vision, pp. 2564–2571. IEEE (2011)
  11. Leibe, B., Leonardis, A., Schiele, B.: Robust object detection with interleaved categorization and segmentation. Int. J. Comput. Vis. 77(1), 259–289 (2008)
  12. Ma, W., Ma, B., Zhan, X.: Kalman particle PHD filter for multi-target visual tracking. In: Zhang, Y., Zhou, Z.-H., Zhang, C., Li, Y. (eds.) IScIDE 2011. LNCS, vol. 7202, pp. 341–348. Springer, Heidelberg (2012). doi:10.1007/978-3-642-31919-8_44

Copyright information

© Springer Nature Singapore Pte Ltd 2017

Authors and Affiliations

  1. School of Software, Beijing Institute of Technology, Beijing, China