Closing the loop: Pursuing a moving object by a moving observer

  • Peter Nordlund
  • Tomas Uhlin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 970)


We present an integrated system that pursues a moving object by controlling a robot head so that the object remains centered in the image. The system runs continuously and updates the object localization at a frame rate of 25 Hz. The object can be tracked even though the observer performs an unknown independent motion involving both translation and rotation. Since computational cost is of major importance for real-time systems with feedback, we focus on a simple algorithm that is noniterative and computationally cheap, with a running time of \(\mathcal{O}(n)\) for an image containing \(n\) pixels.
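The full algorithm is given in the paper itself; as a loose, hypothetical illustration of how a noniterative, \(\mathcal{O}(n)\) normal-velocity computation can drive such a pursuit loop, the Python sketch below derives per-pixel normal flow from image gradients, compensates the observer's motion with a crude global-median model, and converts the centroid of the deviating pixels into a pan/tilt error. The function names, the median ego-motion model, and all thresholds are assumptions for illustration only, not the authors' method.

```python
import numpy as np

def normal_flow(prev, curr, eps=1e-3):
    """Per-pixel normal velocity from the brightness-constancy constraint.

    For each pixel, I_x*u + I_y*v + I_t = 0, so the flow component along the
    gradient direction (the only component observable locally) has magnitude
    -I_t / |grad I|.  A single pass over the image, hence O(n) in the pixels.
    """
    curr = curr.astype(float)
    prev = prev.astype(float)
    Iy, Ix = np.gradient(curr)                 # spatial derivatives
    It = curr - prev                           # temporal derivative
    grad_mag = np.sqrt(Ix**2 + Iy**2)
    nf = np.where(grad_mag > eps, -It / (grad_mag + eps), 0.0)
    return nf, grad_mag

def pursuit_step(prev, curr, residual_thresh=2.0):
    """One control-loop iteration (e.g. at 25 Hz): localize the independently
    moving object and return a (pan, tilt) pixel error for the robot head.

    The global-median ego-motion compensation and the thresholds below are
    placeholders standing in for whatever the paper actually uses.
    """
    nf, grad_mag = normal_flow(prev, curr)
    textured = grad_mag > 1.0                  # only gradients carry motion cues
    background = np.median(nf[textured]) if textured.any() else 0.0
    moving = textured & (np.abs(nf - background) > residual_thresh)
    if not moving.any():
        return 0.0, 0.0                        # nothing detected: hold the head
    ys, xs = np.nonzero(moving)
    cy, cx = ys.mean(), xs.mean()              # centroid of the detected object
    h, w = curr.shape
    # Offset of the centroid from the image centre, fed to the pan/tilt
    # controller so the object stays centered.
    return cx - w / 2.0, cy - h / 2.0
```

Every step above touches each pixel a fixed number of times with no iteration to convergence, which is what keeps the per-frame cost linear in the number of pixels.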


Keywords: normal velocity, real-time system, tracking





Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Peter Nordlund
  • Tomas Uhlin
  1. Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm, Sweden
