
International Journal of Computer Vision, Volume 16, Issue 3, pp 205–228

Driving saccade to pursuit using image motion

  • David W. Murray
  • Kevin J. Bradshaw
  • Philip F. McLauchlan
  • Ian D. Reid
  • Paul M. Sharkey

Abstract

Within the context of active vision, scant attention has been paid to the execution of motion saccades—rapid re-adjustments of the direction of gaze to attend to moving objects. In this paper we first develop a methodology for, and give real-time demonstrations of, the use of motion detection and segmentation processes to initiate “capture saccades” towards a moving object. The saccade is driven by both the position and the velocity of the moving target under the assumption of constant target velocity, using prediction to overcome the delay introduced by visual processing. We next demonstrate the use of a first-order approximation to the segmented motion field to compute bounds on the time-to-contact in the presence of looming motion. If the bound falls below a safe limit, a “panic saccade” is fired, moving the camera away from the approaching object. We then describe the use of image motion to realize smooth pursuit, tracking with velocity information alone, where the camera is moved so as to null a single constant image motion fitted within a central image region. Finally, we glue together capture saccades and smooth pursuit, thus effecting changes both in what is being attended to and in how it is being attended to. To couple the different visual activities of waiting, saccading, pursuing and panicking, we use a finite state machine, which provides inherent robustness outside of the visual processing and a means of repeating the exploration. We demonstrate in repeated trials that the transition from saccadic motion to tracking is more likely to succeed using position and velocity control than when using position alone.
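The control structure described above can be summarized in a short sketch. The state names (waiting, saccading, pursuing, panicking) come from the abstract; the constant-velocity prediction extrapolates the target over the processing delay before a capture saccade, and the time-to-contact bound uses the standard relation τ ≈ 2/divergence for a looming fronto-parallel surface, recovered here from the divergence term of a first-order motion-field fit. All function names, thresholds, and the exact transition conditions are illustrative assumptions, not the authors' implementation.

```python
from enum import Enum, auto

class State(Enum):
    WAITING = auto()     # fixating, watching for independent motion
    SACCADING = auto()   # executing a capture saccade
    PURSUING = auto()    # smooth pursuit, nulling fitted image motion
    PANICKING = auto()   # evasive saccade away from a looming object

def predict_target(pos, vel, delay):
    """Constant-velocity prediction: extrapolate the target's image
    position over the visual-processing delay before the saccade."""
    return pos + vel * delay

def time_to_contact_bound(divergence):
    """Bound on time-to-contact from the divergence term of a
    first-order fit to the segmented motion field (tau ~ 2/div)."""
    if divergence <= 0.0:            # receding or non-looming motion
        return float("inf")
    return 2.0 / divergence

def step(state, motion_detected, on_fovea, tau, tau_safe=1.0):
    """One transition of a gaze-control state machine (hypothetical
    conditions; the FSM keeps the system robust when any single
    visual process fails and lets exploration repeat)."""
    if tau < tau_safe:
        return State.PANICKING               # fire a panic saccade
    if state is State.WAITING and motion_detected:
        return State.SACCADING               # fire a capture saccade
    if state is State.SACCADING and on_fovea:
        return State.PURSUING                # hand over to smooth pursuit
    if state is State.PURSUING and not motion_detected:
        return State.WAITING                 # target lost: explore again
    return state
```

The point of the FSM is that each visual behavior only needs to work within its own state; a failed hand-over (e.g. the target not landing on the fovea) simply returns the system to waiting rather than corrupting the pursuit loop.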

Keywords

Visual processing, segmented motion, motion detection, finite state machine, smooth pursuit



Copyright information

© Kluwer Academic Publishers 1995

Authors and Affiliations

  • David W. Murray (1)
  • Kevin J. Bradshaw (1)
  • Philip F. McLauchlan (1)
  • Ian D. Reid (1)
  • Paul M. Sharkey (1)

  1. Department of Engineering Science, University of Oxford, Oxford, UK
