Two-frame structure from motion using optical flow probability distributions for unmanned air vehicle obstacle avoidance

  • Original Paper
  • Published in Machine Vision and Applications

Abstract

See-and-avoid behaviors are an essential part of autonomous navigation for Unmanned Air Vehicles (UAVs). To be fully autonomous, a UAV must be able to navigate complex urban and near-earth environments and detect and avoid imminent collisions. While there have been significant research efforts in robotic navigation and obstacle avoidance over the past few years, this previous work has not focused on applications that use small autonomous UAVs. Specific UAV requirements, such as non-invasive sensing, light payload, low image quality, high processing speed, long-range detection, and low power consumption, must be met in order to fully use this new technology. This paper presents a single-camera collision detection and avoidance algorithm. Whereas most algorithms attempt to extract 3D information from a single optical flow value at each feature point, we propose to calculate a set of likely optical flow values and their associated probabilities: an optical flow probability distribution. Using this probability distribution, a more robust method for calculating object distance is developed. The method is designed for use on a UAV to detect obstacles, but it can be applied to any vehicle that requires obstacle detection.
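The core idea of the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it scores a set of candidate flow displacements at a feature point with a sum-of-squared-differences (SSD) patch-matching cost and converts those costs into a normalized probability distribution over flow values. The function name, patch size, and the exponential weighting parameter `beta` are assumptions chosen for illustration.

```python
import numpy as np

def flow_distribution(img1, img2, point, radius=2, patch=2, beta=10.0):
    """Illustrative probability distribution over candidate optical-flow
    displacements at one feature point, built from patch-matching costs."""
    y, x = point
    p1 = img1[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    candidates, costs = [], []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            p2 = img2[y + dy - patch:y + dy + patch + 1,
                      x + dx - patch:x + dx + patch + 1].astype(float)
            candidates.append((dy, dx))
            costs.append(np.sum((p1 - p2) ** 2))  # SSD matching cost
    costs = np.asarray(costs)
    # Lower cost -> higher probability; beta controls how sharply the
    # distribution peaks around the best match.
    scaled = (costs - costs.min()) / (costs.max() - costs.min() + 1e-12)
    probs = np.exp(-beta * scaled)
    return np.asarray(candidates, dtype=float), probs / probs.sum()

# Usage: a synthetic second frame shifted one pixel to the right, so the
# true flow at every interior point is (dy, dx) = (0, 1).
rng = np.random.default_rng(0)
frame1 = rng.random((16, 16))
frame2 = np.roll(frame1, 1, axis=1)
cands, probs = flow_distribution(frame1, frame2, (8, 8))
mean_flow = probs @ cands  # expected flow under the distribution
cov_flow = (cands - mean_flow).T @ ((cands - mean_flow) * probs[:, None])
```

The payoff the abstract argues for is in the last two lines: rather than committing to the single best-matching displacement, downstream distance estimation can weight every candidate by its probability (for example, taking an expectation over the implied inverse depths), which degrades more gracefully when the match is ambiguous.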


Corresponding author

Correspondence to Dah-Jye Lee.

Cite this article

Lee, DJ., Merrell, P., Wei, Z. et al. Two-frame structure from motion using optical flow probability distributions for unmanned air vehicle obstacle avoidance. Machine Vision and Applications 21, 229–240 (2010). https://doi.org/10.1007/s00138-008-0148-9


Keywords

Navigation