Real-time landing place assessment in man-made environments

Abstract

We propose a novel approach to real-time landing site detection and assessment in unconstrained man-made environments using passive sensors. Because this task must be performed in a few seconds or less, existing methods are often limited to simple local intensity and edge-variation cues. By contrast, we show how to efficiently take into account a potential site's global shape, which is a critical cue in man-made scenes. Our method relies on a new segmentation algorithm and shape-regularity measure to look for polygonal regions in video sequences. In this way, we enforce both temporal consistency and geometric regularity, resulting in very reliable and consistent detections. We demonstrate our approach on the detection of landable sites such as rural fields, building rooftops, and runways from color and infrared monocular sequences, significantly outperforming the state of the art.
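The shape-regularity cue can be illustrated with a simple stand-in measure: the "solidity" of a candidate region, i.e. the ratio of its pixel area to the area of its convex hull. Compact, roughly polygonal regions (fields, rooftops, runways) score near 1, while ragged or fragmented regions score lower. This is not the paper's exact measure, only a minimal sketch of the idea; all names below are illustrative.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(vertices):
    """Shoelace formula."""
    a = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        a += x0 * y1 - x1 * y0
    return abs(a) / 2.0

def regularity(region_pixels):
    """Solidity of a segmented region given as a set of (x, y) pixels.

    Clamped to 1.0 because pixel counting slightly overestimates the
    area relative to the hull of the pixel centers.
    """
    hull = convex_hull(list(region_pixels))
    hull_area = polygon_area(hull)
    if hull_area == 0:
        return 0.0
    return min(1.0, len(region_pixels) / hull_area)
```

A filled square scores 1.0, a thin ring (the same square's border only) scores well below 0.5, and degenerate, line-like regions score 0. A full system would combine such a geometric score with the temporal-consistency check across frames.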



Notes

  1.

    Results on the full sequences are included as part of supplementary material for all datasets. MP results for landable fields are also provided.

  2.

    http://www.sensefly.com.


Author information


Corresponding author

Correspondence to Xiaolu Sun.

Additional information

This work was supported in part by the EU myCopter project.

Electronic supplementary material

Below are the links to the electronic supplementary material.

Supplementary material 1 (mp4 1065 KB)

Supplementary material 2 (mp4 4151 KB)

Supplementary material 3 (htm 5 KB)

Supplementary material 4 (mp4 4055 KB)

Supplementary material 5 (mp4 2510 KB)

Supplementary material 6 (mp4 1807 KB)


About this article

Cite this article

Sun, X., Christoudias, C.M., Lepetit, V. et al. Real-time landing place assessment in man-made environments. Machine Vision and Applications 25, 211–227 (2014). https://doi.org/10.1007/s00138-013-0560-7


Keywords

  • Automated landing
  • Hazard detection
  • Component tree
  • Image segmentation
  • Shape analysis