
On Exploiting Haptic Cues for Self-Supervised Learning of Depth-Based Robot Navigation Affordances

Journal of Intelligent & Robotic Systems

Abstract

This article presents a method for online learning of robot navigation affordances from spatiotemporally correlated haptic and depth cues. The method allows the robot to incrementally learn which objects in the environment are actually traversable. This is a critical capability for any wheeled robot operating in natural environments, where the inability to distinguish vegetation from non-traversable obstacles frequently hampers terrain progression. A wheeled robot prototype was developed to experimentally validate the proposed method. The prototype obtains haptic and depth sensory feedback from a pan-tilt telescopic antenna and from a structured-light sensor, respectively. With the presented method, the robot learns a mapping between object descriptors, computed from the range data provided by the sensor, and object stiffness, as estimated from the interaction between the antenna and the object. Learning confidence is estimated in order to progressively reduce the number of physical interactions required with acquainted objects. To increase the number of meaningful interactions per object under time pressure, the segments of the object under analysis are prioritised according to a set of morphological criteria. Field trials show the ability of the robot to progressively learn which elements of the environment are traversable.
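To make the learning loop concrete, the minimal sketch below illustrates the general idea in Python: an incremental memory maps depth-based object descriptors to stiffness values obtained by probing with the antenna, and a confidence estimate decides whether a new object must be probed or can be classified from the depth cue alone. All names here (AffordanceMemory, probe_with_antenna, the nearest-neighbour model, the thresholds) are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of confidence-gated, self-supervised affordance learning.
# All identifiers are hypothetical; they illustrate the idea described in the
# abstract, not the implementation used in the article.

import numpy as np


class AffordanceMemory:
    """Stores (descriptor, stiffness) pairs and predicts stiffness for new
    descriptors with a weighted nearest-neighbour estimate plus a confidence."""

    def __init__(self, k=3, distance_scale=1.0):
        self.k = k
        self.distance_scale = distance_scale
        self.descriptors = []   # depth-based object descriptors (feature vectors)
        self.stiffness = []     # stiffness labels obtained from antenna probing

    def add(self, descriptor, stiffness):
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.stiffness.append(float(stiffness))

    def predict(self, descriptor):
        """Return (predicted_stiffness, confidence); confidence decays with
        the distance to the closest stored descriptors."""
        if not self.descriptors:
            return 0.0, 0.0
        descriptor = np.asarray(descriptor, dtype=float)
        d = np.linalg.norm(np.stack(self.descriptors) - descriptor, axis=1)
        idx = np.argsort(d)[: self.k]
        weights = np.exp(-d[idx] / self.distance_scale)
        prediction = float(np.dot(weights, np.array(self.stiffness)[idx]) /
                           (weights.sum() + 1e-9))
        confidence = float(weights.max())   # 1.0 when an identical descriptor is known
        return prediction, confidence


def assess_object(descriptor, memory, probe_with_antenna,
                  confidence_threshold=0.8, stiffness_limit=0.5):
    """Decide whether an object is traversable; probe it physically only when
    the learned model is not yet confident about this kind of object."""
    stiffness, confidence = memory.predict(descriptor)
    if confidence < confidence_threshold:
        stiffness = probe_with_antenna()    # haptic measurement (self-supervision)
        memory.add(descriptor, stiffness)   # incremental, online learning
    return stiffness < stiffness_limit      # compliant objects are traversable
```

The weighted nearest-neighbour estimator is only a placeholder for whichever incremental model the method actually employs; the essential point is that the haptic channel supplies the labels, so no manual annotation is required and physical probing becomes progressively rarer as confidence grows.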



Author information

Corresponding author

Correspondence to Pedro Santana.

Additional information

This work was co-funded by the ROBOSAMPLER project (LISBOA-01-0202-FEDER-024961).

About this article

Cite this article

Baleia, J., Santana, P. & Barata, J. On Exploiting Haptic Cues for Self-Supervised Learning of Depth-Based Robot Navigation Affordances. J Intell Robot Syst 80, 455–474 (2015). https://doi.org/10.1007/s10846-015-0184-4


Keywords

Navigation