Occupancy Anticipation for Efficient Exploration and Navigation

  • Conference paper

Computer Vision – ECCV 2020 (ECCV 2020)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12350)

Included in the following conference series: European Conference on Computer Vision (ECCV)

Abstract

State-of-the-art navigation methods leverage a spatial memory to generalize to new environments, but their occupancy maps are limited to capturing the geometric structures directly observed by the agent. We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions. In doing so, the agent builds its spatial awareness more rapidly, which facilitates efficient exploration and navigation in 3D environments. By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment, with performance significantly better than strong baselines. Furthermore, when deployed for the sequential decision-making tasks of exploration and navigation, our model outperforms state-of-the-art methods on the Gibson and Matterport3D datasets. Our approach is the winning entry in the 2020 Habitat PointNav Challenge. Project page: http://vision.cs.utexas.edu/projects/occupancy_anticipation/.
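
To make the input/output contract concrete, the sketch below shows one minimal way such an anticipation module could be structured: a small encoder-decoder that takes a partial egocentric top-down map (projected from a single RGB-D frame) and predicts per-cell occupancy for the full local region, including cells the depth sensor never saw. This is a hypothetical PyTorch-style illustration, not the authors' released model; the class name, channel layout ([occupied, explored]), and layer sizes are illustrative assumptions (the actual model additionally fuses egocentric-view features with the top-down map).

    # Hypothetical sketch of an occupancy anticipation module (not the
    # authors' released code): an encoder-decoder mapping a partially
    # observed egocentric top-down map to a completed local map.
    import torch
    import torch.nn as nn

    class OccupancyAnticipator(nn.Module):
        def __init__(self, in_channels=2, hidden=32):
            super().__init__()
            # Encoder: compress the partial map (channels: occupied, explored).
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, hidden, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden, hidden * 2, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )
            # Decoder: upsample back to full map resolution; the output
            # covers cells outside the agent's field of view as well.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(hidden * 2, hidden, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(hidden, in_channels, 4, stride=2, padding=1),
            )

        def forward(self, partial_map):
            # partial_map: (B, 2, H, W) occupancy projected from one RGB-D frame
            return torch.sigmoid(self.decoder(self.encoder(partial_map)))

    model = OccupancyAnticipator()
    partial = torch.rand(1, 2, 128, 128)    # toy partial egocentric map
    anticipated = model(partial)            # (1, 2, 128, 128) completed map
    # Supervise against the full ground-truth local map, so the loss also
    # covers unobserved cells:
    target = torch.rand(1, 2, 128, 128)     # placeholder ground truth
    loss = nn.functional.binary_cross_entropy(anticipated, target)

In the full system, each anticipated local map would be registered into a global map that the exploration or navigation policy consumes; the paper describes the actual architecture and training.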


Notes

  1. We use our own implementation of ANS, since the authors' code was unavailable at the time of our experiments. See Sect. S7 in Supp. for details.

Acknowledgements

UT Austin is supported in part by DARPA Lifelong Learning Machines and the GCP Research Credits Program. We thank Devendra Singh Chaplot for clarifying the implementation details for ANS.

Author information

Corresponding author

Correspondence to Santhosh K. Ramakrishnan.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 11976 KB)

Supplementary material 2 (mp4 11378 KB)

Supplementary material 3 (mp4 12684 KB)

Supplementary material 4 (mp4 11451 KB)

Supplementary material 5 (mp4 50334 KB)

Supplementary material 6 (pdf 4079 KB)

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Ramakrishnan, S.K., Al-Halah, Z., Grauman, K. (2020). Occupancy Anticipation for Efficient Exploration and Navigation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12350. Springer, Cham. https://doi.org/10.1007/978-3-030-58558-7_24

  • DOI: https://doi.org/10.1007/978-3-030-58558-7_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58557-0

  • Online ISBN: 978-3-030-58558-7

  • eBook Packages: Computer Science, Computer Science (R0)
