Autonomous Robots, Volume 34, Issue 3, pp 149–176

OpenRatSLAM: an open source brain-based SLAM system

  • David Ball
  • Scott Heath
  • Janet Wiles
  • Gordon Wyeth
  • Peter Corke
  • Michael Milford


Abstract

RatSLAM is a navigation system based on the neural processes underlying navigation in the rodent brain, capable of operating with low-resolution monocular image data. Seminal experiments using RatSLAM include mapping an entire suburb with a web camera and a long-term robot delivery trial. This paper describes OpenRatSLAM, an open-source version of RatSLAM with bindings to the Robot Operating System (ROS) framework to leverage advantages such as robot and sensor abstraction, networking, data playback, and visualization. OpenRatSLAM comprises connected ROS nodes that represent RatSLAM’s pose cells, experience map, and local view cells, as well as a fourth node that provides visual odometry estimates. The nodes are described with reference to the RatSLAM model and to salient details of the ROS implementation, such as topics, messages, parameters, class diagrams, sequence diagrams, and parameter tuning strategies. The performance of the system is demonstrated on three publicly available open-source datasets.
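To make the node-based architecture concrete, below is a minimal sketch of a ROS (C++) node in the spirit of the visual odometry component described above: it subscribes to a camera image stream and publishes odometry estimates for the other nodes to consume. The topic names and the zero-motion placeholder are illustrative assumptions, not OpenRatSLAM’s actual interface.

// Minimal ROS (C++) node sketch: subscribes to camera images and
// publishes odometry, mirroring the role of a visual odometry node.
// Topic names ("camera/image", "odom") are assumptions for illustration.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <nav_msgs/Odometry.h>

ros::Publisher odom_pub;

// Called once per camera frame. A real implementation would compare
// consecutive low-resolution frames to estimate translation and
// rotation; here a zero-motion placeholder is published instead.
void imageCallback(const sensor_msgs::ImageConstPtr& image)
{
  nav_msgs::Odometry odom;
  odom.header.stamp = image->header.stamp;  // keep sensor timing
  odom.header.frame_id = "odom";
  odom.twist.twist.linear.x = 0.0;   // placeholder forward speed
  odom.twist.twist.angular.z = 0.0;  // placeholder rotation rate
  odom_pub.publish(odom);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "visual_odometry_sketch");
  ros::NodeHandle node;
  odom_pub = node.advertise<nav_msgs::Odometry>("odom", 1);
  ros::Subscriber sub = node.subscribe("camera/image", 1, imageCallback);
  ros::spin();  // process image callbacks until shutdown
  return 0;
}

Connecting such nodes by remapping topic names is what lets the ROS framework supply the sensor abstraction, networking, and data playback benefits noted above.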


Keywords: RatSLAM · OpenRatSLAM · SLAM · Navigation · Mapping · Brain-based · Appearance-based · ROS · Open-source · Hippocampus



Acknowledgments

This work was supported in part by the Australian Research Council under Discovery Project Grant DP0987078 to GW and JW, a Special Research Initiative on Thinking Systems TS0669699 to GW and JW, and Discovery Project Grant DP1212775 to MM. We would like to thank Samuel Brian for coding an iRat ground truth tracking system.


Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • David Ball¹
  • Scott Heath²
  • Janet Wiles²
  • Gordon Wyeth¹
  • Peter Corke¹
  • Michael Milford¹

  1. School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane, Australia
  2. School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
