OpenRatSLAM: an open source brain-based SLAM system

Abstract

RatSLAM is a navigation system based on the neural processes underlying navigation in the rodent brain, capable of operating with low-resolution monocular image data. Seminal experiments using RatSLAM include mapping an entire suburb with a web camera and a long-term robot delivery trial. This paper describes OpenRatSLAM, an open-source version of RatSLAM with bindings to the Robot Operating System framework to leverage advantages such as robot and sensor abstraction, networking, data playback, and visualization. OpenRatSLAM comprises connected ROS nodes representing RatSLAM's pose cells, experience map, and local view cells, as well as a fourth node that provides visual odometry estimates. The nodes are described with reference to the RatSLAM model and salient details of the ROS implementation, such as topics, messages, parameters, class diagrams, sequence diagrams, and parameter tuning strategies. The performance of the system is demonstrated on three publicly available open-source datasets.



Acknowledgments

This work was supported in part by the Australian Research Council under a Discovery Project Grant DP0987078 to GW and JW, a Special Research Initiative on Thinking Systems TS0669699 to GW and JW, and a Discovery Project Grant DP1212775 to MM. We would like to thank Samuel Brian for coding an iRat ground truth tracking system.

Author information

Correspondence to David Ball.

Appendices

Appendix A: Included datasets

The three datasets described in this paper are available online at http://wiki.qut.edu.au/display/cyphy/OpenRatSLAM+datasets.


Note that the data in the New College dataset belongs to the original authors from Oxford University.

Appendix B: Installation instructions and tutorial

Installing dependencies

OpenRatSLAM depends on the ROS packages opencv2 and topological_nav_msgs, and on the 3D graphics library Irrlicht. Irrlicht can be installed on Ubuntu with apt-get:

sudo apt-get install libirrlicht-dev
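Before building, it can be worth confirming that the ROS package dependencies are visible on the package path; rospack reports the location of each (a quick sanity check, not part of the original instructions):

rospack find opencv2

rospack find topological_nav_msgs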

Build instructions

Checkout the source from SVN:

svn checkout http://ratslam.googlecode.com/svn/branches/ratslam_ros ratslam_ros

Set up the ROS environment variables by typing:

. /opt/ros/fuerte/setup.sh

The OpenRatSLAM directory needs to be added to the environment variable ROS_PACKAGE_PATH.

export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:/path/to/OpenRatSLAM
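To avoid repeating this export in every new terminal, it can be appended to the shell startup file (the path below is a placeholder for wherever the source was checked out):

echo 'export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:/path/to/OpenRatSLAM' >> ~/.bashrc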

Then build OpenRatSLAM with

rosmake
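rosmake with no arguments builds the package in the current directory together with its dependencies; the package can also be named explicitly, assuming the ratslam_ros package name used elsewhere in these instructions:

cd ratslam_ros

rosmake ratslam_ros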

Running OpenRatSLAM

To use one of the provided pre-packaged bag files, download the bag file for one of:

  • iRat 2011 in Australia

  • Car in St Lucia 2007

  • Oxford New College 2008

All datasets are available at https://wiki.qut.edu.au/display/cyphy/OpenRatSLAM+datasets.

Place the dataset in the OpenRatSLAM directory.

Run the dataset and RatSLAM by typing one of the following pairs of commands:

roslaunch irataus.launch

rosbag play irat_aus_28112011.bag

or

roslaunch stlucia.launch

rosbag play stlucia_2007.bag

or

roslaunch oxford_newcollege.launch

rosbag play oxford_newcollege.bag
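On slower machines the bag can be played back at a reduced rate so that no messages are dropped; rosbag's rate multiplier option supports this (0.5 is only an illustrative value):

rosbag play -r 0.5 stlucia_2007.bag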

Using rviz

The map created by OpenRatSLAM is periodically published as a marker array that can be visualized in rviz. To run rviz:

rosrun rviz rviz

Click on the "Add" button at the bottom left of the window. Choose "MarkerArray" from the list. In the "Marker Array Topic" field on the left, click the button with three dots and choose the topic <my_robot>/ExperienceMap/MapMarker.
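The <my_robot> prefix is the topic_root value set in the config file, so, for example, with the St Lucia dataset (topic_root=stlucia) the topic to select is:

stlucia/ExperienceMap/MapMarker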

Using OpenRatSLAM with a custom dataset

Creating a bag file

The easiest way to tune RatSLAM is with an offline dataset. Any robot providing camera images as sensor_msgs/CompressedImage and odometry as nav_msgs/Odometry can be used to create a dataset. The images and odometry must be published on the topics <my_robot>/camera/image and <my_robot>/odom.

To convert topic names to the correct format run:

rostopic echo <path/to/my/robot/camera> | rostopic pub <my_robot>/camera/image sensor_msgs/CompressedImage &

rostopic echo <path/to/my/robot/odom> | rostopic pub <my_robot>/odom nav_msgs/Odometry &

Start recording into a bag file:

rosbag record -O <my_robot>.bag <my_robot>/camera/image <my_robot>/odom

Start the robot and collect the dataset. Press Ctrl-C at the terminal to finish recording.
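If a bag file recorded under different topic names already exists, an alternative to re-recording (not part of the original instructions, but standard ROS topic remapping) is to remap the topics during playback; the old topic names below are placeholders:

rosbag play original.bag <old_camera_topic>:=<my_robot>/camera/image <old_odom_topic>:=<my_robot>/odom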

Running the bag file

To run a custom bag file, a new config file and launch file are required.

Creating a new config file

In a terminal type

cd ratslam_ros

cp config/config_stlucia.txt config/config_<my_robot>.txt

gedit config/config_<my_robot>.txt

Change the first line

topic_root=stlucia

to

topic_root=<my_robot>
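The topic_root must match the prefix of the topics recorded in the bag file; rosbag info lists the topics in a bag so the two can be checked against each other:

rosbag info <my_robot>.bag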

Creating a new launch file

In the same terminal, type

cp stlucia.launch <my_robot>.launch

gedit <my_robot>.launch

Replace all references to "../config/config_stlucia.txt" with "../config/config_<my_robot>.txt".

Comment out the visual odometry node to prevent it from running. Replace

<node name="RatSLAMVisualOdometry" pkg="ratslam_ros" type="ratslam_vo" args="../config/config_<my_robot>.txt _image_transport:=compressed" cwd="node" required="true" />

with

<!-- <node name="RatSLAMVisualOdometry" pkg="ratslam_ros" type="ratslam_vo" args="../config/config_<my_robot>.txt _image_transport:=compressed" cwd="node" required="true" /> -->

Running your dataset

Your dataset can now be run in the same way as the provided datasets:

roslaunch <my_robot>.launch

rosbag play <my_robot>.bag

Tuning parameters

Open the created config file

gedit config/config_<my_robot>.txt

Edit the settings under [ratslam]. Refer to Sect. 4 for parameter details.
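As an illustration of the file layout only, the [ratslam] section holds key=value pairs; the parameter names below are recalled from the OpenRatSLAM configuration files and the values are arbitrary, so both should be checked against Sect. 4 and the shipped config files rather than copied verbatim:

[ratslam]
vt_match_threshold=0.03
pc_vt_inject_energy=0.1
exp_delta_pc_threshold=2.0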


About this article

Cite this article

Ball, D., Heath, S., Wiles, J. et al. OpenRatSLAM: an open source brain-based SLAM system. Auton Robot 34, 149–176 (2013). https://doi.org/10.1007/s10514-012-9317-9


Keywords

Navigation