Abstract
A robot uses a map to localize itself. Maps can be loaded into the robot, but often the robot must create a map by itself. Maps are represented using either a grid showing which cells are occupied and which are free or a continuous map containing the coordinates of the obstacles. For a grid map the frontier algorithm enables the robot to explore its environment in order to determine the probability that each cell is occupied or free. A robot can use knowledge of its environment, for example, that it is in a building with rectangular rooms, to facilitate building a map. Algorithms for simultaneous localization and mapping enable a robot to perform these two tasks together, using data from localization and its sensors to extend the map and data from the map to achieve accurate localization.
We have seen that a robot can use its capability to detect obstacles to localize itself, based on information about the position of the obstacles or other information on the environment. This information is normally provided by a map. It is relatively easy to construct a map of industrial environments such as factories, since the machines are anchored in fixed locations. Maps are less relevant for a robotic vacuum cleaner, because the manufacturer cannot prepare maps of the apartments of every customer. Furthermore, the robots would be too difficult to use if customers had to construct maps of their apartments and change them whenever a piece of furniture is moved. It goes without saying that it is impossible to construct in advance maps of inaccessible locations like the ocean floor.
The solution is to have the robot build its own map of the environment. To build a map requires localization so that the robot knows where it is, but localization needs a map, which needs .... To overcome this chicken-and-egg problem, robots use simultaneous localization and mapping (SLAM) algorithms. To perform SLAM, robots use information known to be valid even in unexplored parts of the environment, refining the information during the exploration.
This is what humans did to create geographic maps. Observations of the sun and stars were used for localization and maps were created as exploration proceeded. Initially, tools for localization were poor: it is relatively easy to measure latitude using a sextant to observe the height of the sun at noon, but accurate measurements of longitude were impossible until accurate clocks called chronometers were developed in the late eighteenth century. As localization improved so did maps, including not only land and seacoasts, but also terrain features like lakes, forests and mountains, as well as artificial structures like buildings and roads.
Sections 9.1 and 9.2 introduce methods of representing maps in a computer. Section 9.3 describes how a robot can create a map using the frontier algorithm. Section 9.4 explains how partial knowledge of the environment helps in constructing a map. A SLAM algorithm is the subject of the final three sections. The algorithm is first presented in Sect. 9.5 using a relatively simple example. Activities for SLAM are collected in Sect. 9.6, and Sect. 9.7 explains the formal algorithm.
9.1 Discrete and Continuous Maps
We are used to graphical maps that are printed on paper, or, more commonly these days, displayed on computers and smartphones. A robot, however, needs a nonvisual representation of a map that it can store in its memory. There are two techniques for storing maps: discrete maps (also called grid maps) and continuous maps.
If the objects in the environment are few and have a simple shape, a continuous map is more efficient, in addition to being more accurate. In Fig. 9.1b, three pairs of numbers represent the triangle with far greater accuracy than the seven pairs of the discrete map. Furthermore, it is easy to compute whether a point is in the object or not using analytic geometry. However, if there are many objects or if they have very complex shapes, continuous maps are no longer efficient, either in memory or in the amount of computation needed. The object in Fig. 9.1b is bounded by straight lines, but if the boundary were described by high-order curves, the computation would become difficult. Consider a map with 32 objects of size one, none of which touch each other. The discrete map would simply mark 32 cells as occupied, while the continuous map would need to store the coordinates of the four corners of each object.
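The difference between the two kinds of query can be sketched in code. For a continuous map, testing whether a point lies inside an object uses analytic geometry (here, the sign of cross products for a triangle); for a discrete map, the same query is a table lookup. The vertex coordinates and the grid below are illustrative, not the ones of Fig. 9.1.

```python
def cross(o, a, b):
    """z-component of the cross product (a-o) x (b-o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, v0, v1, v2):
    """True if point p lies inside (or on) the triangle v0-v1-v2.
    p is inside when the three cross products all have the same sign."""
    d0, d1, d2 = cross(v0, v1, p), cross(v1, v2, p), cross(v2, v0, p)
    has_neg = min(d0, d1, d2) < 0
    has_pos = max(d0, d1, d2) > 0
    return not (has_neg and has_pos)

# In a discrete map the same query is a lookup in the grid:
grid = [[0, 0, 0],
        [0, 1, 0],
        [1, 1, 1]]   # 1 = occupied, 0 = free

def occupied(row, col):
    return grid[row][col] == 1
```

The continuous test costs a few multiplications per boundary segment, which is why it becomes expensive for objects with many or curved boundary segments, while the grid lookup is constant time regardless of shape.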
In mobile robotics, discrete maps are commonly used to represent maps of environments, as we did in Chap. 8.
9.2 The Content of the Cells of a Grid Map
A geographic map uses conventional notations to describe the environment. Colors are used: green for forests, blue for lakes, red for highways. Symbols are used: dots of various sizes to denote villages and towns, and lines for roads, where the thickness and color of a line is used to indicate the quality of the road. Robots use grid maps where each cell stores a number and we need to decide what a number encodes.
The simplest encoding is to allocate one bit to each cell. A value of 1 indicates that an object exists in that cell and a value of 0 indicates that the cell is empty. In Fig. 9.1a, gray represents the value 1 and white represents the value 0.
It can be seen that cells with a probability of at least 0.7 are the ones considered in Fig. 9.1a to be cells occupied by the object. Of course, we are free to choose some other threshold, for example 0.5, in which case more cells are considered to be occupied. In this example, we know that the object is triangular, so we can see that a threshold of 0.5 makes the object bigger than it actually is, while the higher threshold 0.7 gives a better approximation.
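The effect of the threshold can be demonstrated with a short computation. The probability values below are made up for illustration, not the values of the figure.

```python
def threshold_grid(probs, threshold):
    """Convert a grid of occupancy probabilities to a binary grid:
    a cell is marked occupied (1) if its probability >= threshold."""
    return [[1 if p >= threshold else 0 for p in row] for row in probs]

probs = [[0.1, 0.6,  0.9],
         [0.2, 0.75, 0.8],
         [0.0, 0.3,  0.55]]

strict = threshold_grid(probs, 0.7)   # fewer cells marked occupied
loose  = threshold_grid(probs, 0.5)   # more cells marked occupied
```

With these values the threshold 0.7 marks three cells occupied, while the threshold 0.5 marks five, mirroring how the lower threshold makes the object in the figure appear larger.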
Activity 9.1:

Place your robot on a line in front of a set of obstacles that the robot can detect with a lateral sensor (Fig. 9.3). If you implemented localization as described in Activity 8.4, you can use this information to establish where the robot is. Otherwise, draw regular lines on the ground that can be read by the robot for localization. Build a probabilistic map of the obstacles.

How do the probabilities change when obstacles are added or removed?
9.3 Creating a Map by Exploration: The Frontier Algorithm
Consider a robotic vacuum cleaner newly let loose in your apartment. Obviously, it does not come preprogrammed with a map of your apartment. Instead, it must explore the environment to gather information that will be used to construct its own map. There are several ways of exploring the environment, the simplest of which is random exploration. The exploration will be much more efficient if the robot has a partial map that it can use to guide its exploration.
9.3.1 Grid Maps with Occupancy Probabilities
9.3.2 The Frontier Algorithm
The frontier algorithm expands the map by exploring the frontier: the robot moves to the closest frontier cell, senses whether there are obstacles in the adjacent unknown cells and updates the map accordingly.
Figure 9.9 shows the completed map constructed by the robot after it has explored the entire frontier as shown by the path with the blue arrows.
Algorithm 9.1 formalizes the frontier algorithm. For simplicity, the algorithm recomputes the frontier at each step. A more sophisticated algorithm would examine the cells in a neighborhood of the robot’s position and add or remove the cells whose status as frontier cells has changed.
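The core of the algorithm can be sketched as follows. This is a simplified illustration, not Algorithm 9.1 itself, and the cell encoding is an assumption: a frontier cell is taken to be a known free cell adjacent to at least one unknown cell, and breadth-first search through free cells finds the closest one.

```python
from collections import deque

FREE, OBSTACLE, UNKNOWN = 0, 1, 2   # cell states (encoding assumed)

def neighbors(cell, rows, cols):
    """The cells adjacent to cell that lie within the grid."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < rows and 0 <= c + dc < cols:
            yield (r + dr, c + dc)

def is_frontier(grid, cell):
    """A frontier cell is a free cell with at least one unknown neighbor."""
    r, c = cell
    return grid[r][c] == FREE and any(
        grid[nr][nc] == UNKNOWN
        for nr, nc in neighbors(cell, len(grid), len(grid[0])))

def closest_frontier(grid, start):
    """Breadth-first search through free cells for the nearest frontier cell."""
    queue, visited = deque([start]), {start}
    while queue:
        cell = queue.popleft()
        if is_frontier(grid, cell):
            return cell
        for n in neighbors(cell, len(grid), len(grid[0])):
            if n not in visited and grid[n[0]][n[1]] == FREE:
                visited.add(n)
                queue.append(n)
    return None   # no frontier left: the map is complete
```

Recomputing the frontier from scratch in each step, as this sketch does, corresponds to the simple version of the algorithm; the more sophisticated version mentioned above would instead maintain the frontier incrementally around the robot's position.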
The example to which we applied the frontier algorithm is a relatively simple environment consisting of two rooms connected by a door (at the sixth column from the left in Fig. 9.9), but otherwise closed to the outside environment. However, the frontier algorithm works in more complex environments.
9.3.3 Priority in the Frontier Algorithm
Activity 9.2:

Implement the frontier algorithm. You will need to include an algorithm to move accurately from one cell to another and an algorithm to move around obstacles.

Run the program on the grid map in Fig. 9.9. Do you get the same path? If not, why not?

Run the program on the grid map in Fig. 9.10.

Modify Algorithm 9.1 to use the priority described in Sect. 9.3.3. Is the path different from the one generated by the original program?

Try to implement the frontier algorithm on your robot. What is the most difficult aspect of the implementation?
9.4 Mapping Using Knowledge of the Environment
Now that we know how to explore an environment, let us consider how to build a map during the exploration. In Chap. 8 we saw that a robot can localize itself with the help of external landmarks and their representation in a map. Without such external landmarks, the robot can only rely on odometry or inertial measurement, which are subject to errors that increase with time (Fig. 5.6). How is it possible to make a map when localization is subject to large errors?
Activity 9.3:

Write a program that causes a robotic lawnmower to move along a path using odometry alone (Fig. 9.12). Run the program several times. Does the robot return to the charging station? If not, how large are the errors? Are the errors consistent or do they change from one run to the next?

Place landmarks of black tape around the lawn (Fig. 9.14). Program your robot to recognize the landmarks and correct its localization. How large do the landmarks have to be so that the robot detects them reliably?
9.5 A Numerical Example for a SLAM Algorithm
The SLAM algorithm is quite complicated, so we first work through a numerical example and later give the formal presentation.
For the purpose of presenting the details of the SLAM algorithm, the map is highly simplified. First, the cells are much too large, roughly the same size as the robot itself. In practice, the cells would be much smaller than the robot. Second, we specify that each (explored) cell is either free (white) or it contains an obstacle (gray); real SLAM algorithms use a probabilistic representation (Sect. 9.2).
Figure 9.18a (which is the same as Fig. 9.16b) shows the intended perception of the robot after it has moved one cell upwards relative to the map in Fig. 9.15b. From this position it can detect the obstacle to its left and investigate the unknown cells in front of it.
There is a clear mismatch between the current map and the sensor data which should correspond to the known part of the map. Obviously, the robot is not where it is expected to be based on odometry. How can this mismatch be corrected? We assume that odometry does give a reasonable estimation of the pose (position and heading) of the robot. For each relatively small possible error in the pose, we compute what the perception of the current map would be and compare it with the actual perception computed from the sensor data. The pose that gives the best match is chosen as the actual pose of the robot and the current map updated accordingly.
The next step is to choose the map that gives the best fit with the sensor measurements. First transform the \(8\times 5\) maps into \(8\times 5\) matrices, assigning \(-1\) to empty cells, \(+1\) to obstacle cells and 0 to unknown cells. The left matrix in Fig. 9.21 is associated with the current map and the center matrix in the figure is associated with the perception map corresponding to the pose where the robot is in the correct cell but the heading is \(15^\circ \) CW (the middle element of the top row of Fig. 9.20).
Similarity \(\mathcal {S}\) of the sensor-based map with the current map:

                      Intended orientation   \(15^{\circ }\) CW   \(15^{\circ }\) CCW
  Intended position           22                    32                  20
  Up one cell                 23                    25                  16
  Down one cell               19                    28                  21
  Left one cell                6                     7                  18
  Right one cell              22                    18                  18

The best match is the value 32, obtained for the intended position with a heading of \(15^{\circ }\) CW.
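A similarity measure consistent with the \(-1/+1/0\) encoding is the sum of the element-wise products of the two matrices: cells on which the maps agree contribute \(+1\), cells on which they disagree contribute \(-1\), and unknown cells contribute 0. This is a plausible reading of how the table values are obtained, sketched here on small made-up matrices rather than the \(8\times 5\) matrices of Fig. 9.21.

```python
def similarity(current, perceived):
    """Sum of element-wise products of two equal-sized matrices whose
    entries are -1 (free), +1 (obstacle) or 0 (unknown). Agreement on
    a cell contributes +1, disagreement -1, unknown cells 0."""
    return sum(c * p
               for row_c, row_p in zip(current, perceived)
               for c, p in zip(row_c, row_p))

current   = [[-1, +1], [-1, -1], [0, +1]]
perceived = [[-1, +1], [+1, -1], [0,  0]]
# products: 1, 1 / -1, 1 / 0, 0  -> similarity 2
```

With this measure, the pose whose perception map maximizes the similarity with the current map is chosen, which is exactly the selection made in the table above.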
9.6 Activities for Demonstrating the SLAM Algorithm
The following two activities demonstrate aspects of the SLAM algorithm. Activity 9.4 follows the algorithm and is intended for implementation in software. Activity 9.5 demonstrates a key element of the algorithm that can be implemented on an educational robot.
The activities are based on the configuration shown in Fig. 9.23. The robot is located at the origin of the coordinate system with pose \(((x,y),\theta )=((0,0),0^\circ )\).^{1} Given the uncertainty of the odometry, the robot might actually be located at any of the coordinates \((1,0), (-1,0), (0,1), (0,-1)\) and its orientation might be any of \(-15^\circ , 0^\circ , 15^\circ \) (as shown by the dashed arrows), giving 15 possible poses. The three gray dots at coordinates \((2,2), (2,0), (2,-2)\) represent known obstacles on the current map. (To save space the dots are displayed at coordinates \((2,1), (2,0), (2,-1)\).) The obstacles can be sensed by multiple horizontal proximity sensors, but for the purposes of the activities we specify that there are three sensors.
In the SLAM algorithm the perception of an obstacle is the value returned by a distance sensor. To avoid having to define a model for the sensors, the activities will define a perception as the distance and heading from the sensor to the obstacle.
Activity 9.4:
 For each of the 15 poses compute the set of perceptions of each obstacle. For example, if the robot is at the pose \(((0.0,1.0),-15.0^\circ )\), the set of perceptions of the three obstacles is:$$ [( 2.2, 41.6^\circ ),\, ( 2.2, -11.6^\circ ),\, ( 3.6, -41.3^\circ )]\,. $$
 Given a set of measured perceptions, compute their similarities to the perceptions of the 15 poses of the robot. Choose the pose with the best similarity as the actual pose of the robot. For example, for the set of measured perceptions$$ [( 2.0, 32.0^\circ ),\, ( 2.6, -20.0^\circ ),\, ( 3.0, -30.0^\circ )]\, $$and the similarity computed as the sum of the absolute differences of the elements of the perceptions, the pose with the best similarity is \(((0.0, 1.0),-15.0^\circ )\).

Experiment with different similarity functions.

We have computed that the robot’s pose is approximately \(((0.0, 1.0),-15.0^\circ )\). Suppose that a new obstacle is placed at coordinate (3, 0). Compute the perception \((d,\theta )\) of the obstacle from this pose and then compute the coordinate (x, y) of the new obstacle. The new obstacle can be added to the map with this coordinate.

Is the computed coordinate significantly different from the coordinate (3, 0) that would be obtained if the robot were at its intended pose \(((0,0),0^\circ )\)?
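The perception computation and the pose selection of Activity 9.4 can be sketched as follows. The perceptions are computed with elementary trigonometry; for the similarity, the angle differences are converted to radians so that distance and angle errors carry comparable weight. This particular weighting is an assumption, since the activity leaves the choice of similarity function open.

```python
import math

OBSTACLES = [(2, 2), (2, 0), (2, -2)]   # known obstacles (Fig. 9.23)

def perceptions(pose):
    """(distance, heading in degrees relative to the robot) to each obstacle."""
    (x, y), theta = pose
    return [(round(math.hypot(ox - x, oy - y), 1),
             round(math.degrees(math.atan2(oy - y, ox - x)) - theta, 1))
            for ox, oy in OBSTACLES]

def dissimilarity(ps, qs):
    """Sum of absolute differences of distances and (radian) headings."""
    return sum(abs(d1 - d2) + math.radians(abs(a1 - a2))
               for (d1, a1), (d2, a2) in zip(ps, qs))

# the 15 candidate poses: 5 positions x 3 headings
poses = [((x, y), h)
         for x, y in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
         for h in [-15.0, 0.0, 15.0]]

measured = [(2.0, 32.0), (2.6, -20.0), (3.0, -30.0)]
best = min(poses, key=lambda p: dissimilarity(perceptions(p), measured))
# with this weighting, best comes out as ((0, 1), -15.0)
```

With an unweighted mix of meters and degrees the angle terms would dominate and a different pose could win; experimenting with such weightings is the point of the "different similarity functions" item above.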
Activity 9.5:

Place three objects as shown in Fig. 9.23.

Write a program that stores the set of values returned by the three horizontal proximity sensors. Place your robot successively at each of the 15 poses and record the sets of values. You now have a database of perceptions: a set of three sensor readings for each pose.

Place the robot at one of the poses and store the set of values returned by the sensors. Compute the similarity of this set to each of the sets in the database. Display the pose associated with the best similarity.

Experiment with placing the robot at various poses and several times at each pose. How precise is the determination of the pose?

Experiment with different similarity functions.
9.7 The Formalization of the SLAM Algorithm
The algorithm is divided into three phases. In the first phase (lines 2–4), the robot moves a short distance and its new position is computed by odometry. The perception map at this location is obtained by analyzing the sensor data.
Assuming that the odometry error is relatively small, we can define a set of test positions where the robot might be. In the second phase (lines 5–8), the expected map at each of these positions is computed and compared with the current map. The best match is saved.
In the third phase (lines 9–11), the position with the best match becomes the new position and the current map is updated accordingly.
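The three phases can be sketched as a single step of a loop. The function below is an illustrative skeleton, not the book's algorithm: the helpers for odometry, sensing, map prediction, comparison and update are passed in as parameters, and their names are assumptions.

```python
def slam_step(current_map, pose, odometry, sense,
              test_poses, expected_map, compare, update):
    """One iteration of the three-phase SLAM loop."""
    # Phase 1: move a short distance; estimate the new pose by
    # odometry and build the perception map from the sensor data.
    estimate = odometry(pose)
    perception = sense()

    # Phase 2: for each test pose near the estimate, compute the
    # expected map and compare it with the perception; keep the best.
    best_pose = max(test_poses(estimate),
                    key=lambda p: compare(perception,
                                          expected_map(current_map, p)))

    # Phase 3: the best-matching pose becomes the new pose and the
    # current map is updated accordingly.
    return update(current_map, perception, best_pose), best_pose
```

Each helper corresponds to one line group of the formal algorithm: odometry and sensing to the first phase, prediction and comparison to the second, and the map update to the third.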
In practice, the algorithm is somewhat more complicated because it has to take into account that the perception map obtained from the sensors is limited by the range of the sensors. The overlap will be partial both because the sensor range does not cover the entire current map and because the sensors can detect obstacles and free areas outside the current map. Therefore, the size of the perceived map \(\mathbf {p}\) will be much smaller than the expected map \(\mathbf {e}\) and the function compare(\(\mathbf {p}\), \(\mathbf {e}\)) will only compare the areas that overlap. Furthermore, when updating the current map, areas not previously in the map will be added. In Fig. 9.22 there are cells in the current map that are outside the five-cell radius of the sensor and will not be updated. The light red cells were unknown in the current map as indicated by the question marks, but in the perception map they are now known to be part of the obstacle. This information is used to update the current map to obtain a new current map.
9.8 Summary
Accurate robotic motion in an uncertain environment requires that the robot have a map of the environment. The map must be maintained in the robot’s computer; it can be either a grid map of cells or a continuous map containing the coordinates of the obstacles. In an uncertain environment, a map will typically not be available to the robot before it begins its tasks. The frontier algorithm is used by a robot to construct a map as it explores its surroundings. More accurate maps can be constructed if the robot has some knowledge of its environment, for example, that the environment is the inside of a building consisting of rectangular rooms and corridors. Simultaneous localization and mapping (SLAM) algorithms use an iterative process to construct a map while also correcting for errors in localization.
9.9 Further Reading
Two textbooks on path and motion planning are [4, 5]. See also [6, Chap. 6].
The frontier algorithm was proposed by Yamauchi [8], who showed that a robot could use the algorithm to successfully explore an office with obstacles.
Algorithms for SLAM use probability, in particular Bayes’ rule. Probabilistic methods in robotics are the subject of the textbook [7].
A two-part tutorial on SLAM by Durrant-Whyte and Bailey can be found in [1, 2]. A tutorial on graph-based SLAM is [3].
Sebastian Thrun’s online course Artificial Intelligence for Robotics is helpful: https://classroom.udacity.com/courses/cs373.
Footnotes
 1.
It is convenient to take the heading of the robot as \(0^\circ \).
References
 1. Bailey, T., Durrant-Whyte, H.: Simultaneous localization and mapping: Part II. IEEE Robot. Autom. Mag. 13(3), 108–117 (2006)
 2. Durrant-Whyte, H., Bailey, T.: Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 13(2), 99–110 (2006)
 3. Grisetti, G., Kümmerle, R., Stachniss, C., Burgard, W.: A tutorial on graph-based SLAM. IEEE Intell. Transp. Syst. Mag. 2(4), 31–43 (2010)
 4. Latombe, J.C.: Robot Motion Planning. Springer, Berlin (1991)
 5. LaValle, S.M.: Planning Algorithms. Cambridge University Press, Cambridge (2006)
 6. Siegwart, R., Nourbakhsh, I.R., Scaramuzza, D.: Introduction to Autonomous Mobile Robots, 2nd edn. MIT Press, Cambridge (2011)
 7. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics. MIT Press, Cambridge (2005)
 8. Yamauchi, B.: A frontier-based approach for autonomous exploration. In: IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 146–151 (1997)
Copyright information
This chapter is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the work's Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work's Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material.