Localization and navigation are basic skills for a mobile robot system. Any conceivable service task, such as cleaning floors, fetching and carrying objects, or assisting handicapped people, requires these skills.
Implementing such skills involves answering the questions "Where am I?", "Where am I going?", and "How do I get there?". These questions can only be answered with a map: to name a place or a path, a representation of the environment is needed. The first question is therefore: "What is my map?"
The most convincing approach, both from the perspective of autonomy and of applications, is to have the robot create its own map while moving. The Simultaneous Localization and Mapping (SLAM) approach addresses this problem.
The basic idea is as follows. A mobile robot moves through a building. It measures its own movement by odometry (yellow path in the image) and uses a camera to observe distinctive features, or landmarks, in the environment (yellow lines in the image). The landmarks are detected and located by image processing. Vertical lines, occurring at doors and windows, may be used as natural landmarks. In our experiments, however, we have used circular marks on the floor. Detected landmarks are entered into the map (blue crosses in the image).
The main source of error in the map is odometry. As the odometric error accumulates, the resulting maps are locally consistent but globally very uncertain. Re-observing identifiable landmarks, for instance when passing through the same doorway again, allows new measurements to be integrated with old ones and thus improves the quality of the existing map. The optimal integration based on maximum-likelihood or least-squares estimation requires excessive computation time, namely O((n+p)³) for n landmarks and p robot poses. Popular approaches such as the Extended Kalman Filter (EKF) are more efficient but still need O(n²) computation time per measurement and suffer from linearization errors.
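The least-squares formulation can be illustrated with a minimal 1-D toy example (all numbers hypothetical, not from our experiments): unknown poses and landmark positions are stacked into one vector, every odometry step and every landmark observation contributes one linear constraint, and the stacked system is solved in the least-squares sense. Solving it directly costs cubic time in the number of unknowns, which is the O((n+p)³) mentioned above.

```python
import numpy as np

# Unknowns x = [p1, p2, l]; pose p0 is fixed at 0 to anchor the map.
# Constraints (rows of A x = b), with slightly inconsistent measurements:
#   p1 - p0 = 1.1    (odometry step 1)
#   p2 - p1 = 0.9    (odometry step 2)
#   l  - p0 = 2.05   (landmark seen from p0)
#   l  - p2 = 0.1    (landmark seen again from p2, closing the loop)
A = np.array([
    [ 1.0,  0.0, 0.0],   # p1 - p0
    [-1.0,  1.0, 0.0],   # p2 - p1
    [ 0.0,  0.0, 1.0],   # l  - p0
    [ 0.0, -1.0, 1.0],   # l  - p2
])
b = np.array([1.1, 0.9, 2.05, 0.1])

# Least-squares solution; equivalent to solving the normal equations
# (A^T A) x = A^T b, which is O(m^3) in the number of unknowns m = n + p.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
p1, p2, l = x
```

Note how the loop-closing observation `l - p2` corrects both the landmark and the poses at once: the odometry and landmark errors are distributed over all unknowns instead of being assigned to the last measurement only.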
To overcome these computational problems, we have developed a very efficient SLAM algorithm that works by hierarchically dividing the map into local regions. At each level of the hierarchy, a region is represented by a matrix storing the accumulated measurements of the region's landmarks. On the level of finest subdivision, these matrices are naturally small, as each represents, say, a single room with few landmarks. At higher levels, regions are larger and may contain many landmarks. To keep the corresponding matrices small, only the landmarks observable from outside the region are represented. This allows the integration of a new measurement in O(k³ log n) time for n landmarks in total and at most k landmarks in a single region at the lowest level.
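The hierarchy can be pictured with a small structural sketch (region and landmark names are hypothetical; the actual algorithm additionally maintains the measurement matrices at each node, which we omit here). It shows which region matrices a new measurement touches: the chain from the leaf containing the landmark up to the root, i.e., O(log n) nodes, each representing at most k landmarks.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Region:
    landmarks: Set[str]                              # landmarks in this node's matrix
    children: List["Region"] = field(default_factory=list)
    contains: Set[str] = field(default_factory=set)  # all landmarks inside the region

# Two rooms connected by a door; only the door landmark is observable
# from outside either room, so the parent's matrix stays small.
room_a = Region(landmarks={"a1", "a2", "door_ab"},
                contains={"a1", "a2", "door_ab"})
room_b = Region(landmarks={"b1", "b2", "door_ab"},
                contains={"b1", "b2", "door_ab"})
floor  = Region(landmarks={"door_ab"},               # boundary landmarks only
                children=[room_a, room_b],
                contains=room_a.contains | room_b.contains)

def touched(region: Region, lm: str) -> List[Region]:
    """Regions whose matrix must be updated when landmark `lm` is observed:
    the chain from the leaf containing `lm` up to this region."""
    for child in region.children:
        if lm in child.contains:
            return touched(child, lm) + [region]
    return [region]

path  = touched(floor, "a1")                 # leaf-to-root chain of regions
sizes = [len(r.landmarks) for r in path]     # every matrix stays small (<= k)
```

Since each matrix on the chain has at most k landmarks, updating it costs O(k³), and with a chain of length O(log n) the total cost per measurement is O(k³ log n).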
The map of the Institute of Robotics and Mechatronics building (60 × 45 m, 29 rooms) contains 725 landmarks, built from 29,142 measurements taken at 3,297 robot positions. Thanks to the algorithm's efficiency, incorporating a single measurement typically took less than 1 ms of computation time, and at most 10 ms in the worst case.
12/03: [fre03b] U. Frese, T. Duckett: A Multigrid Approach for Accelerating Relaxation-based SLAM, 18. Fachgespräch Autonome Mobile Systeme (AMS03), Karlsruhe (read [fre03] instead)
08/03: [fre03] U. Frese, T. Duckett: A Multigrid Approach for Accelerating Relaxation-based SLAM, Proc. Workshop Reasoning with Uncertainty in Robotics, IJCAI 2003, Acapulco, Mexico, pages 39-46
06/01: [fre01] U. Frese, G. Hirzinger: Simultaneous Localization and Mapping - A Discussion, IJCAI 2001 Workshop Reasoning with Uncertainty in Robotics, Seattle
06/00: [fre00] U. Frese, M. Hörmann, B. Bäuml: Global konsistente visuelle Lokalisation ohne vorgegebene Karte, at - Automatisierungstechnik 6/00
11/99: [fre99] U. Frese, M. Hörmann, B. Bäuml: Global konsistente visuelle Lokalisation ohne vorgegebene Karte, 15. Fachgespräch Autonome Mobile Systeme (AMS99), München (read [fre00] instead)