Autonomous driving is an essential research field of the ROboMObil project. Instead of adapting a conventional vehicle, the ROMO hardware was built from scratch for autonomous driving. Moreover, in contrast to the active lidar and radar sensors commonly used for environment perception, ROMO's perception sensors are cameras. Cameras offer several advantages regarding power consumption, depth information, electromagnetic compatibility, and ease of integration. ROMO is equipped with 18 cameras that provide a 360° stereo view around the vehicle. To ensure dense coverage of the environment, the camera positions were determined by combining CAD models of the chassis, physical models of the cameras, and DLR's visualization library.
The computational effort of processing all camera images at an appropriate frame rate is immense, while the available resources are limited by energy consumption and installation space. ROMO's solution is a sensor-attention-management system that schedules the processing of the different camera segments so that a coherent model of the environment is maintained. Cameras covering areas with highly dynamic obstacles are activated more frequently than cameras covering areas without obstacles. The ROboMObil concept provides a seamless transition between manual, shared-autonomous, and autonomous driving. Furthermore, it makes no difference whether the operator sits in the vehicle or commands it from a telepresence base station. Shared autonomy, a concept originating in space robotics, refers to the refinement of rather abstract operator commands: the artificial intelligence derives motion demands by combining the operator's wish with information about the environment. Due to the monolithic structure, motion commands – no matter whether they come from a human operator or from the artificial intelligence – are always passed through the chassis control before any action is performed. The system can therefore always overrule the driver if his or her commands would lead ROMO into an unsafe state.
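The scheduling idea behind such an attention system can be illustrated with a toy model: each camera segment gets an urgency score that grows with the dynamics of the observed scene and with the time since it was last processed, and the scheduler always processes the most urgent segment. All names and the scoring rule below are hypothetical illustrations, not the actual ROMO implementation.

```python
class AttentionScheduler:
    """Toy sensor-attention manager: segments observing more dynamic
    scenes are scheduled more often, but static segments are still
    revisited so the environment model stays coherent."""

    def __init__(self, segments):
        # segments: dict name -> dynamics score (higher = more dynamic scene)
        self.dynamics = dict(segments)
        self.last_processed = {name: 0 for name in segments}
        self.time = 0

    def next_segment(self):
        # urgency grows with scene dynamics and time since last processing
        self.time += 1
        urgency = {
            name: score * (self.time - self.last_processed[name])
            for name, score in self.dynamics.items()
        }
        name = max(urgency, key=urgency.get)
        self.last_processed[name] = self.time
        return name


# hypothetical example: a busy front view and a quiet rear view
sched = AttentionScheduler({"front_stereo": 5.0, "rear_stereo": 1.0})
schedule = [sched.next_segment() for _ in range(12)]
```

In this run the front segment is processed far more often than the rear one, yet the rear segment is never starved, because its urgency keeps accumulating until it wins a scheduling slot.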
The ROboMObil's artificial intelligence architecture is an adaptation of two kinds of architectures developed for autonomous mobile robots. Brooks gave a general definition of the task-based and the hierarchical architecture as early as the 1980s. ROMO incorporates a hybrid scheme derived from these basic types: fast reactive algorithms for tasks such as obstacle avoidance run concurrently with a hierarchical approach that builds an environment model from sensor data and then performs the planning. ROMO is thus prepared for suddenly emerging obstacles without lacking knowledge about complex scenes in its environment, which requires accumulating information over several time-steps. Vision-based control sits between these two contrary concepts: on the one hand it has a very reactive character, deriving actions directly from information contained in camera images; on the other hand, initial knowledge about the control goal must be given by a model or a target image.
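The interplay of the two layers can be sketched as follows: a fast reactive layer inspects each sensor frame and may override the output of a slower deliberative layer, which accumulates an environment model over several time-steps before planning on it. The thresholds, names, and commands are purely illustrative assumptions.

```python
class HybridController:
    """Minimal sketch of a hybrid control scheme: a fast reactive
    layer can override a slower deliberative planner (all names
    and values hypothetical)."""

    def __init__(self):
        self.env_model = []  # built up over several time-steps

    def reactive_layer(self, sensor_frame):
        # fires immediately on close obstacles, bypassing the planner
        if sensor_frame.get("obstacle_distance_m", float("inf")) < 2.0:
            return "emergency_brake"
        return None

    def deliberative_layer(self, sensor_frame):
        # accumulate observations into a persistent environment model,
        # then plan on the aggregated knowledge
        self.env_model.append(sensor_frame)
        return "follow_planned_path"

    def step(self, sensor_frame):
        reaction = self.reactive_layer(sensor_frame)
        # a reactive command takes precedence over the deliberative plan
        return reaction or self.deliberative_layer(sensor_frame)


ctrl = HybridController()
a1 = ctrl.step({"obstacle_distance_m": 10.0})  # -> "follow_planned_path"
a2 = ctrl.step({"obstacle_distance_m": 1.2})   # -> "emergency_brake"
```

The key design point of the hybrid scheme is visible in `step`: the reactive path needs no environment model and therefore reacts within a single time-step, while the deliberative path keeps building its model whenever it is allowed to run.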
The 3D perception is centered on the Semi-Global Matching (SGM) algorithm, which is also used by Daimler's 6D-Vision system for driver assistance. The real-time implementation uses two FPGA boards originally developed for the fast processing of aerial images with SGM. Further image processing methods include optical flow, segmentation, and object identification and tracking. This enables ROMO not only to perceive its environment in 3D, but also to detect and identify other objects and to estimate their size, shape, and relative movement, in addition to estimating its own motion. This perception forms the basis for all kinds of autonomous and semi-autonomous planning techniques.
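The core of SGM can be illustrated on a single scanline: a pixel-wise matching cost is aggregated along a path with small penalties P1 for disparity changes of one pixel and a larger penalty P2 for bigger jumps, which smooths the disparity map while preserving depth edges. The sketch below is a deliberately reduced one-path, one-dimensional version with made-up intensity data; real SGM aggregates costs along many paths across the whole 2-D image.

```python
# Minimal 1-D sketch of Semi-Global Matching along a single scanline
# (illustrative only; real SGM aggregates over many 2-D paths).
P1, P2 = 1, 4  # penalties for small / large disparity changes

def sgm_scanline(left, right, max_disp):
    w = len(left)
    # matching cost: absolute intensity difference (255 where invalid)
    cost = [[abs(left[x] - right[x - d]) if x - d >= 0 else 255
             for d in range(max_disp)] for x in range(w)]
    # aggregate left-to-right with smoothness penalties (one SGM path)
    agg = [cost[0][:]]
    for x in range(1, w):
        prev, best_prev = agg[-1], min(agg[-1])
        row = []
        for d in range(max_disp):
            candidates = [prev[d]]                      # same disparity
            if d > 0:
                candidates.append(prev[d - 1] + P1)     # small change
            if d + 1 < max_disp:
                candidates.append(prev[d + 1] + P1)     # small change
            candidates.append(best_prev + P2)           # large jump
            row.append(cost[x][d] + min(candidates) - best_prev)
        agg.append(row)
    # winner-takes-all disparity per pixel
    return [min(range(max_disp), key=lambda d: a[d]) for a in agg]


# toy scanlines: the right view is the left one shifted by 2 pixels,
# i.e. a scene at a uniform disparity of 2
left = [10, 10, 80, 90, 20, 20, 20, 70, 60, 10]
right = left[2:] + [0, 0]
disp = sgm_scanline(left, right, max_disp=4)
```

For all pixels with a valid correspondence the recovered disparity is 2, matching the synthetic shift; the aggregation step is what keeps the result stable even where the pixel-wise cost alone would be ambiguous (e.g. in the constant-intensity region).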