In robotics today, computer vision in its broad sense can be regarded as the key technology for realizing systems with an enhanced level of autonomy. In this domain, we tackle problems from a wide methodological range, from image-based tracking to scene understanding and world modeling. This broad scope lets us address application demands as diverse as rapid visual servoing and flexible adaptive behavior for a robot system, or the generation of photo-realistic representations in a virtual-reality context.
Object Recognition and Scene Analysis
In the most general terms, the goal of machine vision is to determine, for an artificial agent, where things are in the environment and what they are. Answering these two questions constitutes a scene interpretation in the generic sense.
The ever-increasing need for 3D models in robotics and Virtual/Augmented Reality applications calls for a closer look at this widespread research field. The requirements range from fast, i.e. online, model generation and highly accurate models for collision detection and avoidance to photo-realistic models used in VR, i.e. in telepresence and teleoperation scenarios. To obtain the appropriate model for the desired application, a wide range of sensors and several data-processing steps are necessary.
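As a minimal sketch of one such data-processing step, the following Python snippet back-projects a depth image into a 3D point cloud using the pinhole camera model; the intrinsic parameters and the synthetic depth image are assumed values for illustration, not those of any particular sensor used here.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an N x 3 point cloud
    using the pinhole camera model. Invalid pixels (depth <= 0) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy          # Y = (v - cy) * Z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Example with assumed intrinsics and a synthetic depth image (a flat wall 1.5 m away).
depth = np.full((480, 640), 1.5)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                 # (307200, 3)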
Tracking and Servoing
When a rigid object moves in 3D space relative to a camera, it is often interesting to know how its relative pose changes in its full 6 degrees of freedom (DoF). The problem of 6-DoF tracking arises in the context of numerous applications within and beyond robotics.
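A compact way to think about the full 6 degrees of freedom is to represent a pose as a 4x4 homogeneous transform. The sketch below composes two such poses and recovers the relative motion between them; the numeric values are chosen purely for illustration and do not reflect any particular tracking system.

import numpy as np

def pose(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_cam_obj_prev, T_cam_obj_curr):
    """Pose change of the object between two frames, expressed in the previous object frame."""
    return np.linalg.inv(T_cam_obj_prev) @ T_cam_obj_curr

# Example: the object rotates 10 degrees about its z-axis and shifts 5 cm along x.
a = np.deg2rad(10.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
T_prev = pose(np.eye(3), np.array([0.0, 0.0, 1.0]))      # object 1 m in front of the camera
T_curr = T_prev @ pose(Rz, np.array([0.05, 0.0, 0.0]))
print(relative_pose(T_prev, T_curr))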
Basic skills for a mobile robot system are localization and navigation. Any possible service task, such as floor cleaning, fetching and carrying objects, or assistance of the handicapped, requires these skills. Implementing such skills involves answering the questions "Where am I?", "Where am I going?", and "How do I get there?".
The stereo ego-motion method works on successive images from the left camera of a synchronized and rectified stereo camera system. It is based on identifying image features that can reliably be re-found in subsequent images. The well-known Harris corner detector is used for this purpose.
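As a rough sketch of this feature step (not the actual implementation used here), one could extract Harris corners with OpenCV and re-find them in the next left image via pyramidal Lucas-Kanade tracking; the synthetic images and parameter values below are assumptions for illustration only.

import cv2
import numpy as np

# Synthetic stand-ins for two successive rectified left-camera images:
# a few bright rectangles, shifted slightly to mimic small camera motion.
prev_img = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(prev_img, (60, 60), (120, 120), 255, -1)
cv2.rectangle(prev_img, (180, 140), (240, 200), 255, -1)
next_img = np.roll(prev_img, shift=(2, 3), axis=(0, 1))

# Harris-based corner extraction in the previous image.
corners = cv2.goodFeaturesToTrack(prev_img, maxCorners=100, qualityLevel=0.01,
                                  minDistance=7, useHarrisDetector=True, k=0.04)

# Re-find the corners in the next image with pyramidal Lucas-Kanade tracking.
tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, corners, None)

# Keep only the correspondences that were successfully tracked.
good_prev = corners[status.ravel() == 1]
good_next = tracked[status.ravel() == 1]
print(len(good_prev), "corner correspondences between the two frames")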
The research focuses on sensor-based approaches to robotic exploration of partly unknown environments. With the aim of facilitating automated work processes in flexible work cells, the robot performs an efficient, reliable, and task-dependent exploration.
When a robot has to react immediately to real-world events detected by a vision sensor, high-speed vision is required. This may be a visual servoing task, i.e., the vision sensor is part of the robot's control loop, or a reaction to a sudden event, such as catching a thrown ball.
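To make the control-loop idea concrete, here is a minimal sketch of the classical image-based visual servoing law for point features, v = -lambda * pinv(L) * e, in its textbook formulation rather than the specific controllers developed here; the feature coordinates, depths, and gain are assumed values.

import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point (x, y) at depth Z."""
    return np.array([[-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
                     [0.0,      -1.0 / Z, y / Z, 1.0 + y * y, -x * y,         -x]])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity command v = -gain * pinv(L) * e for a set of point features."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e

# Example with four assumed point features, all at 1 m depth.
current = [(0.12, 0.10), (-0.11, 0.09), (-0.10, -0.12), (0.11, -0.10)]
desired = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v = ibvs_velocity(current, desired, depths=[1.0] * 4)
print(v)   # 6-vector: translational and rotational camera velocity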
Over the years we have developed a number of general-purpose tools that assist us in various vision projects...
Methods developed in our research are applied in integrated systems that function as demonstrators, technological experiments, or prototypes.
Automatic, large-scale terrain modeling from aerial images