3D Perception

Advanced robotic systems require visual perception capabilities beyond plain 2D images or proximity sensors. Instead, they rely on 3D perception to achieve the holistic understanding of their surroundings that is ultimately needed for high-level, task-related reasoning.
3D perception ranges from the acquisition of data with commercially available or self-developed sensors, through the creation of 3D models in different representations and formats, to the use of these models and data for object recognition. Robotic applications of 3D perception include exploration, navigation, object manipulation, and telepresence.

Sensing Methods

Visual sensing is a classical approach to non-contact sensing in robotics. At RMC we deploy off-the-shelf visual sensors (digital cameras, stereo heads, Kinect, etc.) by default. Due to the rising demands of complex robotic applications, however, we often reach their limits. We then resort to high-end sensors or develop custom-built sensor systems, together with the required sensor-oriented computational methods.

Stereo Vision


Semi-Global Matching (SGM) is a dense stereo matching method for accurate 3D reconstruction from image pairs. SGM is used in photogrammetry, in driver assistance systems, for environment modeling from satellite, aerial and multicopter images, and for autonomous analysis of the workspace by robots.
Read more
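As a rough illustration of how SGM-style matching is used in practice, the following Python sketch computes a dense disparity map from a rectified stereo pair with OpenCV's SGM variant (StereoSGBM). It is not the DLR implementation; the file names, matcher parameters, and calibration values for the depth conversion are placeholders.

# Minimal sketch: dense disparity from a rectified stereo pair using OpenCV's
# SGM variant (StereoSGBM). File names and parameter values are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

block = 5
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be divisible by 16
    blockSize=block,
    P1=8 * block * block,        # smoothness penalty for small disparity changes
    P2=32 * block * block,       # smoothness penalty for large disparity changes
    uniquenessRatio=10,
    mode=cv2.STEREO_SGBM_MODE_SGBM,
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# With known focal length f (pixels) and baseline b (meters), depth = f * b / d.
f, b = 700.0, 0.12               # assumed calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * b / disparity[valid]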
[Image: Plenoptic principle using pinholes]

Light Field Cameras


A light field camera records images with more dimensions than a conventional camera: instead of the usual 2D image, it stores a 4D light field dataset. As a consequence of the extra dimensions, multiple data products can be extracted from a single recording. The two most common are regular 2D images that can be refocused to particular distances even after the recording, and 3D depth images. We investigate the use of light field cameras for robotic systems.
Read more
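The refocusing idea can be illustrated with a short shift-and-sum sketch over a 4D light field array: each sub-aperture image is shifted in proportion to its angular offset and the results are averaged. The array layout and the shift parameter below are assumptions chosen for illustration, not a specific camera format.

# Minimal sketch: synthetic refocusing of a 4D light field by shift-and-sum.
# The layout lf[u, v, s, t] (angular u, v; spatial s, t) is an assumption.
import numpy as np

def refocus(lf: np.ndarray, slope: float) -> np.ndarray:
    """Average all sub-aperture images after shifting each one proportionally
    to its angular offset; 'slope' selects the synthetic focal plane."""
    U, V, S, T = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((S, T), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            ds = int(round(slope * (u - uc)))   # integer shifts keep the sketch simple
            dt = int(round(slope * (v - vc)))
            out += np.roll(lf[u, v], shift=(ds, dt), axis=(0, 1))
    return out / (U * V)

# Example: a random 9x9 angular grid of 64x64 sub-aperture images.
lf = np.random.rand(9, 9, 64, 64)
image_near = refocus(lf, slope=1.0)    # focus on a nearer plane
image_far = refocus(lf, slope=-1.0)    # focus on a farther plane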
[Image: DLR CalDe sample screenshot]

DLR CalDe and DLR CalLab - Camera Calibration Software


"DLR CalDe and DLR CalLab" is a camera calibration toolbox that implements the well-known method of Zhang, Sturm and Maybank. The toolbox consists of two independent software components: While DLR CalDe detects corner features on the calibration pattern, DLR CalLab addresses the optimal estimation of the camera parameters.
Read more
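To illustrate the two-step split between corner detection and parameter estimation that the toolbox follows, the sketch below runs a generic Zhang-style calibration with OpenCV. It is not DLR CalDe/CalLab itself; the image file names and checkerboard geometry are placeholders.

# Minimal sketch of Zhang-style camera calibration with OpenCV (not CalDe/CalLab).
import glob
import cv2
import numpy as np

pattern = (9, 6)                 # inner corners per row/column (assumed)
square = 0.025                   # square size in meters (assumed)

# 3D coordinates of the pattern corners in the pattern's own frame.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)   # corner detection step
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Parameter estimation step: intrinsics K and distortion from all detections.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS [px]:", rms)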
3D Modeling

3D models are required in a variety of applications, ranging from small objects for pose estimation and grasping to large buildings or environments for navigation and localization. Depending on the application, different representations of 3D models, such as surface, volumetric, or feature point models, are required. Data acquisition and modeling should be carried out as a real-time stream for immediate use. Furthermore, the 3D model needs to be segmented to identify objects of interest, e.g. walls, tables, and objects in an indoor environment.
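As a simple illustration of one volumetric representation, the sketch below fills a binary occupancy voxel grid from a point cloud; the voxel size and world bounds are illustrative assumptions.

# Minimal sketch: binary occupancy voxel grid from a point cloud.
import numpy as np

def voxelize(points: np.ndarray, origin: np.ndarray, voxel: float, dims: tuple) -> np.ndarray:
    """Mark every voxel that contains at least one point as occupied."""
    grid = np.zeros(dims, dtype=bool)
    idx = np.floor((points - origin) / voxel).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    ix, iy, iz = idx[inside].T
    grid[ix, iy, iz] = True
    return grid

# Example: 10,000 random points in a 2 m cube, 5 cm voxels.
points = np.random.rand(10000, 3) * 2.0
grid = voxelize(points, origin=np.zeros(3), voxel=0.05, dims=(40, 40, 40))
print("occupied voxels:", int(grid.sum()))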

Autonomous Object Modeling


At RMC, robotic systems are enabled to acquire 3D models of unknown objects or scenes fully autonomously. This is achieved by iteratively planning Next-Best-Views and collision-free motions until a 3D model that is as complete as possible has been generated. This means that the robot has to decide where to scan next, with the goal of reaching high model quality in as few views or scans as possible.
Read more
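The core of such a decision can be illustrated with a greedy Next-Best-View sketch that picks the candidate view expected to observe the most still-unknown voxels. The fixed per-view visibility sets below are a stand-in for ray casting into the current occupancy model, and the candidate names are made up for illustration.

# Minimal sketch: greedy Next-Best-View selection by expected information gain.
def next_best_view(candidates, unknown: set):
    """candidates: list of (pose, visible_voxel_ids); unknown: ids not yet observed."""
    best_pose, best_gain = None, -1
    for pose, visible in candidates:
        gain = len(unknown & visible)          # expected newly observed voxels
        if gain > best_gain:
            best_pose, best_gain = pose, gain
    return best_pose, best_gain

# Toy example: three candidate views over a model with 10 unknown voxels.
unknown = set(range(10))
candidates = [
    ("view_A", {0, 1, 2}),
    ("view_B", {2, 3, 4, 5, 6}),
    ("view_C", {8, 9}),
]
pose, gain = next_best_view(candidates, unknown)
print(pose, "gains", gain, "voxels")           # -> view_B gains 5 voxels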

Urban Scenes


An object can be reconstructed from images if it is viewed from different viewpoints and the images have high resolution and overlap each other substantially. The 3D reconstruction is based on precise image orientation and the efficient calculation of depth images. Landscape models and functional building models can later be integrated into simulation environments.
Read more
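One building block of such a pipeline, lifting a per-image depth map into a common reconstruction frame given the camera intrinsics and orientation, can be sketched as follows. The intrinsics and pose used here are made-up values for illustration.

# Minimal sketch: back-projecting a depth image into a world-frame point cloud.
import numpy as np

def depth_to_points(depth: np.ndarray, K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Back-project every valid depth pixel and transform it into world coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    pix = np.stack([u.ravel()[valid], v.ravel()[valid], np.ones(valid.sum())])
    rays = np.linalg.inv(K) @ pix                 # normalized camera rays
    cam_pts = rays * z[valid]                     # points in the camera frame
    return (R @ cam_pts + t[:, None]).T           # points in the world frame

K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1.0]])  # assumed intrinsics
R, t = np.eye(3), np.zeros(3)                     # assumed camera orientation
depth = np.full((480, 640), 5.0)                  # a flat scene 5 m away
points = depth_to_points(depth, K, R, t)
print(points.shape)                               # (307200, 3)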
Object Recognition

Whenever concrete models and specific knowledge are not available for objects or events in the robot's work environment, a robot system has to rely on more generalized modes of inference to arrive at the semantic content of the situation. This is a common scenario, e.g., for robots in human living or working environments and for systems that need to interact closely with humans. Adequate models and knowledge may then describe broad categories of objects or events, acquired through training on large sets of examples. Knowledge may also be derived from similarities and correspondences discovered between novel and known cases.

Classification of Novel Objects


Object recognition has been studied extensively; however, existing object classifiers usually generalize poorly and therefore adapt only to a limited degree to different application domains. Although some domain adaptation approaches have been presented for RGB data, little work has been done on understanding the effects of applying object classification algorithms to RGB-D data across different domains.
Read more

Shape Warping and Analysis


When a robot encounters unknown objects in its environment for which no specific sensory, geometric, or semantic models are available, the perceptual system can derive knowledge from relations established with known objects of a similar kind. These relations are established by warping prototypical shapes onto the newly encountered shapes.
Read more
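A much simplified sketch of the warping idea is shown below: each point of a prototype shape is moved toward its nearest point on the observed shape while the displacement field is smoothed to keep the warp coherent. This is an illustrative stand-in under simplifying assumptions, not the institute's algorithm.

# Minimal sketch: warp a prototype shape onto an observed shape by iterating
# nearest-point matches with a smoothed displacement field.
import numpy as np

def warp(prototype: np.ndarray, target: np.ndarray, iters: int = 20,
         step: float = 0.5, sigma: float = 0.2) -> np.ndarray:
    """Move each prototype point toward its closest target point, averaging the
    displacement over nearby prototype points to keep the warp smooth."""
    shape = prototype.copy()
    for _ in range(iters):
        # closest target point for every prototype point
        d = np.linalg.norm(shape[:, None, :] - target[None, :, :], axis=2)
        disp = target[d.argmin(axis=1)] - shape
        # Gaussian-weighted smoothing of the displacement field
        w = np.exp(-np.linalg.norm(shape[:, None, :] - shape[None, :, :], axis=2) ** 2
                   / (2 * sigma ** 2))
        disp = (w @ disp) / w.sum(axis=1, keepdims=True)
        shape += step * disp
    return shape

# Toy example: warp a unit circle (prototype) onto an ellipse (observed shape).
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)
ellipse = np.stack([1.5 * np.cos(angles), 0.8 * np.sin(angles)], axis=1)
warped = warp(circle, ellipse)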
Related articles
DLR CalDe and DLR CalLab - Camera Calibration Software
DLR VR-SCAN (2011)
The DLR Multisensory 3D-Modeller (2006–2017)
THR Dataset