The complexity and dexterity of the robot David require the investigation of perception and planning methods that fully exploit the capabilities of the system.
David uses computer vision, tactile sensing, and proprioception to perceive both its environment and itself. During operation, it fuses this information to continuously estimate its own state and the poses of relevant objects. Based on these estimates, David can perform complex manipulation tasks while dynamically reacting to changes. The entire system is designed to track untextured objects, remain robust to self-occlusions, and meet real-time requirements.
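The fusion of estimates from several sensing modalities can be illustrated with a minimal sketch. The function names and the scalar-variance sensor model below are assumptions for illustration; the real system fuses full 6-DoF poses with far richer uncertainty models.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Fuse independent position estimates of the same quantity by
    inverse-variance weighting (the minimum-variance linear fusion).

    estimates: list of 3-vectors (e.g. from vision and tactile sensing)
    variances: list of scalar variances expressing each sensor's uncertainty
    """
    weights = np.array([1.0 / v for v in variances])
    weights /= weights.sum()
    fused = sum(w * np.asarray(e) for w, e in zip(weights, estimates))
    # Variance of the fused estimate (harmonic combination of variances)
    fused_var = 1.0 / sum(1.0 / v for v in variances)
    return fused, fused_var

# Example: vision is accurate, tactile sensing is noisier
vision = np.array([0.50, 0.10, 0.30])
tactile = np.array([0.52, 0.12, 0.29])
pose, var = fuse_estimates([vision, tactile], [0.001, 0.01])
```

The fused estimate is always at least as certain as the best individual sensor, which is what makes combining even a noisy modality worthwhile.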
Our ability to dexterously manipulate objects with our hands is an essential element of our daily lives. In order to seamlessly integrate robotic systems into our everyday lives, they will require similar capabilities. Enabling robots to manipulate objects with precision requires a set of algorithmic skills comparable to the corresponding human abilities. To achieve this goal, we are developing a dexterous manipulation framework that enables the accurate control of grasped objects with robotic hands. Fundamentally, accomplishing a demanding manipulation task requires knowledge of the state of the grasp. Our grasp state estimation method integrates information from tactile sensing, proprioception, and vision into a common formulation. Using the estimated grasp state, the model-based controller realizes the compliant positioning of the object inside the hand.
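The core idea of object-level impedance control can be sketched in a few lines: a virtual spring-damper pulls the grasped object toward its desired pose, and the resulting object wrench is distributed to the fingertips. This is a simplified, translation-only illustration under assumed names (`object_impedance_wrench`, `fingertip_forces`), not the controller from the publications below, which also handles orientation and internal grasp forces.

```python
import numpy as np

def object_impedance_wrench(x_des, x, xdot, K, D):
    """Virtual spring-damper attached to the grasped object: the wrench
    pulls the object toward its desired pose (translation-only sketch)."""
    return K @ (x_des - x) - D @ xdot

def fingertip_forces(grasp_matrix, wrench):
    """Distribute the object wrench to fingertip contact forces via the
    pseudoinverse of the grasp matrix (no internal-force optimization)."""
    return np.linalg.pinv(grasp_matrix) @ wrench

# Two point contacts, translations only: G = [I I]
G = np.hstack([np.eye(3), np.eye(3)])
w = object_impedance_wrench(np.array([0.1, 0.0, 0.0]), np.zeros(3),
                            np.zeros(3), 100 * np.eye(3), 5 * np.eye(3))
f = fingertip_forces(G, w)  # load shared equally by both fingertips
```

The pseudoinverse yields the minimum-norm force distribution; a practical controller would additionally regulate internal forces so the fingers keep squeezing the object.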
The complexity of the elastic neck makes it a good candidate for machine learning methods, since modeling its behavior analytically can be hard. We explore methods such as ensembling for predicting the pose and handling sensor failures, together with reinforcement learning for learning to control the neck directly from data.
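The ensembling idea can be illustrated with a toy sketch: several models trained on bootstrapped data each map tendon measurements to a pose, the ensemble mean serves as the estimate, and the spread of the predictions flags inputs the models disagree on. The surrogate function, the linear models, and all names here are illustrative assumptions; the actual work uses neural-network ensembles on real tendon data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate for the neck: a 1-D "pose" as an unknown function of
# three tendon length measurements.
def true_pose(tendons):
    return np.sin(tendons @ np.array([0.5, -0.3, 0.2]))

X = rng.uniform(-1, 1, size=(200, 3))
y = true_pose(X)

# Train an ensemble of linear models on bootstrap resamples of the data
models = []
for _ in range(5):
    idx = rng.integers(0, len(X), len(X))
    coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    models.append(coef)

def predict(tendons):
    """Ensemble prediction: the mean is the pose estimate, the standard
    deviation flags inputs the models disagree on (e.g. sensor faults)."""
    preds = np.array([tendons @ c for c in models])
    return preds.mean(), preds.std()

m_near, s_near = predict(np.array([0.1, 0.2, 0.3]))   # in-distribution
m_far, s_far = predict(np.array([10.0, 20.0, 30.0]))  # out-of-distribution
```

A large ensemble disagreement can then trigger fault handling, for instance discarding a suspect tendon measurement.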
Manipulation and Planning
For higher performance in applications involving manipulation tasks, it is important to know the extent of the workspace reachable by David, as well as the dexterity David can achieve in different regions of that workspace. An analysis of this kind allows us to choose the region of the workspace in which the desired task is best performed, given the capabilities of the system.
The capability maps provide a graphic representation of every reachable point in the discretized workspace and the expected level of dexterity at each of those points. The different levels of dexterity are color-coded according to a capability index assigned to each reachable point. The capability index is obtained by sampling the possible end-effector orientations at each point in the reachable workspace; the resulting percentage of reachable orientations defines the capability index and the final color of the discretized point (according to a colormap). The green regions observed in the image therefore represent points for which many end-effector orientations are feasible, and thus a region with a better capability index than the regions at the edges of the map.
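The sampling procedure behind the capability index can be sketched as follows. The reachability oracle `toy_reachable` is a made-up stand-in for an inverse-kinematics query of the real robot, and all names are assumptions for illustration.

```python
import numpy as np

def capability_index(point, reachable, n_samples=200, rng=None):
    """Fraction of sampled end-effector orientations that are reachable
    at a given workspace point, i.e. the capability index in [0, 1].

    `reachable(point, orientation) -> bool` stands in for an inverse-
    kinematics query of the real robot.
    """
    rng = rng or np.random.default_rng(0)
    hits = 0
    for _ in range(n_samples):
        # Sample a random direction on the unit sphere as the tool axis
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        if reachable(point, v):
            hits += 1
    return hits / n_samples

# Toy reachability model: deep inside the workspace all orientations
# work; near the boundary only orientations pointing back inward do.
def toy_reachable(point, v):
    r = np.linalg.norm(point)
    if r < 0.5:
        return True
    return float(np.dot(v, -point / r)) > 0.0

inner = capability_index(np.array([0.0, 0.0, 0.3]), toy_reachable)  # high
outer = capability_index(np.array([0.0, 0.0, 0.9]), toy_reachable)  # lower
```

Mapping each index through a colormap then produces exactly the green-to-red shading described above: interior points score near 1, boundary points score lower.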
M. Stoiber, M. Pfanne, K. H. Strobl, R. Triebel, A. Albu-Schäffer, "A Sparse Gaussian Approach to Region-Based 6DoF Object Tracking", Proceedings of the Asian Conference on Computer Vision (ACCV), Kyoto, Japan, November 2020.
M. Pfanne, M. Chalon, F. Stulp, H. Ritter, A. Albu-Schäffer, "Object-Level Impedance Control for Dexterous In-Hand Manipulation", IEEE Robotics and Automation Letters, 5(2):2987–2994, 2020.
A. Raffin, B. Deutschmann, F. Stulp, "Fault-Tolerant Six-DoF Pose Estimation for Tendon-Driven Continuum Mechanisms", Frontiers in Robotics and AI, 8, 2021.