DLR Hand II is a powerful dexterous robot hand. Its force performance is comparable to that of a human hand, and its speed is not far from human abilities. Nevertheless, DLR Hand II cannot yet replace a human hand in a remote environment. One drawback is the missing sense of touch: human haptic perception is so extraordinarily good that no tactile sensor system known so far comes even close to it.
Another point is the still 'hard' surface of finger and hand, which has to be softened. At about one and a half times the size of a typical human hand, the hand is also a major obstacle in dealing with environments especially designed for humans. To solve this problem we will probably have to relocate the actuators outside the hand if we want to keep the hand as powerful as it is now.
With respect to future applications we have to find a less complex design, one that can withstand conditions such as heavy dust or contact with liquids without harm.
From our experiments we found that the hand can already do impressive and useful things, but we also learned about its limitations and possible improvements: fine manipulation and grasping of small objects is still a difficult task. We think this is not due to sensor or control quality, but to the finger tip design. Although we use a silicone coating for the finger tips, they need to be much softer to increase grasp robustness and stability. Our finger tip design already resembles finger nails, but this feature should be much more distinct.
We are content with the results of our grasp planners, but integrating a grasp planner into real-world systems poses its own challenges. Precision grasps are fine in theory; in practice, pinch and power grasps are more important. We are currently integrating pinch grasp planning into our grasp planner and developing a power grasp planner. Another major issue is creating geometric models of the objects to be grasped online and providing them to the grasp planner. We are experimenting with laser-scanner-based and stereo-vision-based generation of partial object models. To increase the robustness and stability of grasps we are integrating a grasp force optimization module. One precondition for a grasp force optimizer is knowledge of the actual contact models and grasp forces. We try to derive the contact model information from position and force/torque sensors.
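To make the role of the contact model concrete, the following is a minimal sketch (not the actual module) of the basic feasibility test underlying grasp force optimization, assuming point contacts with Coulomb friction. Given a measured contact force, a unit contact normal, and an assumed friction coefficient mu, it checks whether the force lies inside the friction cone, and computes the smallest normal (squeezing) force that keeps a given tangential load inside the cone. The function names and the safety margin are illustrative, not from the original system.

```python
import math

def friction_cone_ok(force, normal, mu):
    """Check the Coulomb friction cone constraint for one point contact.

    force  -- measured contact force vector (e.g. from a force/torque sensor)
    normal -- unit contact normal (assumed known from the contact model)
    mu     -- assumed Coulomb friction coefficient
    """
    # Decompose the force into normal and tangential components.
    fn = sum(f * n for f, n in zip(force, normal))        # normal component
    ft_vec = [f - fn * n for f, n in zip(force, normal)]  # tangential part
    ft = math.sqrt(sum(c * c for c in ft_vec))
    # The contact must press (fn > 0) and must not slip (ft <= mu * fn).
    return fn > 0.0 and ft <= mu * fn

def min_normal_force(ft, mu, margin=1.2):
    """Smallest normal force holding a tangential load ft without slip,
    scaled by a safety margin (an illustrative choice)."""
    return margin * ft / mu
```

A grasp force optimizer would apply such cone constraints at every contact simultaneously while balancing the object's external wrench; this sketch shows only the per-contact constraint that the sensor-derived contact model has to supply.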
Generally, making artificial hands work in real-world situations provides us with plenty of interesting work for the future.