Throughout the last three decades, robots have been developed with increasing capabilities to perform tedious work in industrial environments more precisely than a human worker and without fatigue. Going beyond this, in the last few years researchers have striven to develop robots that can also be applied outside the well-structured, predictable surroundings of a production line, for example as personal assistants at home. In contrast to manufacturing processes, where machine usage can be planned, here the workspace is designed for human use and convenience. It is therefore not desirable to adjust the working environment to the particular technical needs of a robotic assistant. Likewise, specializing such a machine for one particular application would require adapting it for each new job it is to perform and would reduce its acceptance.
One obvious approach to designing a robot assistant for work in a human environment is to mimic human skills and hence, preferably, human anatomy as well.
The DLR Hand and Arm
At the Institute of Robotics and Mechatronics of the DLR, three generations of light-weight arms and two generations of dexterous robotic hands have been designed, both in the effort to develop anthropomorphic limbs for use as a human assistant.
From the intended applications, two major modes of operation can be distinguished:
1) Tele-operation
An operator controls the motions of the robot, here the arm and fingers, and receives the perceptions of the device, e.g. visual impressions through a 3-D projection and acoustic feedback, but also the forces and torques acting on the finger surfaces and on the joints of the fingers and the robot arm. This mode of operation can be used when a remote presence is desirable but dangerous or impossible, for example in human-unfriendly scenarios such as disaster handling or de-mining, but also in places humans cannot easily reach, as in space or sub-aquatic applications. Furthermore, an expert can act remotely in a scenario without being physically present, as in tele-medicine.
2) Autonomous Operation
The robot is trained on a certain library of duties and can re-perform these jobs upon command of a user, as is desirable for robotic assistants. The extent of autonomy may vary from merely repeating exactly trained sequences, through a limited analysis of a scenario and adaptation to it, for example when sweeping a room or serving drinks, to fully autonomous task planning.
Applications of the DLR Arm/Hand
The experiments performed with the DLR arm/hand system can be divided, analogously to the modes of operation, into tele-operated experiments and autonomous experiments:
1) Tele-operated Experiments
a) Dataglove Control
The motions of the human hand can be captured using a device such as a dataglove, which uses sensors to detect the flexion of the human fingers. Comparing the anatomy of DLR Hand II and a human hand reveals differences that initially pose problems in directly mapping the motions of the human hand to motions of the robotic hand. Consequently, two methods for controlling the motions of the robotic fingers were studied: Cartesian mapping and joint-angle mapping. The first method matches the Cartesian positions of the human fingertips to those of the robotic hand, disregarding the posture of the respective hand. This method is suitable when manipulating two copies of an object with the fingertips, one with the human hand and one with the robot, simultaneously. A neural net was used to perform this mapping. The second, joint-angle method was used to also perform grasps with full hand contact. In these grasps, the posture of the hand is more important than the actual fingertip positions. In this case, a linear mapping between the joint angles of the human operator and the robot was determined that best reproduced a number of sample grasps.
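The joint-angle mapping can be sketched as a per-joint linear fit. The calibration data, function names, and the simple least-squares procedure below are illustrative assumptions, not the actual DLR calibration:

```python
# Sketch of a per-joint linear mapping (scale + offset) from human glove
# joint angles to robot finger joint angles, fitted by least squares over
# recorded sample grasps.  Illustrative only; the DLR calibration
# procedure is not reproduced here.

def fit_linear_map(human_angles, robot_angles):
    """Fit robot = a * human + b for one joint via least squares."""
    n = len(human_angles)
    mean_h = sum(human_angles) / n
    mean_r = sum(robot_angles) / n
    cov = sum((h - mean_h) * (r - mean_r)
              for h, r in zip(human_angles, robot_angles))
    var = sum((h - mean_h) ** 2 for h in human_angles)
    a = cov / var
    b = mean_r - a * mean_h
    return a, b

def map_joint(angle, a, b):
    """Map one glove reading to a robot joint command."""
    return a * angle + b

# Hypothetical calibration data for one finger joint, taken from a few
# sample grasps: glove readings vs. desired robot joint angles (radians).
human = [0.0, 0.3, 0.6, 0.9]
robot = [0.1, 0.4, 0.7, 1.0]
a, b = fit_linear_map(human, robot)
print(round(map_joint(0.45, a, b), 3))
```

In practice one such fit would be computed per joint pair, with the sample grasps chosen to cover the workspace of the intended manipulations.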
b) Tele-manipulation Using Stereo Vision, Dataglove and Force-Feedback
In several experiments, the capabilities of the DLR arm/hand system for remote operation were tested. The operator was placed at a remote location, within the laboratory or as far apart as Hanover-Munich or Berlin-Munich, while the operation scenario remained at its original location in the laboratory in Oberpfaffenhofen near Munich. Live images of the scene were transmitted through a bundle of ISDN links or a satellite link. The control commands for the hand were obtained from a VTI dataglove, and force information was returned to a VTI CyberGrasp exoskeleton that can display one-dimensional grasping forces to each finger. The control commands and the haptic feedback for the hand were in all cases transmitted through a single ISDN link. The pose of the arm was controlled with a 6-D input device similar to the DLR SpaceMouse. For ease of setup, the arm control was transmitted through a second ISDN link. In these experiments, different tasks were performed: a glass of wine was poured from a bottle, a drawer was opened and objects were removed from it, and a microphone and flowers were handed over to a person present at the operation setup...
2) Autonomous Operation Experiments
a) Catching a Soft-Ball
An experiment performing a completely autonomous action was the catching of a thrown ball: volunteers at the Hanover Messe trade show threw soft balls towards the DLR hand/arm system. Using two standard black-and-white cameras, the trajectory of the flying ball was predicted from a number of images taken at the standard video rate of 25 frames per second, and the estimate was continuously corrected with newly incoming pictures. Thus arm and hand could be commanded appropriately to catch the ball in flight. This experiment showed the applicability of the DLR arm/hand system to tasks that require a high amount of autonomous planning and adaptation in a specialized scenario.
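The prediction step can be sketched as follows: with gravity known, each coordinate of the flight path becomes linear in time (after removing the gravity term in the vertical coordinate), so it can be fitted by least squares and extrapolated to a catch time. All numbers and names are illustrative, not the actual DLR tracker:

```python
# Sketch of ballistic trajectory prediction from camera measurements.
# x and y are linear in time; z is quadratic, but subtracting the known
# gravity term makes it linear too, so plain line fits suffice.

G = 9.81  # gravitational acceleration, m/s^2

def fit_line(ts, vs):
    """Least-squares fit v = v0 + v1 * t; returns (v0, v1)."""
    n = len(ts)
    mt = sum(ts) / n
    mv = sum(vs) / n
    v1 = (sum((t - mt) * (v - mv) for t, v in zip(ts, vs))
          / sum((t - mt) ** 2 for t in ts))
    return mv - v1 * mt, v1

def predict(ts, xs, ys, zs, t_catch):
    """Predict the ball position at time t_catch from past samples."""
    x0, vx = fit_line(ts, xs)
    y0, vy = fit_line(ts, ys)
    # Remove the gravity term so z is linear in t as well.
    z0, vz = fit_line(ts, [z + 0.5 * G * t ** 2 for t, z in zip(ts, zs)])
    return (x0 + vx * t_catch,
            y0 + vy * t_catch,
            z0 + vz * t_catch - 0.5 * G * t_catch ** 2)

# Synthetic throw observed at 25 Hz (one sample every 0.04 s):
ts = [i * 0.04 for i in range(6)]
xs = [2.0 - 3.0 * t for t in ts]                      # toward the robot
ys = [0.1 * t for t in ts]                            # slight sideways drift
zs = [1.0 + 4.0 * t - 0.5 * G * t ** 2 for t in ts]   # vertical arc
print(tuple(round(c, 3) for c in predict(ts, xs, ys, zs, 0.5)))
```

As each new frame arrives, the sample lists grow and the fit is simply recomputed, which mirrors the continuous correction of the estimate described above.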
b) Tracking and Grasping an Object
In a further experiment, an object, in this case a soft ball, was presented to the DLR arm/hand system. Two on-board cameras rendered a stereo image of the scene. Using a visual servoing approach, the object was robustly grasped and manipulated in-hand. This approach was very tolerant to calibration errors in the camera system.
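The core of visual servoing can be sketched as a proportional control law on the image error of a tracked feature; the real system maps this error through the camera model to arm and finger commands, which is omitted in this simplified illustration:

```python
# Minimal sketch of an image-based visual-servoing loop: the error between
# the observed image feature and its desired image position is driven to
# zero by a proportional law.  A full implementation would map the image
# error through the camera interaction matrix to Cartesian velocities;
# here the feature is updated directly, purely for illustration.

GAIN = 0.5  # proportional gain per control step

def servo_step(feature, target, gain=GAIN):
    """One control step: move the feature against the image error."""
    ex = feature[0] - target[0]
    ey = feature[1] - target[1]
    return (feature[0] - gain * ex, feature[1] - gain * ey)

# Feature starts away from the desired image position (pixels):
feature, target = (120.0, 80.0), (160.0, 120.0)
for _ in range(20):
    feature = servo_step(feature, target)
print(tuple(round(c, 3) for c in feature))
```

Because the law only requires the error to shrink at every step, moderate errors in the assumed camera geometry merely change the convergence rate, which is one intuition for the calibration tolerance noted above.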
c) Serving Drinks
Another experiment, more dedicated to a household application, was designed to serve drinks. A stereo camera observed a scene on a table and, upon command, the system recognized objects on this table, here glasses and bottles, and determined their locations. In a trained and adapted sequence, the bottle was grasped and a glass was filled in order to serve the drink. The localization was performed by a specialized algorithm evaluating the camera images. To organize and adapt the elemental operations of the task, the special task-programming system AOP was used.
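The task structure can be sketched as a sequence of elemental operations built from the localized objects; the operation names and the simple plan builder below are illustrative assumptions and do not reproduce the AOP system:

```python
# Sketch of organizing the serving-drinks task as a sequence of elemental
# operations.  locate() stands in for the vision-based localization; the
# operation vocabulary is hypothetical.

def locate(scene, name):
    """Stand-in for vision-based localization: (x, y) table position."""
    return scene[name]

def make_plan(scene):
    """Build the elemental-operation sequence for serving one drink."""
    bottle = locate(scene, "bottle")
    glass = locate(scene, "glass")
    return [("move_to", bottle), ("grasp", "bottle"),
            ("move_to", glass), ("pour", "glass"),
            ("move_to", bottle), ("release", "bottle")]

scene = {"bottle": (0.40, 0.10), "glass": (0.55, 0.25)}
plan = make_plan(scene)
print(len(plan), plan[0])
```

The point of such a decomposition is that the same trained primitives (`move_to`, `grasp`, `pour`, ...) can be re-sequenced and re-parameterized when the scene changes, rather than replaying one fixed trajectory.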
d) Playing the Piano
To demonstrate the suitability for actions in human environments without particular preparation, and also to demonstrate the speed capabilities of the DLR hand, an experiment was performed in which the hand played pieces of music on the piano, for example Beethoven's Für Elise. The hand was trained to play scales on the keyboard. From this data, it was able to determine the necessary motions for whole pieces of music given in musical notation. The velocity achieved in playing exceeded the capabilities of human pianists.
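The kind of note-to-keyboard mapping such training must provide can be sketched for the white keys as follows; the key width and the note encoding are assumptions for illustration, not the DLR setup:

```python
# Sketch of mapping note names to keyboard target positions, the kind of
# geometric knowledge a hand trained on scales would acquire.  The key
# width is a typical value, used here only for illustration.

WHITE = ["C", "D", "E", "F", "G", "A", "B"]
KEY_WIDTH = 0.0235  # meters, a common white-key width

def key_position(note, octave):
    """x-offset of a white key relative to the C of octave 0."""
    return (octave * 7 + WHITE.index(note)) * KEY_WIDTH

# Target positions for the first notes of a C-major scale:
print([round(key_position(n, 0), 4) for n in ["C", "D", "E"]])
```

Given such target positions and per-key timing from the score, finger trajectories can then be planned key by key.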
e) Service Robotics in Human Environments
In a larger setup, a collaboration of the Technical University of Munich and the DLR Institute of Robotics and Mechatronics, the DLR hand/arm system was mounted on a mobile platform. The scenario was to demonstrate a full service-robotics deployment for a physically challenged or ill person. Here, the system had to locate and move to the door of a designated room, open the door, and within the room locate the field of action, here a table at which it served a drink.
f) Sweeping a Table
Algorithms Developed for the Control of the DLR Hand
a) Grasp Planning
One very desirable functionality for a dexterous robot hand is autonomous grasping. To achieve such a skill, several subsystems have to be developed. First, there has to be a scene-analysis system capable of recognizing and localizing known objects, as we developed for the Robutler scenario. If the grasping of previously unknown objects is desired, one needs a system that can reconstruct the geometry of an object autonomously and online. For our systems, we use the DLR Laser Scanner to perform this task.
With the geometric information about the object, its position, and geometric and kinematic models of the robotic hand, one can plan grasps that assure certain levels of quality. These planned grasps can then be executed by the real robot hand.
Grasps for dexterous hands are reasonably divided into two main categories: precision grasps for high manipulability, where only the fingertips are in contact with the object, and power or enveloping grasps, where high forces must be resisted or exerted and the whole hand can be in contact with the object.
The planning of these two types of grasps is very different: for precision grasps, one searches for a small set of fingertip contacts (3-5) that allows a stable grasp. For most of these contact sets, more than one valid hand configuration can be computed.
Power grasps are mainly determined by the geometrical constraints of hand and object, so one tries to find a suitable hand configuration that 'wraps the fingers around the object' and then calculates the resulting contacts.
It is possible to start a precision-grasp planner by finding contacts and then calculating a valid hand configuration that realizes the grasp. We also observed that about 20 % of sets of 4 randomly chosen contacts on a collection of geometrical and real-world objects result in force-closure grasps, force closure being the most common quality criterion for precision grasps. Therefore, we decided to implement a randomized, modular grasp planner. To determine the quality of a generated grasp, a very precise and fast quality measure has been developed. With this measure, the planner is able to generate and evaluate 100 grasps in less than a minute. The best grasp candidate among this set can then be executed by the robotic system.
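The generate-and-evaluate loop can be sketched as follows, with a toy spherical object and a deliberately simple stand-in quality measure (how well the inward contact normals cancel) in place of the force-closure-based measure used by the real planner:

```python
# Sketch of a randomized grasp planner loop: sample candidate fingertip
# contact sets on the object, score each one, and keep the best of 100.
# The object (a sphere) and the quality measure are toy stand-ins for the
# geometric models and the force-closure measure of the real planner.

import math
import random

def sample_contact(radius=0.05):
    """Random point on a sphere of the given radius (the toy object)."""
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if n > 1e-9:
            return [c * radius / n for c in v]

def quality(contacts):
    """Lower is better: norm of the summed contact points.  On a sphere
    centred at the origin this is proportional to the norm of the summed
    inward contact normals, i.e. to how well the normals cancel."""
    s = [sum(c[i] for c in contacts) for i in range(3)]
    return math.sqrt(sum(c * c for c in s))

random.seed(0)  # reproducible sampling for this sketch
candidates = [[sample_contact() for _ in range(4)] for _ in range(100)]
best = min(candidates, key=quality)
print(f"best of {len(candidates)} candidates: quality {quality(best):.4f}")
```

A real planner replaces the sphere with the reconstructed object geometry, rejects contact sets unreachable by the hand kinematics, and scores the survivors with a force-closure quality measure before executing the best candidate.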
[Figure: Grasp planner flowchart]