Research Areas

Telerobotics



In a few words

In telerobotics, a human operator commands a remote robot. Robot and operator may be separated by spatial distance, hazardous environmental conditions, scale, or even matter, as in the case of virtual environments.

Telerobotic systems, in contrast to or in combination with telepresence systems, are used when the delay is too large to include the human operator in the control loop, or when the task can be performed semi-autonomously, i.e. the teleoperator acts according to its pre- or tele-programmed behaviour while the operator supervises the task execution. The elements for this operation mode are

  • a virtual model of the real world which is updated with real sensor readings,
  • a programming-by-demonstration interface and
  • a shared control functionality.
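The interplay of these elements can be sketched as follows; all class and function names here are illustrative assumptions, not actual TOP or MARCO interfaces:

```python
class VirtualWorldModel:
    """Virtual model of the real world, corrected by real sensor readings."""

    def __init__(self):
        self.object_poses = {}  # object name -> latest pose estimate (x, y, z)

    def update(self, sensor_readings):
        # Overwrite predicted poses with measured ones as they arrive.
        self.object_poses.update(sensor_readings)


def supervise(task_steps, world, sensor_stream, approve):
    """Run a pre-programmed task step by step; the operator only supervises
    (and may abort) instead of closing the delayed control loop."""
    for step, readings in zip(task_steps, sensor_stream):
        world.update(readings)        # keep the virtual model consistent
        if not approve(step, world):  # supervision, not direct teleoperation
            return "aborted"
    return "completed"


world = VirtualWorldModel()
steps = ["approach", "align", "grasp"]
stream = [{"target": (0.0, 0.0, 1.0)},
          {"target": (0.0, 0.0, 0.5)},
          {"target": (0.0, 0.0, 0.1)}]
result = supervise(steps, world, stream, approve=lambda step, w: True)
print(result)  # -> completed
```

The point of the sketch is the division of roles: the pre-programmed steps run on the remote system, while the operator's only input is the approval callback.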

Task-Oriented Programming (TOP)

The main functional design guidelines of the TOP approach are the following:

  • Decomposition of complex tasks into a set of elementary actions which can be executed autonomously on the robotic system, ideally guided by sensor information.
  • Separation of interfaces: expert interfaces for robotics and control engineers, and simple, easy-to-use interfaces for application developers or payload experts in the space robotics environment.
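A toy illustration of the first guideline, decomposing a complex task into elementary actions; the action names and the task library are invented for this sketch, not the real TOP vocabulary:

```python
# Elementary actions are the only things executed on the robotic system.
ELEMENTARY_ACTIONS = {"move_to", "open_gripper", "close_gripper", "insert"}

# A complex task maps to a sequence of elementary actions (toy library).
TASK_LIBRARY = {
    "grasp": [("move_to", "object"), ("open_gripper",), ("close_gripper",)],
    "peg_in_hole": [("move_to", "hole"), ("insert",)],
}


def decompose(task):
    """Map a high-level task name to its elementary actions."""
    return TASK_LIBRARY[task]


def execute(task):
    for action in decompose(task):
        # Only elementary actions reach the robot controller.
        assert action[0] in ELEMENTARY_ACTIONS
        print("executing", *action)


execute("grasp")
```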

Since 1996, different versions of the TOP system have been used for almost all demonstration scenarios and applications where control methods and higher-level tasks had to be programmed, among them:

  • the ROTEX workcell,
  • the Experimental Servicing Satellite (ESS) demonstrator,
  • the Japanese ETS-VII robot,
  • the simulator of the Canadian Space Station Remote Manipulator System (SSRMS),
  • the BallCatcher demonstrator,
  • the Robutler service robot and
  • the mobile 2-arm/2-hand system (Justin).

The TOP system is built modularly (and can therefore be easily integrated into the MARCO system) and allows fast integration of new control algorithms as well as interpolator functions. It carries out its supervision function in soft real-time and provides generic interfaces, e.g. for high-level action planning as well as for collision-avoiding path planning algorithms.

A variety of user interfaces can be connected to the TOP kernel via a standardized low-bandwidth communication channel. So far, an easy-to-use GUI (including a graphical motion simulator) and a speech processing system have been implemented.

Current work focuses on making the TOP system a more flexible tool, able to handle all types of robots and control strategies.

Programming-by-demonstration (PbD)

In PbD, the operator demonstrates the task to the robot directly or, in the case of a telerobotic system, via a haptic interface in a simulated virtual world. The sensor recordings from the haptic interface yield the sensor patterns, which are commanded to the remote robot, adapted to the remote environment, and executed. Despite the simplicity of PbD compared to traditional robot programming, certain robotic skills are still needed to generate suitable robot programs by demonstration. Methods to support and train the operator accordingly are currently being developed within the EU project SKILLS.
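The recording-and-adaptation pipeline might be sketched like this; the one-dimensional positions, the forces, and the fixed pose offset are simplifying assumptions for illustration:

```python
def record_demonstration(haptic_samples):
    """Turn raw haptic samples into a sensor pattern (position/force pairs)."""
    return [(s["position"], s["force"]) for s in haptic_samples]


def adapt_to_remote(pattern, pose_offset):
    """Shift demonstrated positions by the measured offset between the
    simulated and the real object pose before commanding the remote robot."""
    return [(position + pose_offset, force) for position, force in pattern]


# Demonstration in the virtual world (1-D positions in metres, forces in N).
demo = [{"position": 0.10, "force": 0.0},
        {"position": 0.20, "force": 1.5}]
pattern = record_demonstration(demo)
remote_commands = adapt_to_remote(pattern, pose_offset=0.02)
print(remote_commands)
```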

World model update

In contrast to the telepresence scheme, the teleoperator at the remote site has to cope with unknown or uncertain environments. The geometry of the objects is often known, but their pose has to be determined continually, or before each manipulation task is performed.

Our TOP system offers a general interface to update object poses according to the task requirements. All Cartesian motions defined with respect to the affected object poses are modified online. The new poses are registered in the TOP database and displayed in a 3D model simulation for the operator's approval.
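The effect of such a pose update can be illustrated with a toy database in which Cartesian goals are stored relative to object frames; the object names, poses, and offsets are invented for this sketch:

```python
# Registering a new object pose moves every motion defined w.r.t. it.
poses = {"satellite": (1.0, 0.0, 0.0)}                    # pose database (toy)
motions = [("approach", "satellite", (-0.25, 0.0, 0.0))]  # goal = pose + offset


def goal_in_world(motion):
    """Resolve a motion's goal in world coordinates from the current poses."""
    _, obj, (ox, oy, oz) = motion
    px, py, pz = poses[obj]
    return (px + ox, py + oy, pz + oz)


print(goal_in_world(motions[0]))      # -> (0.75, 0.0, 0.0)

poses["satellite"] = (1.5, 0.0, 0.0)  # new sensor-based pose registered
print(goal_in_world(motions[0]))      # -> (1.25, 0.0, 0.0): same motion, new goal
```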

Our ongoing work addresses the case of fully unknown environments, where multiple sensors (stereo vision, 3D scanner, force/torque sensors) gather the information to build a 3D model. Sophisticated action and grasp planning algorithms then generate a TOP operation, decomposing the high-level command (e.g. “grasp the object on the left”) into the corresponding elementary operations for execution.

Shared control

Shared control is used to delegate sub-tasks to the robot’s controller in order to ease the operator’s task. A telerobotic scenario often contains several sub-tasks; for instance, while capturing a satellite, collisions between the servicer and the target satellite have to be avoided. With shared control, an autonomous system controls the distance between the two satellites, while the operator controls the grasping and berthing of the target.
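A minimal sketch of this division of labour, assuming a simple proportional controller for the autonomous distance-keeping sub-task; the gains, distances, and names are invented:

```python
def autonomous_distance_cmd(distance, target_distance=0.5, gain=0.8):
    """Proportional controller keeping a safe distance to the target (toy)."""
    return gain * (target_distance - distance)


def shared_control(distance, operator_gripper_cmd):
    """Split the sub-tasks: the controller owns the approach axis,
    the operator owns the gripper."""
    return {
        "approach_velocity": autonomous_distance_cmd(distance),  # autonomous
        "gripper": operator_gripper_cmd,                         # operator
    }


cmd = shared_control(distance=1.0, operator_gripper_cmd="close")
print(cmd["approach_velocity"])  # 0.8 * (0.5 - 1.0) = -0.4
```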

Another example appears in robot-assisted minimally invasive heart surgery, where the heart’s beating motion, captured by an endoscopic camera, is superimposed on the operator’s motion.
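The superposition can be sketched as follows, with a sinusoidal stand-in for the estimated heart motion; the amplitude and beat frequency are invented values:

```python
import math


def heart_motion(t, amplitude=0.005, beat_hz=1.2):
    """Estimated displacement of the heart surface at time t (metres)."""
    return amplitude * math.sin(2.0 * math.pi * beat_hz * t)


def compensated_command(operator_position, t):
    """Instrument command = operator's motion + estimated heart motion."""
    return operator_position + heart_motion(t)


# The operator holds a fixed position; the instrument still tracks the beat.
trajectory = [compensated_command(0.10, t / 10.0) for t in range(5)]
```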

 


URL for this article
http://www.dlr.de/rm/en/desktopdefault.aspx/tabid-5014/8373_read-14283/