How much do the robots at the DLR Institute of Robotics and Mechatronics know about what they do and why? To find out, the Cognitive Robotics Department initiated the "Green Button Challenge".
Image: DLR (CC BY-NC-ND 3.0).
To what extent are robots capable of reasoning about their own failures? In the environment of the Factory of the Future, ideas on Autonomous Assembly Planning are developed to answer this question. Plans are generated for robots with modular, adaptive skills in order to build a range of customized products. A semantic representation of plans and failures allows the system to transfer the gained knowledge between different products. Reusing this knowledge reduces the planning effort.
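One way to picture this transfer is a knowledge base of semantically indexed failure records that a planner can query when working on a new product. The following is a minimal, hypothetical sketch (not DLR's actual system); all class and skill names are illustrative assumptions:

```python
# Hypothetical sketch: reusing semantically indexed failure knowledge
# when planning assemblies for a new product.
from dataclasses import dataclass, field

@dataclass
class FailureRecord:
    skill: str          # e.g. "insert_peg"
    context: frozenset  # semantic context, e.g. {"tight_fit"}
    recovery: str       # recovery strategy that worked

@dataclass
class KnowledgeBase:
    records: list = field(default_factory=list)

    def remember(self, record: FailureRecord) -> None:
        self.records.append(record)

    def recoveries_for(self, skill: str, context: set) -> list:
        # Transfer: a record learned on one product applies to another
        # whenever its semantic context is contained in the new context.
        return [r.recovery for r in self.records
                if r.skill == skill and r.context <= context]

kb = KnowledgeBase()
kb.remember(FailureRecord("insert_peg", frozenset({"tight_fit"}), "spiral_search"))
# Planning for a different product that also involves a tight-fit insertion:
print(kb.recoveries_for("insert_peg", {"tight_fit", "new_product"}))
# → ['spiral_search']
```

Because the failure is keyed by its semantic context rather than by a specific product, the recovery strategy carries over to any product whose assembly exhibits the same context.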
TEK is a research group in the Department of Cognitive Robotics.
Different applications in robotics require different levels of autonomy – from direct control with telepresence in surgery, to shared control in assistive robotics, to higher levels of autonomy in planetary exploration and manufacturing. To achieve higher levels of autonomy, especially in unstructured environments, it becomes important for a robot to have knowledge about what it is doing and why. Only then, we assume, can it perform adaptive, goal-directed behavior. The main aim of the group Transferable Explainable Knowledge is therefore to provide robots with explicit, explainable knowledge about their environment and skills.
Of primary importance in achieving this aim is to develop skill representations that allow for robust, goal-directed behavior. Furthermore, these skill representations should facilitate the transfer of skills to different robots, tasks and objects. Our approach hinges on representing skills at multiple levels of abstraction. More abstract levels are implemented in languages such as the Planning Domain Definition Language (as in Action Templates) or as graphical state machines (as in RAFCON). Lower-level procedural knowledge is encoded in skills that are learned or intuitively programmed, as investigated in the group Interactive Skill Learning.
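The idea of a two-level skill representation can be sketched as a symbolic action template (with PDDL-style preconditions and effects) that wraps a lower-level executable procedure. This is a hypothetical illustration, not the actual Action Template or RAFCON implementation; all names are assumptions:

```python
# Hypothetical two-level skill representation: symbolic preconditions
# and effects wrapping a low-level procedure (learned or programmed).
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionTemplate:
    name: str
    preconditions: set             # symbolic facts required beforehand
    effects: set                   # symbolic facts that hold afterwards
    procedure: Callable[[], bool]  # low-level skill returning success/failure

    def execute(self, world_state: set) -> set:
        if not self.preconditions <= world_state:
            raise RuntimeError(f"{self.name}: preconditions not met")
        if self.procedure():                     # run the low-level skill
            return world_state | self.effects    # apply symbolic effects
        return world_state                       # on failure, state unchanged

grasp = ActionTemplate(
    name="grasp_part",
    preconditions={"part_visible", "gripper_empty"},
    effects={"part_in_gripper"},
    procedure=lambda: True,  # stand-in for a real motion controller
)
state = {"part_visible", "gripper_empty"}
state = grasp.execute(state)
print("part_in_gripper" in state)  # → True
```

The symbolic layer is what a task planner reasons over; transferring the skill to a new robot only requires swapping the low-level procedure, while the abstract interface stays the same.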
A second topic investigated by the group is how to compactly represent knowledge about the state of the world. Again, these world model representations combine multiple levels of abstraction (sensor streams, state variables, symbolic knowledge) to facilitate the transfer of such knowledge between robots. Of particular interest to us is how such world model representations can be iteratively refined over time, as the robot is required to solve more complex and varied tasks.
An important advantage of explicitly representing knowledge is that it becomes explainable. This means that a robot is able to communicate its knowledge so that it can be interpreted and understood by a human. As an example, in our "Green Button Challenge", several robots were able to explain what they were doing, and why.
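When plans are represented explicitly, a "what and why" explanation can be generated by tracing each step's effect forward to the step or goal it enables. The following is a hypothetical sketch of this idea, not the challenge's actual implementation; the plan and fact names are assumptions:

```python
# Hypothetical sketch: deriving "what and why" explanations from an
# explicit plan, by linking each step's effect to what it enables.
plan = [
    {"action": "grasp_part",  "effect": "part_in_gripper"},
    {"action": "insert_part", "effect": "part_assembled"},
]
goal = "part_assembled"

def explain(step_index: int) -> str:
    step = plan[step_index]
    if step["effect"] == goal:
        # The final step is justified by the goal itself.
        why = f"to achieve the goal '{goal}'"
    else:
        # Earlier steps are justified by the step they enable.
        nxt = plan[step_index + 1]
        why = f"so that '{nxt['action']}' can run, which leads to '{goal}'"
    return f"I am doing '{step['action']}' {why}."

print(explain(0))
# → I am doing 'grasp_part' so that 'insert_part' can run, which leads to 'part_assembled'.
print(explain(1))
# → I am doing 'insert_part' to achieve the goal 'part_assembled'.
```

The point is that the explanation is read directly off the same structure the robot plans with, so the answer to "why" is guaranteed to reflect what the robot is actually doing.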