Cover story from DLRmagazine 174: The humanoid robot Justin uses artificial intelligence to make smart decisions

Everything under control

Agile Justin
Justin is one of the most complex robotic systems in the world.

Justin is sorting wooden blocks on a table. Tirelessly. He skilfully turns them between his fingers, puts them in the right positions and not a single one falls out of his hands. Justin can also stack the blocks. The letters on the blocks spell out DLR RMC. That stands for German Aerospace Center (DLR) Robotics and Mechatronics Center – Justin's home. You want to praise him for his achievement because it is world-class. But would something like "Well done!" be appropriate in this context? Justin is a humanoid robot about to turn 18, which means he's an adult by human standards. In robotics terms, that's generations.

But Justin's age is not an obstacle: what the robot has learned in recent months through artificial intelligence (AI) represents a breakthrough in research. Justin is one of the most complex and advanced robot systems in the world, part of a whole 'family' of humanoid DLR robots that are similarly intelligent. Their future lies in the space, manufacturing and care sectors.

Justin sorts cubes
He can stack and turn the cubes without them slipping out of his hand.

To get straight to the point: yes, Justin is praised, a thousand times over. For example, the robot has learnt to turn the block between his fingers with his hand open at the bottom. Every time he succeeds, he gets bonus points, and Justin is ambitious in his own right. "He tries to achieve a high total score over time. If he drops a block, Justin gets a pretty big minus," explains Berthold Bäuml from the DLR Institute of Robotics and Mechatronics. Justin does not learn by standing at a table and rotating objects. Only humans practise eye-hand coordination in such a playful way – the interplay of muscles and joints usually takes care of itself. For robots, this is still very complex and was out of reach until recently; only the latest 'learning AI' has made it possible. In simulation, Justin trains the movements of up to 4000 virtual hands simultaneously, collecting plenty of bonus points along the way. "Training takes five hours, after which it is almost 100 percent perfect," says Berthold Bäuml. "In the beginning, the finger movement is completely random, causing the block to be dropped quite frequently. But after an hour of training, Justin is already performing 70 percent of the tasks correctly." The principle is called deep reinforcement learning. A single standard graphics card is all that is needed, and the computing power is provided via the cloud.

Deep Reinforcement Learning
Justin uses deep reinforcement learning: he learns what is right or wrong simply from feedback on the outcome. In this training session, for example, the rules were "Turning the object in the direction of the target is good" and "Dropping the object is bad". Machine learning takes only a few hours in a simulation, and Justin gets better and better at it. He develops the movement sequence for the fingers himself by trial and error.
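The reward idea behind this training can be sketched in a few lines. This is only an illustration of the principle described above, not DLR's actual code; the bonus and penalty values are assumptions.

```python
# Toy sketch of the reward signal for the in-hand rotation task.
# The numbers are illustrative assumptions, not DLR's real values.

def reward(rotation_towards_target: float, block_dropped: bool) -> float:
    """Score one simulated step: bonus for turning the right way,
    a large penalty ("a pretty big minus") for dropping the block."""
    if block_dropped:
        return -50.0
    return rotation_towards_target

# A deep reinforcement learning agent adjusts its finger movements
# to maximise the total score over many simulated episodes.
total = reward(1.0, False) + reward(0.5, False) + reward(0.0, True)
print(total)  # 1.5 - 50.0 = -48.5
```

Over thousands of parallel simulated hands, movements that raise this total score are reinforced and movements that drop the block are gradually abandoned.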

What does a human see? What does a humanoid robot see?

Agile Justin – that is his full name – is 1.91 metres tall, with an upper body weighing 45 kilograms and a wheeled base weighing 150 kilograms. The whole system has 53 moving joints with just as many motors, and sensory and motor skills that come close to those of humans. When you look at him, you see a round head with a silver-white shell, an 'eye area' cut out in black for the colour cameras, and a red light that flickers every now and again. Two stereo cameras are located where the 'ears' are anatomically assumed to be. The aim is not to create an exact replica of a human being, say robotics experts. His shape comes from his tasks and the fact that the robot is adapted to a human environment. And of course, it makes sense to have the cameras on top of a moving head.

Berthold Bäuml hands Justin a block
What Justin 'sees' is displayed on the monitor in the background. Red means that there is a person or object nearby; at a greater distance, they appear green or blue. Bäuml is responsible for Agile Justin's further development and leads the Autonomous Learning Robots Lab, among other things.

But what does Justin actually see? Berthold Bäuml turns on a monitor. One screen shows the environment exactly as a human would see it. The other shows the same contours, but with a colour gradient. Everything close to Justin's head camera appears in bright red; then the colour changes from orange through yellow and green to blue as the distance increases. Justin can instantly create an accurate three-dimensional model of anything in his field of vision. "He works in the same way as a human being, based purely on his own sensors," says Berthold Bäuml.
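The colour coding on the monitor can be sketched as a simple mapping from distance to colour band. The 3-metre range and the exact bands are assumptions for illustration; the real display uses a continuous gradient.

```python
# Illustrative sketch of the depth display: near points red, far points blue.
# Range and colour bands are assumed, not DLR's actual calibration.

def depth_to_colour(distance_m: float, max_range_m: float = 3.0) -> str:
    """Map a measured distance to one of the display colours."""
    t = min(max(distance_m / max_range_m, 0.0), 1.0)  # normalise to [0, 1]
    bands = ["red", "orange", "yellow", "green", "blue"]
    return bands[min(int(t * len(bands)), len(bands) - 1)]

print(depth_to_colour(0.2))  # close to the head camera -> "red"
print(depth_to_colour(2.9))  # far away -> "blue"
```

Applied to every pixel of the stereo cameras' depth image, a mapping like this produces the false-colour view visible on the monitor behind Justin.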

What does Justin see?
Justin uses colour cameras in the 'eye area' as well as two stereo cameras that are located where the 'ears' are anatomically assumed to be.

The humanoid robot can handle more than just blocks; he can grasp any object lighter than 15 kilograms. But how can Justin do this if all he can see is the front of the object, while his fingers have to wrap around the back? This is where the researchers at DLR have achieved another breakthrough with AI: Justin now understands an object's entire shape and adapts his movements accordingly. To his sensors, a plastic bottle, a vase, a shoe or a cuddly toy is just a collection of dots, yet Justin calculates its dimensions and the best approach and grasps it in a fraction of a second. With 50,000 examples and days of training in simulations, he has developed a general understanding of three-dimensional shapes.
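The very first step of that process – turning a "collection of dots" into dimensions – can be shown with a toy example. The points below are made up, and Justin's real grasp planner is a learned model rather than this simple box fit.

```python
# Toy sketch: estimate an object's dimensions from a point cloud.
# The point coordinates are invented; the real system uses a trained AI.

def bounding_box(points: list[tuple[float, float, float]]) -> tuple[float, float, float]:
    """Return the axis-aligned width, depth and height of a point cloud (metres)."""
    dims = []
    for axis in range(3):
        values = [p[axis] for p in points]
        dims.append(max(values) - min(values))
    return tuple(dims)

# A few sampled surface points of a bottle-sized object:
cloud = [(0.0, 0.0, 0.0), (0.07, 0.0, 0.0), (0.0, 0.07, 0.0), (0.0, 0.0, 0.25)]
print(bounding_box(cloud))  # (0.07, 0.07, 0.25)
```

The learned model goes much further: from the visible front of the cloud it infers the hidden back of the object, which is what lets the fingers close around a side the cameras never saw.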

... and then Justin teaches himself a trick with the blocks

Justin understands the shape of the cube
Before Justin moves, the robot calculates the dimensions of the cube and the best strategy.

When turning the block, Justin uses force sensors to feel how it sits in his hand. He estimates its shape from the position of his fingers and the forces he feels. Of course, Justin also learns this in a simulation. At one point, the robot did something that left the DLR robotics experts speechless: Justin slid one of his fingers under the block for support. "He taught himself a trick," says Bäuml. Justin's strategy was something along the lines of: do not drop the block and always know its exact position – which works better if the block does not slip, so he reached under it briefly to reassess its position. Thanks to the learning AI, the four fingers on Justin's robotic hand work together in a finely tuned way.

Five fingers? Four is enough

Four fingers? "Five fingers are an ingenious design by nature. But we usually only need four," says Gerhard Hirzinger. "You can hardly do more with a fifth, but the grip is firmer." Hirzinger was Head of the DLR Institute of Robotics and Mechatronics from 1992 to 2012 and was Alin Albu-Schäffer's predecessor. Justin's hands are slightly larger than human hands because they contain all the necessary sensors and motors and can be quickly replaced. In contrast, many of the human hand muscles are located in the forearms.

At his first public appearance in 2006, Justin was just a torso without a rolling base. The robot reacted sensitively to contact with his surroundings and retreated, making direct human-robot interaction possible. Up until then, robots had to be kept locked up behind security fences. Yet even then, Justin could throw and catch balls, pour drinks and shake hands. "Building systems like Justin, operating them and developing them significantly over the years – no university can do that. This is only possible in a large research organisation like DLR, where teams work together for years," says Hirzinger. Is it difficult to get young scientists interested in robotics? "Not really."

Building systems like Justin, operating them and developing them significantly over the years – no university can do that. This is only possible in a large research organisation like DLR, where teams work together for years.

Gerhard Hirzinger, Institute of Robotics and Mechatronics

Sensitive machines

Alin Albu-Schäffer
Director of the DLR Institute of Robotics and Mechatronics

Interview with Alin Albu-Schäffer. He has been Director of the Institute of Robotics and Mechatronics in Oberpfaffenhofen since 2012.

What makes robotics so fascinating?

When we build robots, we ask ourselves: How does a human think? How do we grasp objects? How do we walk? And why is it so difficult for machines to mimic tasks that humans effortlessly manage – such as tidying the kitchen? We haven't achieved that with robots yet. It teaches us a lot about the complexity of these tasks and the remarkable capabilities of human beings.

Do you understand why people find humanoid robots with artificial intelligence frightening?

The use of artificial intelligence in robotics has real-world consequences. It differs starkly from AI used for text generation, where a chatbot might stumble on two out of ten questions. Such errors are unacceptable in robotics. When an AI system controls machines, the requirements become far more stringent. Generally, people tend to overestimate the abilities of humanoid robots, primarily due to a lack of information.

Would you say that the better people understand robotics, the fewer concerns they have?

Precisely. At DLR, we focus strongly on the people who use robots; we design our systems with a human-centred approach. For instance, we work a lot with people with disabilities, who can benefit greatly from robots. When an interface gives users full control of the situation while supporting them intelligently, they are generally enthusiastic about the robots. We develop robots that interact with people – ones that are sensitive, compliant, enable physical interaction and respond at a level that non-experts can understand.

To train an AI, you need data. Where does it come from?

In the real world, we do not have millions of humanoid robots at our disposal that provide usable training data. This is a major research topic and is different from text-based AI, which has virtually the entire Internet at its disposal. However, training a robotic AI system using simulation data has shown promising progress. We have achieved success in this area and can expect a lot in the coming years.

Is the research more focused on space or in assistive robotics?

Our primary research emphasis is space-related, although we have actively pursued technology transfer since the very start. We have been conducting experiments with astronauts aboard the International Space Station ISS since 2017. In this scenario, robots are controlled from orbit to build habitats on the surface. Our robots function autonomously, semi-autonomously, or as avatars – depending on the crew's needs. The application on Earth is evident: our vision of such robots is that they can be used to support the self-sufficiency of people in their own home. Here, we play a key role in bridging academia and technology transfer to cultivate a new market and provide independence and quality of life to humans.

Robots and humans collaborate closely

Anne Köpken studied electrical engineering at the Technical University of Munich and visited DLR on a pivotal field trip during her second semester. "I thought it was really cool and knew I wanted to work here," she recalls. Currently immersed in her PhD in robotics, her primary focus is on 'Rollin' Justin' – Agile Justin's counterpart, with a slightly different set of skills. While Agile Justin excels in intricate AI-driven manipulation, Rollin' Justin is geared more towards service robotics and space exploration.

Anne Köpken was there when Rollin' Justin was remotely controlled from the International Space Station ISS in July 2023. During this scenario, Rollin' Justin successfully unloaded a lander at the Mars laboratory in Oberpfaffenhofen and placed a measuring instrument on the ground. The remote operation involved NASA astronaut Frank Rubio simultaneously controlling Rollin' Justin, the robotic lander, and a robot from the European Space Agency (ESA). From space, the astronaut could determine the level of autonomy of the robots with the mere touch of a button. DLR has long been testing the collaboration between humanoid robots and humans on exploration missions, continually increasing the complexity of the endeavours. These robots can be used in situations deemed too hazardous for humans.

However, Rollin' Justin also holds promise for applications on Earth, potentially functioning as a mobile assistance robot aiding individuals in need. This is another focus of DLR's research into enhancing human-robot cooperation. Köpken is actively pioneering new capabilities for Rollin' Justin and has observed, "The system itself is fascinating and versatile. Beyond the space project, its applications have direct and visible benefits for people here on Earth."

The system itself is fascinating and versatile. Beyond the space project, its applications have direct and visible benefits for people here on Earth.

Anne Köpken, Institute of Robotics and Mechatronics

With a sense of self and others

Robot systems like Justin require expertise spanning mechanics, electronics, computer science, mechanical engineering, psychology and ethics in their development and operation. The coordinated movement of humanoid robots hinges on the intricate task of control engineering. Equipped with torque sensors in their joints, these robots are not only aware of their body's orientation but also sensitive to external forces. This enables them to gauge the pressure exerted upon them and respond accordingly, for example by yielding or retracting an arm. Justin operates as a multitasking system, adept at handling several tasks simultaneously. Coordination is key to prioritising these tasks so that Justin executes the most critical ones safely and swiftly.
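The prioritisation described above can be illustrated with a minimal scheduler. The task names and priority numbers are invented for this sketch; the real controller blends tasks continuously at the torque level rather than picking one at a time.

```python
# Toy sketch of priority-based task coordination.
# Tasks and priorities are illustrative assumptions, not DLR's real system.
import heapq

def next_task(tasks: list[tuple[int, str]]) -> str:
    """Pick the most critical pending task (lower number = higher priority)."""
    return heapq.nsmallest(1, tasks)[0][1]

pending = [
    (2, "hand over object"),
    (0, "retract arm after unexpected contact"),  # safety comes first
    (1, "keep balance"),
]
print(next_task(pending))  # "retract arm after unexpected contact"
```

The key point survives even in this toy form: a safety-relevant reaction, such as retracting the arm after unexpected contact, always outranks the task the robot happened to be working on.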

But Oberpfaffenhofen is not only home to Agile and Rollin' Justin. Four humanoid robots now 'live' in the institute's laboratories, engaged in a continual process of learning how they can best support and assist humans in various capacities.

Just your 'regular' robot flatshare

Agile Justin and Rollin' Justin are not the only humanoid robots at the DLR Institute of Robotics and Mechatronics. David and Toro live next door in the lab, expanding the growing robot 'family' as their roles continue to develop. This 'flatshare' even boasts a robotic 'pet' named Bert, a four-legged robot specifically designed to study and emulate animal locomotion.

The robot flatshare
David, Toro and Bert live together at the DLR Institute of Robotics and Mechatronics.

An article by Katja Lenz from the DLRmagazine 174

Contact

Katja Lenz

Editor
German Aerospace Center (DLR)
Corporate Communications
Linder Höhe, 51147 Cologne
Tel: +49 2203 601-5401

Julia Heil

Editorial management DLRmagazine
German Aerospace Center (DLR)
Communications and Media Relations
Linder Höhe, 51147 Cologne