December 10, 2025

“Trust me, I am a robot! – How can we trust AI when we do not fully understand how and why results are produced?”

The Robotics Institute Germany (RIG) and SKIAS 2.0 (Sichere KI für Autonome Systeme – Safe AI for Autonomous Systems), supported by various partners from the fields of robotics and autonomous systems, organized a two-day workshop dedicated to one of the central challenges in contemporary AI research: trustworthy AI. The workshop was hosted by the DLR Institute of Robotics and Mechatronics in Oberpfaffenhofen on 1 and 2 December.

From robotics and aviation to civil security, Artificial Intelligence now achieves high accuracy in demanding tasks such as image classification. Yet its deployment in safety-critical environments remains constrained. A lack of transparency regarding how and why AI systems arrive at their outputs continues to limit their trustworthiness and, ultimately, their real-world applicability.

Modern AI systems, grounded in machine learning, rely on vast datasets, computational power, and sophisticated optimization processes. However, their internal reasoning often remains obscured—operating as black boxes in which only inputs and outputs are observable. To advance toward certifiable, reliable, and autonomous systems, interpretability, transparency, and rigorous uncertainty quantification are essential.

Questions for the panelists – SKIAS 2.0 (2025)

Throughout the workshop, participants from research, industry, and the start-up community explored how trustworthy AI can be strengthened, which uncertainty-handling methods are currently applied in practice, and which tools and frameworks are still needed to enable safe deployment.

A central component of the event was a multidisciplinary panel discussion addressing the question of whether, and under which conditions, trust in AI systems can be meaningfully established, particularly when the underlying decision processes remain only partially understood.

The panel explored four interconnected dimensions:

1. Technical Perspectives

Discussion points included error rates, methods for uncertainty estimation, shared autonomy, and safety engineering. A recurring theme was the need for AI systems to recognize and communicate their own uncertainty, avoiding overconfidence and supporting safe human–machine collaboration.

2. Industrial Challenges

Representatives from industrial contexts highlighted the pressures of efficiency, certification requirements, and the need for adaptive robotic systems that can reliably manage uncertainty at scale. Transparent and predictable behavior remains a prerequisite for deploying AI in complex production environments. Participants also noted that the efficient deployment of robots must be considered within a broader political context.

3. Ethical and Psychological Factors

The discussion examined anthropomorphism, the Uncanny Valley, and the role of human–robot interaction in shaping trust. A key insight: trust is not a feature of technology but a relational construct that emerges through interaction, expectations, and perceived reliability.

4. Societal and Regulatory Requirements

Participants addressed transparency and explainability expectations, and the challenge of designing regulatory frameworks that ensure safety without hindering innovation. Preparing society for the transition toward increasingly autonomous systems was identified as a key priority.

Contact

Lioba Suchenwirth

Public Relations
German Aerospace Center (DLR)
Institute of Robotics and Mechatronics
Münchener Straße 20, 82234 Oberpfaffenhofen-Weßling