COSMA

The project aims to create an environment representation in continuous space that accounts for uncertainty and contains semantic information. The advantage of such a representation is the ability to exploit geometry and context within a probabilistic framework. This will enable new navigation frameworks for space exploration, autonomous driving and scene reconstruction.

Project duration
2019-01-01 until 2021-12-31
Project partners
• Prof. Teresa Vidal-Calleja, University of Technology, Sydney
• Cedric LeGentil, University of Technology, Sydney
Fields of application

Project details

Autonomous robots require rich and reliable representations of the environment to interact with the world and perform their designated tasks, particularly in unknown scenarios. Today, many robust real-time navigation strategies for such robots still rely only on discrete geometric, and sometimes probabilistic, information about the environment, since higher-level contextual representations are generally more expensive to compute and more prone to error. Moreover, commonly extracted semantic representations are seldom probabilistic. This project aims to investigate the theory and algorithms needed to create a representation of the environment in continuous space that accounts for uncertainty and contains semantic information. The key idea is to develop a concurrent learning process that combines Gaussian Process regression, to model the environment probabilistically and continuously, with active learning strategies, to extract semantics. The advantage of such a representation is the ability to exploit geometry and context at the same time within a probabilistic framework that produces less overconfident semantic information. Such representations will enable the next generation of navigation frameworks for space exploration, autonomous driving and scene reconstruction.
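
To illustrate the kind of representation targeted here, the sketch below applies Gaussian Process regression to synthetic, hypothetical elevation measurements; it is not the project's actual method or data. The GP posterior yields a continuous map together with a per-point standard deviation, and the most uncertain locations are the kind of signal an active learning strategy could use to decide where to gather semantic labels or further measurements. The scikit-learn setup and all variable names are assumptions made for this example.

```python
# Minimal sketch (not the project's implementation): continuous, probabilistic
# mapping with Gaussian Process regression on synthetic elevation data.
# Sparse measurements are turned into a continuous surface whose posterior
# standard deviation quantifies uncertainty at unobserved locations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)

# Synthetic sparse observations: (x, y) sensor hit locations and measured heights.
X_obs = rng.uniform(0.0, 10.0, size=(80, 2))
z_obs = np.sin(X_obs[:, 0]) * np.cos(0.5 * X_obs[:, 1]) + 0.05 * rng.standard_normal(80)

# Squared-exponential kernel for a smooth surface plus a white-noise term for sensor noise.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_obs, z_obs)

# Query a dense grid: the GP gives a continuous map (mean) and its uncertainty (std).
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
X_query = np.column_stack([gx.ravel(), gy.ravel()])
z_mean, z_std = gp.predict(X_query, return_std=True)

# High-uncertainty cells are natural targets for active learning / exploration,
# e.g. where semantic labels or further measurements would be most informative.
most_uncertain = X_query[np.argmax(z_std)]
print("Most uncertain location:", most_uncertain, "std:", z_std.max())
```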