Autonomous C-Space Exploration and Object Inspection


The Multisensory 3D-Modeller, developed at the Institute of Robotics and Mechatronics, acquires 3-D information about the environment [2]. The device consists of a passive 7-DOF manipulator carrying four different types of sensors: a light-stripe sensor, a laser-range scanner, a passive stereo-vision sensor, and a texture sensor. Experience gained in developing this passive eye-in-hand system led to the idea of automating the execution of inspection tasks. The goal is a robotic system capable of automated inspection tasks, such as complete surface acquisition in 3-D environments. Because the robot moves in a priori unknown environments, the surface-acquisition task requires the system to build up knowledge of its manoeuvrable space, i.e. a good knowledge of the robot's configuration space (C-space), especially around the object to be inspected. The idea of integrating realistic noisy sensor models into the view-planning stage arose in this context: information about sensor noise can be used to merge multiple readings, to resolve contradictory data sets, and to guide the robot's next sensing action.


Recent work at the Robotics Lab, Simon Fraser University, Canada, has focused on sensor-based motion planning for robots with non-trivial geometry and kinematics. The problem here is to plan the next best view (NBV; the task is also simply called view planning) so as to explore the robot's C-space efficiently. The notion of C-space entropy was developed as a measure of ignorance about the C-space [3]; the NBV is then chosen as the view that maximally reduces the expected C-space entropy.
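The entropy-based NBV criterion can be illustrated with a small sketch. The C-space is discretized into cells with a probability of being collision-free; entropy is summed over cells, and candidate views are scored by their expected entropy reduction. The grid representation, the `expected_posterior` callback, and all names here are illustrative assumptions, not the published implementation.

```python
import numpy as np

def cspace_entropy(p_free):
    """Shannon entropy (bits) of a discretized C-space.

    p_free[i] is the probability that cell i is collision-free;
    entropy per cell peaks at p = 0.5 (total ignorance) and drops
    to zero once a cell is known free or known occupied.
    """
    p = np.clip(p_free, 1e-12, 1 - 1e-12)
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

def next_best_view(p_free, candidate_views, expected_posterior):
    """Pick the view with maximal expected C-space entropy reduction.

    expected_posterior(view, p_free) returns the expected cell
    probabilities after sensing from `view`; it is supplied by the
    caller's sensor model (a placeholder in this sketch).
    """
    h_now = cspace_entropy(p_free)
    gains = [h_now - cspace_entropy(expected_posterior(v, p_free))
             for v in candidate_views]
    return candidate_views[int(np.argmax(gains))]
```

With a toy posterior model in which view `v` resolves the first `v` cells of a fully unknown grid, the planner picks the view resolving the most cells, as expected.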



The above work, however, assumed an ideal sensor model, while real sensors are subject to noise. Results from such an idealized sensing process must therefore be used with caution, e.g. a safety margin to obstacles must be added. If the sensor noise is instead modelled already in the planning stage, the results become far more accurate and reliable. This allows the robot to move much closer to obstacles and achieve high sensing accuracy, a must for surface inspection tasks [1]. Additionally, if multiple sensors are used for the exploration task (multisensory exploration), the same stochastic sensor models can be used not only for merging multiple readings but also for view planning and for exploration decisions such as where to scan and which sensor to choose, thus providing one integrated framework.
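As a minimal sketch of how a stochastic sensor model supports the merging of readings: if each sensor reports a measurement of the same quantity together with its noise variance, independent Gaussian readings can be fused by inverse-variance weighting. The sensor values and variances below are made up for illustration; they are not the actual characteristics of the 3D-Modeller's sensors.

```python
def fuse_gaussian(readings):
    """Fuse independent noisy readings of the same quantity.

    Each reading is a (value, variance) pair. Under a Gaussian
    noise model the minimum-variance estimate is the
    inverse-variance-weighted mean, and the fused variance is the
    reciprocal of the summed weights, so fusion always tightens
    the estimate.
    """
    weights = [1.0 / var for _, var in readings]
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance

# Hypothetical example: a precise laser reading and a coarser
# stereo reading of the same surface point (values in metres).
laser = (1.0, 0.01)
stereo = (1.2, 0.04)
fused = fuse_gaussian([laser, stereo])
```

The same variances can drive sensor selection: for a given candidate scan, the sensor (or combination) yielding the lowest expected posterior variance is preferred.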



Simulations with a two-link robot showed promising results, and a simulation environment for n-DOF robots is currently being developed. To reduce memory consumption, a visibility-based roadmap is implemented. The robot used is a 6-DOF KUKA KR6 carrying the new release of the Multisensory 3D-Modeller, which will be suitable for both robot and manual guidance. View-planning approaches already show good exploration results in 3-D.



[1] Suppa, M., Wang, P., Gupta, K., Hirzinger, G.: C-space Exploration Using Noisy Sensor Models. ICRA 2004, New Orleans, U.S.A., 2004.

[2] Suppa, M., Hirzinger, G.: A Novel System Approach to Multisensory Data Acquisition. 8th Conference on Intelligent Autonomous Systems (IAS-8), Amsterdam, The Netherlands, 2004.

[3] Wang, P., Gupta, K.: View Planning via Maximal C-space Entropy Reduction. 5th International Workshop on Algorithmic Foundations of Robotics (WAFR 2002), Nice, France, 2002.
