A key technology needed for exploration missions is precise, soft, autonomous vehicle landing on the Moon, planets and other bodies. The navigation must be computed in real time on the vehicle's onboard computer. One promising approach uses optical sensors such as a camera or LIDAR.
An accurate, soft landing with an optical system involves two navigation phases. In the first phase, the vehicle photographs the surface of the body while in orbit and during descent. From these images, the vehicle can calculate its position, helping to ensure that it lands within a small area (about 100 m radius) around a preset landing point. In the second phase, the vehicle photographs the landing area to determine whether it is safe to land, identifying obstacles so that a safe landing spot can be chosen.
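As an illustrative sketch only (not the actual flight algorithm, which the text does not detail), the second-phase safety check could be framed as thresholding the local terrain slope of an elevation map reconstructed from camera or LIDAR data; the function name and the 10-degree limit below are assumptions for illustration:

```python
import numpy as np

def hazard_map(elevation, cell_size, max_slope_deg=10.0):
    """Flag terrain cells whose local slope exceeds a safe-landing limit.

    elevation: 2-D height grid in metres; cell_size: grid spacing in metres.
    The 10-degree threshold is illustrative, not a mission value.
    Returns a boolean grid: True marks cells considered unsafe to land on.
    """
    # Finite-difference height gradients in both grid directions
    gy, gx = np.gradient(elevation, cell_size)
    # Steepest local slope angle in degrees
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    return slope_deg > max_slope_deg
```

A safe landing spot would then be chosen from the connected regions of unmarked cells, e.g. the largest flat patch nearest the preset landing point.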
One research area focuses on developing an optical navigation system based on image processing. By combining this system's position data with that of other sensors (e.g. inertial sensors), a vehicle can navigate more precisely. In the past, additional position information was derived from a vehicle's radio signal received at ground stations on Earth. It took a relatively long time (more than 2 seconds) for the signals to be received on Earth and processed before the position information was relayed back to the vehicle. Moreover, this method failed whenever radio contact with the vehicle was limited or non-existent, such as when the vehicle is behind the moon or a planet.
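The combination of optical position fixes with inertial measurements is commonly done with a Kalman filter. The following is a minimal one-dimensional sketch under assumed process and measurement noise values (q, r), not the system described in the text: the inertial accelerometer drives the prediction step, and an optical position fix, when available, drives the update step.

```python
import numpy as np

def fuse(x, p, accel, dt, z=None, q=0.05, r=4.0):
    """One predict/update cycle of a 1-D Kalman filter along one axis.

    x: state [position, velocity]; p: 2x2 covariance.
    accel: inertial acceleration measurement; dt: time step [s].
    z: optical position fix (None when no image-based fix is available).
    q, r: illustrative process/measurement noise values, not mission data.
    """
    # Predict from the inertial measurement (constant-acceleration model)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    x = F @ x + np.array([0.5 * dt**2, dt]) * accel
    p = F @ p @ F.T + q * np.eye(2)
    if z is not None:
        # Correct with the optical position fix
        H = np.array([[1.0, 0.0]])
        k = p @ H.T / (H @ p @ H.T + r)      # Kalman gain (2x1)
        x = x + (k * (z - H @ x)).ravel()
        p = (np.eye(2) - k @ H) @ p
    return x, p
```

Between images, the filter runs prediction-only on inertial data; each optical fix then pulls the drifting inertial estimate back toward the true position.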
A test site is currently under development for testing these types of optical navigation systems. A camera system is attached to a robotic arm, which simulates the trajectory of a landing vehicle. The camera is pointed at an artificial, scaled-down terrain model, which is illuminated by a parallel light source to simulate realistic sunlight conditions. Images from the camera are then used to develop image-processing algorithms for navigation.