The fundamental principle of range sensing by optical triangulation is illustrated in the figure below. A focused plane of laser light illuminates a stripe where it intersects the surface of an object. A CCD camera (in our case without optical filtering) records the reflection. Reconstruction is then done by triangulation: the laser plane is intersected with the rays of sight corresponding to the laser stripe projection in the image frame.
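The intersection step can be sketched in a few lines. This is an illustrative example, not the profiler's actual implementation: the camera sits at the origin, the laser plane is given in Hesse normal form n·X = d, and each pixel defines a ray direction in the camera frame.

```python
import numpy as np

def triangulate(ray_dir, plane_normal, plane_d):
    """Intersect the ray of sight t * ray_dir (camera at origin)
    with the calibrated laser plane {X : plane_normal . X = plane_d}."""
    n = np.asarray(plane_normal, dtype=float)
    r = np.asarray(ray_dir, dtype=float)
    denom = n @ r
    if abs(denom) < 1e-12:
        raise ValueError("ray of sight is parallel to the laser plane")
    t = plane_d / denom
    return t * r  # 3D point on the object surface

# Example: laser plane z = 1 (n = [0,0,1], d = 1),
# ray of sight through image point direction [0.1, 0, 1]
p = triangulate([0.1, 0.0, 1.0], [0.0, 0.0, 1.0], 1.0)
```

Repeating this for every stripe pixel in every frame, and transforming by the sensor pose, yields the 3D point cloud.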
Note that this principle of operation is very similar to that of stereo-vision systems. A laser plane is used instead of a second camera in order to simplify the correspondence problem. In this way, the major obstacle to using stereo vision in 3D modeling - its computational requirements - can be overcome.
The accuracy of the laser profiler depends directly on the errors in the calibration process; it must be calibrated with very high precision, both geometrically and optically.
The CCD camera calibration has been performed with the calibration toolbox CalLab – the calibration takes into account radial, decentring and thin prism distortions.
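The distortion terms mentioned above are commonly written in the Brown/Weng parameterisation. The sketch below uses that common form with placeholder coefficients k1, k2 (radial), p1, p2 (decentring) and s1, s2 (thin prism); it is not CalLab's actual interface or parameter naming.

```python
import numpy as np

def distort(x, y, k1, k2, p1, p2, s1, s2):
    """Map ideal normalised image coordinates (x, y) to distorted ones,
    combining radial, decentring and thin prism distortion terms."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2       # radial distortion factor
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x) + s1 * r2
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y + s2 * r2
    return xd, yd

# With all coefficients zero the mapping reduces to the identity:
assert distort(0.3, -0.2, 0, 0, 0, 0, 0, 0) == (0.3, -0.2)
```

During reconstruction the inverse of this mapping is applied, so that the rays of sight are computed from undistorted image coordinates.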
The laser plane calibration is the process of determining the relative pose of the laser plane with respect to the sensor reference frame. This is a particularly critical point, since miscalibration leads to misalignments and warpage effects in the resulting surfaces, depending on the scanning poses. We implement a novel self-calibration method that is based on assessing the deformations such miscalibration causes.
The self-calibration method works as follows: the reconstruction process runs with some initial laser plane calibration parameters, roughly estimated a priori. The proposed method exploits the distortions these erroneous parameters cause in the reconstruction while scanning surfaces. As calibration surface we use a plane of unknown pose, both to avoid constructing a complex calibration target and because a plane is the geometrical shape that can be straightened out most easily. After scanning the calibration plane, the reconstructed point cloud shows unevenness, especially when the plane has been scanned from very different points of view – theoretical studies determine the ideal scanning procedure. Subsequently, the best-fitting calibration target plane for these points is estimated; this overdetermined problem is solved in closed form by Singular Value Decomposition. Finally, the optimised laser plane parameters are estimated off-line: the goal is to minimise the mean squared distance of every reconstructed point to the best-fitting plane. The Nelder-Mead simplex method has been implemented for the numerical optimisation. Furthermore, an Extended Kalman Filter is used to estimate the precision of the attained calibration results.
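One building block of the procedure above - the closed-form best-fitting plane via SVD - can be sketched as follows. This is a minimal illustration, not the DLR code: the outer Nelder-Mead loop, which would re-reconstruct the points for candidate laser plane parameters and minimise the returned residual, is omitted.

```python
import numpy as np

def fit_plane(points):
    """Best-fitting plane {X : n . X = d} for an N x 3 point cloud.
    Returns (unit normal n, offset d, RMS point-to-plane distance)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector belonging to the
    # smallest singular value of the centred point matrix.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    d = n @ centroid
    rms = np.sqrt(np.mean((pts @ n - d) ** 2))
    return n, d, rms

# Noise-free points on the plane z = 2 give a zero residual:
pts = [[0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2], [0.5, 0.3, 2]]
n, d, rms = fit_plane(pts)
```

The RMS value returned here is exactly the cost that the laser plane parameter optimisation drives towards zero.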
To recapitulate, the laser plane parameters are adapted in such a way that in the end the scanned surface becomes as flat as possible.
Laser Stripe Segmentation
The software-based approach relies on a stripe-edge detection algorithm: both the upper and the lower stripe edges are detected using the Sobel filter. In addition, two validation stages have been implemented: colour validation (using an online LUT) and width validation. Colour validation copes very well with the problem of specular reflections and moreover provides robustness and flexibility against changing lighting conditions. Width validation likewise aims at rejecting laser specular reflections, and in addition corrects erroneous measurements at object corners.
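The edge detection and width validation can be illustrated on a single image column. The sketch below is a simplified 1-D analogue, not the profiler's implementation: it uses a central-difference gradient in place of the full 2-D Sobel filter, and the width limit of 10 pixels is an arbitrary placeholder.

```python
import numpy as np

def stripe_edges(column, threshold, max_width=10):
    """Find (upper, lower) stripe edge rows in one image column from the
    vertical intensity gradient; return None if validation fails."""
    col = np.asarray(column, dtype=float)
    # Central difference: grad[n] = col[n+1] - col[n-1]
    grad = np.convolve(col, [1, 0, -1], mode="same")
    rising = np.where(grad > threshold)[0]    # dark -> bright: upper edge
    falling = np.where(grad < -threshold)[0]  # bright -> dark: lower edge
    if rising.size == 0 or falling.size == 0:
        return None
    upper, lower = rising[0], falling[-1]
    # Width validation: reject implausibly wide responses, e.g. caused
    # by specular reflections of the laser stripe.
    if lower - upper > max_width:
        return None
    return upper, lower

# Synthetic column: background 10, bright stripe (200) at rows 5..7
col = [10] * 5 + [200] * 3 + [10] * 5
edges = stripe_edges(col, 50)
```

In the real system this runs per column over the full image, with the colour LUT applied before the width check.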
The stripe centre point is eventually estimated to sub-pixel precision by means of the centre-of-mass method over the red image channel. Brightness saturation – very likely for non-filtered cameras when capturing laser reflections – rules out methods like Gaussian approximation.
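The centre-of-mass estimate amounts to an intensity-weighted mean row. A minimal sketch for one image column, assuming the stripe window has already been segmented:

```python
import numpy as np

def stripe_centre(red_values, first_row):
    """Sub-pixel stripe centre in one column: the intensity-weighted mean
    row over the red-channel values of the detected stripe window."""
    w = np.asarray(red_values, dtype=float)
    rows = first_row + np.arange(w.size)
    return (rows @ w) / w.sum()

# A saturated plateau (all 255) at rows 5..7 still yields a well-defined
# centre at row 6.0, whereas a Gaussian fit to the clipped profile would
# be ill-conditioned.
c = stripe_centre([255, 255, 255], 5)
```

This robustness to clipping is exactly why the centre-of-mass method is preferred here over peak-model fitting.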
Single and Dual Crosshair Laser Stripe Profiler
It is often said that scanning with this kind of sensor is virtually like painting with a spray can, particularly if the sensor is mounted on a hand-held device. However, this is not completely true: the sensor cannot be moved horizontally along the stripe direction while gathering data, but must instead be swept up and down. When scanning automatically with a robotic manipulator, this constrains the movement and wastes one DoF.
Both in order to improve accuracy and to complement higher-level tasks such as exploration or meshing, a stochastic model of the solid geometry reconstruction process has been developed. It combines systematic and random errors and is parameterised in relation to empirical calibration results. Moreover, it represents the starting point for the data fusion objective of the DLR 3D-Modeller.
K. H. Strobl, W. Sepp, E. Wahl, T. Bodenmueller, M. Suppa, J. F. Seara, and G. Hirzinger. “The DLR Multisensory Hand-Guided Device: The Laser Stripe Profiler.” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2004), New Orleans, LA, USA, pp. 1927-1932, April 2004.