Laser Stripe Profiler



The fundamental principle of range sensing by optical triangulation is illustrated in the figure below. A focused plane of laser light illuminates a stripe where it intersects the surface of an object. A CCD camera (in our case without optical filtering) records the reflection. Reconstruction is performed by triangulation: the laser plane is intersected with the rays of sight corresponding to the laser stripe projection in the image frame.
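The intersection step can be sketched as follows. This is an illustrative sketch, not the sensor's actual code; the intrinsic matrix `K`, the plane parameters `(n, d)`, and all numbers are hypothetical.

```python
import numpy as np

def triangulate_stripe_point(u, v, K, n, d):
    """Intersect the camera's ray of sight through pixel (u, v) with the
    laser plane n . X = d (all quantities in the camera frame)."""
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))  # back-project the pixel
    t = d / n.dot(ray)                               # scale until the ray meets the plane
    return t * ray                                   # reconstructed 3-D point
```

For example, with the principal point at (320, 240), the pixel (320, 240) and a frontal laser plane at depth 1 m yield the point (0, 0, 1).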

Note that this principle of operation is very similar to that of stereo-vision systems. A laser plane is used instead of a second camera in order to simplify the correspondence problem. This way, the major obstacle to using stereo vision in 3D modeling – the computational requirements – can be overcome.

System Calibration

The accuracy of the laser profiler depends directly on the errors made in the calibration process. It must therefore be calibrated with very high precision, both geometrically and optically.

The CCD camera has been calibrated with the calibration toolbox CalLab; the calibration takes into account radial, decentring and thin prism distortions.
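The combined distortion model can be sketched as follows, assuming the classical radial/decentring/thin-prism formulation; this is an illustration, not CalLab's actual implementation, and the coefficient values below are placeholders.

```python
def distort(x, y, k1, k2, p1, p2, s1, s2):
    """Apply radial (k1, k2), decentring (p1, p2) and thin prism (s1, s2)
    distortion terms to normalised image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 * r2
    dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y + s1 * r2
    dy = y * radial + 2 * p1 * x * y + p2 * (r2 + 2 * y * y) + s2 * r2
    return x + dx, y + dy
```

With all coefficients zero the mapping is the identity; a radial-only setting (e.g. k1 = 0.1) pushes the point (1, 0) outward to (1.1, 0).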

Laser plane calibration is the process of determining the relative pose of the laser plane with respect to the sensor reference frame. This is a particularly critical point, since miscalibration leads to misalignments and, depending on the scanning poses, to warping effects in the resulting surfaces. We implement a novel self-calibration method based on assessing the very deformations that this miscalibration produces.

When locating the laser plane, there are three independent degrees of freedom (DoF) to be identified. Imprecision in the estimation of any of these DoF produces distinct solid geometry reconstruction errors for every scanning movement of the hand-guided device, ranging from simple scaling errors to convex/concave warping or even irregularly warped results.

The self-calibration method works as follows: the reconstruction process runs with some initial laser plane calibration parameters, roughly estimated a priori. The method exploits the distortions that this miscalibration causes in the reconstruction while scanning surfaces. As calibration surface we use a plane of unknown pose, both to avoid constructing a complex calibration target and because a plane is the geometrical shape that can be straightened out most easily. After scanning the calibration plane from very different points of view – theoretical studies determine the ideal procedure – the reconstructed point cloud shows unevenness. Subsequently, the best fitting calibration target plane for these points is estimated; this overdetermined problem has a closed-form solution based on Singular Value Decomposition (SVD). Finally, the optimised laser plane parameters are estimated off-line: the goal is to minimise the mean squared distance of every reconstructed point to the best fitting plane. The Nelder-Mead simplex method has been implemented for the numerical optimisation. Furthermore, an Extended Kalman Filter is used to estimate the precision of the attained calibration results.

To recapitulate, the laser plane parameters are adapted in such a way that in the end the scanned surface becomes as flat as possible.
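The plane-fitting and optimisation steps can be sketched as follows. This is a minimal illustration of the technique, not the actual DLR implementation: `reconstruct` stands for a hypothetical function that turns raw scan data and candidate laser-plane parameters into a point cloud.

```python
import numpy as np
from scipy.optimize import minimize

def best_fit_plane(points):
    """Closed-form best-fitting plane via SVD: returns (centroid, unit normal),
    the normal being the right-singular vector of the smallest singular value."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

def flatness_cost(laser_params, scan_data, reconstruct):
    """Mean squared point-to-plane distance of the cloud reconstructed with
    the candidate laser-plane parameters -- the quantity to be minimised."""
    pts = reconstruct(scan_data, laser_params)
    c, n = best_fit_plane(pts)
    return float(np.mean(((pts - c) @ n) ** 2))

def self_calibrate(initial_params, scan_data, reconstruct):
    """Off-line Nelder-Mead refinement of the roughly estimated parameters."""
    res = minimize(flatness_cost, initial_params,
                   args=(scan_data, reconstruct), method="Nelder-Mead")
    return res.x
```

A plane stays a plane under any linear map, so the cost only reacts to the nonlinear (warping) part of the miscalibration, which is exactly the deformation the method exploits.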

Laser Stripe Segmentation

In order to widen the range of objects that can be scanned, the image processing algorithm has no precise a priori knowledge about the shape of the laser outlines. This may yield erroneous results, for instance with specular reflections or red light sources, which a simple segmentation algorithm would erroneously detect as laser projections. Furthermore, the absence of optical filtering when gathering laser stripe images aggravates the error proneness. Both reasons call for a robust stripe segmentation method, and different measures must be taken to reduce these kinds of failures.

The software-based approach rests on a stripe edge detection algorithm; the detection of both upper and lower stripe edges is in turn based on the Sobel filter. In addition, two validation stages have been implemented: color (online LUT) and width validation. Color validation copes very well with the problem of specular reflections and moreover provides robustness and flexibility against changing lighting conditions. Width validation aims at blocking laser specular reflections; in addition, it corrects erroneous measurements at object corners.
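A per-column sketch of the edge detection with width validation might look as follows; the 1-D derivative kernel stands in for the vertical Sobel response, and all thresholds are illustrative assumptions rather than the system's actual values.

```python
import numpy as np

def stripe_edges_per_column(red_col, grad_thresh=20.0, min_w=2, max_w=15):
    """Locate the upper and lower stripe edges in one column of the red
    channel via a 1-D derivative, then apply width validation.
    Returns (upper_row, lower_row), or None if validation fails."""
    g = np.convolve(red_col.astype(float), [1, 0, -1], mode="same")
    upper = int(np.argmax(g))   # strongest dark-to-bright transition
    lower = int(np.argmin(g))   # strongest bright-to-dark transition
    if g[upper] < grad_thresh or -g[lower] < grad_thresh:
        return None             # no sufficiently sharp stripe edges
    width = lower - upper
    if not (min_w <= width <= max_w):
        return None             # width validation: reject implausible stripes
    return upper, lower
```

A column with a bright 5-pixel stripe passes both checks, while a uniform column or an overly wide reflection is rejected.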

The stripe centre point is eventually estimated to sub-pixel precision by means of the centre-of-mass method over the red image channel. Brightness saturation – very likely for non-filtered cameras when capturing laser reflections – rules out methods like Gaussian approximation.
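The centre-of-mass estimate can be sketched per column as follows; the edge indices are assumed to come from the preceding, validated segmentation stage.

```python
import numpy as np

def stripe_centre_subpixel(red_col, upper, lower):
    """Sub-pixel stripe centre in one column: intensity-weighted centre of
    mass of the red channel between the upper and lower stripe edges."""
    rows = np.arange(upper, lower + 1)
    weights = red_col[upper:lower + 1].astype(float)
    return float((rows * weights).sum() / weights.sum())
```

Unlike a Gaussian fit, this estimate degrades gracefully when the peak saturates: a symmetric saturated stripe over rows 3 and 4 still yields the centre 3.5.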

Single and Dual Crosshair Laser Stripe Profiler

Scanning with this kind of sensor is often said to be virtually like spray-can painting, particularly if the sensor is mounted on a hand-held device. However, this is not completely true, since one does not have the freedom to move horizontally along the stripe direction while gathering data, but must sweep the sensor up and down. When scanning automatically with a robotic manipulator, this constrains the motion and entails the waste of one DoF.

In order to remove this constraint with a static laser stripe profiler (LSP), we use an additional laser beam that illuminates perpendicularly to the first stripe (hence crosshair). Due to construction-related constraints, both laser beams have to be placed close together. Since this arrangement implies an undesired reduction of the base distances between each laser beam and the camera, we opt for using the second camera of the 3D-Modeller, so that each camera performs single-LSP scanning with its farthest laser stripe. We call this configuration the dual crosshair LSP. The addition of a second miniaturized laser beam thus implies: 1) the release of a scanning movement constraint, 2) an increase in the surface-related information gained in every direction, and 3) the possibility of doubling the sensing rate, since both cameras and laser beams may be triggered in a complementary way at highest speed and limited shutter time.

Error Modelling

Both in order to improve accuracy and to complement higher-level tasks such as exploration or meshing, a stochastic model of the solid geometry reconstruction process has been developed. It combines systematic and random errors and is parameterised in relation to empirical calibration results. Moreover, it represents the starting point for the data fusion objective of the DLR 3D-Modeller.
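A minimal sketch of how such a stochastic model could propagate input uncertainty, assuming first-order (linearised) Gaussian error propagation; the reconstruction function `f`, its inputs, and the covariances are placeholders, not the DLR formulation.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x (a stand-in for an analytic one)."""
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(f(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J

def propagate_covariance(f, x, cov_x):
    """First-order propagation of the input covariance: Sigma_y = J Sigma_x J^T."""
    J = numerical_jacobian(f, x)
    return J @ cov_x @ J.T
```

For a linear map f(x) = A x the propagated covariance reduces exactly to A Sigma_x A^T.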


K. H. Strobl, W. Sepp, E. Wahl, T. Bodenmueller, M. Suppa, J. F. Seara, and G. Hirzinger. “The DLR Multisensory Hand-Guided Device: The Laser Stripe Profiler.” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2004), New Orleans, LA, USA, pp. 1927-1932, April 2004.
