Brief Tutorials



DLR CalDe

  1. Preparation.
    a) Create a calibration pattern using "Options->Create calibration pattern" or take one of the supplied patterns. The size of the pattern should be chosen according to the lens aperture as well as the range for sharp imaging. Usually, a bigger calibration pattern meets the demands; however, the pattern finally has to be glued onto a flat surface, which may be difficult to guarantee for large patterns.
    b) Shoot 3 to 10 images of the calibration pattern from vantage points with varying orientations and distances, using either a monocular camera or synchronized stereo cameras. In order to eventually achieve a proper calibration, the calibration pattern should fill the whole image.
    c) Save the images as Portable Network Graphics (PNG) files named ".(left|right|upper|lower).*png". Additionally, if you are interested in the extrinsic hand-eye calibration, save the corresponding tool-center-point (TCP) to robot base 3x4 transformation matrices to text files ".coords" (a minimal writing sketch follows this list).
  2. Load data.
    a) Load the configuration corresponding to the particular calibration pattern in "Options->Settings" and EDIT the MEASURED size of the checkerboard rectangles, since printing devices do not necessarily preserve the pattern dimensions. Please note that the circles on the calibration pattern indicate its x/y-axis directions as well as its origin; the z-axis is perpendicular to the calibration plane and always points into the object.
    b) Load the PNG images.
  3. Detect corner points in the images.
    a) Run the automatic corner point detection for the first image/tab.
    b) If the circles of the calibration pattern have not been recognized, then either enter an appropriate binarization threshold in "Options->Settings" and return to step 3.a), or run the semi-automatic detection and click inside the circles of the calibration pattern.
    c) If no corners or too few corners of the calibration pattern have been detected despite correctly detected center points, then lower the confidence threshold in "Options->Settings" and return to step 3.a); the images probably exhibit low contrast.
    d) Run the corner point detection for all images/tabs with the above configuration.
    e) Check the result, for example by using the "repaint grid"-icon at the top of the table. The detected corners are then linked by lines to their horizontal and vertical neighbors, so that wrongly identified points can easily be spotted. Either
    i) select these points in the table on the right-hand side and click on the "delete-point"-icon to remove them, or
    ii) adjust the parameters as in 3.b) or 3.c) in order to repeat the fully automatic detection.
    The last resort for really bad images is manual corner point localization. For this purpose, select a corner point in the table on the right-hand side and then click on the corresponding point in the image. However, this kind of adjustment is only very rarely needed.
  4. Save data.
    Save the detected corner points for all the images/tabs by clicking on the "save-points-of-all-tabs"-icon.
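
If you need the hand-eye calibration, the pose files can be written with a few lines of Matlab. The following is only a minimal sketch: the file name "pose01.coords" and the plain whitespace-separated text layout are illustrative assumptions; check the DLR CalDe documentation for the exact ".coords" format and naming it expects.

% Hypothetical TCP-to-robot-base pose belonging to one calibration image;
% replace these values with the pose reported by your robot controller.
R = eye(3);                                  % rotation part
t = [100; 250; 300];                         % translation part
T_tcp_base = [R t];                          % 3x4 transformation matrix
dlmwrite('pose01.coords', T_tcp_base, ' ');  % assumed whitespace-separated text layout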

[Figure: CalDe GUI 1]
[Figure: CalDe GUI 2]

DLR CalLab

  1. Preparation.
    Run DLR CalDe or any other similar program to detect and identify features of a particular calibration object.
  2. Load data.
    Load the PNG image files as in DLR CalDe. The application is flexible with respect to the file names and the number of files used. At least three images (or stereo image pairs) are required. You may select more than one file at a time by holding the SHIFT key and selecting the first and the last file, or by holding the CTRL key and selecting each file. Then load the corresponding points files for the images, as produced by DLR CalDe.
  3. 1st Calibration Stage.
    The general settings normally do not need to be modified in a standard calibration case; documentation on them is pending. If you are ready for the parameter estimation (check the text output), proceed with the 1st calibration stage. The intrinsic camera parameters (including the camera-#1 to camera-#n transformation(s) in the case of a stereo camera rig) are estimated. At first, an initial estimate is computed from homographies (refer to the work of Zhang or of Sturm and Maybank); this estimate is then numerically optimized. After that, the point reprojection errors are overlaid on the images. You may browse through the images in order to look for mistakenly detected points; for this, it is useful to hide the actual image (click on 'switch monochrome image') and to enlarge the error arrows (click on 'error zoom'). In addition, you may use the histograms to rapidly find mistakenly detected points. Repeat this 1st calibration stage if you removed any points.
  4. 2nd Calibration Stage.
    The 2nd calibration stage supplies the TCP-to-camera transformation, in other words the hand-eye transformation. This transformation is not always required by the user and can only be estimated in relation to the TCP-to-robot-base transformations for every calibration image; these transformations were already read in DLR CalDe and embedded into the points files. DLR CalLab offers different algorithms for estimating this transformation ("Settings->General settings"); the underlying hand-eye relation is sketched after this list:
    a) On the one hand, you may use the method of Strobl and Hirzinger [Strobl06], which minimizes errors in the erroneous world-to-TCP transformation. Here we use a particular implementation where the residual errors in distance are assumed to lie mainly at the top of the manipulator - refer to the paper. The algorithm uses a linear least-squares solution for the first estimate.
    b) On the other hand, you may minimize the feature projection errors. This is a sound solution only in the case of highly noisy cameras with very low image resolution.
  5. 3rd Calibration Stage.
    The 3rd calibration stage merely verifies the correct estimation of the preceding stages and furthermore produces accuracy estimates for the positioning device (e.g. robotic manipulator, infrared tracking system, etc.) responsible for the world-to-TCP measurements.
  6. Save data.
    "File->Save" saves the calibration results in the desired output format. For information on these formats refer to the text in "Help->Documentation" in DLR CalLab.

[Figure: CalLab]

Application of the calibration results for common computer vision tasks

In order to clarify the camera model used by DLR CalLab, we next briefly demonstrate in Matlab(R) code some common computer vision tasks such as undistortion, pixel reprojection, and stereo ray intersection.
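
As a reference for the routines below, here is a minimal sketch of the forward camera model they imply: pinhole projection, radial distortion with the coefficients k1 and k2 applied to the normalized coordinates, and finally the (upper-triangular) camera matrix A. The function name project_point is ours; A, k1 and k2 come from the calibration result.

function pixel = project_point(P_camera, A, k1, k2)
% Forward camera model: map a 3D point in camera coordinates to its
% distorted pixel coordinates.
x  = P_camera(1)/P_camera(3);        % pinhole-normalized coordinates
y  = P_camera(2)/P_camera(3);
r2 = x^2 + y^2;                      % squared radial distance
d  = 1 + k1*r2 + k2*r2^2;            % radial distortion factor
pixel = [A(1,1)*x*d + A(1,2)*y*d + A(1,3); ...
         A(2,2)*y*d + A(2,3)];       % distorted pixel coordinates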

Image point undistortion

On the basis of both the camera calibration files and some particular image coordinates it is possible to undistort these distorted coordinates as follows:

function undistorted = undistort(distorted,A,k1,k2)

% Starting out, we temporarily take the original distorted image points
% as undistorted, distort them, and compute the residuals, which are then
% subtracted from the temporary undistorted points so that their distorted
% counterparts eventually equal the original distorted image points.

und_tmp = distorted;

% inits
distortion_error = [1;1]; i=1; optim_done=0;

while (~optim_done)

    % pixel offsets with respect to the principal point
    u = und_tmp(1) - A(1,3);
    v = und_tmp(2) - A(2,3);
    % pinhole-normalized coordinates (y, and the squares x^2, y^2)
    y = v/A(2,2);
    x2 = ( (u-A(1,2)*y) / A(1,1) )^2;
    y2 = y^2;
    % apply the radial distortion model to the current undistorted guess
    dis_tmp = und_tmp + [k1*u.*(x2+y2) + k2*u.*(x2+y2).^2; ...
                         k1*v.*(x2+y2) + k2*v.*(x2+y2).^2];
    distortion_error(:,i) = dis_tmp - distorted;
    und_tmp = und_tmp - distortion_error(:,i);

    if (sum(distortion_error(:,i).^2)<1e-2) % dis_error < 1e-1 pixels

      optim_done=1;
      res_dis_error=sqrt(sum(distortion_error(:,i).^2))
    else
      i=i+1; % 4 iterations should suffice
    end

end
undistorted = und_tmp;

Or, considerably faster: use the Jacobian between the undistorted normalized coordinates and the residual error in the distorted image coordinates. In addition, if you proceed vector-wise over all points at once, it is much faster still (thanks to Diego):

function [undistorted,n_it] = undistort_(distorted,A,k1,k2)

% from image pixels to pinhole-centered normalized distances
xd_normalized = distorted(1,:) - A(1,3);
yd_normalized = distorted(2,:) - A(2,3);
yd_normalized = yd_normalized/A(2,2);
xd_normalized = ( (xd_normalized-A(1,2)*yd_normalized) / A(1,1) );
xd_normalized2 = xd_normalized.^2;
yd_normalized2 = yd_normalized.^2;
% we use them as temporary desired undistorted normalized values
xu_normalized = xd_normalized;
yu_normalized = yd_normalized;

% inits
distortion_error = [1;1]; i=1; optim_done=0;

while (~optim_done)

    % radial distortion
    r2 = (xu_normalized.^2+yu_normalized.^2);
    d_r = (1+k1*r2+k2*r2.^2);

    % distance in normalized coos between desired and actual distorted coos
    dx_normalized = xu_normalized.*d_r - xd_normalized;
    dy_normalized = yu_normalized.*d_r - yd_normalized;
    % Goal: minimize them by refining the undistorted coos.
    % Jacobian of the latter distances w.r.t. the desired undistorted normalized coos:
    dD1dx = d_r + xu_normalized.^2 .* (2*k1+4*k2*r2);
    dD1dy = xu_normalized.*yu_normalized.*(2*k1+4*k2*r2);
    dD2dx = dD1dy;
    dD2dy = d_r + yu_normalized.^2 .* (2*k1 + 4*k2*r2);
    % Linear equation:
    % dxy_normalized + J * delta_xy = [0;0] => delta_xy = -J^(-1)*dxy_normalized
    % J^(-1) = 1/det(J) * [dD2dy -dD1dy; -dD2dx dD1dx]
    det_J = dD2dy.*dD1dx - dD1dy.*dD2dx;
    delta_x = -( dD2dy.*dx_normalized - dD1dy.*dy_normalized)./det_J;
    delta_y = -(-dD2dx.*dx_normalized + dD1dx.*dy_normalized)./det_J;
    % Modifies the undistorted normalized coos.
    xu_normalized = xu_normalized + delta_x;
    yu_normalized = yu_normalized + delta_y;

    % This is only necessary for the termination condition:
    r2 = (xu_normalized.^2+yu_normalized.^2); d_r = (1+k1*r2+k2*r2.^2);
    redistorted(1,:) = A(1,1) .* xu_normalized.*d_r + A(1,2) .* yu_normalized.*d_r + A(1,3);
    redistorted(2,:) = A(2,2) .* yu_normalized.*d_r + A(2,3);

    % We stop once the (biggest, if vector-wise) pixel error is smaller than 0.1.
    if (max((distorted(1,:)-redistorted(1,:)).^2 + ...
            (distorted(2,:)-redistorted(2,:)).^2) < 1e-2) % max dis_error < 1e-1 pixels

      optim_done=1;

    else

      i=i+1; % 2-3 iterations should suffice

    end

end

% Final undistorted coos in the image.
n_it=i;
undistorted(1,:) = A(1,1) .* xu_normalized + A(1,2) .* yu_normalized + A(1,3);
undistorted(2,:) = A(2,2) .* yu_normalized + A(2,3);
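
As a quick usage example with made-up calibration values (the numbers below are purely illustrative; use your own A, k1 and k2 from DLR CalLab), several points can be undistorted at once by passing them as columns:

% Hypothetical camera matrix (in pixels) and radial distortion coefficients:
A  = [1000 0 640; 0 1000 480; 0 0 1];
k1 = -0.2;  k2 = 0.05;
% Three distorted pixel coordinates, one point per column:
distorted = [100 640 1200; 80 480 900];
[undistorted, n_it] = undistort_(distorted, A, k1, k2)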

Only now are we able to reproject the undistorted image points to camera rays and, for instance, perform stereo triangulation.
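
A minimal sketch of this back-projection (the same computation that opens the triangulation routine below; the function name pixel_to_ray is ours):

function ray = pixel_to_ray(und_pixel, A)
% Back-project an (already undistorted) pixel to its viewing ray in camera
% coordinates; every point on the ray can be written as lambda*ray, lambda > 0.
u = und_pixel(1) - A(1,3);
v = und_pixel(2) - A(2,3);
y = v/A(2,2);
x = (u - A(1,2)*y) / A(1,1);
ray = [x; y; 1];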

Stereo triangulation from image points

function point_camera = triangulate(und_upper, und_lower, A, A_stereo, T_cam1_cam2)

% get normalized coordinates for camera #1
u = und_upper(1) - A(1,3);
v = und_upper(2) - A(2,3);
y_upper = v/A(2,2);
x_upper = ( (u-A(1,2)*y_upper) / A(1,1) );

% get normalized coordinates for camera #2
u = und_lower(1) - A_stereo(1,3);
v = und_lower(2) - A_stereo(2,3);
y_lower = v/A_stereo(2,2);
x_lower = ( (u-A_stereo(1,2)*y_lower) / A_stereo(1,1) );

% linear least squares solution for triangulation
% with the rigid body motion constraint T_cam1_cam2
% (in the form: M * x = m)
%

M = [[x_upper;y_upper;1] -T_cam1_cam2(1:3,1:3)*[x_lower;y_lower;1]];
m = T_cam1_cam2(1:3,4);
% linear least-squares solution via the SVD of M
[U,S,V]=svd(M);
m_=U'*m;                      % project m onto the left singular vectors
sol = V*(m_(1:2)./diag(S));   % sol = depths along ray #1 and ray #2

%distance_to_camera = sum(sol)/2
distance_to_camera_1 = sqrt(sum(([x_upper;y_upper;1]*sol(1)).^2))
distance_to_camera_2 = sqrt(sum(([x_lower;y_lower;1]*sol(2)).^2))

% [[x_upper;y_upper;1]*sol(1);1]
% T_cam1_cam2*([[x_lower;y_lower;1]*sol(2);1])

% admittedly this solution is not optimal, only an approximation (midpoint of the two rays)...
point_camera = ( [[x_upper;y_upper;1]*sol(1); 1] + ...
T_cam1_cam2*([[x_lower;y_lower;1]*sol(2);1]) ) / 2
point_camera(4)=1;

err = ( [[x_upper;y_upper;1]*sol(1);1] - ...
        T_cam1_cam2*([[x_lower;y_lower;1]*sol(2);1]) );
euclidean_error = sqrt(sum(err(1:3).^2))
if euclidean_error>1e-2
  disp('The two camera rays do not really intersect (within one centimeter)')
  return
end
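
Finally, a synthetic usage example: the numbers below do not come from a real calibration but are constructed such that the two viewing rays intersect exactly at a point one metre in front of the upper camera, with a 0.1 m baseline between the two cameras:

% Hypothetical, self-consistent stereo configuration (illustration only):
A           = [1000 0 640; 0 1000 480; 0 0 1];  % upper camera matrix
A_stereo    = [1010 0 650; 0 1010 470; 0 0 1];  % lower camera matrix
T_cam1_cam2 = [eye(3) [0; 0.1; 0]; 0 0 0 1];    % lower camera pose in upper camera coordinates
% Already undistorted pixel coordinates of the same scene point:
und_upper = [640; 480];
und_lower = [650; 369];
point_camera = triangulate(und_upper, und_lower, A, A_stereo, T_cam1_cam2)
% expected result: approximately [0; 0; 1; 1]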

