
Scanning

Each segment of the full view is split into several partial scans, each starting at the corresponding landmark. The views within each partial scan are registered with the help of a registration aid. For each view, a sequence of images is taken as follows.

[Image: a scanned view showing natural texture and shading, a superimposed random texture projected from a slide projector, and registration aids on either side used to register with the previous and the next view.]

The points on the registration aid are uniquely indexed, but neither their position within the target nor the target's geometry, position, or orientation need be known to recover the transformations. The indices are decoded by an automatic detection procedure, the 3D points are then reconstructed by stereo reconstruction, and the 3D rigid transformation between the two sets of 3D points (before and after the motion) is recovered by an absolute orientation procedure.
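
A minimal sketch of one standard absolute-orientation procedure (the SVD-based closed-form solution of Arun et al.); the text does not say which specific method was used, so this choice, and the function name, are illustrative assumptions:

    import numpy as np

    def absolute_orientation(A, B):
        # Recover the rigid transform (R, t) mapping point set A onto
        # point set B (both N x 3, rows matched by the decoded
        # registration-aid indices), minimizing sum ||R a + t - b||^2.
        # NOTE: a sketch; the page does not name the exact procedure.
        ca, cb = A.mean(axis=0), B.mean(axis=0)    # centroids
        H = (A - ca).T @ (B - cb)                  # 3 x 3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        # correct for a possible reflection so that det(R) = +1
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cb - R @ ca
        return R, t                                # B ~ A @ R.T + t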

Stereo Matching and Reconstruction

The artificially textured scene images were processed in pairs as follows:

  1. Rectification standardizes (reprojects) the 512 x 512 images from any camera pair so that the 256 x 256 rectified images appear to come from a left-right pair of non-verging, distortion-free cameras.
    [Images: the 512 x 512 images before rectification and the 256 x 256 images after rectification.]
  2. Matching recovers an integer disparity map. Matching is left-right/right-left symmetric and uses the forbidden-zone constraint to select matches from high-correlation candidates; a modified normalized cross-correlation is the matching statistic. Matching is done pairwise (and can therefore run in parallel). Only 4 of the 6 possible image pairs were used for matching (a simplified sketch of symmetric correlation matching follows this list).
  3. Sub-pixel disparity correction is then computed using an affine model of the apparent distortion between the left and right images. The correction is estimated from image derivatives and image values by a least-squares (LS) estimator and is invariant to scaling and offset in image intensity. This step gives the reconstruction its high precision and geometric accuracy (a simplified sketch follows this list).
    [Image: sub-pixel disparity map; holes and occluded areas are black.]
  4. Fusion re-maps the disparity maps from all four pairs in a view to the disparity space of a selected (always the first) pair. The same selection procedure that was used for choosing good matches is applied again. The result is a much cleaner disparity map with fewer holes and fewer outliers; this step effectively fuses all the disparity maps.
    [Image: fused disparity map.]
  5. Reconstruction converts the fused disparity map to 3D points in Euclidean space (a sketch of the triangulation, together with the color lookup of the next step, follows this list).
  6. Point coloring assigns color to the reconstructed points by reprojecting them onto the color-camera image plane and retrieving the appropriate color with bilinear interpolation.
  7. No manual editing was done at any stage of the process: no outlier removal, no filtering, no additional registration.
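
A simplified sketch of the symmetric matching idea of step 2, under stated assumptions: plain normalized cross-correlation stands in for the modified NCC statistic, and a mutual best-match (left-right, right-left) check stands in for the full forbidden-zone selection; window size and disparity range are arbitrary illustrative values:

    import numpy as np

    def ncc(a, b):
        # Normalized cross-correlation of two equal-size windows;
        # invariant to intensity offset and scale.
        a = a - a.mean()
        b = b - b.mean()
        d = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / d if d > 0 else -1.0

    def match_row(left, right, y, w=3, dmax=32):
        # Integer disparities along one rectified row y (w <= y < H - w),
        # kept only where the left-to-right and right-to-left matches agree.
        W = left.shape[1]
        disp = np.full(W, -1)
        for x in range(w + dmax, W - w):
            lwin = left[y-w:y+w+1, x-w:x+w+1]
            scores = [ncc(lwin, right[y-w:y+w+1, x-d-w:x-d+w+1])
                      for d in range(dmax + 1)]
            d = int(np.argmax(scores))
            xr = x - d                        # matched right-image column
            emax = min(dmax, W - w - 1 - xr)  # keep windows inside the image
            rwin = right[y-w:y+w+1, xr-w:xr+w+1]
            back = [ncc(rwin, left[y-w:y+w+1, xr+e-w:xr+e+w+1])
                    for e in range(emax + 1)]
            if int(np.argmax(back)) == d:     # symmetric consistency check
                disp[x] = d
        return disp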
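
A sketch of the idea behind step 3, reduced to a one-parameter (pure-shift) least-squares correction from image derivatives; the actual estimator uses a full affine geometric model and is additionally invariant to intensity scale and offset, which this simplification omits:

    import numpy as np

    def subpixel_correction(left, right, y, x, d, w=3):
        # Refine the integer disparity d at pixel (y, x) by linearizing
        # right(x - d - dd) ~ R - dd * dR/dx over a (2w+1)^2 window and
        # solving for dd in the least-squares sense.
        # NOTE: a simplified stand-in for the affine LS estimator.
        L = left[y-w:y+w+1, x-w:x+w+1].astype(float)
        R = right[y-w:y+w+1, x-d-w:x-d+w+1].astype(float)
        Rx = np.gradient(R, axis=1)        # horizontal image derivative
        den = (Rx * Rx).sum()
        dd = (Rx * (R - L)).sum() / den if den > 0 else 0.0
        return d + dd                      # sub-pixel disparity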
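
A sketch of steps 5 and 6 for a rectified, non-verging pair, where depth follows Z = f*b/d; the calibration values f (focal length in pixels), b (baseline), and (cx, cy) (principal point) are placeholders for the calibration actually used, and the reprojection into the color camera is assumed done elsewhere:

    import numpy as np

    def reconstruct(disp, f, b, cx, cy):
        # Convert a fused disparity map (holes marked <= 0) into
        # 3D points in the Euclidean frame of the rectified pair.
        ys, xs = np.nonzero(disp > 0)
        Z = f * b / disp[ys, xs].astype(float)   # depth from disparity
        X = (xs - cx) * Z / f
        Y = (ys - cy) * Z / f
        return np.column_stack([X, Y, Z])

    def bilinear_color(img, u, v):
        # Fetch a color at the non-integer position (u, v) = (column,
        # row) where a 3D point reprojects into the color-camera image.
        u0, v0 = int(np.floor(u)), int(np.floor(v))
        a, c = u - u0, v - v0
        return ((1 - a) * (1 - c) * img[v0, u0] +
                a * (1 - c) * img[v0, u0 + 1] +
                (1 - a) * c * img[v0 + 1, u0] +
                a * c * img[v0 + 1, u0 + 1])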

Radim Sara
Last modified: Fri Nov 7 17:12:51 MET 1997