Two-view Geometry Estimation Unaffected by a Dominant Plane

Ondrej Chum (CMP Prague, Czech Republic)

A RANSAC-based algorithm for robust estimation of epipolar geometry from point correspondences in the possible presence of a dominant scene plane is presented. The algorithm handles scenes with (i) all points in a single plane, (ii) a majority of points in a single plane and the rest off the plane, and (iii) no dominant plane. It is not required to know a priori which of the cases (i) -- (iii) occurs.

The algorithm exploits a theorem we proved: if five or more of seven correspondences are related by a homography, then there is an epipolar geometry consistent with the seven-tuple as well as with all correspondences related by that homography. This means that a seven-point sample consisting of two outliers and five inliers lying in a dominant plane produces an epipolar geometry that is wrong and yet consistent with a high number of correspondences. The theorem explains why RANSAC often fails to estimate epipolar geometry in the presence of a dominant plane. Rather surprisingly, the theorem also implies that RANSAC-based homography estimation is faster when drawing non-minimal samples of seven correspondences than minimal samples of four correspondences.
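
To make the degeneracy concrete, the following Python sketch (a minimal sketch, not the exact test used in the algorithm; the function names fit_homography_dlt, transfer_error and is_h_degenerate and the 3-pixel threshold are illustrative assumptions) flags a seven-point sample as homography-degenerate when at least five of its correspondences are consistent with a homography fitted to one of its four-point subsets:

    # Minimal sketch: flag a 7-point RANSAC sample as H-degenerate when 5 or
    # more of its correspondences fit a homography estimated from a 4-point
    # subset.  Illustration only, not the exact test from the talk.
    import itertools
    import numpy as np

    def fit_homography_dlt(x1, x2):
        """Estimate H from >= 4 correspondences (Nx2 arrays) via the DLT."""
        rows = []
        for (u, v), (up, vp) in zip(x1, x2):
            rows.append([-u, -v, -1, 0, 0, 0, u * up, v * up, up])
            rows.append([0, 0, 0, -u, -v, -1, u * vp, v * vp, vp])
        _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return Vt[-1].reshape(3, 3)

    def transfer_error(H, x1, x2):
        """One-way transfer error |H x1 - x2| in pixels."""
        x1h = np.hstack([x1, np.ones((len(x1), 1))])
        proj = (H @ x1h.T).T
        proj = proj[:, :2] / proj[:, 2:3]
        return np.linalg.norm(proj - x2, axis=1)

    def is_h_degenerate(x1, x2, threshold=3.0):
        """True if >= 5 of the 7 correspondences lie on a common homography.
        x1, x2: 7x2 arrays of corresponding image points."""
        assert len(x1) == len(x2) == 7
        for quad in itertools.combinations(range(7), 4):
            H = fit_homography_dlt(x1[list(quad)], x2[list(quad)])
            if np.sum(transfer_error(H, x1, x2) < threshold) >= 5:
                return True
        return False

Checking every four-point subset is affordable here, since a seven-tuple has only 35 of them.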


Can Two Specular Pixels Calibrate Photometric Stereo?

Ondrej Drbohlav (Heriot-Watt University, UK)

Lambertian photometric stereo with unknown light source parameters is ambiguous. Provided that the imaged object constitutes a surface, the ambiguity is represented by the group of Generalised Bas-Relief (GBR) transformations. We show that this ambiguity is resolved when specular reflection is present in two images taken under two different light source directions. We identify all configurations of the two directional lights that are singular and show that they can easily be tested for. While previous work used optimisation algorithms to apply the constraints implied by the specular reflectance component, we have developed a linear algorithm to achieve this goal. Our theory can be utilised to construct fast algorithms for the automatic reconstruction of smooth glossy surfaces.
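
The two facts the result rests on can be checked numerically with the minimal sketch below (an illustration, not the linear algorithm of the talk; the gbr helper, the chosen parameters and the random pseudo-normals are assumptions): Lambertian shading is invariant to any GBR transformation of the pseudo-normals and lights, whereas the specular half-angle constraint is not, which is why specular pixels pin the transformation down.

    # Numeric illustration of the GBR ambiguity and why specular pixels break
    # it; not the linear algorithm from the talk.
    import numpy as np

    rng = np.random.default_rng(0)

    def gbr(mu, nu, lam):
        """Generalised Bas-Relief transformation matrix."""
        return np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [mu,  nu,  lam]])

    # Pseudo-normals B (albedo-scaled normals, 3 x N) and lights S (3 x M).
    B = rng.normal(size=(3, 100))
    S = rng.normal(size=(3, 2))

    # Lambertian intensities are invariant under B -> G^{-T} B, S -> G S.
    G = gbr(0.3, -0.2, 1.5)
    I_orig = B.T @ S
    I_gbr = (np.linalg.inv(G).T @ B).T @ (G @ S)
    print(np.allclose(I_orig, I_gbr))   # True: shading alone cannot resolve the GBR

    # At a specular pixel the unit normal is the halfway vector of the light
    # and the viewing direction v = (0, 0, 1).
    v = np.array([0.0, 0.0, 1.0])
    s = S[:, 0] / np.linalg.norm(S[:, 0])
    n = (s + v) / np.linalg.norm(s + v)   # normal of a specular pixel

    # After a non-trivial GBR the transformed normal is, in general, no longer
    # the halfway vector of the transformed light and v, so specular pixels
    # constrain the GBR parameters.
    n_t = np.linalg.inv(G).T @ n
    h_t = G @ s / np.linalg.norm(G @ s) + v
    print(np.allclose(np.cross(n_t / np.linalg.norm(n_t),
                               h_t / np.linalg.norm(h_t)), 0))   # False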


The Rational Function Model for Fish-Eye Lens Distortion

Andrew Fitzgibbon (Microsoft Research Cambridge, UK)


Regular Polygon Detection

Gareth Loy (KTH Sweden, Sweden)

This talk describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five-dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. The remaining dimensions may then be recovered efficiently using maximum likelihood at the locations of the most likely polygons in the subspace, leading to an efficient algorithm. The a posteriori formulation also facilitates the inclusion of additional a priori information, enabling real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for noisy images to demonstrate stability.
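
A much-simplified sketch of the gradient-based voting idea follows (an illustration only, not the five-dimensional mixture formulation of the talk; the vote_centres function, the candidate radii and the threshold are assumptions): each strong-gradient pixel votes for candidate centres at distance r along its gradient direction, and the votes are accumulated in the collapsed (row, column, radius) space.

    # Simplified gradient-voting sketch for shape centres in the collapsed
    # (radius, row, col) space; illustration only.
    import numpy as np

    def vote_centres(image, radii, grad_threshold=0.1):
        """Each strong-gradient pixel votes at +/- r along its gradient
        direction for every candidate radius r."""
        gy, gx = np.gradient(image.astype(float))
        mag = np.hypot(gx, gy)
        acc = np.zeros((len(radii),) + image.shape)
        ys, xs = np.nonzero(mag > grad_threshold * mag.max())
        for k, r in enumerate(radii):
            for y, x in zip(ys, xs):
                dy, dx = gy[y, x] / mag[y, x], gx[y, x] / mag[y, x]
                for sign in (+1, -1):          # centre may lie on either side
                    cy = int(round(y + sign * r * dy))
                    cx = int(round(x + sign * r * dx))
                    if 0 <= cy < image.shape[0] and 0 <= cx < image.shape[1]:
                        acc[k, cy, cx] += 1
        return acc

    # Toy usage: a bright square of half-width 5 in a dark image.
    img = np.zeros((40, 40))
    img[15:26, 15:26] = 1.0
    acc = vote_centres(img, radii=[3, 5, 7])
    k, cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    print("radius =", [3, 5, 7][k], "centre =", (cy, cx))   # expect radius 5 near (20, 20)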

The detector is demonstrated in two applications: real-time road sign detection for on-line driver assistance, and feature detection, where it recovers stable features in rectilinear environments.


Modeling Hair from Multiple Views

Prof. Long Quan (Hong Kong University of Science and Technology)

We present a multi-view approach to capturing the hair geometry of a person. The approach is natural and flexible, and it provides strong and accurate geometric constraints on the observables. The hair fibers are synthesized from local image orientations. Each synthesized fiber segment is first validated by a three-view geometric constraint, then optimally triangulated from all visible views. The hair volume and the visibility of synthesized fibers can also be reliably estimated from multiple views.
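
The triangulation step can be illustrated with a standard linear (DLT) triangulation from all views in which a fibre point is visible (a generic sketch, not the optimal triangulation described in the talk; the toy camera matrices are synthetic):

    # Linear (DLT) triangulation of a point from all views in which it is
    # visible; generic illustration only.
    import numpy as np

    def triangulate(projections, points2d):
        """projections: list of 3x4 camera matrices; points2d: list of (u, v).
        Returns the 3D point minimising the algebraic error."""
        rows = []
        for P, (u, v) in zip(projections, points2d):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        _, _, Vt = np.linalg.svd(np.asarray(rows))
        X = Vt[-1]
        return X[:3] / X[3]

    # Toy usage: three cameras observing the point (0.1, -0.2, 3.0).
    X_true = np.array([0.1, -0.2, 3.0, 1.0])
    Ps = [np.hstack([np.eye(3), np.array([[t], [0.0], [0.0]])]) for t in (-0.5, 0.0, 0.5)]
    obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in Ps]
    print(triangulate(Ps, obs))   # approximately [0.1, -0.2, 3.0]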


Class-Specific Segmentation Using Layered Pictorial Structures

Prof. Philip H. S. Torr (Oxford Brookes University, UK)

We propose a novel method for recognizing and segmenting instances of an object category in images using the layered pictorial structures model. The model accounts for the effects of self-occlusion, which makes it particularly suitable for this task. An unsupervised method for learning the model from videos is presented.

Given an image, we match the model to localize the instance(s) of the object. The matching is made efficient by substantially improving the running time of match-score computation and belief propagation. This localization allows us to learn the distributions of RGB values for 'figure' and 'ground'. Using the learnt distributions, graph cuts are employed to efficiently segment the objects from the image.
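
The final step can be sketched as follows (a toy illustration; the isotropic Gaussian colour models, the uniform Potts weight and the networkx min-cut are simplifications assumed here, not the formulation of the talk): unary costs derived from learnt figure and ground colour likelihoods are combined with a smoothness term and minimised with an s-t cut.

    # Toy figure/ground segmentation: unary costs from colour likelihoods plus
    # a Potts smoothness term, solved as a minimum s-t cut.  Illustration only.
    import numpy as np
    import networkx as nx

    def segment(image, fg_mean, bg_mean, sigma=0.2, smoothness=1.0):
        """image: HxWx3 float array; returns a boolean HxW figure mask."""
        H, W, _ = image.shape
        # Unary terms: negative log of isotropic Gaussian colour likelihoods.
        fg_cost = np.sum((image - fg_mean) ** 2, axis=2) / (2 * sigma ** 2)
        bg_cost = np.sum((image - bg_mean) ** 2, axis=2) / (2 * sigma ** 2)

        G = nx.DiGraph()
        src, snk = "FG", "BG"
        for y in range(H):
            for x in range(W):
                p = (y, x)
                G.add_edge(src, p, capacity=float(bg_cost[y, x]))  # cut if p is labelled ground
                G.add_edge(p, snk, capacity=float(fg_cost[y, x]))  # cut if p is labelled figure
                for q in ((y + 1, x), (y, x + 1)):                 # 4-connected Potts terms
                    if q[0] < H and q[1] < W:
                        G.add_edge(p, q, capacity=smoothness)
                        G.add_edge(q, p, capacity=smoothness)

        _, (fg_side, _) = nx.minimum_cut(G, src, snk)
        mask = np.zeros((H, W), dtype=bool)
        for node in fg_side:
            if node != src:
                mask[node] = True
        return mask

    # Toy usage: a reddish square on a greenish background.
    img = np.zeros((20, 20, 3)) + [0.1, 0.8, 0.1]
    img[5:15, 5:15] = [0.9, 0.1, 0.1]
    mask = segment(img, fg_mean=np.array([0.9, 0.1, 0.1]), bg_mean=np.array([0.1, 0.8, 0.1]))
    print(mask[10, 10], mask[0, 0])   # True False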