

Robust algorithm

The most important problem with the previous approaches is that they cannot cope with outliers. If the set of matches is contaminated with even a small number of outliers, the result will probably be unusable. This is typical for all least-squares approaches (even non-linear ones): the quadratic penalty, which is optimal for Gaussian noise, allows a single outlier that is very far from the true solution to completely bias the final result.

The difficulty is that it is very hard to segment the set of matches into inliers and outliers before the correct solution is known. The outliers can have such a disastrous effect on the least-squares computation that almost no points would be classified as inliers (see Torr [196] for a more in-depth discussion of this problem).

A solution to this problem was proposed by Fischler and Bolles [48] (see also [162] for more details on robust statistics). Their algorithm is called RANSAC (RANdom SAmple Consensus) and it can be applied to a wide range of problems.

Let us take a subset of the data and compute a solution from it. If we are lucky and there are no outliers in our subset, the solution will be correct and we will be able to correctly segment the data into inliers and outliers. Of course, we cannot rely on being lucky. However, by repeating this procedure with randomly selected subsets, we should eventually end up with the correct solution. The correct solution is identified as the one with the largest support (i.e. the largest number of inliers).

Matches are considered inliers if they are not more than $1.96\sigma$ pixels away from their epipolar lines, with $\sigma$ characterizing the amount of noise on the position of the features (for Gaussian noise, $95\%$ of the inliers fall within this threshold). In practice $\sigma$ is hard to estimate and one can simply set it to 0.5 or 1 pixel, for example.
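
As an illustration, a minimal sketch of this RANSAC loop could look as follows in Python; \texttt{seven\_point\_F} (returning the one to three real solutions of the 7-point algorithm) and \texttt{epipolar\_distance} are hypothetical placeholders standing in for the methods of the previous sections:

\begin{verbatim}
import random

def ransac_fundamental(matches, n_samples, seven_point_F,
                       epipolar_distance, sigma=1.0):
    """Minimal RANSAC loop for the fundamental matrix (a sketch).
    `matches` is a list of correspondences (x, x'); `seven_point_F`
    and `epipolar_distance` are hypothetical helpers for the 7-point
    algorithm and the point-to-epipolar-line distance."""
    threshold = 1.96 * sigma                 # 95% bound for Gaussian noise
    best_F, best_inliers = None, []
    for _ in range(n_samples):
        sample = random.sample(matches, 7)   # minimal 7-point subset
        for F in seven_point_F(sample):      # 1 to 3 real solutions
            inliers = [m for m in matches
                       if epipolar_distance(F, m) < threshold]
            if len(inliers) > len(best_inliers):  # largest support wins
                best_F, best_inliers = F, inliers
    return best_F, best_inliers
\end{verbatim}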

The remaining question is of course how many samples should be taken. Ideally one would try every possible subset, but this is usually computationally infeasible, so one chooses the number of samples $m$ high enough to give a probability $\Gamma$ in excess of $95\%$ that at least one good sample was selected. The expression for this probability is [162]

\begin{displaymath}
\Gamma = 1 - \left(1 - (1 - \epsilon)^p\right)^m \, ,
\end{displaymath} (4.10)

where $\epsilon$ is the fraction of outliers and $p$ the number of features in each sample. In the case of the fundamental matrix, $p=7$. Table 4.2 gives the required number of samples for a few values of $\epsilon$. The algorithm can easily deal with up to $50\%$ outliers; above this, the required number of samples becomes very high.

Table 4.2: The number of 7-point samples required to ensure $\Gamma \geq 0.95$ for a given fraction of outliers.
fraction of outliers $\epsilon$:  5%  10%  20%  30%  40%   50%   60%    70%     80%
number of samples $m$:            3   5    13   35   106   382   1827   13692   233963
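
Solving Equation (4.10) for $m$ given $\Gamma$, $\epsilon$ and $p$ is straightforward; the following small Python sketch reproduces the entries of Table 4.2 (up to rounding for the largest outlier fractions; the function name is purely illustrative):

\begin{verbatim}
import math

def required_samples(eps, p=7, gamma=0.95):
    """Number of samples m such that the probability of having drawn
    at least one outlier-free p-point sample exceeds gamma, for an
    outlier fraction eps (Eq. (4.10) solved for m)."""
    good = (1.0 - eps) ** p   # probability that one sample is all-inlier
    return math.ceil(math.log(1.0 - gamma) / math.log(1.0 - good))

print(required_samples(0.5))  # -> 382, as in Table 4.2
\end{verbatim}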


One approach is to decide a priori which level of outlier contamination the algorithm should be able to deal with and to set the number of samples accordingly (e.g. coping with up to $50\%$ outliers implies 382 samples).

Often a lower percentage of outliers is present in the data and the correct solution will already have been found after far fewer samples. Suppose that sample 57 yields a solution with $60\%$ consistent matches; the outlier fraction is then at most $40\%$, for which 106 samples suffice, so one could decide to stop at sample 106, being sure (with at least $95\%$ confidence) not to have missed a bigger set of inliers.
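
This adaptive termination is easy to implement; a possible sketch, again based on Equation (4.10) (the function name is purely illustrative):

\begin{verbatim}
import math

def adaptive_n_samples(n_matches, best_support, p=7, gamma=0.95):
    """Re-estimate the required number of samples from the largest
    support found so far: the outlier fraction is at most
    1 - best_support/n_matches, so Eq. (4.10) gives an updated bound."""
    eps = 1.0 - best_support / n_matches
    good = (1.0 - eps) ** p
    if good >= 1.0:            # no outliers at all: one sample suffices
        return 1
    return math.ceil(math.log(1.0 - gamma) / math.log(1.0 - good))

# A sample with 60% consistent matches allows stopping at sample 106:
print(adaptive_n_samples(100, 60))   # -> 106
\end{verbatim}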

Once the set of matches has been correctly segmented into inliers and outliers, the solution can be refined using all the inliers. The procedure of Section 4.3.3 can be used for this. Table 4.3 summarizes the robust approach to the determination of the two-view geometry.

Table 4.3: Overview of the two-view geometry computation algorithm.
\begin{itemize}
\item Step 1: compute a set of potential matches
\item Step 2: repeat
  \begin{itemize}
  \item take a random sample of 7 matches
  \item compute the solution(s) for F
  \item determine the number of inliers
  \end{itemize}
  until the probability of having missed a larger set of inliers drops below 5\%
\item Step 3: refine F based on all inliers
\item Step 4: use F to look for additional matches
\item Step 5: refine F based on all correct matches
\end{itemize}


Once the epipolar geometry has been computed, it can be used to guide the matching towards additional matches. At this point only features that are in epipolar correspondence should be considered for matching: for a corner in one image, only the corners of the other image that lie within a small band (1 or 2 pixels) around the corresponding epipolar line are considered. The initial coordinate interval that was used for matching can now be relaxed. By reducing the number of potential matches, the ambiguity is reduced and a number of additional matches are obtained. These can not only be used to refine the solution even further, but will also be very useful later on when solving the structure-from-motion problem, where it is important that tracked features survive as long as possible in the sequence.
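
As an illustration, the epipolar test for candidate corners could be implemented as in the following sketch (homogeneous 3-vector pixel coordinates and a numpy fundamental matrix F are assumed; the function name is purely illustrative):

\begin{verbatim}
import numpy as np

def epipolar_candidates(F, x, corners2, band=2.0):
    """Keep only the corners of the second image that lie within
    `band` pixels of the epipolar line l' = F x of a corner x in the
    first image.  Points are homogeneous 3-vectors (numpy arrays)."""
    l = F @ x                        # epipolar line in the second image
    norm = np.hypot(l[0], l[1])      # scale line to pixel distances
    return [xp for xp in corners2
            if abs(l @ xp) / (norm * abs(xp[2])) < band]
\end{verbatim}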

