In a triangulation sensor with two viewpoints, two types of occlusion occur. If parts of the object are hidden in both viewpoints due to object self-occlusion, we speak of object occlusions, which cannot be resolved from these viewpoints. If a surface region is visible in one viewpoint but not in the other, we speak of a shadow occlusion. These regions have a shadow-like appearance of undefined disparity values, since the occlusions at one view cast a shadow on the object as seen from the other view.
Shadow occlusions are in fact detected by the uniqueness constraint discussed in section 7.1. One way to avoid shadow occlusions is to incorporate a symmetrical multi-viewpoint matcher, as proposed in this contribution. Points that are shadowed in the right view are normally visible in the left view, and vice versa. Exploiting the up- and down-links therefore resolves most of the shadow occlusions, as sketched below.
A helpful measure in this context is the visibility V, which defines, for a pixel in a given view, the maximum number of possible correspondences in the sequence. A pixel with V = 0 is caused by a shadow occlusion, whereas V ≥ 1 allows a depth estimate.
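A minimal sketch of how such a visibility count might be accumulated is given below. The per-view validity maps match_valid and the function names are illustrative assumptions, not part of the sensor description; the actual matcher may derive V differently.

import numpy as np

def visibility(match_valid):
    # match_valid: list of H x W boolean maps, one per other view of the
    # sequence, True where a valid correspondence for the pixel exists.
    return np.sum(np.stack(match_valid, axis=0), axis=0)

def is_shadow_occluded(V):
    # V = 0: no view of the sequence confirms the pixel, i.e. a shadow occlusion.
    # V >= 1: at least one up- or down-link allows a depth estimate.
    return V == 0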