Modeling Hair from Multiple Views
Prof. Long Quan (Hong Kong University of Science and Technology)
We present a multi-view approach to capturing the hair geometry of a
person. The approach is natural and flexible, and it provides strong,
accurate geometric constraints on the observables. Hair fibers are
synthesized from local image orientations. Each synthesized fiber
segment is first validated by a three-view geometric constraint, then
optimally triangulated from all views in which it is visible. The hair
volume and the visibility of the synthesized fibers can also be reliably
estimated from the multiple views.
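The validate-then-triangulate pipeline can be illustrated with standard multi-view geometry. The sketch below is not the paper's exact formulation: it uses plain linear (DLT) triangulation, a made-up three-camera rig with identity intrinsics, and an arbitrary reprojection tolerance. It triangulates a point on a hypothetical fiber segment from two views, checks consistency in a third view (a common stand-in for a three-view constraint), and then re-triangulates from all views.

```python
import numpy as np

def project(P, X):
    """Project 3D point X through a 3x4 camera matrix P; return (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(Ps, pts):
    """Linear (DLT) triangulation of one point from two or more views.

    Each view contributes two rows, u*P[2] - P[0] and v*P[2] - P[1],
    to a homogeneous system A X = 0, solved via SVD.
    """
    A = []
    for P, (u, v) in zip(Ps, pts):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical rig: three calibrated cameras translated along x,
# all looking down +z with identity intrinsics.
Ps = [np.hstack([np.eye(3), np.array([[-b], [0.0], [0.0]])])
      for b in (0.0, 0.5, 1.0)]

X_true = np.array([0.2, -0.1, 4.0])   # a point on a fiber segment
pts = [project(P, X_true) for P in Ps]

# Validation (sketch): triangulate from two views, then accept the
# point only if it reprojects consistently into the third view.
X2 = triangulate(Ps[:2], pts[:2])
err3 = np.linalg.norm(project(Ps[2], X2) - pts[2])
assert err3 < 1e-3   # arbitrary three-view consistency tolerance

# Final step: least-squares triangulation from all visible views.
X_all = triangulate(Ps, pts)
print(np.round(X_all, 6))
```

With noise-free synthetic projections, the all-view triangulation recovers the input point; with real images, the third-view reprojection error is what filters out spurious fiber matches before the final estimate.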