
Plenoptic modeling and rendering

We use the calibrated cameras to create a scene model for visualization. In [111] this is done by plenoptic modeling. The appearance of a scene is described through all light rays (2D) that are emitted from every 3D scene point, generating a 5D radiance function. Recently two equivalent realizations of the plenoptic function were proposed in the form of the lightfield [99] and the lumigraph [54]. They handle the case where an object surface is observed within a transparent medium, so that the radiance is constant along a ray and the plenoptic function reduces to four dimensions: the radiance is represented as a function of the light rays passing through the scene.

To create such a plenoptic model for real scenes, a large number of views is taken. These views can be considered as a collection of light rays with associated color values; they are discrete samples of the plenoptic function. The light rays which are not recorded have to be interpolated from the recorded ones, taking additional knowledge about physical restrictions into account. Often, real objects are assumed to be Lambertian, meaning that a point of the object has the same radiance value in all possible directions. This implies that two viewing rays have the same color value if they intersect at a surface point. If specular effects occur, this is no longer true; two viewing rays then have similar color values if their directions are similar and if their point of intersection is near the real scene point from which their color value originates.

To render a new view, we suppose a virtual camera looking at the scene and determine those recorded viewing rays which are nearest to the rays of this camera. The nearer a recorded ray is to a required viewing ray, the greater its contribution to the color value.
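The interpolation scheme described above can be sketched in code. The following is a minimal illustration, not the implementation used here: it assumes a 4D light field already resampled onto a regular grid L[u, v, s, t] (a two-plane parameterization), and reconstructs the color of a query ray by quadrilinear interpolation over its 16 nearest recorded rays, so that nearer rays receive larger weights. The array and function names are illustrative.

```python
import numpy as np

def render_ray(lightfield, u, v, s, t):
    """Look up a viewing ray in a regularly sampled 4D light field
    L[u, v, s, t] by quadrilinear interpolation: each of the 16
    neighbouring recorded rays contributes with a weight that grows
    as its grid coordinates approach those of the query ray."""
    coords = [u, v, s, t]
    lo = [int(np.floor(c)) for c in coords]          # lower grid corner
    frac = [c - l for c, l in zip(coords, lo)]       # fractional offsets
    color = 0.0
    for corner in range(16):                         # 2^4 neighbours
        w = 1.0
        idx = []
        for d in range(4):
            bit = (corner >> d) & 1
            # weight: frac if we step to the upper sample, 1-frac otherwise
            w *= frac[d] if bit else (1.0 - frac[d])
            idx.append(min(lo[d] + bit, lightfield.shape[d] - 1))
        color += w * lightfield[tuple(idx)]
    return color

# toy example: an 8x8x8x8 scalar (grey-value) light field
rng = np.random.default_rng(0)
L = rng.random((8, 8, 8, 8))
c = render_ray(L, 2.5, 3.0, 4.25, 1.75)
```

At integer coordinates the query coincides with a recorded ray and the interpolation returns that sample exactly; in between, the weights fall off linearly with distance in each of the four ray parameters.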



Subsections
Marc Pollefeys 2000-07-12