Shape, illumination and material properties from surface appearance

Jan-Olof Eklundh, Peter Nillius and Jesper Bruzelius (CVAP, NADA, KTH, Stockholm)

The appearance of a surface depends on its material, as well as on the illumination and the surface shape. The aim of this work is to develop a computational framework in which all these aspects are taken into account, allowing discrimination of materials as well as determination of qualitative shape and illumination properties.

We have investigated learning methods for discriminating material properties independently of viewing direction, using patches of the CUReT database as input. The signatures used are based on the statistics of the output of local luminance operators, originally proposed by Leung and Malik (2001). A recent method for computing such signatures, or textons, presented by Varma and Zisserman (2002), has served as a starting point for our work. They compute textons in 2D and use them to classify the 3D textures of the CUReT database. Their method performs classification by finding the smallest chi-square distance between measured and learned texton histograms. We have developed an alternative probabilistic method that determines the probability that a given texton distribution belongs to a given class. Results with this method are slightly inferior to those of Varma and Zisserman, but it is better suited for real-world scenes, where differently textured objects are juxtaposed. It can also be integrated with information and observations about the illumination and the surface, which is our long-term goal.

In our effort to analyze how the appearance of a surface depends on its shape and the illumination, we have developed a method for estimating the direction to the light source; see Nillius and Eklundh (2001). More specifically, the method estimates the projection of the direction to a single light source in an image, given that there exists a segment of an occluding contour of an object with locally Lambertian surface reflectance. In the algorithm, potential occluding contours are first picked out based on edge and color information.
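The per-contour estimation step can be illustrated with a minimal sketch. Assuming unit image-plane normals along an occluding contour (where the surface normal of a smooth object lies in the image plane) and a Lambertian model I = rho * max(0, n . L), the scaled projected light vector can be recovered by least squares. The function name and interface here are our own; the published method involves more than this linear step.

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Illustrative sketch: least-squares estimate of the projected light
    source direction from intensities along an occluding contour.

    At an occluding contour the surface normal lies in the image plane,
    perpendicular to the contour.  Under a Lambertian model,
    I_i = rho * max(0, n_i . L), so over the lit points the scaled light
    vector rho * L solves the linear system N x = I.
    """
    N = np.asarray(normals, dtype=float)      # (k, 2) unit normals in the image plane
    I = np.asarray(intensities, dtype=float)  # (k,) observed intensities
    lit = I > 0                               # keep only illuminated contour points
    x, *_ = np.linalg.lstsq(N[lit], I[lit], rcond=None)  # x = rho * L (projected)
    return x / np.linalg.norm(x)              # unit light direction in the image
```

For example, synthesizing intensities on a circular contour from a known light direction and feeding them back in recovers that direction up to numerical precision.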
For each contour the light source direction is estimated using a shading model. Finally, these estimates are fused by means of a Bayesian network. The probabilistic model allows for the possibility that the contours provided by the first step are not occluding, and classifies them as occluding or not at the end.

In our talk we will discuss these issues and also briefly indicate ongoing work towards integrating these methods into a single framework.

References

T. Leung and J. Malik. Representing and recognising the visual appearance of materials using three-dimensional textons. International Journal of Computer Vision, 43(1):29-44, 2001.
M. Varma and A. Zisserman. Classifying images of materials: achieving viewpoint and illumination independence. Proc. 7th ECCV, Vol. III, 255-271, 2002.
P. Nillius and J.-O. Eklundh. Automatic estimation of the projected light source direction. Proc. CVPR'01, Vol. I, 1076-1083, 2001.
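The texton-histogram classification discussed earlier can be sketched in a few lines. This is an illustrative toy version, not the published implementation: class names, the uniform class prior, and the multinomial likelihood used for the probabilistic variant are our own simplifying assumptions.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized texton histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def classify_nearest(query_hist, class_hists):
    """Nearest-neighbour classification in the style described above:
    pick the class whose learned texton histogram has the smallest
    chi-square distance to the measured one."""
    return min(class_hists, key=lambda c: chi_square_distance(query_hist, class_hists[c]))

def classify_probabilistic(texton_counts, class_hists, eps=1e-10):
    """Sketch of a probabilistic alternative: model each class as a
    multinomial over texton labels and return the posterior over classes
    for the observed texton counts (uniform prior assumed)."""
    log_post = {c: np.sum(texton_counts * np.log(h + eps))
                for c, h in class_hists.items()}
    m = max(log_post.values())                       # stabilize the exponentials
    unnorm = {c: np.exp(lp - m) for c, lp in log_post.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}     # posterior probabilities
```

Returning a posterior rather than a single nearest label is what makes the probabilistic variant easier to combine with other evidence, such as observations about illumination and surface shape.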