In this talk, I will present some of our recent results on inferring geometric, photometric, and semantic scene properties from a single image. I will first briefly describe our system for estimating the rough geometric surface layout of a scene as well as the camera viewpoint. I will show how this information, in turn, can be useful for modeling objects in the scene. Next, I will describe a very simple way of using the surface layout to estimate a rough illumination map for the scene. Finally, I will describe a new system that uses millions of unlabeled photographs from Flickr to capture the implicit semantic structure of an image.
Time permitting, I will also show applications of our methods to computer graphics.