Shedding Light on Light Fields

Oliver Bimber
(Johannes Kepler University Linz, Austria)

Images play an essential role in our lives. Photography and television have influenced generations like few other technologies, and both would be unimaginable without images. Advanced imaging systems and image processing methods are fundamental to many professions today; medical imaging is certainly a good example. Not least, images are also the final outcome of every visualization algorithm.

Digital images are two-dimensional matrices of pixels. Cameras are based on this notion: even though 3D scene points emit varying light rays in different directions, the lens and the sensor of a camera integrate them into a single pixel value. By doing this for all imaged scene points, we end up with nothing more than a 2D image -- having lost most of the scene information. Displays are based on this notion: pixels of raster-displayed images emit (more or less) the same amount of light in all directions -- giving us nothing more than a 2D image. Visualization and image processing algorithms are based on this notion: they map complex (possibly multidimensional) data to 2D images and vice versa. What if the notion of images were to change once and for all? What if, instead of capturing, storing, processing, and displaying only a single color per pixel, each pixel consisted of individual colors for each emitting direction? Images would no longer be two-dimensional matrices but four-dimensional ones, storing spatial information in two dimensions and directional information in the other two. This is called a light field.
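The four-dimensional structure described above can be sketched in a few lines of code. This is an illustrative assumption of one common layout (the two-plane parameterization, with two directional and two spatial axes); the array shape and axis names are chosen here for clarity and are not prescribed by the talk:

```python
import numpy as np

# Hypothetical light field L[u, v, s, t]:
# (u, v) index the emitting direction, (s, t) the spatial pixel.
U, V, S, T = 9, 9, 64, 48          # 9x9 directions, 64x48 spatial resolution
L = np.zeros((U, V, S, T))         # one grayscale sample per ray

# A conventional camera integrates all directions arriving at each pixel,
# collapsing the 4D light field into an ordinary 2D image:
photo = L.mean(axis=(0, 1))        # shape (S, T)

# A light field instead retains each direction; one "sub-aperture" view
# is a 2D image seen from a single direction:
central_view = L[U // 2, V // 2]   # shape (S, T)
```

Integrating over the directional axes is exactly the information loss the text describes: the 4D array degenerates into the familiar 2D image.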

Light fields have the potential to radically change everything that we relate to images -- from photography and displays to image processing, analysis, and possibly even visualization. While the first light-field display prototypes have already been introduced in scientific communities and the first light-field cameras are already commercially available, many unsolved challenges remain in the processing of light fields. While common digital images store megabytes of data, corresponding light fields might require gigabytes. While spatial consistency is a requirement for regular image processing, directional consistency has to be ensured in addition for light-field processing. In this talk, I will shed some light on light fields and light-field processing basics, with applications to imaging and visualization. I invite the audience to think about what the impact on computer vision, image processing and analysis, or visualization could be if images evolve to light fields, raster displays evolve to light-field displays, and digital cameras evolve to light-field cameras.
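The megabytes-to-gigabytes jump mentioned above follows directly from the extra directional dimensions. A back-of-the-envelope calculation (the specific resolutions and sample counts below are illustrative assumptions, not figures from the talk):

```python
# Conventional image: 12-megapixel, 8-bit RGB, uncompressed.
width, height = 4000, 3000
bytes_per_pixel = 3
image_bytes = width * height * bytes_per_pixel
# -> 36,000,000 bytes, i.e. ~36 MB

# Light field at the same spatial resolution, with a hypothetical
# 17x17 grid of directional samples per pixel:
directions = 17 * 17
light_field_bytes = image_bytes * directions
# -> over 10 GB uncompressed for a single light field

print(f"image: {image_bytes / 1e6:.0f} MB, "
      f"light field: {light_field_bytes / 1e9:.1f} GB")
```

Every additional directional sample multiplies the storage, which is why compression and efficient processing are among the open challenges the talk refers to.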

Short biography

Oliver Bimber became head of the Institute of Computer Graphics at Johannes Kepler University Linz in October 2009. From 2003 to 2010 he served as a Junior Professor of Augmented Reality at the Media System Science Department of Bauhaus-University Weimar. He received a Ph.D. (2002) in Engineering from Darmstadt University of Technology, Germany, and a Habilitation degree (2007) in Computer Science (Informatik) from Munich University of Technology. From 2001 to 2002 Bimber worked as a senior researcher at the Fraunhofer Center for Research in Computer Graphics in Providence, RI, USA, and from 1998 to 2001 he was a scientist at the Fraunhofer Institute for Computer Graphics in Rostock, Germany. Bimber co-authored the book "Displays: Fundamentals and Applications" (2011) with Rolf R. Hainich and the book "Spatial Augmented Reality" (2005) with Ramesh Raskar (MIT). Since 2005 he has served on the editorial board of IEEE Computer magazine. The VIOSO GmbH was founded in his group in 2005. He and his students have received several awards for their research and inventions, and have won scientific competitions such as the ACM SIGGRAPH Student Research Competition (1st place 2006 and 2008, 2nd place 2009 and 2011) and the ACM Student Research Competition Grand Final (2006), which was presented together with the Turing Award.