Regression Tree Fields -- A Practical and Effective Framework For Structured Learning and Prediction

Jeremy Jancsary (Microsoft Research Cambridge, UK)


Many tasks in fields as diverse as computer vision, natural language processing, and computational biology can be solved effectively by learning from large amounts of data.

A common trait of these seemingly disparate areas of research is that the objects of interest often possess rich internal structure. For instance, images consist of many pixels and sentences consist of several words. It is beneficial to exploit this structure and to respect the dependencies among elements when learning and inferring properties of such objects.

Graphical models offer a principled approach towards this end. However, efficiently learning the parameters and/or the structure of graphical models from large amounts of data is a long-standing challenge.

Regression tree fields address this problem via three key contributions: first, the model is parameterized by non-parametric regression trees, allowing universal specification of interactions between observations and variables; second, inference at test time is exact and efficient; and finally, training is efficient, scalable and fully parallelizable.
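To make the first two contributions concrete, here is a hypothetical minimal sketch (not the authors' implementation): a Gaussian conditional random field over a 1-D chain of pixels whose unary potentials are selected by a toy depth-1 regression tree on the observed intensity. The function names, potential forms, and parameter values are illustrative assumptions; the point is that MAP inference in such a model is exact and reduces to a single sparse linear solve.

```python
import numpy as np

def unary_params(obs):
    """Toy 'regression tree' (one split, illustrative only): the leaf
    reached by the observation chooses the quadratic unary potential
    0.5*a*x^2 - b*x for that pixel."""
    if obs < 0.5:
        return 2.0, 2.0 * obs   # leaf 1: moderate pull toward obs
    return 4.0, 4.0 * obs       # leaf 2: stronger pull toward obs

def rtf_map(observations, pairwise=1.0):
    """MAP estimate of the Gaussian CRF energy
    sum_i (0.5*a_i*x_i^2 - b_i*x_i) + 0.5*pairwise*sum_i (x_i - x_{i+1})^2.
    The energy is quadratic, so the minimizer solves the linear system A x = b."""
    n = len(observations)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i, y in enumerate(observations):
        a_i, b_i = unary_params(y)   # tree-selected unary parameters
        A[i, i] += a_i
        b[i] += b_i
    for i in range(n - 1):           # pairwise smoothness terms
        A[i, i] += pairwise
        A[i + 1, i + 1] += pairwise
        A[i, i + 1] -= pairwise
        A[i + 1, i] -= pairwise
    return np.linalg.solve(A, b)     # exact inference: one linear solve

denoised = rtf_map(np.array([0.1, 0.2, 0.9, 0.8]))
```

In the real model the trees also select pairwise potentials and operate on rich image features, and the resulting sparse system over all pixels is solved with iterative methods; this sketch only shows why test-time inference is exact rather than approximate.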

By means of several applications from computer vision, the practical importance of regression tree fields is demonstrated empirically. I present results in natural image denoising that surpass the best published method by a wide margin and discuss how similar gains can likely be obtained in other low-level vision tasks.