Tutorial at DAGM 2010 on Sparse Linear Models: 
21/09/2010, 09:00–13:00

Recent dramatic advances for problems such as image denoising, deconvolution, image compression, or undersampled reconstruction come from endowing classical least squares techniques with realistic prior assumptions about image statistics: stepping from linear to sparse linear models (SLMs). In the SLM setting, we can bias inverse problems towards a notion of what makes a natural image, yet still employ much of the computational foundation of least squares. Direct reconstruction becomes maximum a posteriori (MAP) estimation, in many cases a convex optimization problem. Bayesian SLM inference goes beyond single best guesses, in that reconstruction uncertainties are summarized in the posterior distribution. Compared to MAP estimation, sparse Bayesian methods provide superior robustness in advanced problems like blind deconvolution or hyperparameter learning. Beyond that, uncertainty information can be used to automatically optimize image acquisition. In this tutorial, I will review the mathematics behind sparse linear models, showing how properties of SLM prior potentials lead to optimization and inference approximation principles. I will discuss and contrast some reconstruction and recent variational approximate inference methods, with particular focus on exposing convexity and reductions to underlying standard computational primitives. Motivating examples from computer vision and medical imaging will be given. Relationship to previous tutorials: I will draw on a tutorial given at ICML 2008, but most material will be novel.
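As a concrete illustration (not part of the tutorial materials themselves): with a Laplace sparsity prior, MAP estimation in an SLM reduces to the convex LASSO problem, which can be solved by iterative soft-thresholding (ISTA). A minimal sketch on hypothetical toy data, mimicking undersampled reconstruction:

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """MAP estimate for a sparse linear model with Laplace prior:
    minimize 0.5*||A x - y||^2 + lam*||x||_1 via iterative
    soft-thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient of the quadratic data term
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Hypothetical toy problem: recover a 3-sparse, 100-dimensional signal
# from only 40 random linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.02)
```

Despite the underdetermined system, the sparsity prior biases the reconstruction towards the true support, which a plain least squares solution cannot do.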

Organizers: 
Matthias Seeger, Max Planck Institute and Saarland University, Saarbrücken 
Curriculum Vitae: 
Matthias Seeger received his computer science Diploma degree from Karlsruhe University in 1999 (with distinction) and his PhD from the University of Edinburgh in 2003, supervised by Christopher Williams. He was a postdoc at the University of California, Berkeley with Michael Jordan, and at the Max Planck Institute, Tuebingen with Bernhard Schoelkopf. At present, he leads an independent research group at the Max Planck Institute and Saarland University, Saarbrücken, supervising a number of PhD students. He is interested in the theory, algorithmics, and practice of Bayesian techniques and probabilistic machine learning, with applications to medical image processing, computer vision, and compressive sensing. He made seminal contributions to Gaussian processes in machine learning, semi-supervised learning, PAC-Bayesian learning theory, and variational Bayesian inference approximations, with recent focus on adaptive compressive sensing for magnetic resonance imaging. He is the author of numerous journal and international conference publications, was an invited fellow at the Isaac Newton Institute, Cambridge, UK in 2008, and has organized workshops at, and served on senior program committees of, leading international machine learning conferences (NIPS AA area chair 2004, 2010; UAI 2009; AISTATS 2010). 
© 2009–2010 DAGM 2010 · TU Darmstadt, Interactive Graphics Systems Group 