Theme: Inverse Problems

Lecturer: Associate Professor Youssef Marzouk

Course Content
Inverse problems formalise the process of learning about a system from indirect, noisy, and often incomplete observations. Casting inverse problems in a Bayesian statistical setting provides a natural framework for quantifying uncertainty in parameter values and model predictions, for fusing heterogeneous sources of information, and for optimally selecting experiments or observations.
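As a minimal sketch of this framing (the notation here is a standard choice, e.g. as in the Stuart reference below, and is not fixed by the description above): given parameters \(\theta\), observations \(y\), and a forward model \(G\), Bayes' rule combines likelihood and prior into the posterior.

```latex
% Additive-noise observation model (one common assumption):
%   y = G(\theta) + \varepsilon, \quad \varepsilon \sim \text{noise}
% Bayes' rule: posterior \propto likelihood \times prior
\pi(\theta \mid y) \;\propto\; \pi(y \mid \theta)\,\pi(\theta)
```

All uncertainty quantification in the course then amounts to interrogating the posterior \(\pi(\theta \mid y)\), e.g. by computing its expectations.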

This course will present the fundamentals of the Bayesian approach to inverse problems, covering both modelling issues (e.g., prior distributions, likelihoods, hierarchical models) and computational challenges (e.g., computing posterior expectations via Markov chain Monte Carlo sampling or principled approximations). We will discuss methods that expose low-dimensional structure in inverse problems, that mitigate the computational cost of repeated forward-model evaluations, and that exhibit discretisation-invariant performance in large-scale problems.
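To make the computational task concrete, here is a hedged sketch of the simplest MCMC method, random-walk Metropolis, applied to a toy inverse problem. The forward model, noise level, prior, and step size below are all illustrative assumptions, not part of the course material.

```python
import numpy as np

def log_posterior(theta, y, forward, noise_std, prior_std):
    """Unnormalised log-posterior: Gaussian likelihood plus Gaussian prior."""
    resid = y - forward(theta)
    log_lik = -0.5 * np.sum(resid**2) / noise_std**2
    log_prior = -0.5 * np.sum(theta**2) / prior_std**2
    return log_lik + log_prior

def metropolis(log_post, theta0, n_steps=5000, step=0.05, seed=None):
    """Random-walk Metropolis: propose a Gaussian perturbation, accept/reject."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance rule
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy linear forward model y = 2*theta, observed with additive noise
rng = np.random.default_rng(0)
theta_true = np.array([1.5])
y = 2.0 * theta_true + 0.1 * rng.standard_normal(20)  # 20 noisy observations
forward = lambda th: 2.0 * th
lp = lambda th: log_posterior(th, y, forward, noise_std=0.1, prior_std=5.0)

chain = metropolis(lp, theta0=np.zeros(1), n_steps=5000, step=0.05, seed=1)
post_mean = chain[2000:].mean()  # posterior mean, discarding burn-in
```

Each MCMC step calls the forward model once; when the forward model is an expensive simulation, this is exactly the repeated-evaluation cost that the surrogate and structure-exploiting methods mentioned above aim to reduce.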

We will also present Bayesian approaches to optimal experimental design, which address ubiquitous questions of what or where to measure, which experimental conditions to employ, and so on. Here we will introduce a decision-theoretic Bayesian design formulation and link it to more classical alphabetic optimality criteria, then focus on computational issues, e.g., how to estimate and maximise the expected information gain in various quantities of interest.
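One widely used estimator for expected information gain (EIG) is the nested Monte Carlo scheme: an outer average over prior draws and simulated data, with an inner average estimating the marginal likelihood. The sketch below, with an assumed scalar linear design problem, is only an illustration of that estimator, not the course's prescribed method.

```python
import numpy as np

def expected_information_gain(forward, prior_sample, noise_std,
                              n_outer=200, n_inner=1000, seed=None):
    """Nested Monte Carlo estimate of EIG for a scalar Gaussian-noise model.

    EIG ~ (1/N) * sum_i [ log p(y_i | theta_i) - log p(y_i) ],
    with log p(y_i) estimated by an inner Monte Carlo average over the prior.
    """
    rng = np.random.default_rng(seed)
    thetas = prior_sample(n_outer, rng)   # outer prior draws theta_i
    inner = prior_sample(n_inner, rng)    # inner draws for the marginal
    total = 0.0
    for theta in thetas:
        y = forward(theta) + noise_std * rng.standard_normal()  # simulate data
        # Gaussian log-likelihoods up to a constant that cancels below
        ll_inner = -0.5 * ((y - forward(inner)) / noise_std) ** 2
        log_cond = -0.5 * ((y - forward(theta)) / noise_std) ** 2
        m = ll_inner.max()  # log-sum-exp trick for numerical stability
        log_marg = m + np.log(np.mean(np.exp(ll_inner - m)))
        total += log_cond - log_marg
    return total / n_outer

# Illustrative design question: gain of measurement y = d*theta + noise
prior_sample = lambda n, rng: rng.standard_normal(n)  # theta ~ N(0, 1)
make_forward = lambda d: (lambda th: d * th)          # design parameter d

eig_strong = expected_information_gain(make_forward(2.0), prior_sample,
                                       noise_std=0.5, seed=0)
eig_weak = expected_information_gain(make_forward(0.1), prior_sample,
                                     noise_std=0.5, seed=0)
```

As expected, the higher-gain design (larger `d` relative to the noise) yields a larger estimated EIG; maximising such an estimate over candidate designs is the computational problem alluded to above.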

Background Reading

  • S. Brooks, A. Gelman, G. L. Jones, and X.-L. Meng, editors. Handbook of Markov Chain Monte Carlo. Chapman & Hall/CRC, 2011.
  • K. Chaloner and I. Verdinelli. “Bayesian experimental design: a review.” Statistical Science, 10(3): 273–304, 1995.
  • J. Kaipio and E. Somersalo. Statistical and Computational Inverse Problems. Springer, 2005.
  • A. M. Stuart. “Inverse problems: a Bayesian perspective.” Acta Numerica, 19: 451–559, 2010.