A Metropolis-Hastings Method for Linear Inverse Problems with Poisson Likelihood and Gaussian Prior

Johnathan M. Bardsley, Aaron Luttman

Research output: Contribution to journal › Article › peer-review


Abstract

Poisson noise models arise in a wide range of linear inverse problems in imaging. In the Bayesian setting, the Poisson likelihood function together with a Gaussian prior yields a posterior density that is not of a well-known form and is thus difficult to sample from, especially for large-scale problems. In this work, we present a method for computing samples from posterior densities with a Poisson likelihood and Gaussian prior, using a Gaussian approximation of the posterior as an independence proposal within a Metropolis-Hastings framework. We consider a class of Gaussian priors, some of which are edge-preserving, which we motivate using Markov random fields. We present two sampling algorithms: one that samples the unknown image alone, holding the prior scaling (or regularization) parameter fixed, and another that samples both the unknown image and the prior scaling parameter. Throughout, we assume that the unknown image is sufficiently positive that proposed samples are always positive, allowing us to ignore the nonnegativity constraint. Results are demonstrated on synthetic data—including a synthetic X-ray radiograph generated from a radiation transport code—and on real images used to calibrate a pulsed power high-energy X-ray source at the U.S. Department of Energy's Nevada National Security Site.
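The sampler described in the abstract can be illustrated on a toy problem. The sketch below is not the authors' implementation; it is a minimal independence Metropolis-Hastings sampler assuming a small made-up forward matrix `A`, data `y ~ Poisson(Ax)`, a Gaussian prior with a hypothetical scaling parameter `lam`, and a Laplace (Gaussian) approximation at the MAP estimate as the independence proposal. Positivity of proposals is simply checked, mirroring the paper's assumption that the nonnegativity constraint can be ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model (illustrative values): y ~ Poisson(A x),
# Gaussian prior x ~ N(mu0, (lam * L)^{-1}).
A = np.array([[2.0, 0.5], [0.3, 1.5], [1.0, 1.0]])
x_true = np.array([5.0, 8.0])
y = rng.poisson(A @ x_true)

lam = 0.1                     # prior scaling (regularization) parameter
L = np.eye(2)                 # prior precision structure
mu0 = np.array([4.0, 4.0])

def neg_log_post(x):
    # Poisson negative log-likelihood (constants dropped) + Gaussian prior term
    Ax = A @ x
    return np.sum(Ax - y * np.log(Ax)) + 0.5 * lam * (x - mu0) @ L @ (x - mu0)

# MAP estimate via Newton's method; gradient and Hessian are closed-form.
x = mu0.copy()
for _ in range(50):
    Ax = A @ x
    grad = A.T @ (1.0 - y / Ax) + lam * L @ (x - mu0)
    H = A.T @ np.diag(y / Ax**2) @ A + lam * L
    x = x - np.linalg.solve(H, grad)
x_map = x
H_map = A.T @ np.diag(y / (A @ x_map)**2) @ A + lam * L

# Gaussian approximation N(x_map, H_map^{-1}) used as independence proposal.
Cchol = np.linalg.cholesky(np.linalg.inv(H_map))

def log_q(x):
    # Log proposal density up to an additive constant (cancels in the ratio).
    r = x - x_map
    return -0.5 * r @ H_map @ r

n_iter = 5000
samples, accepted = [], 0
x_cur = x_map.copy()
lp_cur, lq_cur = -neg_log_post(x_cur), log_q(x_cur)
for _ in range(n_iter):
    x_prop = x_map + Cchol @ rng.standard_normal(2)
    if np.all(A @ x_prop > 0):  # positivity assumed, as in the paper
        lp_prop, lq_prop = -neg_log_post(x_prop), log_q(x_prop)
        # Independence MH acceptance ratio: [pi(x') q(x)] / [pi(x) q(x')]
        if np.log(rng.uniform()) < (lp_prop - lp_cur) + (lq_cur - lq_prop):
            x_cur, lp_cur, lq_cur = x_prop, lp_prop, lq_prop
            accepted += 1
    samples.append(x_cur)

samples = np.array(samples)
print("acceptance rate:", accepted / n_iter)
print("posterior mean:", samples.mean(axis=0))
```

Because the proposal is built from the posterior's own mode and curvature, the acceptance rate is typically high when the Gaussian approximation is accurate; the paper's second algorithm additionally samples `lam`, which this sketch keeps fixed.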

Original language: English
Pages (from-to): 35-55
Number of pages: 21
Journal: International Journal for Uncertainty Quantification
Volume: 6
Issue number: 1
DOIs
State: Published - 2016

Keywords

  • Bayesian inference
  • Image deblurring
  • Inverse problems
  • Markov chain Monte Carlo
