TY - JOUR
T1 - Randomize-then-optimize
T2 - A method for sampling from posterior distributions in nonlinear inverse problems
AU - Bardsley, Johnathan M.
AU - Solonen, Antti
AU - Haario, Heikki
AU - Laine, Marko
N1 - Publisher Copyright:
© 2014 Society for Industrial and Applied Mathematics.
PY - 2014
Y1 - 2014
N2 - High-dimensional inverse problems present a challenge for Markov chain Monte Carlo (MCMC)-type sampling schemes. Such schemes typically rely on finding an efficient proposal distribution, which can be difficult for large-scale problems even with adaptive approaches. Moreover, the autocorrelations of the samples typically increase with dimension, which leads to the need for long sample chains. We present an alternative method for sampling from posterior distributions in nonlinear inverse problems when the measurement error and prior are both Gaussian. The approach computes a candidate sample by solving a stochastic optimization problem. In the linear case, these samples are exact draws from the posterior density, but this is not so in the nonlinear case. We derive the form of the sample density in the nonlinear case and then show how to use it within both a Metropolis-Hastings and an importance sampling framework to obtain samples from the posterior distribution of the parameters. We demonstrate, with various small- and medium-scale problems, that randomize-then-optimize can be efficient compared with standard adaptive MCMC algorithms.
KW - Bayesian methods
KW - Computational statistics
KW - Nonlinear inverse problems
KW - Sampling methods
KW - Uncertainty quantification
UR - http://www.scopus.com/inward/record.url?scp=84987680702&partnerID=8YFLogxK
DO - 10.1137/140964023
M3 - Article
AN - SCOPUS:84987680702
SN - 1064-8275
VL - 36
SP - A1895
EP - A1910
JO - SIAM Journal on Scientific Computing
JF - SIAM Journal on Scientific Computing
IS - 4
ER -
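
The abstract's core idea admits a compact illustration: perturb the observed data and the prior mean with draws from their respective Gaussians, then solve the resulting regularized least-squares problem. In the linear-Gaussian case the minimizer is an exact posterior draw; in the nonlinear case the paper corrects the candidates with a Metropolis-Hastings accept/reject step or importance weights. Below is a minimal Python sketch of the linear case only; the toy model, dimensions, and all variable names are illustrative assumptions, not the paper's code.

import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian inverse problem (all values are illustrative assumptions).
A = rng.standard_normal((20, 5))           # forward operator
theta_true = rng.standard_normal(5)
sigma2, lam2 = 0.1, 1.0                    # iid noise / prior variances
y = A @ theta_true + np.sqrt(sigma2) * rng.standard_normal(20)
mu = np.zeros(5)                           # prior mean

def rto_sample():
    """One randomize-then-optimize draw: perturb the data and the prior
    mean, then solve the resulting regularized least-squares problem.
    In this linear case the minimizer is an exact posterior sample."""
    y_pert = y + np.sqrt(sigma2) * rng.standard_normal(y.shape)
    mu_pert = mu + np.sqrt(lam2) * rng.standard_normal(mu.shape)
    # Stack the whitened data-misfit and prior blocks; solve by least squares.
    G = np.vstack([A / np.sqrt(sigma2), np.eye(5) / np.sqrt(lam2)])
    b = np.concatenate([y_pert / np.sqrt(sigma2), mu_pert / np.sqrt(lam2)])
    return np.linalg.lstsq(G, b, rcond=None)[0]

samples = np.array([rto_sample() for _ in range(2000)])

# Sanity check against the closed-form posterior mean (linear-Gaussian case).
H = A.T @ A / sigma2 + np.eye(5) / lam2
post_mean = np.linalg.solve(H, A.T @ y / sigma2 + mu / lam2)
print(samples.mean(axis=0))
print(post_mean)

The sanity check works because the perturbed normal equations have mean equal to the posterior mean and covariance equal to the posterior covariance, which is why no accept/reject step is needed until the forward model becomes nonlinear.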