TY - JOUR
T1 - Scalable optimization-based sampling on function space
AU - Bardsley, Johnathan M.
AU - Cui, Tiangang
AU - Marzouk, Youssef M.
AU - Wang, Zheng
N1 - Publisher Copyright:
© 2020 Society for Industrial and Applied Mathematics.
PY - 2020
Y1 - 2020
N2 - Optimization-based samplers such as randomize-then-optimize (RTO) [J. M. Bardsley et al., SIAM J. Sci. Comput., 36 (2014), pp. A1895-A1910] provide an efficient and parallelizable approach to solving large-scale Bayesian inverse problems. These methods solve randomly perturbed optimization problems to draw samples from an approximate posterior distribution. "Correcting" these samples, either by Metropolization or importance sampling, enables characterization of the original posterior distribution. This paper focuses on the scalability of RTO to problems with high- or infinite-dimensional parameters. In particular, we introduce a new subspace strategy to reformulate RTO. For problems with intrinsic low-rank structures, this subspace acceleration makes the computational complexity of RTO scale linearly with the parameter dimension. Furthermore, this subspace perspective suggests a natural extension of RTO to a function space setting. We thus formalize a function space version of RTO and establish sufficient conditions for it to produce a valid Metropolis-Hastings proposal, yielding dimension-independent sampling performance. Numerical examples corroborate the dimension independence of RTO and demonstrate sampling performance that is also robust to small observational noise.
AB - Optimization-based samplers such as randomize-then-optimize (RTO) [J. M. Bardsley et al., SIAM J. Sci. Comput., 36 (2014), pp. A1895-A1910] provide an efficient and parallelizable approach to solving large-scale Bayesian inverse problems. These methods solve randomly perturbed optimization problems to draw samples from an approximate posterior distribution. "Correcting" these samples, either by Metropolization or importance sampling, enables characterization of the original posterior distribution. This paper focuses on the scalability of RTO to problems with high- or infinite-dimensional parameters. In particular, we introduce a new subspace strategy to reformulate RTO. For problems with intrinsic low-rank structures, this subspace acceleration makes the computational complexity of RTO scale linearly with the parameter dimension. Furthermore, this subspace perspective suggests a natural extension of RTO to a function space setting. We thus formalize a function space version of RTO and establish sufficient conditions for it to produce a valid Metropolis-Hastings proposal, yielding dimension-independent sampling performance. Numerical examples corroborate the dimension independence of RTO and demonstrate sampling performance that is also robust to small observational noise.
KW - Bayesian inference
KW - Infinite-dimensional inverse problems
KW - Markov chain Monte Carlo
KW - Metropolis independence sampling
KW - Transport maps
UR - http://www.scopus.com/inward/record.url?scp=85084458358&partnerID=8YFLogxK
U2 - 10.1137/19M1245220
DO - 10.1137/19M1245220
M3 - Article
AN - SCOPUS:85084458358
SN - 1064-8275
VL - 42
SP - A1317
EP - A1347
JO - SIAM Journal on Scientific Computing
JF - SIAM Journal on Scientific Computing
IS - 2
ER -