TL;DR: To estimate µ = E_p[f(θ)] when p's normalizing constant is unknown, instead of running MCMC on p(θ) (or on p(θ)|f(θ)|), or learning a parametric proposal q(θ), we run MCMC directly on p(θ)|f(θ) − µ|, the proposal that minimizes the asymptotic variance of self-normalized importance sampling (SNIS). The catch: we cannot do this MCMC straightforwardly, since p(θ)|f(θ) − µ| cannot be evaluated — it contains µ, the very quantity we want to estimate! So we propose a simple iterative scheme that works: start from an initial estimate µ₀; run a chain on the approximation p(θ)|f(θ) − µ₀|; re-estimate µ with SNIS; and keep iterating. I'm quite excited about extending this work.
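A minimal sketch of the iterative scheme on a toy problem, assuming a standard normal p(θ) and f(θ) = θ² (so the true value is µ = 1). The random-walk Metropolis sampler, the initial guess µ₀ = 0.5, and the small ε regularizer (which keeps the log density finite where f(θ) = µₖ) are illustrative choices, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: p(theta) ∝ exp(-theta^2 / 2), i.e. N(0, 1), and f(theta) = theta^2,
# so mu = E_p[f] = 1 is known and we can check the scheme against it.
log_p = lambda t: -0.5 * t**2
f = lambda t: t**2

EPS = 1e-3  # regularizer: chain and weights both use |f - mu| + EPS, so SNIS stays consistent


def mh_chain(log_density, n=20000, step=1.0):
    """Random-walk Metropolis targeting the unnormalized density exp(log_density)."""
    t = 0.0
    lp = log_density(t)
    samples = np.empty(n)
    for i in range(n):
        prop = t + step * rng.standard_normal()
        lp_prop = log_density(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            t, lp = prop, lp_prop
        samples[i] = t
    return samples


mu = 0.5  # crude initial estimate mu_0
for k in range(5):
    # Run a chain on the approximation q_k(theta) ∝ p(theta) (|f(theta) - mu_k| + EPS).
    log_q = lambda t, m=mu: log_p(t) + np.log(abs(f(t) - m) + EPS)
    thetas = mh_chain(log_q)[5000:]  # drop burn-in
    # SNIS re-estimate: weights w_i ∝ p(theta_i) / q_k(theta_i) = 1 / (|f(theta_i) - mu_k| + EPS).
    w = 1.0 / (np.abs(f(thetas) - mu) + EPS)
    mu = np.sum(w * f(thetas)) / np.sum(w)
    print(f"iteration {k}: mu estimate = {mu:.3f}")
```

Note the key property of the optimal proposal: the SNIS error terms w_i (f(θ_i) − µ) reduce to ±1 under q ∝ p|f − µ|, which is what drives the variance down.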