In this paper, we analyze the minimization of seminorms ∥L · ∥ on R^n under
the constraint of a bounded I-divergence D(b,H·) for rather general linear
operators H and L. The I-divergence is also known as Kullback–Leibler
divergence and appears in many models in imaging science, in particular when
dealing with Poisson data but also in the case of multiplicative Gamma noise.
Typically, H is a linear blur operator and L a discrete derivative
or frame analysis operator. A central part of this paper is devoted to proving
relations between the parameters of I-divergence constrained and penalized
problems. To solve the I-divergence constrained problem, we consider various
first-order primal–dual algorithms which reduce the problem to the solution of
certain proximal minimization problems in each iteration step. One of these
proximal problems is an I-divergence constrained least-squares problem,
which can be solved by a Newton method based on Morozov's discrepancy
principle. We prove that these algorithms produce not only a sequence of
vectors which converges to a minimizer of the constrained problem but also
a sequence of parameters which converges to a regularization parameter so
that the corresponding penalized problem has the same solution. Furthermore,
we derive a rule for automatically setting the constraint parameter for data
corrupted by multiplicative Gamma noise. Finally, the performance of the
various algorithms is demonstrated on different image restoration tasks, both
for images corrupted by Poisson noise and for images corrupted by
multiplicative Gamma noise.
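To make the discrepancy-principle step concrete, the following is a minimal NumPy sketch of an I-divergence constrained least-squares subproblem of the kind described above. The function names (`kl_div`, `prox_kl_ball`) are our own illustration, not the paper's code, and the vectors are assumed strictly positive:

```python
import numpy as np

def kl_div(b, x):
    """I-divergence (generalized Kullback-Leibler) D(b, x) for b, x > 0."""
    return np.sum(b * np.log(b / x) - b + x)

def prox_kl_ball(a, b, tau, iters=50, tol=1e-10):
    """Sketch: solve min_x 0.5*||x - a||^2  s.t.  D(b, x) <= tau  (a, b > 0).

    If a is already feasible it is the solution.  Otherwise the constraint
    is active, and the stationarity condition x - a + lam*(1 - b/x) = 0
    gives, componentwise,

        x(lam) = ((a - lam) + sqrt((a - lam)^2 + 4*lam*b)) / 2.

    Newton's method then adjusts the multiplier lam so that
    D(b, x(lam)) = tau, i.e. Morozov's discrepancy principle.
    """
    if kl_div(b, a) <= tau:
        return a.copy()
    lam, x = 0.0, a.copy()
    for _ in range(iters):
        s = a - lam
        x = 0.5 * (s + np.sqrt(s * s + 4.0 * lam * b))
        g = 1.0 - b / x                      # gradient of D(b, .) at x
        phi = kl_div(b, x) - tau
        if abs(phi) < tol:
            break
        dx = -g / (1.0 + lam * b / (x * x))  # implicit differentiation of x(lam)
        lam -= phi / np.sum(g * dx)          # Newton step on phi(lam) = 0
    return x
```

Starting from lam = 0 (where x = a), the discrepancy phi decreases monotonically toward zero, so the Newton iteration homes in on the multiplier for which the constraint is active.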
We demonstrate how path integrals, widely used in theoretical physics, can be adapted to provide a machinery for performing Bayesian inference in function spaces. Such inference arises naturally in the study of inverse problems of recovering continuous (infinite-dimensional) coefficient functions from ordinary or partial differential equations, a problem which is typically ill-posed. Regularization of these problems using L2 function spaces (Tikhonov regularization) is equivalent to Bayesian probabilistic inference under a Gaussian prior. The Bayesian interpretation of inverse problem regularization is useful since it allows one to quantify and characterize the error and degree of precision in the solution of inverse problems, as well as to examine assumptions made in solving the problem, namely whether the subjective choice of regularization is compatible with prior knowledge. Using the path-integral formalism, Bayesian inference can be explored through various perturbative techniques, such as the semiclassical approximation, which we use in this manuscript. Perturbative path-integral approaches, while offering alternatives to computational approaches like Markov chain Monte Carlo (MCMC), also provide natural starting points for MCMC methods that can be used to refine approximations. In this manuscript, we illustrate a path-integral formulation for inverse problems and demonstrate it on an inverse problem in membrane biophysics as well as inverse problems in potential theory involving the Poisson equation.
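The stated equivalence between Tikhonov regularization and Gaussian-prior Bayesian inference can be illustrated on a finite-dimensional linear model. The sketch below is our own illustration (the discretized forward operator A, noise variance sigma2, and prior precision alpha are assumed, not taken from the manuscript): the MAP estimate coincides with the Tikhonov-regularized solution, and the posterior covariance quantifies the precision of the recovered coefficients.

```python
import numpy as np

def map_and_covariance(A, y, sigma2, alpha):
    """For the linear-Gaussian model y = A u + noise, noise ~ N(0, sigma2 I),
    with Gaussian (Tikhonov) prior u ~ N(0, alpha^{-1} I), the MAP estimate
    minimizes ||A u - y||^2 / (2 sigma2) + (alpha / 2) ||u||^2, and the
    posterior covariance (A^T A / sigma2 + alpha I)^{-1} quantifies the
    uncertainty in the reconstruction.
    """
    n = A.shape[1]
    H = A.T @ A / sigma2 + alpha * np.eye(n)       # posterior precision (Hessian)
    u_map = np.linalg.solve(H, A.T @ y / sigma2)   # MAP = Tikhonov solution
    cov = np.linalg.inv(H)                         # posterior covariance
    return u_map, cov
```

For a Gaussian posterior like this one, the semiclassical (quadratic) approximation around the MAP point is exact; for nonlinear forward maps it becomes the leading-order perturbative approximation mentioned above.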