Applied Mathematics and Optimization, Vol.29, No.3, 225-241, 1994
A New Smoothing-Regularization Approach for a Maximum-Likelihood-Estimation Problem
We consider the problem $\min \sum_{i=1}^{m} \left( \langle a_i, x \rangle - b_i \log \langle a_i, x \rangle \right)$ subject to $x \ge 0$, which arises as a maximum-likelihood estimation problem in several areas, particularly in positron emission tomography. Observing that this problem is equivalent to $\min d(b, Ax)$ subject to $x \ge 0$, where $d$ is the Kullback-Leibler information divergence and $A$, $b$ are the matrix and vector with rows and entries $a_i$, $b_i$, respectively, we propose the regularized problem $\min d(b, Ax) + \mu\, d(v, Sx)$, where $\mu$ is the regularization parameter, $S$ is a smoothing matrix, and $v$ is a fixed vector. We present a computationally attractive algorithm for the regularized problem, establish its convergence, and show that, as $\mu \to 0$, the regularized solutions converge to the solution of the original problem that minimizes a convex function related to $d(v, Sx)$. We give convergence-rate results both for the regularized solutions and for their functional values.
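The regularized objective described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's algorithm (the abstract does not specify it): it implements the Kullback-Leibler divergence $d(p, q)$, the regularized objective $d(b, Ax) + \mu\, d(v, Sx)$, and, as a stand-in for the unspecified method, one step of the classical EM / Richardson-Lucy multiplicative update for the unregularized problem. All function names and the choice of update rule are illustrative assumptions.

```python
import numpy as np

def kl_div(p, q):
    """Kullback-Leibler information divergence d(p, q) = sum p*log(p/q) - p + q.

    Assumes p, q are strictly positive vectors of equal length.
    """
    return float(np.sum(p * np.log(p / q) - p + q))

def regularized_objective(x, A, b, S, v, mu):
    """The smoothed-regularized objective from the abstract:
    d(b, Ax) + mu * d(v, Sx), for x >= 0 (componentwise)."""
    return kl_div(b, A @ x) + mu * kl_div(v, S @ x)

def em_update(x, A, b):
    """One EM / Richardson-Lucy multiplicative step for min d(b, Ax), x >= 0.

    Illustrative only: the paper's algorithm for the *regularized*
    problem is not given in the abstract. This classical update keeps
    x nonnegative and monotonically decreases d(b, Ax)."""
    return x * (A.T @ (b / (A @ x))) / A.sum(axis=0)
```

With $S = I$ and $v$ set to the true solution, the regularized objective vanishes at that solution; each `em_update` step never increases the data-fit term $d(b, Ax)$.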