Numerical Analysis and Scientific Computing
mathscidoc: 1912.43906
pp. 3218-3222, May 2013
We present a novel modification of the well-known infomax algorithm for blind source separation. Under natural gradient descent, the infomax algorithm converges to a stationary point of a limiting ordinary differential equation. However, due to the presence of saddle points or local minima of the corresponding likelihood function, the algorithm may remain trapped near these undesirable stationary points for a long time, especially if the initial iterate is close to them. To speed up convergence, we propose adding a sequence of random perturbations to the infomax algorithm to shake the iterating sequence out of such traps, so that it is captured by a path descending to a more stable stationary point. We analyze the convergence of the randomly perturbed algorithm and illustrate its fast convergence through numerical examples on blind demixing of stochastic signals. The examples have analytical structures so that saddle points or local minima of ...
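To make the idea concrete, below is a minimal NumPy sketch of a natural-gradient infomax iteration with additive random perturbations applied to the unmixing matrix. The tanh score function, the constant step size, the Gaussian perturbations with a geometrically decaying scale, and the names (perturbed_infomax, eta, noise_scale, noise_decay) are illustrative assumptions for this sketch, not the exact scheme or parameter choices analyzed in the paper.

import numpy as np

def perturbed_infomax(X, n_iter=2000, eta=0.01, noise_scale=0.1,
                      noise_decay=0.995, seed=0):
    # Illustrative sketch (hypothetical parameters): natural-gradient infomax
    # ICA with decaying random perturbations added to the unmixing matrix.
    # X: (n_sources, n_samples) array of zero-mean mixed signals.
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n)          # initial unmixing matrix
    sigma = noise_scale    # current perturbation scale
    for _ in range(n_iter):
        Y = W @ X          # current source estimates
        # Natural-gradient infomax step with a tanh score function
        # (a common choice for super-Gaussian sources):
        #   W <- W + eta * (I - E[tanh(Y) Y^T]) W
        W = W + eta * (np.eye(n) - (np.tanh(Y) @ Y.T) / T) @ W
        # Random perturbation intended to shake the iterate away from
        # saddle points or shallow local minima of the likelihood surface.
        W = W + sigma * rng.standard_normal((n, n))
        sigma *= noise_decay   # shrink perturbations so the iteration can settle
    return W

if __name__ == "__main__":
    # Toy demixing example: two super-Gaussian (Laplacian) sources, random mixing.
    rng = np.random.default_rng(1)
    S = rng.laplace(size=(2, 5000))
    A = rng.standard_normal((2, 2))
    X = A @ S
    X = X - X.mean(axis=1, keepdims=True)
    W = perturbed_infomax(X)
    print("W @ A (ideally close to a scaled permutation matrix):")
    print(W @ A)

The decaying perturbation scale reflects the intuition in the abstract: large early perturbations help the iterates escape bad stationary points, while the shrinking scale lets the sequence settle near a more stable one.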
@inproceedings{qi2013a,
  title={A randomly perturbed INFOMAX algorithm for blind source separation},
  author={Qi He and Jack Xin},
  url={http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20191224210551828910470},
  pages={3218-3222},
  year={2013},
}
Qi He and Jack Xin. A randomly perturbed INFOMAX algorithm for blind source separation. 2013. pp. 3218-3222. http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20191224210551828910470.