Block stochastic gradient iteration for convex and nonconvex optimization

Yangyang Xu (University of Alabama) and Wotao Yin (UCLA)

Optimization and Control; Machine Learning. mathscidoc:1703.27001

SIAM Journal on Optimization, 25(3), 1686–1716, 2015
The stochastic gradient (SG) method can quickly solve a problem with a large number of components in the objective, or a stochastic optimization problem, to a moderate accuracy. The block coordinate descent/update (BCD) method, on the other hand, can quickly solve problems with multiple (blocks of) variables. This paper proposes a block SG (BSG) method, for both convex and nonconvex programs, that combines the advantages of SG and BCD for problems with many components in the objective and multiple (blocks of) variables. BSG generalizes SG by updating all the blocks of variables in a Gauss–Seidel fashion (the update of the current block uses the blocks already updated in the same sweep), in either a fixed or randomly shuffled order. Although BSG requires slightly more work per iteration, it typically outperforms SG because of its Gauss–Seidel updates and larger step sizes, the latter determined by the smaller per-block Lipschitz constants. The convergence of BSG is established for both convex and nonconvex cases. In the convex case, BSG has the same order of convergence rate as SG. In the nonconvex case, its convergence is established in terms of the expected violation of a first-order optimality condition. In both cases the analysis is nontrivial, since the typical unbiasedness assumption no longer holds. BSG is numerically evaluated on the following problems: stochastic least squares and logistic regression, which are convex, and low-rank tensor recovery and bilinear logistic regression, which are nonconvex. On the convex problems, BSG performed significantly better than SG. On the nonconvex problems, BSG significantly outperformed the deterministic BCD method because the latter tends to stagnate early near local minimizers. Overall, BSG inherits the benefits of both SG approximation and block coordinate updates and is especially useful for solving large-scale nonconvex problems.
stochastic gradient, block coordinate update, convex optimization, nonconvex optimization
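As a rough illustration of the iteration described in the abstract, the following NumPy sketch updates the variable blocks one at a time in a (possibly shuffled) Gauss–Seidel sweep, each block using a stochastic partial gradient evaluated at the freshest values of the other blocks and its own diminishing step size. It is a minimal sketch under assumed choices (mini-batch sampling and step sizes of the form c_i/sqrt(k)), not the paper's exact scheme; the names block_stochastic_gradient, grad_block, sample_batch, and step_sizes are hypothetical.

```python
import numpy as np

def block_stochastic_gradient(grad_block, x_blocks, sample_batch, step_sizes,
                              num_iters=100, shuffle=True, rng=None):
    """Minimal BSG-style sketch: Gauss-Seidel block updates with stochastic
    partial gradients and per-block diminishing step sizes (assumed c_i/sqrt(k))."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = [b.copy() for b in x_blocks]
    n_blocks = len(x)
    for k in range(1, num_iters + 1):
        batch = sample_batch()                        # one stochastic sample per sweep
        order = rng.permutation(n_blocks) if shuffle else range(n_blocks)
        for i in order:                               # Gauss-Seidel sweep over blocks
            g_i = grad_block(i, x, batch)             # uses freshest values of other blocks
            x[i] = x[i] - (step_sizes[i] / np.sqrt(k)) * g_i
    return x


if __name__ == "__main__":
    # Toy stochastic least squares split into two variable blocks (illustrative only).
    rng = np.random.default_rng(1)
    A = rng.standard_normal((1000, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(1000)

    def sample_batch(m=32):                           # draw a mini-batch of rows
        idx = rng.integers(0, 1000, size=m)
        return A[idx], b[idx]

    def grad_block(i, x, batch):                      # partial gradient w.r.t. block i
        A_b, b_b = batch
        r = A_b @ np.concatenate(x) - b_b             # residual at the current blocks
        cols = slice(10 * i, 10 * (i + 1))
        return 2.0 * A_b[:, cols].T @ r / len(b_b)

    sol = block_stochastic_gradient(grad_block,
                                    [np.zeros(10), np.zeros(10)],
                                    sample_batch, step_sizes=[0.05, 0.05],
                                    num_iters=300)
    err = np.linalg.norm(np.concatenate(sol) - x_true) / np.linalg.norm(x_true)
    print(f"relative error after 300 sweeps: {err:.3f}")
```

Setting shuffle=False gives the fixed cyclic order mentioned in the abstract; per the abstract, the per-block step sizes would in practice be scaled by the inverse of the per-block Lipschitz constants rather than the hand-picked constants used here.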
@article{yangyang2015block,
  title={Block stochastic gradient iteration for convex and nonconvex optimization},
  author={Yangyang Xu and Wotao Yin},
  url={http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20170314092324322520688},
  journal={SIAM Journal on Optimization},
  volume={25},
  number={3},
  pages={1686--1716},
  year={2015},
}
Yangyang Xu and Wotao Yin. Block stochastic gradient iteration for convex and nonconvex optimization. SIAM Journal on Optimization, 25(3):1686–1716, 2015. http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20170314092324322520688.