Many optimization algorithms converge to stationary points. When the underlying problem
is nonconvex, they may get trapped at local minimizers and occasionally stagnate near saddle points.
We propose the Run-and-Inspect Method, which adds an “inspect” phase to existing algorithms that
helps escape from non-global stationary points. The inspection samples a set of points in a radius R
around the current point. When a sample point yields a sufficient decrease in the objective, we resume
an existing algorithm from that point. If no sufficient decrease is found, the current point is called an
approximate R-local minimizer. We show that an R-local minimizer is globally optimal, up to a specific
error depending on R, if the objective function can be implicitly decomposed into a smooth convex
function plus a restricted function that is possibly nonconvex and nonsmooth. Therefore, for such nonconvex
objective functions, verifying global optimality is fundamentally easier. For high-dimensional problems, we
introduce blockwise inspections to overcome the curse of dimensionality while still maintaining optimality
bounds up to a factor equal to the number of blocks. Our method performs well on a set of artificial and
realistic nonconvex problems by coupling with gradient descent, coordinate descent, EM, and prox-linear algorithms.
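The run-and-inspect loop described above can be sketched as follows. This is an illustrative toy implementation, not the paper's exact procedure: the descent solver (plain gradient descent), the random sampling scheme, the test function, and all parameter names and values are our own assumptions.

```python
import numpy as np

def num_grad(f, x, eps=1e-6):
    """Central-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def run_gd(f, x, steps=300, lr=0.1):
    """'Run' phase: any existing descent algorithm (here, gradient descent)."""
    for _ in range(steps):
        x = x - lr * num_grad(f, x)
    return x

def inspect(f, x, R, num_samples=200, delta=1e-3, rng=None):
    """'Inspect' phase: sample points in the ball of radius R around x.
    Returns (y, True) if some sample y achieves sufficient decrease
    f(y) <= f(x) - delta; otherwise (x, False), declaring x an
    approximate R-local minimizer."""
    rng = np.random.default_rng(rng)
    fx = f(x)
    for _ in range(num_samples):
        d = rng.normal(size=x.shape)
        y = x + R * rng.uniform() * d / np.linalg.norm(d)
        if f(y) <= fx - delta:
            return y, True
    return x, False

def run_and_inspect(f, x0, R, max_rounds=20, rng=0):
    """Alternate run and inspect phases until no sufficient decrease is found."""
    x = x0
    for _ in range(max_rounds):
        x = run_gd(f, x)
        x, escaped = inspect(f, x, R, rng=rng)
        if not escaped:
            break  # approximate R-local minimizer reached
    return x

# Nonconvex test function: global minimizer at 0, a non-global local
# minimizer near x ~ 4.9 that traps plain gradient descent started at 5.5.
f = lambda x: float(0.1 * x @ x + 1 - np.cos(x).sum())
x_star = run_and_inspect(f, np.array([5.5]), R=3.0)
```

Started at 5.5, gradient descent alone stalls at the local minimizer near 4.9; the inspection finds a sample with sufficient decrease inside the radius-3 ball, and the resumed run reaches the global minimizer at 0.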

Many iterative methods in optimization are fixed-point iterations with averaged operators. As such methods converge at an O(1/k) rate with the constant determined by the averagedness coefficient, establishing small averagedness coefficients for operators is of broad interest. In this paper, we show that the averagedness coefficients of the composition of averaged operators by Ogura and Yamada (Numer Func Anal Opt 32(1–2):113–137, 2002) and the three-operator splitting by Davis and Yin (Set Valued Var Anal 25(4):829–858, 2017) are tight. The analysis relies on the scaled relative graph, a geometric tool recently proposed by Ryu, Hannah, and Yin (arXiv:1902.09788, 2019).
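The O(1/k) rate and its dependence on the averagedness coefficient can be illustrated numerically. The sketch below is our own toy example, not from the paper: it iterates an alpha-averaged operator T = (1 - alpha)I + alpha*N built from a plane rotation N (nonexpansive, with fixed point 0) and checks the classical residual bound ||x_k - T x_k||^2 <= alpha/((1-alpha)(k+1)) * dist(x_0, Fix T)^2, whose constant shrinks as alpha does.

```python
import numpy as np

theta = np.pi / 2
N = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation: nonexpansive, Fix N = {0}
alpha = 0.5
T = (1 - alpha) * np.eye(2) + alpha * N          # alpha-averaged operator

x = np.array([3.0, 4.0])
d0 = np.linalg.norm(x)                           # dist(x_0, Fix T), since Fix T = {0}
for k in range(200):
    res = np.linalg.norm(x - T @ x)              # fixed-point residual at iteration k
    # Classical O(1/k) bound for alpha-averaged fixed-point iterations:
    assert res**2 <= alpha / ((1 - alpha) * (k + 1)) * d0**2 + 1e-12
    x = T @ x                                    # fixed-point iteration x_{k+1} = T x_k
```

After 200 iterations the iterate is, for this particular operator, extremely close to the fixed point 0 (this example actually converges linearly; the O(1/k) bound holds for any averaged operator).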