Many optimization algorithms converge to stationary points. When the underlying problem
is nonconvex, they may get trapped at local minimizers and occasionally stagnate near saddle points.
We propose the Run-and-Inspect Method, which adds an “inspect” phase to existing algorithms that
helps escape from non-global stationary points. The inspection samples a set of points within a radius R
around the current point. When a sample point yields a sufficient decrease in the objective, we resume
the existing algorithm from that point. If no sufficient decrease is found, the current point is called an
approximate R-local minimizer. We show that an R-local minimizer is globally optimal, up to a specific
error depending on R, if the objective function can be implicitly decomposed into a smooth convex
function plus a restricted function that is possibly nonconvex and nonsmooth. Therefore, for such nonconvex
objective functions, verifying global optimality is fundamentally easier. For high-dimensional problems, we
introduce blockwise inspections to overcome the curse of dimensionality while still maintaining optimality
bounds up to a factor equal to the number of blocks. Our method performs well on a set of artificial and
realistic nonconvex problems when coupled with gradient descent, coordinate descent, EM, and prox-linear
algorithms.
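
To make the run/inspect alternation concrete, the following is a minimal Python sketch written under simplifying assumptions: the run phase is plain gradient descent, the inspection draws uniform random samples from the ball of radius R rather than the structured, blockwise sampling analyzed in the paper, and the toy objective, sufficient-decrease threshold nu, and sample count are illustrative choices only.

import numpy as np

def inspect(f, x, R, nu, num_samples=100, rng=None):
    """Sample points in the ball of radius R around x. Return a point that
    decreases f by at least nu, or None if no sampled point does (x is then
    declared an approximate R-local minimizer, in this sampled sense)."""
    rng = np.random.default_rng(0) if rng is None else rng
    fx = f(x)
    for _ in range(num_samples):
        d = rng.normal(size=x.shape)
        d *= R * rng.uniform() ** (1.0 / x.size) / np.linalg.norm(d)  # uniform in the R-ball
        y = x + d
        if f(y) <= fx - nu:           # sufficient decrease found
            return y
    return None

def run_and_inspect(f, grad, x0, R, nu, step=0.05, run_iters=500, max_rounds=20):
    """Alternate a 'run' phase (plain gradient descent here, though any existing
    algorithm could be plugged in) with an 'inspect' phase."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_rounds):
        for _ in range(run_iters):    # run phase
            x = x - step * grad(x)
        y = inspect(f, x, R, nu)
        if y is None:                 # no improving sample: stop at this point
            return x
        x = y                         # resume the run phase from the better point
    return x

if __name__ == "__main__":
    # toy nonconvex objective: a smooth convex quadratic plus an oscillatory term
    f = lambda x: 0.5 * np.dot(x, x) + np.sum(np.cos(3 * x))
    grad = lambda x: x - 3 * np.sin(3 * x)
    x_star = run_and_inspect(f, grad, x0=np.full(2, 4.0), R=1.5, nu=1e-3)
    print("solution:", x_star, "objective:", f(x_star))

In this sketch, gradient descent started at (4, 4) stalls at a non-global local minimizer; the inspection then locates a sampled point with sufficiently smaller objective, and resuming from it lets the run phase reach the global basin, after which a final inspection finds no further decrease.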