Many optimization algorithms converge to stationary points. When the underlying problem
is nonconvex, they may get trapped at local minimizers and occasionally stagnate near saddle points.
We propose the Run-and-Inspect Method, which adds an “inspect” phase to existing algorithms that
helps escape from non-global stationary points. The inspection samples a set of points in a radius R
around the current point. When a sample point yields a sufficient decrease in the objective, we resume
an existing algorithm from that point. If no sufficient decrease is found, the current point is called an
approximate R-local minimizer. We show that an R-local minimizer is globally optimal, up to a specific
error depending on R, if the objective function can be implicitly decomposed into a smooth convex
function plus a restricted function that is possibly nonconvex and nonsmooth. Therefore, for such nonconvex
objective functions, verifying global optimality is fundamentally easier. For high-dimensional problems, we
introduce blockwise inspections to overcome the curse of dimensionality while still maintaining optimality
bounds up to a factor equal to the number of blocks. Our method performs well on a set of artificial and
realistic nonconvex problems by coupling with gradient descent, coordinate descent, EM, and prox-linear algorithms.
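To make the run-and-inspect loop concrete, here is a minimal Python sketch, assuming gradient descent as the base algorithm and plain random sampling in the ball of radius R; the paper's inspections use structured sample sets and blockwise variants, and every parameter name and default below is illustrative rather than prescribed.

import numpy as np

def inspect(f, x, R, num_samples=100, decrease=1e-6, rng=None):
    # Sample points in the ball of radius R around x; return the first
    # sample achieving a sufficient decrease of f, or None if all fail.
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    for _ in range(num_samples):
        d = rng.standard_normal(x.shape)
        y = x + R * rng.uniform() * d / np.linalg.norm(d)
        if f(y) <= fx - decrease:
            return y
    return None  # x passes inspection: an approximate R-local minimizer

def run_and_inspect(f, grad, x0, R, step=1e-2, run_iters=500, max_rounds=10):
    # Alternate a "run" phase (here: plain gradient descent) with inspections.
    x = x0
    for _ in range(max_rounds):
        for _ in range(run_iters):
            x = x - step * grad(x)   # run phase: any existing algorithm works
        y = inspect(f, x, R)         # inspect phase
        if y is None:
            break                    # no sufficient decrease found; stop
        x = y                        # escape the stationary point and resume
    return x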
Many iterative methods in optimization are fixed-point iterations with averaged operators. As such methods converge at an O(1/k) rate with the constant determined by the averagedness coefficient, establishing small averagedness coefficients for operators is of broad interest. In this paper, we show that the averagedness coefficients of the composition of averaged operators by Ogura and Yamada (Numer Func Anal Opt 32(1–2):113–137, 2002) and the three-operator splitting by Davis and Yin (Set Valued Var Anal 25(4):829–858, 2017) are tight. The analysis relies on the scaled relative graph, a geometric tool recently proposed by Ryu, Hannah, and Yin (arXiv:1902.09788, 2019).
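For reference, the standard definitions behind this statement (textbook material, not quoted from the paper): an operator T is θ-averaged for θ ∈ (0, 1) if T = (1 − θ)I + θN with N nonexpansive, and the Ogura–Yamada result says that the composition of a θ₁-averaged and a θ₂-averaged operator is θ-averaged with

\[
  \theta \;=\; \frac{\theta_1 + \theta_2 - 2\,\theta_1\theta_2}{1 - \theta_1\theta_2}.
\]

The tightness claim above is that this θ cannot be replaced by any smaller coefficient in general.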
It is known that operator splitting methods based on forward–backward splitting, Douglas–Rachford splitting, and Davis–Yin splitting decompose difficult optimization problems into simpler subproblems under proper convexity and smoothness assumptions. In this paper, we identify an envelope (an objective function) whose gradient descent iteration under a variable metric coincides with the Davis–Yin splitting iteration. This result generalizes the Moreau envelope for proximal-point iteration and the envelopes for forward–backward splitting and Douglas–Rachford splitting iterations identified by Patrinos, Stella, and Themelis. Based on the new envelope and the stable–center manifold theorem, we further show that, when forward–backward splitting or Douglas–Rachford splitting iterations start from random points, they avoid all strict saddle points with probability one. This result extends similar results by Lee et al. from gradient descent to splitting methods.
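For concreteness, the Davis–Yin iteration for minimizing f(x) + g(x) + h(x), with h smooth and f, g accessed through their proximal maps, fits in a few lines of Python; the iteration form below is the standard one from Davis and Yin, while the example data, step size, and relaxation parameter are illustrative assumptions.

import numpy as np

def davis_yin(prox_f, prox_g, grad_h, z0, gamma=1.0, lam=1.0, iters=1000):
    # Davis-Yin splitting for min f(x) + g(x) + h(x) with h smooth.
    # prox_f and prox_g take (point, gamma); grad_h is the gradient of h.
    z = z0
    for _ in range(iters):
        x = prox_g(z, gamma)
        y = prox_f(2 * x - z - gamma * grad_h(x), gamma)
        z = z + lam * (y - x)
    return x

# Hypothetical example: nonnegative lasso, with f = ||x||_1 (soft-thresholding),
# g = indicator of x >= 0 (projection), and h = 0.5 * ||A x - b||^2.
A, b = np.random.randn(20, 50), np.random.randn(20)
soft = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g, 0)
proj = lambda v, g: np.maximum(v, 0)
grad_h = lambda x: A.T @ (A @ x - b)
x_star = davis_yin(soft, proj, grad_h, np.zeros(50),
                   gamma=1.0 / np.linalg.norm(A, 2) ** 2)  # step within (0, 2/L)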
In this paper, we present a method for identifying infeasible, unbounded, and pathological conic programs based on Douglas–Rachford splitting. When an optimization program is infeasible, unbounded, or pathological, the iterates of Douglas–Rachford splitting diverge. Somewhat surprisingly, such divergent iterates still provide useful information, which our method uses for identification. In addition, for strongly infeasible problems the method produces a separating hyperplane and informs the user on how to minimally modify the given problem to achieve strong feasibility. As a first-order method, the proposed algorithm relies on simple subroutines, and therefore is simple to implement and has low per-iteration cost.
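A minimal sketch of the divergence phenomenon the method exploits, using a toy two-set feasibility problem in place of a conic program (the sets, starting point, and iteration count are illustrative assumptions):

import numpy as np

def drs_displacement(proj_C, proj_D, z0, iters=2000):
    # Douglas-Rachford for the feasibility problem x in C ∩ D; returns the
    # final iterate and the last displacement z_{k+1} - z_k. A displacement
    # bounded away from zero signals that the problem is infeasible.
    z = z0
    for _ in range(iters):
        x = proj_C(z)
        z_new = z + proj_D(2 * x - z) - x
        d, z = z_new - z, z_new
    return z, d

# C = {x <= 0} and D = {x >= 1} in R do not intersect; the displacement
# converges to the gap vector between the sets.
z, d = drs_displacement(lambda z: np.minimum(z, 0.0),
                        lambda z: np.maximum(z, 1.0),
                        np.array([0.3]))
print(d)  # ~ [1.0]: nonzero, so the problem is (strongly) infeasible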
Zhongjian Wang (Department of Mathematics, The University of Hong Kong), Xue Luo (School of Mathematical Sciences, Beihang University, Shahe campus), Stephen S.-T. Yau (Department of Mathematical Sciences, Tsinghua University), Zhiwen Zhang (Department of Mathematics, The University of Hong Kong)
We present an alternating direction dual augmented Lagrangian method for solving semidefinite programming (SDP) problems in standard form. At each iteration, our basic algorithm minimizes the augmented Lagrangian function for the dual SDP problem sequentially, first with respect to the dual variables corresponding to the linear constraints, and then with respect to the dual slack variables, keeping the other variables fixed in each minimization, and finally updates the Lagrange multipliers (i.e., primal variables). Convergence is proved by using a fixed-point argument. For SDPs with inequality constraints and positivity constraints, our algorithm is extended to separately minimize the dual augmented Lagrangian function over four sets of variables. Numerical results for frequency assignment, maximum stable set, and binary integer quadratic programming problems demonstrate that our algorithms are robust and efficient.
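At each iteration the minimization splits into a linear system in the constraint multipliers and a Euclidean projection of the dual slack matrix onto the positive semidefinite cone. The projection, a subroutine common to dual augmented Lagrangian SDP solvers of this kind, can be sketched on its own (a standalone illustration, not the paper's full algorithm):

import numpy as np

def proj_psd(S):
    # Project a symmetric matrix onto the positive semidefinite cone by
    # eigendecomposition: keep nonnegative eigenvalues, zero out the rest.
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, 0)) @ V.T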
The theory of compressive sensing has shown that sparse signals can be reconstructed exactly from many fewer measurements than traditionally believed necessary. Earlier work showed empirically that ℓ^p minimization with p < 1 can do so with fewer measurements than with p = 1. In this paper we consider the use of iteratively reweighted algorithms for computing local minima of the nonconvex problem. In particular, a regularization strategy is found to greatly improve the ability of a reweighted least-squares algorithm to recover sparse signals, with exact recovery being observed for signals that are much less sparse than required by an unregularized version (such as FOCUSS). Improvements are also observed for the reweighted-ℓ^1 approach.
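A minimal sketch of a regularized reweighted least-squares iteration of this flavor, assuming the smoothed model min Σᵢ (xᵢ² + ε)^{p/2} subject to Ax = b with ε decreased across iterations; the schedule and defaults below are illustrative, not the paper's tuned choices.

import numpy as np

def irls_lp(A, b, p=0.5, iters=50):
    # Each step solves a weighted least-squares problem in closed form:
    # x = Q A^T (A Q A^T)^{-1} b with Q = diag((x_i^2 + eps)^(1 - p/2)),
    # i.e., weights w_i = (x_i^2 + eps)^(p/2 - 1).
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares initialization
    eps = 1.0
    for _ in range(iters):
        q = (x**2 + eps) ** (1 - p / 2)        # diagonal of Q
        x = q * (A.T @ np.linalg.solve((A * q) @ A.T, b))
        eps = max(0.5 * eps, 1e-8)             # gradually relax the smoothing
    return x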
In this paper, we formulate a problem of simultaneous redundant via insertion and line end extension for via yield optimization. Our problem is more general than previous works in the sense that more than one type of line end extension is considered and the objective function to be optimized directly accounts for via yield. We present a zero-one integer linear programming based approach, equipped with two speedup techniques, to solve the addressed problem optimally. In addition, we describe how to modify our approach to exactly solve the formulation of a previous work. Extensive experimental results demonstrate the effectiveness and efficiency of our approaches.
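The abstract does not spell out the formulation, but a zero-one ILP of this general shape illustrates the modeling style; every symbol below is a hypothetical placeholder, not the paper's notation: binary x_c selects redundant-via candidate c, binary e_{ℓt} applies extension type t at line end ℓ, and the weights encode each choice's contribution to via yield.

\[
\begin{aligned}
\max_{x,\,e}\quad & \sum_{c} w_c\, x_c + \sum_{\ell, t} w_{\ell t}\, e_{\ell t} \\
\text{s.t.}\quad & \sum_{t} e_{\ell t} \le 1 \quad \text{(at most one extension type per line end)}, \\
& x_c + x_{c'} \le 1 \quad \text{(design-rule conflicts between candidates)}, \\
& x_c,\ e_{\ell t} \in \{0, 1\}.
\end{aligned}
\]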
The free boundary value problem in the obstacle problem for the von Kármán equations is studied. Using the method of complementarity analysis, Rockafellar's duality theory is generalized to nonlinear variational problems, and a complementarity theory of the obstacle problem for von Kármán plates is established. We prove that the existence and uniqueness of the solution depend directly on a complementary gap function. Moreover, a generalized dual extremum principle is established. We prove that the nonlinear primal variational inequality problem is equivalent to a semi-quadratic dual optimization problem defined on a statically admissible space. This equivalence can be used to develop effective numerical methods for solving nonlinear free boundary value problems.
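For orientation, the Rockafellar-style duality invoked here can be written schematically in standard convex-duality notation (generic functionals, not the paper's specific von Kármán energies): with W the stored-energy term, F the remaining term, and Λ the geometric operator, the primal problem min_u W(Λu) + F(u) pairs with the dual problem max_{p*} −W*(p*) − F*(−Λ*p*), and the complementary gap

\[
  G(u, p^*) = \big(W(\Lambda u) + W^*(p^*) - \langle \Lambda u,\, p^*\rangle\big)
            + \big(F(u) + F^*(-\Lambda^* p^*) + \langle u,\, \Lambda^* p^*\rangle\big) \;\ge\; 0
\]

is nonnegative by the Fenchel–Young inequality and vanishes exactly at primal–dual solutions. For linear Λ it reduces to the ordinary duality gap; the nonlinear von Kármán setting is what requires the paper's generalized theory.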