The Kurdyka–Łojasiewicz (KL) exponent plays an important role in estimating the convergence rate of many contemporary first-order methods. In particular, a KL exponent of 1/2 for a suitable potential function is related to local linear convergence. Nevertheless, the KL exponent is in general extremely hard to estimate. In this paper, we show under mild assumptions that the KL exponent is preserved via inf-projection. Inf-projection is a fundamental operation that is ubiquitous when reformulating optimization problems via the lift-and-project approach. By studying how inf-projection acts on the KL exponent, we show that the KL exponent is 1/2 for several important convex optimization models, including some semidefinite-programming-representable functions and some functions that involve C2-cone reducible structures, under conditions such as strict complementarity. Our results are applicable to concrete optimization models such as group-fused Lasso and overlapping group Lasso. In addition, for nonconvex models, we show that the KL exponent of many difference-of-convex functions can be derived from that of their natural majorant functions, and the KL exponent of the Bregman envelope of a function is the same as that of the function itself. Finally, we estimate the KL exponent of the sum of the least squares function and the indicator function of the set of matrices of rank at most k.
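For context, the KL property referred to above takes the following standard form (the symbols c, ε, and α below are ours, not from the abstract): a proper closed function f satisfies the KL property at a stationary point x̄ with exponent α ∈ [0, 1) if there exist c, ε > 0 such that

```latex
\operatorname{dist}\bigl(0,\partial f(x)\bigr)
  \;\ge\; c\,\bigl(f(x)-f(\bar x)\bigr)^{\alpha}
\quad \text{whenever } \|x-\bar x\|\le \varepsilon
  \text{ and } f(\bar x) < f(x) < f(\bar x)+\varepsilon.
```

The case α = 1/2 is the one tied to local linear convergence of many first-order methods, which is why the abstract emphasizes establishing this value.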
Finding the maximum eigenvalue of a symmetric tensor is an important topic in tensor computation and numerical multilinear algebra. In this paper, we introduce a new class of structured tensors called W-tensors, which not only extends the well-studied nonnegative tensors by allowing negative entries but also covers several important tensors arising naturally from spectral hypergraph theory. We then show that finding the maximum H-eigenvalue of an even-order symmetric W-tensor is equivalent to solving a structured semidefinite program and hence can be validated in polynomial time. This yields a highly efficient semidefinite program algorithm for computing the maximum H-eigenvalue of W-tensors and is based on a new structured sums-of-squares decomposition result for a nonnegative polynomial induced by W-tensors. Numerical experiments illustrate that the proposed algorithm can successfully find the maximum H-eigenvalue of W-tensors with dimension up to 10,000, subject to machine precision. As applications, we provide a polynomial time algorithm for computing the maximum H-eigenvalues of large-size Laplacian tensors of hyperstars and hypertrees, where the algorithm can be up to 13 times faster than the state-of-the-art numerical method introduced by Ng, Qi, and Zhou in 2009. Finally, we also show that the proposed algorithm can be used to test the copositivity of a multivariate form associated with symmetric extended Z-tensors, whose order may be even or odd.
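The tensor result above generalizes a classical matrix fact: the largest eigenvalue of a symmetric matrix A is the optimal value of the SDP min{t : tI − A ⪰ 0}. The sketch below verifies this characterization numerically for a small example matrix of our choosing (it checks SDP feasibility directly rather than calling an SDP solver, and is only the matrix analogue, not the paper's tensor algorithm):

```python
import numpy as np

# Matrix analogue of the SDP characterization: lambda_max(A) is the
# optimal value of  min t  s.t.  t*I - A  positive semidefinite.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])  # example matrix; eigenvalues are 1, 2, 4

lam_max = np.max(np.linalg.eigvalsh(A))

def is_psd(M, tol=1e-9):
    """Check positive semidefiniteness via the smallest eigenvalue."""
    return np.min(np.linalg.eigvalsh(M)) >= -tol

# t = lam_max is feasible for the SDP ...
assert is_psd(lam_max * np.eye(3) - A)
# ... while any strictly smaller t is infeasible.
assert not is_psd((lam_max - 1e-6) * np.eye(3) - A, tol=1e-12)
```

The paper's contribution is that a structured sums-of-squares decomposition lets an analogous SDP compute the maximum H-eigenvalue of even-order symmetric W-tensors, where no such reduction was previously known.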
In this paper, we study, for the first time, nonconvex minimax separable quadratic optimization problems with multiple separable quadratic constraints and their second-order cone programming (SOCP) relaxations. Under suitable conditions, we establish exact SOCP relaxation for minimax nonconvex separable quadratic programs. We show that various important classes of specially structured minimax quadratic optimization problems admit exact SOCP relaxations under easily verifiable conditions. These classes include some minimax extended trust-region problems, minimax uniform quadratic optimization problems, max dispersion problems, and some robust quadratic optimization problems under bounded data uncertainty. The present work shows that nonconvex minimax separable quadratic problems with quadratic constraints, which contain a hidden closed and convex epigraphical set, exhibit exact SOCP relaxations.
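As a rough illustration of this problem class (a generic template consistent with the abstract; the symbols and index ranges are ours), a minimax separable quadratic program with separable quadratic constraints takes the form

```latex
\min_{x\in\mathbb{R}^n}\ \max_{1\le i\le p}\
  \sum_{j=1}^{n}\bigl(a_{ij}x_j^2 + b_{ij}x_j\bigr)
\quad \text{s.t.}\quad
  \sum_{j=1}^{n}\bigl(c_{lj}x_j^2 + d_{lj}x_j\bigr)\le \gamma_l,
  \qquad l=1,\dots,m,
```

where the coefficients a_{ij} and c_{lj} carry no sign restriction, so neither the objective nor the constraints need be convex; the exact-relaxation results identify conditions under which the SOCP relaxation nevertheless attains the same optimal value.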
In this paper, we consider a bilevel polynomial optimization problem where the objective and the constraint functions of both the upper- and the lower-level problems are polynomials. We present methods for finding its global minimizers and global minimum using a sequence of semidefinite programming (SDP) relaxations and provide convergence results for the methods. Our scheme for problems with a convex lower-level problem involves solving a transformed equivalent single-level problem by a sequence of SDP relaxations, whereas our approach for general problems involving a nonconvex polynomial lower-level problem solves a sequence of approximation problems via another sequence of SDP relaxations.
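For orientation, a bilevel problem of the kind described above can be written in standard form (the symbols F, G, f, g are ours) as

```latex
\min_{x,\,y}\ F(x,y)
\quad \text{s.t.}\quad G(x,y)\le 0,
\qquad
y \in \operatorname*{arg\,min}_{z}\ \bigl\{\, f(x,z) \;:\; g(x,z)\le 0 \,\bigr\},
```

where F, G, f, and g are polynomials. The first scheme applies when the lower-level problem in z is convex, allowing an equivalent single-level reformulation; the second handles a nonconvex lower level through a sequence of approximation problems.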