The paper addresses parametric inequality systems described by polynomial functions in finite dimensions, where the state-dependent infinite parameter sets are given by finitely many polynomial inequalities and equalities. Such systems can be viewed, in particular, as solution sets of generalized semi-infinite programs with polynomial data. Exploiting the imposed polynomial structure together with powerful tools of variational analysis and semialgebraic geometry, we establish a far-reaching extension of the Łojasiewicz gradient inequality to the general nonsmooth class of supremum marginal functions, as well as higher-order (Hölder-type) local error bound results with explicitly calculated exponents. The obtained results are applied to higher-order quantitative stability analysis for various classes of optimization problems, including generalized semi-infinite programming with polynomial data, optimization of real polynomials under polynomial matrix inequality constraints, and polynomial second-order cone programming. Further applications provide explicit convergence rate estimates for the cyclic projection algorithm for finding common points of convex sets described by matrix polynomial inequalities, and for the asymptotic convergence of trajectories of subgradient dynamical systems in semialgebraic settings.
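For orientation, the classical smooth statements behind this abstract can be recalled as follows. These are the standard textbook formulations; the paper's contribution is a nonsmooth supremum-marginal-function analogue with explicitly computed exponents.

```latex
% Lojasiewicz gradient inequality: if f is real analytic around \bar{x}
% with \nabla f(\bar{x}) = 0, then there exist c > 0, a neighborhood U
% of \bar{x}, and an exponent \theta \in [1/2, 1) such that
\[
  |f(x) - f(\bar{x})|^{\theta} \le c \,\|\nabla f(x)\|
  \quad \text{for all } x \in U.
\]
% Holder-type local error bound for the set S = \{ x : g(x) \le 0 \}:
% there exist c, \gamma > 0 and a neighborhood U of \bar{x} \in S with
\[
  \operatorname{dist}(x, S) \le c \,\big[ g(x) \big]_{+}^{\gamma}
  \quad \text{for all } x \in U,
  \qquad [t]_{+} := \max\{ t, 0 \}.
\]
```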
We show that the classical fourth order accurate compact finite difference scheme with high order strong stability preserving time discretizations for convection-diffusion problems satisfies a weak monotonicity property, which implies that a simple limiter can enforce the bound-preserving property without losing conservation or high order accuracy. Higher order accurate compact finite difference schemes satisfying the weak monotonicity will also be discussed.
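To illustrate the kind of limiter this abstract refers to, here is a minimal Zhang-Shu-type scaling limiter in Python. It is a simplified sketch, not the paper's exact construction: it assumes the reference averages `avg` (for instance, the weighted averages (u_{j-1} + 4 u_j + u_{j+1}) / 6 of the compact scheme) already lie in the bounds [m, M], which is precisely what the weak monotonicity property is meant to guarantee, and it only shows the scaling mechanics rather than the full conservation-preserving construction.

```python
import numpy as np

def scaling_limiter(u, avg, m=0.0, M=1.0):
    """Contract each point value toward its reference average just enough
    to land in [m, M]. If avg[j] lies in [m, M] (weak monotonicity), the
    limited values do too, and u is unchanged wherever it was already in
    bounds. A simplified sketch, not the paper's exact limiter."""
    u = np.asarray(u, dtype=float)
    avg = np.asarray(avg, dtype=float)
    theta = np.ones_like(u)
    hi = u > M  # overshoots above the upper bound
    lo = u < m  # undershoots below the lower bound
    theta[hi] = (M - avg[hi]) / (u[hi] - avg[hi])
    theta[lo] = (avg[lo] - m) / (avg[lo] - u[lo])
    return avg + theta * (u - avg)

# Example: point values on a periodic grid; weighted averages of the
# classical fourth order compact scheme, assumed to stay in [0, 1].
u = np.array([0.0, 0.3, 1.05, 0.7, -0.02, 0.4])
avg = (np.roll(u, 1) + 4.0 * u + np.roll(u, -1)) / 6.0
print(scaling_limiter(u, avg, m=0.0, M=1.0))  # overshoots clipped to [0, 1]
```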
In this paper, we present a new adaptive feature scaling scheme for ultrahigh-dimensional feature selection on Big Data, and reformulate it as a convex semi-infinite programming (SIP) problem. To address the SIP, we propose an efficient feature-generating paradigm. Unlike traditional gradient-based approaches that optimize over all input features, the proposed paradigm iteratively activates a group of features and solves a sequence of multiple kernel learning (MKL) subproblems. To further speed up training, we solve the MKL subproblems in their primal forms through a modified accelerated proximal gradient approach; this optimization scheme also allows us to develop efficient caching techniques. The feature-generating paradigm is guaranteed to converge globally under mild conditions and can achieve lower feature selection bias. Moreover, the proposed method can tackle two challenging tasks in feature selection: 1) group-based feature selection with complex structures, and 2) nonlinear feature selection with explicit feature mappings. Comprehensive experiments on a wide range of synthetic and real-world data sets with tens of millions of data points and O(10^14) features demonstrate that the proposed method is competitive with state-of-the-art feature selection methods in terms of generalization performance and training efficiency.
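The accelerated proximal gradient solver mentioned above belongs to the FISTA family. As a generic illustration (not the paper's modified primal MKL variant), here is standard APG applied to an l1-regularized least-squares problem; it shows the two ingredients the abstract relies on, a proximal (soft-thresholding) step and Nesterov momentum.

```python
import numpy as np

def apg_l1(A, b, lam, iters=300):
    """Standard accelerated proximal gradient (FISTA) for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    A generic sketch of the APG family; the paper uses a modified
    variant on primal MKL subproblems, not this exact solver."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(iters):
        z = y - A.T @ (A @ y - b) / L  # gradient step on the smooth part
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of lam*||.||_1
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov momentum
        x, t = x_new, t_new
    return x
```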
This paper studies the problem of finding best rank-1 approximations of both symmetric and nonsymmetric tensors. For symmetric tensors, this is equivalent to optimizing homogeneous polynomials over unit spheres; for nonsymmetric tensors, it is equivalent to optimizing multiquadratic forms over multispheres. We propose semidefinite relaxations, based on sum-of-squares representations, to solve these polynomial optimization problems, and we study their special properties and structures. In applications, the resulting semidefinite programs are often large scale. The recent Newton-CG augmented Lagrangian method by Zhao, Sun, and Toh [SIAM J. Optim., 20 (2010), pp. 1737–1765] is suitable for solving these semidefinite relaxations. Extensive numerical experiments show that this approach is efficient in computing best rank-1 approximations.
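The semidefinite relaxations themselves require a moment/SOS toolchain, so a self-contained snippet is out of reach here. As a lightweight contrast, the classical higher-order power method below computes a locally best rank-1 approximation of a third-order tensor; it is a standard local heuristic, explicitly not the SDP approach of the paper, but it makes the objective lam = T(u, v, w) concrete.

```python
import numpy as np

def hopm(T, iters=200, seed=0):
    """Higher-order power method: locally best rank-1 approximation
    T ~ lam * (u outer v outer w) of a 3rd-order tensor T. A classical
    local heuristic, NOT the paper's semidefinite relaxation approach."""
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(n) for n in T.shape)
    u, v, w = (x / np.linalg.norm(x) for x in (u, v, w))
    for _ in range(iters):
        # Alternately maximize T(u, v, w) over each unit vector in turn.
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)  # optimal scale for fixed u, v, w
    return lam, u, v, w
```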