In this article, we consider Lorentz cone complementarity problems in infinite-dimensional real Hilbert spaces. We establish several results that are standard and important when dealing with complementarity problems. These include proving that the Fischer–Burmeister merit function and the natural residual merit function have the same order of growth, investigating the boundedness of level sets under mild conditions via different merit functions, and providing global error bounds through the proposed merit functions. Such results are helpful for the further design of solution methods for Lorentz cone complementarity problems in Hilbert space.
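As a concrete finite-dimensional illustration of the natural residual merit function, one can project onto the second-order cone K = {(x0, x̄) : x0 ≥ ‖x̄‖} via its spectral decomposition and measure the residual x − P_K(x − y). The sketch below is our own finite-dimensional illustration (function names are ours), not the paper's Hilbert-space construction:

```python
import numpy as np

def proj_soc(x):
    """Project x = (x0, x_bar) onto the second-order cone
    K = {(x0, x_bar) : x0 >= ||x_bar||} by keeping only the
    nonnegative spectral values of the decomposition
    x = lam1*u1 + lam2*u2."""
    x0, xb = x[0], x[1:]
    nb = np.linalg.norm(xb)
    lam1, lam2 = x0 - nb, x0 + nb                 # spectral values
    w = xb / nb if nb > 0 else np.zeros_like(xb)  # unit direction (any works if xb = 0)
    u1 = 0.5 * np.concatenate(([1.0], -w))        # spectral vectors
    u2 = 0.5 * np.concatenate(([1.0], w))
    return max(lam1, 0.0) * u1 + max(lam2, 0.0) * u2

def nr_merit(x, y):
    """Natural residual merit Psi_NR(x, y) = 0.5*||x - P_K(x - y)||^2;
    it vanishes exactly when x in K, y in K, and <x, y> = 0."""
    r = x - proj_soc(x - y)
    return 0.5 * np.dot(r, r)
```

For instance, the complementary boundary pair x = (1, 1), y = (1, -1) gives a zero merit value, while a non-complementary pair gives a positive one.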
Given a Hilbert space H, the infinite-dimensional Lorentz (second-order) cone K is introduced. For any x ∈ H, a spectral decomposition is introduced, and for any function f : R → R, we define a corresponding vector-valued function f^H(x) on H by applying f to the spectral values of the spectral decomposition of x with respect to K. We show that this vector-valued function inherits from f the properties of continuity, Lipschitz continuity, differentiability, smoothness, as well as s-semismoothness. These results can be helpful for designing and analyzing solution methods for infinite-dimensional second-order cone programs and complementarity problems.
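The construction of f^H from f is easy to see in the finite-dimensional second-order cone case, where x = λ1 u1 + λ2 u2 with λ_{1,2} = x0 ∓ ‖x̄‖. A minimal Python sketch (our own finite-dimensional illustration, with names of our choosing):

```python
import numpy as np

def f_soc(f, x):
    """Vector-valued function induced by f : R -> R via the spectral
    decomposition x = lam1*u1 + lam2*u2 with respect to the second-order
    cone: f_soc(x) = f(lam1)*u1 + f(lam2)*u2."""
    x0, xb = x[0], x[1:]
    nb = np.linalg.norm(xb)
    w = xb / nb if nb > 0 else np.zeros_like(xb)  # any unit vector works if xb = 0
    lam1, lam2 = x0 - nb, x0 + nb
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return f(lam1) * u1 + f(lam2) * u2
```

Taking f(t) = t recovers x itself, f(t) = t^2 gives the Jordan square x ∘ x, and f(t) = max(t, 0) gives the projection onto the cone.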
In the Nash bargaining problem, due to fairness concerns of players, instead
of maximizing the sum of utilities of all players, an implementable solution should
satisfy some axioms or characterizations. Such a solution can result in the so-called
price of fairness, because of the reduction in the sum of utilities of all players. An
important issue is to quantify the system efficiency loss under axiomatic solutions
through the price of fairness. Based on Perles–Maschler solution of two-player Nash
bargaining problem, this paper deals with the extended Perles–Maschler solution of
multi-player Nash bargaining problem. We give lower bounds of three measures of the
system efficiency for this solution, and show that the lower bounds are asymptotically
Processing streaming data as they arrive is often necessary in high-dimensional data analysis. In this paper, we analyze the convergence of a subspace online PCA iteration, as a follow-up to the recent work of Li, Wang, Liu, and Zhang [Math. Program., Ser. B, DOI 10.1007/s10107-017-1182-z], who considered only the most significant principal component, i.e., a single vector. Under a sub-Gaussian assumption, we obtain a finite-sample error bound that closely matches the minimax information lower bound of Vu and Lei [Ann. Statist. 41:6 (2013), 2905-2947].
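A subspace online PCA iteration of the kind analyzed here can be pictured as an Oja-style update: nudge an orthonormal basis Q along the sample direction x xᵀ Q, then re-orthonormalize. The following sketch is a generic illustration under that assumption, not necessarily the exact iteration studied in the paper:

```python
import numpy as np

def online_pca_step(Q, x, eta):
    """One Oja-style subspace online PCA step: move Q along the sample
    covariance direction x x^T Q, then re-orthonormalize with a thin QR
    factorization so Q keeps orthonormal columns spanning the current
    subspace estimate."""
    Q = Q + eta * np.outer(x, x @ Q)
    Q, _ = np.linalg.qr(Q)
    return Q
```

Fed a stream of samples whose covariance has a dominant k-dimensional eigenspace, the columns of Q align with that eigenspace for suitable step sizes eta.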
We investigate a one-parametric class of merit functions for the second-order cone complementarity problem (SOCCP), which is closely related to the popular Fischer–Burmeister (FB) merit function and the natural residual merit function. In fact, the class reduces to the FB merit function when the involved parameter equals 2, whereas as the parameter tends to zero its limit becomes a multiple of the natural residual merit function. In this paper, we show that this class of merit functions enjoys several favorable properties that the FB merit function holds, for example, smoothness. These properties play an important role in the reformulation of the SOCCP as an unconstrained minimization or a nonsmooth system of equations. Numerical results are reported for some convex second-order cone programs (SOCPs) by solving the unconstrained minimization reformulation of the KKT optimality conditions, which indicate that the
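The FB function that this class reduces to (at parameter value 2) can be written with the Jordan-algebra operations of the second-order cone. A hedged finite-dimensional sketch (our own illustration; function names are ours):

```python
import numpy as np

def jordan_prod(x, y):
    """Jordan product associated with the second-order cone:
    x o y = (<x, y>, x0*y_bar + y0*x_bar)."""
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def soc_sqrt(z):
    """Square root in the Jordan algebra via the spectral decomposition
    z = lam1*u1 + lam2*u2 (requires z in the cone, i.e., lam1 >= 0)."""
    z0, zb = z[0], z[1:]
    nb = np.linalg.norm(zb)
    w = zb / nb if nb > 0 else np.zeros_like(zb)
    lam1, lam2 = z0 - nb, z0 + nb
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return np.sqrt(lam1) * u1 + np.sqrt(lam2) * u2

def fb_soc(x, y):
    """Fischer-Burmeister function for the SOCCP:
    phi(x, y) = (x o x + y o y)^{1/2} - x - y;
    phi = 0 iff x in K, y in K, and x o y = 0."""
    return soc_sqrt(jordan_prod(x, x) + jordan_prod(y, y)) - x - y
```

The associated merit function is then half the squared norm of phi, and phi vanishes exactly at complementary pairs such as x = (1, 1), y = (1, -1).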
The support vector machine (SVM) was originally designed for binary classification. Much effort has been put into generalizing the binary SVM to the multiclass SVM (MSVM), which poses more complex problems. Initially, MSVMs were solved through their dual formulations, which are quadratic programs and can be solved by standard second-order methods. However, the duals of MSVMs with regularizers are usually more difficult to formulate and computationally very expensive to solve. This paper focuses on several regularized MSVMs and extends the alternating direction method of multipliers (ADMM) to them. Using a splitting technique, all considered MSVMs are written as two-block convex programs, for which the ADMM has global convergence guarantees. Numerical experiments on synthetic and real data demonstrate the high efficiency and accuracy of our algorithms.
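The two-block splitting pattern is easiest to see on a small model. The sketch below applies ADMM to an l1-regularized least-squares problem rather than an MSVM (a simplification of ours, not the paper's formulation), but it exhibits the same structure: a smooth block, a prox-friendly block, and a dual update:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Two-block ADMM sketch on min 0.5*||A w - b||^2 + lam*||z||_1
    s.t. w - z = 0 -- the same splitting used to cast regularized
    models as two-block convex programs with convergence guarantees."""
    n = A.shape[1]
    w = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))   # cached factor for the w-update
    for _ in range(iters):
        w = M @ (Atb + rho * (z - u))          # smooth block: ridge-type solve
        v = w + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of l1
        u = u + w - z                          # scaled dual update
    return z
```

With A the identity, the limit is simply the soft-thresholding of b, which makes the scheme easy to sanity-check.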
We present a unified analysis of interior proximal methods for solving convex second-order cone programming problems. These methods use a proximal distance with respect to second-order cones, which can be produced from an appropriate closed proper univariate function in three ways. Under mild conditions, the generated sequence is bounded with each limit point being a solution, and global rates of convergence are estimated in terms of objective values. We also construct a class of regularized proximal distances that guarantees global convergence of the sequence to an optimal solution. These results are illustrated with some examples. In addition, we study the central paths associated with these distance-like functions, and for the linear SOCP we discuss their relation to the sequence generated by the interior proximal methods. From this, we obtain improved convergence
To alleviate the curse of high dimensionality, subspace learning is becoming more popular. Existing approaches use information about either the global or the local structure of the data; few studies focus on both simultaneously, although each contains important information. In this paper, we propose a global and local structure preserving sparse subspace learning (GLoSS) model for unsupervised feature selection. The model can realize feature selection and subspace learning simultaneously. In addition, we develop a greedy algorithm to establish a generic combinatorial model, and an iterative strategy based on accelerated block coordinate descent is used to solve the GLoSS problem. We also provide a whole-sequence convergence analysis of the proposed iterative algorithm. Extensive experiments are conducted on real-world datasets to show the superiority of the
In this paper, we formulate the problem of simultaneous redundant via insertion and line end extension for via yield optimization. Our problem is more general than those of previous works in that more than one type of line end extension is considered and the objective function directly accounts for via yield. We present a zero-one integer linear programming based approach, equipped with two speedup techniques, to solve the addressed problem optimally. In addition, we describe how to modify our approach to solve exactly the problem addressed in a previous work. Extensive experimental results demonstrate the effectiveness and efficiency of our approaches.