We study the minimization problem of a non-convex sparsity-promoting penalty function, the transformed l1 (TL1), and its application in compressed sensing (CS). The TL1 penalty interpolates the l0 and l1 norms through a nonnegative parameter a ∈ (0, +∞), similar to lp with p ∈ (0, 1], and is known to satisfy unbiasedness, sparsity and Lipschitz continuity properties. We first consider the constrained minimization problem and discuss exact recovery of the l0 norm minimal solution based on the null space property (NSP). We then prove stable recovery of the l0 norm minimal solution if the sensing matrix A satisfies a restricted isometry property (RIP). We formulate a normalized problem to overcome the lack of a scaling property in the TL1 penalty function. For a general sensing matrix A, we show that the support set of a local minimizer corresponds to linearly independent columns of A. Next, we present difference of convex algorithms for TL1 (DCATL1) for computing TL1-regularized constrained and unconstrained problems in CS. The DCATL1 algorithm involves outer and inner loops of iterations, a one-time matrix inversion, repeated shrinkage operations and matrix-vector multiplications. The inner loop concerns an l1 minimization problem, which we solve with the Alternating Direction Method of Multipliers (ADMM). For the unconstrained problem, we prove convergence of DCATL1 to a stationary point satisfying the first-order optimality condition. In numerical experiments, we identify the optimal value a = 1, and compare DCATL1 with other CS algorithms on two classes of sensing matrices: Gaussian random matrices and over-sampled discrete cosine transform (DCT) matrices. Among existing algorithms, the iteratively reweighted least squares method based on the l1/2 norm is the best in sparse recovery for Gaussian matrices, and the DCA algorithm based on the l1 minus l2 penalty is the best for over-sampled DCT matrices.
We find that for both classes of sensing matrices, the performance of the DCATL1 algorithm (initialized with l1 minimization) always ranks near the top (if not at the top), and it is the most robust choice, insensitive to the conditioning of the sensing matrix A. DCATL1 is also competitive with DCA applied to other non-convex penalty functions commonly used in statistics with two hyperparameters.
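The interpolation property described above can be illustrated with a minimal numerical sketch, assuming the standard form ρ_a(t) = (a+1)|t|/(a+|t|) of the TL1 penalty found in the literature (the function name `tl1` is ours):

```python
import numpy as np

def tl1(x, a):
    """Transformed l1 penalty applied elementwise and summed:
    rho_a(t) = (a + 1) * |t| / (a + |t|),  a > 0."""
    t = np.abs(np.asarray(x, dtype=float))
    return np.sum((a + 1.0) * t / (a + t))

x = np.array([0.0, 0.5, -2.0, 3.0])
# As a -> infinity, rho_a(t) -> |t|, so tl1 approaches the l1 norm (5.5 here).
print(tl1(x, a=1e8))
# As a -> 0+, rho_a(t) -> 1 for t != 0, so tl1 approaches the l0 norm (3 here).
print(tl1(x, a=1e-8))
```

This makes concrete why a single parameter a sweeps the penalty between the l0 and l1 extremes.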
We study a non-convex low-rank promoting penalty function, the transformed Schatten-1 (TS1), and its applications in matrix completion. The TS1 penalty, a matrix quasi-norm defined on the singular values, interpolates the rank and the nuclear norm through a nonnegative parameter a ∈ (0, +∞). We consider the unconstrained TS1-regularized low-rank matrix recovery problem and develop a fixed point representation for its global minimizer. The TS1 thresholding functions are in closed analytical form for all parameter values. The TS1 threshold values differ in the subcritical (supercritical) parameter regime, where the TS1 threshold functions are continuous (discontinuous). We propose TS1 iterative thresholding algorithms and compare them with some state-of-the-art algorithms on matrix completion test problems. For problems with known rank, a fully adaptive TS1 iterative thresholding algorithm consistently performs the best under different conditions, where ground truth matrices are generated by multivariate Gaussian, (0,1) uniform and Chi-square distributions. For problems with unknown rank, TS1 algorithms with an additional rank estimation procedure approach the level of IRucL-q, an iteratively reweighted algorithm that is non-convex in nature and best in performance.
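As a rough illustration of the iterative thresholding scheme described above, the sketch below uses plain soft-thresholding of singular values as a stand-in for the closed-form TS1 thresholding functions; the matrix sizes, step size and threshold are arbitrary choices of ours:

```python
import numpy as np

def svt(X, tau):
    # Threshold the singular values (soft-thresholding here, as a
    # stand-in for the closed-form TS1 thresholding functions).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M, mask, tau=0.5, mu=1.0, iters=300):
    # Iterative thresholding for matrix completion: a gradient step
    # on the observed entries, followed by a singular-value threshold.
    X = np.zeros_like(M)
    for _ in range(iters):
        X = svt(X + mu * mask * (M - X), tau)
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 truth
mask = rng.random(M.shape) < 0.6   # about 60% of entries observed
X = complete(M, mask)              # low-rank estimate fitting observed entries
```

Replacing `svt` with the TS1 thresholding map yields the family of algorithms the abstract refers to.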
We study the problem of constructing sparse and fast mean reverting portfolios. The problem is motivated by convergence trading and formulated as a generalized eigenvalue problem with a cardinality constraint. We use a proxy of the mean reversion coefficient, the direct Ornstein-Uhlenbeck (OU) estimator, which can be applied to both stationary and nonstationary data. In addition, we introduce three different methods to enforce sparsity of the solutions. One method uses the ratio of the l1 and l2 norms and the other two use the l1 norm. We analyze various formulations of the resulting non-convex optimization problems and develop efficient algorithms to solve them for portfolio sizes as large as hundreds. By adopting a simple convergence trading strategy, we test the performance of our sparse mean reverting portfolios on both synthetic and historical real market data. In particular, the l1 regularization method, in combination with a quadratic program formulation as well as difference of convex functions and least angle regression treatment, gives fast and robust performance on large out-of-sample data sets.
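A minimal sketch of the generalized eigenvalue step underlying the formulation above; the matrices A and B below are random positive-definite stand-ins (not the OU estimator), and the hard truncation at the end is a naive cardinality heuristic, not one of the paper's l1-based methods:

```python
import numpy as np
from scipy.linalg import eigh

# Extremize the generalized Rayleigh quotient x^T A x / x^T B x,
# the core step of the mean-reversion portfolio formulation.
rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)

vals, vecs = eigh(A, B)   # generalized eigenvalues, ascending order
x = vecs[:, -1]           # extremal generalized eigenvector

# Naive cardinality heuristic: keep the k entries largest in
# magnitude and zero out the rest.
k = 8
idx = np.argsort(np.abs(x))[:-k]
x_sparse = x.copy()
x_sparse[idx] = 0.0
```

The paper's contribution is precisely to replace this crude truncation with l1-norm and l1/l2-ratio regularization in the optimization itself.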
To appear in Journal of Scientific Computing (JOMP).
A computational method, based on l1-minimization, is proposed for the problem of link flow correction, when the available traffic flow data on many links in a road network are inconsistent with respect to the flow conservation law. Without extra information, the problem is generally ill-posed when a large portion of the link sensors are unhealthy. It is possible, however, to correct the corrupted link flows accurately with the proposed method under a recoverability condition if there are only a few bad sensors which are located at certain links. We analytically identify the links that are robust to miscounts and relate them to the geometric structure of the traffic network by introducing the recoverability concept and an algorithm for computing it. The recoverability condition for corrupted links is simply that the associated recoverability is greater than 1. In a more realistic setting, besides the unhealthy link sensors, small measurement noises may be present at the other sensors. Under the same recoverability condition, our method is guaranteed to give an estimated traffic flow close to the ground-truth data, and it leads to a bound on the correction error. Both synthetic and real-world examples are provided to demonstrate the effectiveness of the proposed method.
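The l1-based correction described above can be sketched as a linear program: minimize ||f − b||_1 over flows f satisfying conservation N f = 0, where N is a node-link incidence matrix and b holds the (possibly corrupted) sensor readings. The toy network and the LP encoding below are our own illustrative choices:

```python
import numpy as np
from scipy.optimize import linprog

def correct_flows(N, b):
    """min ||f - b||_1  subject to  N f = 0  (flow conservation),
    solved as an LP with slack variables t >= |f - b|."""
    m, n = N.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(t)
    A_eq = np.hstack([N, np.zeros((m, n))])         # conservation on f only
    b_eq = np.zeros(m)
    # f - t <= b  and  -f - t <= -b  together encode |f - b| <= t.
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (2 * n))
    return res.x[:n]

# Toy network: three links in series through two interior nodes,
# so conservation forces all three flows to be equal.
N = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
b = np.array([10.0, 10.0, 4.0])   # third sensor reading is corrupted
f = correct_flows(N, b)           # recovers f = [10, 10, 10]
```

Because two of the three readings agree, the l1 objective snaps the single corrupted reading back to the consistent value, which is the sparsity mechanism the recoverability analysis quantifies.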
We present a new perspective on graph-based methods for collaborative ranking for recommender systems. Unlike user-based or item-based methods that compute a weighted average of ratings given by the nearest neighbors, or low-rank approximation methods using convex optimization and the nuclear norm, we formulate matrix completion as a series of semi-supervised learning problems, and propagate the known ratings to the missing ones on the user-user or item-item graph globally. The semi-supervised learning problems are expressed as Laplace-Beltrami equations on a manifold, namely, harmonic extension, and can be discretized by a point integral method. We show that our approach does not impose a low-rank Euclidean subspace on the data points, but instead minimizes the dimension of the underlying manifold. Our method, named LDM (low dimensional manifold), turns out to be particularly effective in generating rankings of items, showing decent computational efficiency and robust ranking quality compared to state-of-the-