We propose a variational approach to obtain super-resolution images from multiple low-resolution frames extracted from video clips. First, the displacements between the low-resolution frames and the reference frame are computed by an optical flow algorithm. Then a low-rank model is used to construct the reference frame in high resolution by incorporating the information of the low-resolution frames. The model has two terms: a 2-norm data fidelity term and a nuclear-norm regularization term. The alternating direction method of multipliers (ADMM) is used to solve the model. Comparisons of our method with other models on synthetic and real video clips show that our resulting images are more accurate with fewer artifacts; they also provide much finer and more discernible details.
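In a minimal instance of such a two-term model (notation assumed here for illustration, not quoted from the paper), with X the high-resolution reference frame, b_k the observed low-resolution frames, A_k hypothetical operators composing the flow-based warping with blurring and downsampling, and \lambda > 0 a regularization weight, one solves

\min_{X} \; \sum_{k} \| A_k X - b_k \|_2^2 + \lambda \, \| X \|_* ,

where \| \cdot \|_* denotes the nuclear norm; ADMM then splits the fidelity and nuclear-norm terms into separate subproblems, the latter typically solved by singular-value thresholding.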
This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. They decompose a problem into simple subproblems, each of which updates one variable, or a small block of variables, while fixing the others. These methods can handle linear and nonlinear mappings, smooth and nonsmooth functions, as well as convex and nonconvex problems. In addition, they are easy to parallelize.
The performance of coordinate update methods hinges on the subproblems being simple. To derive simple subproblems for several new classes of applications, this paper systematically studies coordinate-friendly operators that perform low-cost coordinate updates.
Based on the discovered coordinate-friendly operators, as well as operator splitting techniques, we obtain new coordinate update algorithms for a variety of problems in machine learning, image processing, and sub-areas of optimization. Several problems are treated with coordinate update for the first time. The obtained algorithms are scalable to large instances through parallel and even asynchronous computing. We present numerical examples to illustrate the effectiveness of these algorithms.
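As a generic illustration of a coordinate-friendly update (an assumed example, not one of the paper's specific algorithms), the sketch below applies randomized coordinate descent to a least-squares problem; caching the residual makes each coordinate update cost O(m) rather than the O(mn) of a full gradient step, which is exactly the kind of low-cost update the coordinate-friendly notion captures.

import numpy as np

# Assumed illustration: randomized exact coordinate minimization on
#   min_x 0.5 * ||A x - b||^2,
# with the residual r = A x - b cached so one coordinate update costs O(m).

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = np.zeros(n)
r = A @ x - b                      # cached residual
col_norms = (A ** 2).sum(axis=0)   # squared column norms (per-coordinate curvature)

for k in range(5000):
    i = rng.integers(n)            # pick a coordinate at random
    g_i = A[:, i] @ r              # partial gradient, O(m) work
    d = -g_i / col_norms[i]        # exact minimization along coordinate i
    x[i] += d
    r += d * A[:, i]               # keep the cached residual consistent

print(0.5 * np.linalg.norm(A @ x - b) ** 2)  # final objective value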
Finding a fixed point of a nonexpansive operator, i.e., solving x = Tx, abstracts many problems in numerical linear algebra, optimization, and other areas of data science. To solve fixed-point problems, we propose ARock, an algorithmic framework in which multiple agents (machines, processors, or cores) update x in an asynchronous parallel fashion. Asynchrony is crucial to parallel computing since it reduces synchronization waits, relaxes the communication bottleneck, and thus speeds up computing significantly. At each step of ARock, an agent updates a randomly selected coordinate x_i based on possibly out-of-date information about x. The agents share x through either global memory or communication. If writing x_i is atomic, the agents can read and write x without memory locks. We prove that if the nonexpansive operator T has a fixed point, then with probability one, ARock generates a sequence that converges to a fixed point of T. Our conditions on T and the step sizes are weaker than those of comparable work. Linear convergence is obtained under suitable assumptions.
We propose special cases of ARock for linear systems, convex optimization, machine learning, as well as distributed and decentralized consensus problems. Numerical experiments on solving sparse logistic regression problems are presented.
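The core update is a relaxed coordinate step of Krasnosel'skii-Mann type, x_i <- x_i - eta * ((I - T)x)_i, which each agent applies to a possibly stale copy of x. The sketch below is a sequential, single-agent rendering of that step on a toy nonexpansive operator; all names and parameters are illustrative assumptions, and the asynchronous shared-memory machinery is omitted.

import numpy as np

# Sequential sketch of the randomized coordinate Krasnosel'skii-Mann step
# x_i <- x_i - eta * ((I - T) x)_i that ARock performs asynchronously.
# Toy nonexpansive operator: T(x) = x - alpha * (A x - b) with A symmetric
# positive definite and alpha = 1 / ||A||_2; its fixed point solves A x = b.
# (Illustrative example only; not code from the paper.)

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M.T @ M / n + np.eye(n)           # symmetric positive definite
b = rng.standard_normal(n)
alpha = 1.0 / np.linalg.norm(A, 2)    # makes I - alpha*A nonexpansive

x = np.zeros(n)
eta = 0.9                             # relaxation (step-size) parameter
for k in range(20000):
    i = rng.integers(n)                        # randomly selected coordinate
    residual_i = alpha * (A[i] @ x - b[i])     # ((I - T) x)_i at O(n) cost
    x[i] -= eta * residual_i                   # update only coordinate i

print(np.linalg.norm(A @ x - b))      # residual of A x = b at the computed point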
Motivated by big data applications, first-order methods have become extremely popular in recent years. However, naive gradient methods generally converge slowly, so much effort has been devoted to accelerating various first-order methods. This paper proposes two accelerated methods for solving structured linearly constrained convex programs, for which we assume a composite convex objective that is the sum of a differentiable function and a possibly nondifferentiable one.
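In symbols (notation assumed here rather than quoted from the paper), both methods address problems of the form

\min_{x} \; f(x) + h(x) \quad \text{s.t.} \quad Ax = b,

where f is convex and differentiable (typically with a Lipschitz-continuous gradient), h is convex, possibly nondifferentiable, and proximable, and Ax = b collects the linear constraints.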
The first method is the accelerated linearized augmented Lagrangian method (LALM). At each update of the primal variable, it allows linearization of the differentiable function and also of the augmented term, which makes the subproblems easy. Assuming mere convexity, we show that LALM achieves O(1/t) convergence if the parameters are kept fixed throughout the iterations and can be accelerated to O(1/t^2) if the parameters are adapted, where t is the total number of iterations.
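A generic form of such a linearized primal step (again in assumed notation, with \lambda^k the multiplier, \beta > 0 the penalty parameter, and \eta > 0 a proximal step size) is

x^{k+1} = \operatorname{prox}_{\eta h}\Big( x^k - \eta \big[ \nabla f(x^k) + A^{\top}\big( \lambda^k + \beta (A x^k - b) \big) \big] \Big),

followed by the multiplier update \lambda^{k+1} = \lambda^k + \beta (A x^{k+1} - b); each iteration thus needs only a gradient evaluation, matrix-vector products, and one proximal map of h, with the accelerated variant adapting the parameters across iterations.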
The second method is the accelerated linearized alternating direction method of multipliers (LADMM). In addition to composite convexity, it further assumes a two-block structure on the objective. Unlike classic ADMM, our method allows linearization of the objective and also of the augmented term to keep the updates simple. Assuming strong convexity of one block variable, we show that LADMM also enjoys O(1/t^2) convergence with adaptive parameters. This result is a significant improvement over that in [Goldstein et al., SIIMS'14], which requires strong convexity of both block variables and no linearization of the objective or the augmented term.
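For a two-block problem of the form \min_{x,y} f(x) + h(x) + g(y) subject to Ax + By = b (one plausible instance, with notation assumed here), the linearized x-update takes the analogous proximal-gradient form

x^{k+1} = \operatorname{prox}_{\eta h}\Big( x^k - \eta \big[ \nabla f(x^k) + A^{\top}\big( \lambda^k + \beta (A x^k + B y^k - b) \big) \big] \Big),

followed by the y-subproblem and the multiplier update; replacing the exact x-subproblem of classic ADMM with this step is what keeps each iteration cheap.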
Numerical experiments are performed on quadratic programming, image denoising, and support vector machine problems. The proposed accelerated methods are compared to nonaccelerated ones and to existing accelerated methods. The results demonstrate the validity of the acceleration and the superior performance of the proposed methods over existing ones.