We introduce a parallel machine scheduling problem in
which the processing times of jobs are not given in advance but are
determined by a system of linear constraints. The objective is to
minimize the makespan, i.e., the maximum job completion time among
all feasible choices. This novel problem is motivated by various
real-world application scenarios. We discuss the computational
complexity and algorithms for various settings of this problem. In
particular, we show that if there is only one machine with an
arbitrary number of linear constraints, or there is an arbitrary
number of machines with no more than two linear constraints, or both
the number of machines and the number of linear constraints are
fixed constants, then the problem can be solved in polynomial time by
solving a series of linear programs. If both the number
of machines and the number of constraints are inputs of the problem
instance, then the problem is NP-Hard. We further propose several
approximation algorithms for the latter case.
It has been an open question whether the family of merit functions $\psi_p$ $(p>1)$, the generalized Fischer-Burmeister (FB) merit function associated to the second-order cone, is smooth or not. In this paper we answer it partly, and show that $\psi_p$ is smooth for $p \in (1,4)$, and we provide the condition for its coerciveness. Numerical results are reported to illustrate the influence of $p$ on the performance of the merit function method based on $\psi_p$.
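For context (recalled here from the standard second-order cone complementarity literature, not quoted from the abstract), the generalized FB function and its induced merit function are commonly written as
$$
\phi_p(x,y) \;=\; \bigl(|x|^p + |y|^p\bigr)^{1/p} - (x+y), \qquad
\psi_p(x,y) \;=\; \tfrac{1}{2}\,\bigl\|\phi_p(x,y)\bigr\|^2,
$$
where the absolute value and powers are taken in the Jordan algebra associated with the second-order cone; the choice $p = 2$ recovers the classical Fischer-Burmeister function.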
In this paper, we establish sublinear and linear convergence of fixed point iterations generated by averaged operators in a Hilbert space. Our results are obtained under a bounded Hölder regularity assumption, which generalizes the well-known notion of bounded linear regularity. As an application, we provide a convergence rate analysis for many important iterative methods for solving broad classes of problems such as convex feasibility problems and variational inequality problems. These include Krasnoselskii–Mann iterations, the cyclic projection algorithm, forward-backward splitting, and the Douglas–Rachford feasibility algorithm along with some variants. In the important case in which the underlying sets are convex sets described by convex polynomials in a finite-dimensional space, we show that the Hölder regularity properties are automatically satisfied, from which sublinear convergence follows.
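As a concrete illustration of one of the iterations covered, the sketch below runs a Krasnoselskii–Mann iteration $x_{k+1} = (1-t)x_k + t\,T(x_k)$ for a nonexpansive map $T$; the particular $T$ (a composition of two projections) and all data are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the Krasnoselskii-Mann iteration x_{k+1} = (1-t) x_k + t T(x_k)
# for a nonexpansive map T.  Here T composes the projections onto a ball and a
# halfspace (illustrative choice); since the two sets intersect, the fixed points
# of T lie in their intersection.
import numpy as np

def proj_ball(x, c, r):          # projection onto the ball ||x - c|| <= r
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + r * d / n

def proj_halfspace(x, a, b):     # projection onto {x : <a, x> <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

a, b = np.array([1.0, 1.0]), 1.0
c, r = np.zeros(2), 2.0
T = lambda x: proj_ball(proj_halfspace(x, a, b), c, r)

x, t = np.array([5.0, -3.0]), 0.5
for _ in range(200):
    x = (1 - t) * x + t * T(x)
print("approximate fixed point:", x)
```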
This paper studies the capacity allocation game between duopolistic airlines which could offer callable products. Previous literature has shown that callable products provide a riskless source of additional revenue for a monopolistic airline. We examine the impact of the introduction of callable products on the revenues and the booking limits of duopolistic airlines. The analytical results demonstrate that, when there is no spill of low-fare customers, offering callable products is a dominant strategy of both airlines and provides Pareto gains to both airlines. When customers of both fare classes spill, offering callable products is no longer a dominant strategy and may harm the revenues of the airlines. Numerical examples demonstrate that whether the two airlines offer callable products and whether offering callable products is beneficial to the two airlines mainly depends on their loads and capacities. Specifically, when the difference between the loads of the airlines is large, the loads of the airlines play the most important role. When the difference between the loads of the airlines is small, the capacities of the airlines play the most important role. Moreover, numerical examples show that the booking limits of the two airlines in the case with callable products are always higher than those in the case without callable products.
This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. They decompose a problem into simple subproblems, where each updates one, or a small block of, variables while fixing others. These methods can deal with linear and nonlinear mappings, smooth and nonsmooth functions, as well as convex and nonconvex problems. In addition, they are easy to parallelize.
The performance of coordinate update methods hinges on having simple subproblems. To derive simple subproblems for several new classes of applications, this paper systematically studies coordinate-friendly operators that perform low-cost coordinate updates.
Based on the discovered coordinate-friendly operators, as well as operator splitting techniques, we obtain new coordinate update algorithms for a variety of problems in machine learning, image processing, and several sub-areas of optimization. Several problems are treated with coordinate updates for the first time. The obtained algorithms scale to large instances through parallel and even asynchronous computing. We present numerical examples to illustrate the effectiveness of these algorithms.
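As a small, self-contained illustration of a coordinate update (the quadratic model and the randomized single-coordinate rule below are illustrative choices, not an algorithm from the paper):

```python
# Randomized coordinate gradient descent on f(x) = 0.5 x'Ax - b'x.  Each iteration
# touches a single coordinate: the partial gradient A[i] @ x - b[i] is cheap to
# form, which is the "coordinate-friendly" property being exploited.
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M.T @ M + np.eye(n)          # symmetric positive definite (synthetic data)
b = rng.standard_normal(n)

x = np.zeros(n)
L = np.diag(A)                   # coordinate-wise curvatures
for _ in range(20 * n):
    i = rng.integers(n)
    g_i = A[i] @ x - b[i]        # partial derivative w.r.t. coordinate i
    x[i] -= g_i / L[i]           # exact minimization along coordinate i

print("residual:", np.linalg.norm(A @ x - b))
```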
We consider the problem of minimizing the sum of a smooth function h with a bounded Hessian and a nonsmooth function. We assume that the latter function is the composition of a proper closed function P and a surjective linear map M. This problem is nonconvex in general and encompasses many important applications in engineering and machine learning. In this paper, we examine two types of splitting
methods for solving this nonconvex optimization problem: the alternating direction method of multipliers and the proximal gradient algorithm. For the direct adaptation of the alternating direction method of multipliers, we show that if the penalty parameter is chosen sufficiently large and the sequence generated has a cluster point, then this cluster point is a stationary point of the nonconvex problem. We also establish convergence of the whole sequence under an additional assumption that the functions h and P are semialgebraic. Furthermore, we give simple sufficient conditions to guarantee boundedness of the sequence generated. These conditions can be satisfied for a wide range of applications including the least squares problem with $\ell_{1/2}$ regularization. Finally, when M is the identity so that the proximal gradient algorithm can be efficiently applied, we show that any cluster point is stationary under a slightly more flexible constant step-size rule than what is known in the literature
for a nonconvex h. We illustrate our theoretical findings with a variety of applications such as signal denoising and sparse optimization problems.
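To illustrate the proximal gradient algorithm in the case where M is the identity, the sketch below uses $h(x)=\tfrac12\|Ax-b\|^2$ and, for brevity, the nonconvex regularizer $P(x)=\lambda\|x\|_0$ (whose proximal mapping is hard thresholding) in place of the paper's $\ell_{1/2}$ example; all data and parameters are synthetic.

```python
# Proximal gradient iteration x^{k+1} = prox_{gamma P}(x^k - gamma grad h(x^k))
# with h(x) = 0.5 ||Ax - b||^2 and P(x) = lam * ||x||_0 (hard thresholding prox).
import numpy as np

rng = np.random.default_rng(1)
m, n, lam = 40, 100, 0.1
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true

Lh = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of grad h
gamma = 1.0 / Lh                     # constant step size below 1/L

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)         # grad h(x)
    y = x - gamma * grad             # gradient step
    x = np.where(np.abs(y) > np.sqrt(2 * gamma * lam), y, 0.0)   # prox of gamma*lam*||.||_0

print("nonzeros:", np.count_nonzero(x), " residual:", np.linalg.norm(A @ x - b))
```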
Wenjia Jing (Yau Mathematical Sciences Center, Tsinghua University, No. 1 Tsinghua Yuan, Beijing 100084, China), Panagiotis E. Souganidis (Department of Mathematics, The University of Chicago, 5734 S. University Ave., Chicago, IL 60637, USA), Hung V. Tran (Department of Mathematics, University of Wisconsin at Madison, 480 Lincoln Drive, Madison, WI 53706, USA)
Discrete and Continuous Dynamical Systems - S, 11(5), 915-939, 2018.
We study the averaging of fronts moving with positive oscillatory normal velocity, which is periodic in space and stationary ergodic in time. The problem can be formulated as the homogenization of coercive level-set Hamilton-Jacobi equations with spatio-temporal oscillations. To overcome the difficulties due to the oscillations in time and the linear growth of the Hamiltonian, we first study the long-time averaged behavior of the associated reachable sets using geometric arguments. The results are new in dimensions higher than one, even in the space-time periodic setting.
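Schematically, in the standard level-set formulation (recalled here for orientation, not quoted from the abstract), the problem is the limit $\varepsilon \to 0$ of
$$
u^{\varepsilon}_t \;=\; a\!\Bigl(\tfrac{x}{\varepsilon}, \tfrac{t}{\varepsilon}, \omega\Bigr)\,|Du^{\varepsilon}|,
$$
where $a>0$ is the oscillatory normal velocity, periodic in space and stationary ergodic in time, and the expected limit $\bar u$ solves an effective equation $\bar u_t = \overline{H}(D\bar u)$ with a homogenized Hamiltonian $\overline{H}$.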
We adapt the Douglas–Rachford (DR) splitting method to solve nonconvex feasibility problems by studying this method for a class of nonconvex optimization problems. While the convergence properties of the method for convex problems have been well studied, far less is known in the nonconvex setting. In this paper, for the direct adaptation of the method to minimize the sum of a proper closed function g and a smooth function f with a Lipschitz continuous gradient, we show that if the step-size parameter is smaller than a computable threshold and the sequence generated has a cluster point, then this cluster point is a stationary point of the optimization problem. Convergence of the whole sequence and a local convergence rate are also established under the additional assumption that f and g are semi-algebraic. We also give simple sufficient conditions guaranteeing the boundedness of the sequence generated. We then apply our nonconvex DR splitting method to finding a point in the intersection of a closed convex set C and a general closed set D by minimizing the squared distance to C subject to membership in D. We show that if either set is bounded and the step-size parameter is smaller than a computable threshold, then the sequence generated by the DR splitting method is bounded. Consequently, the sequence generated has cluster points that are stationary for an optimization problem, and the whole sequence is convergent under the additional assumption that C and D are semi-algebraic. We achieve these results based on a new merit function constructed particularly for the DR splitting method. Our numerical results indicate that our DR splitting method usually outperforms the usual DR splitting method and the alternating projection method in finding a sparse solution of a linear system, in terms of both solution quality and number of iterations taken.
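For orientation, the sketch below runs the plain DR feasibility iteration (the "usual" DR splitting method mentioned in the abstract as a baseline, not the paper's merit-function-based analysis) on the sparse-solution example: $C=\{x: Ax=b\}$ is convex and $D=\{x: \|x\|_0 \le s\}$ is a general closed set. All data, the sparsity level $s$, and the iteration count are illustrative.

```python
# Plain Douglas-Rachford feasibility iteration for a sparse solution of Ax = b:
#     y = P_C(x),  z = P_D(2y - x),  x <- x + z - y,
# with C = {x : Ax = b} (affine) and D = {x : ||x||_0 <= s} (nonconvex).
import numpy as np

rng = np.random.default_rng(2)
m, n, s = 30, 80, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true

AAt_inv = np.linalg.inv(A @ A.T)
proj_C = lambda x: x - A.T @ (AAt_inv @ (A @ x - b))      # projection onto {Ax = b}

def proj_D(x):                                            # keep the s largest entries
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

x = np.zeros(n)
for _ in range(1000):
    y = proj_C(x)
    z = proj_D(2 * y - x)
    x = x + z - y

z = proj_D(proj_C(x))                                      # candidate sparse point
print("nonzeros:", np.count_nonzero(z), " residual ||Az - b||:", np.linalg.norm(A @ z - b))
```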