We discuss a variant of the multi-task <i>n</i>-vehicle exploration problem. Instead of requiring an optimal permutation of the vehicles in every group, the new problem requires all vehicles in a group to arrive at the same destination. Given <i>n</i> tasks, each with an assigned consumption time and profit, the problem may also be viewed as maximizing every processor's average profit. Further, we propose a new kind of partition problem in fractional form and analyze its computational complexity. By regarding the fractional partition problem as a special case, we prove that the average profit maximization problem is NP-hard when the number of processors is fixed, and strongly NP-hard in general. Finally, using the idea behind the pseudo-polynomial time algorithm for the classical partition problem, we present a pseudo-polynomial time algorithm for both the average profit maximization problem and the fractional partition problem.
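To make the algorithmic starting point concrete, here is a minimal sketch of the classical pseudo-polynomial dynamic program for the partition problem whose idea the final algorithm builds on; this is the textbook subset-sum recurrence, not the paper's algorithm, and the function name is illustrative.

```python
def can_partition(values):
    """Classical pseudo-polynomial DP for the partition problem:
    decide whether `values` can be split into two subsets of equal sum.
    Runs in O(n * S) time, where S = sum(values)."""
    total = sum(values)
    if total % 2:          # odd total sum: an equal split is impossible
        return False
    target = total // 2
    reachable = {0}        # subset sums achievable with the items seen so far
    for v in values:
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable
```

The running time depends on the numeric value S, not only on the input length, which is exactly what "pseudo-polynomial" means.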
In this paper, we propose the palindromic doubling algorithm (PDA) for the palindromic generalized eigenvalue problem (PGEP) A<sup>T</sup>x = &lambda;Ax. We establish a complete convergence theory of the PDA for PGEPs without unimodular eigenvalues, or with unimodular eigenvalues of partial multiplicity two (one or two for the eigenvalue 1). Some important applications from vibration analysis and the optimal control of singular descriptor linear systems are presented to illustrate the feasibility and efficiency of the PDA.
This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. They decompose a problem into simple subproblems, each of which updates one variable, or a small block of variables, while fixing the others. These methods can deal with linear and nonlinear mappings, smooth and nonsmooth functions, and convex as well as nonconvex problems. In addition, they are easy to parallelize.
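As a minimal illustration of the coordinate update idea (an illustrative sketch, not taken from the paper), the following cyclic coordinate descent minimizes the least-squares objective 0.5&middot;||Ax &minus; b||<sup>2</sup> by exactly minimizing over one coordinate at a time while fixing the others:

```python
import numpy as np

def cd_least_squares(A, b, sweeps=200):
    """Cyclic coordinate descent for min_x 0.5*||Ax - b||^2.
    Each inner step exactly minimizes the objective over coordinate i,
    holding the remaining coordinates fixed."""
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                    # running residual, kept up to date
    col_sq = (A ** 2).sum(axis=0)    # squared column norms ||A[:, i]||^2
    for _ in range(sweeps):
        for i in range(n):
            step = (A[:, i] @ r) / col_sq[i]   # exact 1-D minimizer offset
            x[i] -= step
            r -= step * A[:, i]                # cheap residual update
    return x
```

Maintaining the residual makes each coordinate update cost O(m) rather than recomputing Ax from scratch, which is what makes these methods attractive at scale.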
This paper introduces algorithms for the decentralized low-rank matrix completion problem. Assume a low-rank matrix W = [W<sub>1</sub>, W<sub>2</sub>, ..., W<sub>L</sub>]. In a network, each agent &#8467; observes some entries of W<sub>&#8467;</sub>. In order to recover the unobserved entries of W via decentralized computation, we factorize the unknown matrix W as the product of a public matrix X, common to all agents, and a private matrix Y = [Y<sub>1</sub>, Y<sub>2</sub>, ..., Y<sub>L</sub>], where Y<sub>&#8467;</sub> is held by agent &#8467;. Each agent &#8467; alternately updates Y<sub>&#8467;</sub> and its local estimate of X while communicating with its neighbors toward a consensus on the estimate. Once this consensus is (nearly) reached throughout the network, each agent recovers W<sub>&#8467;</sub> = XY<sub>&#8467;</sub>, and thus W is recovered. The communication cost is scalable in the number of agents, and W<sub>&#8467;</sub> and Y<sub>&#8467;</sub> are kept private to agent &#8467; to a certain extent. The algorithm is accelerated by extrapolation and compares favorably to the centralized counterpart.
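The alternating public/private update can be sketched in a toy simulation. This is an illustrative sketch under simplifying assumptions, not the paper's algorithm: plain gradient steps replace the actual updates, a fully connected network is assumed so the consensus step reduces to averaging the local copies of X, there is no extrapolation, and all names and sizes are made up.

```python
import numpy as np

def decentralized_completion(W_blocks, masks, rank, iters=1000, lr=0.02, seed=0):
    """Toy decentralized low-rank completion.
    Agent l holds the observed entries (masks[l]) of its block W_blocks[l].
    Each agent keeps a private factor Y_l and a local copy X_l of the
    public factor; it alternates gradient updates on Y_l and X_l, after
    which the X_l's are averaged (full averaging stands in for
    neighbor-wise consensus on a general network)."""
    rng = np.random.default_rng(seed)
    m = W_blocks[0].shape[0]
    L = len(W_blocks)
    X_local = [0.1 * rng.standard_normal((m, rank)) for _ in range(L)]
    Y = [0.1 * rng.standard_normal((rank, W.shape[1])) for W in W_blocks]
    for _ in range(iters):
        for l in range(L):
            # residual restricted to the observed entries of block l
            R = masks[l] * (X_local[l] @ Y[l] - W_blocks[l])
            Y[l] -= lr * (X_local[l].T @ R)    # private-factor update
            X_local[l] -= lr * (R @ Y[l].T)    # local public-factor update
        X_avg = sum(X_local) / L               # consensus (averaging) step
        X_local = [X_avg.copy() for _ in range(L)]
    return X_local[0], Y
```

After the loop, agent &#8467; reconstructs its block as X @ Y[&#8467;]; only the local copies of X are ever exchanged, so the observed entries and the private factors stay with their owners.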