We introduce the notions of a D-standard abelian category and a K-standard additive category. We prove that for a finite dimensional algebra A, its module category is D-standard if and only if every derived autoequivalence on A is standard, that is, isomorphic to the derived tensor functor given by a two-sided tilting complex. We prove that if the subcategory of projective A-modules is K-standard, then the module category is D-standard. We provide new examples of D-standard module categories.
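For orientation, a hedged sketch of the "standard" condition referred to above, in Rickard's sense (the boundedness convention and the choice of module category are assumptions here, not stated in the abstract): a derived autoequivalence F is standard when

```latex
% F is isomorphic to the derived tensor functor given by a
% two-sided tilting complex X of A-A-bimodules:
\[
  F \;\cong\; - \otimes_{A}^{\mathbf{L}} X
  \colon \mathbf{D}^{b}(A\text{-mod}) \longrightarrow \mathbf{D}^{b}(A\text{-mod}).
\]
% D-standardness of A-mod then asserts that every derived
% autoequivalence is of this form.
```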
Recent years have witnessed a surge of asynchronous parallel (async-parallel) iterative algorithms, driven by problems involving very large-scale data and large numbers of decision variables. Because of asynchrony, the iterates are computed with outdated information; the age of that information, which we call the delay, is the number of updates the iterate has received since the information was created. Almost all recent works prove convergence under the assumption of a finite maximum delay and set their stepsize parameters accordingly. In practice, however, the maximum delay is unknown. This paper presents a convergence analysis of an async-parallel method from a probabilistic viewpoint that allows for large, unbounded delays. An explicit stepsize formula that guarantees convergence is given in terms of the delay statistics. With p+1 identical processors, we empirically measured that delays closely follow the Poisson distribution with parameter p, matching our theoretical model, so the stepsize can be set accordingly. Simulations on both convex and nonconvex optimization problems demonstrate the validity of our analysis and also show that the existing maximum-delay-induced stepsize is too conservative and often slows down the convergence of the algorithm.
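As a hedged illustration of how the delay statistics above can be measured, here is a toy discrete-event simulation (not the paper's experimental setup): workers repeatedly read a shared iterate counter, compute for a random time, and write, and the delay of each update is the number of other updates applied between its read and its write. The i.i.d. exponential compute times are an assumption made only to keep the sketch self-contained.

```python
import heapq
import random
from collections import Counter

def simulate_delays(num_workers=9, num_updates=100_000, seed=0):
    """Toy discrete-event model of async-parallel updates.

    Each worker reads the shared iterate counter, computes for a random
    amount of time, then writes its update. The delay of an update is
    the number of other updates applied between its read and its write.
    """
    rng = random.Random(seed)
    # heap of (finish_time, worker_id, counter_value_at_read)
    events = [(rng.expovariate(1.0), w, 0) for w in range(num_workers)]
    heapq.heapify(events)
    counter = 0          # number of updates applied so far
    delays = Counter()
    for _ in range(num_updates):
        t, w, read_at = heapq.heappop(events)
        delays[counter - read_at] += 1   # age of the information used
        counter += 1                     # apply this worker's update
        heapq.heappush(events, (t + rng.expovariate(1.0), w, counter))
    return delays

if __name__ == "__main__":
    d = simulate_delays()
    n = sum(d.values())
    mean = sum(k * v for k, v in d.items()) / n
    # With num_workers = p + 1 = 9, the mean delay is close to p = 8.
    print(f"mean delay: {mean:.2f}")
```

The shape of the measured distribution depends on the compute-time model; the sketch shows only the measurement procedure and does not attempt to reproduce the Poisson(p) fit reported above.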
In this paper, we aim to develop scalable neural-network-type learning systems. Motivated by the idea of constructive neural networks in approximation theory, we focus on constructing rather than training feed-forward neural networks (FNNs) for learning, and propose a novel FNN learning system called the constructive FNN (CFN). Theoretically, we prove that the proposed method not only overcomes the classical saturation problem for constructive FNN approximation, but also attains the optimal learning rate when the regression function is smooth, whereas the state-of-the-art learning rates established for traditional FNNs are only near-optimal (up to a logarithmic factor). A series of numerical simulations demonstrates the efficiency and feasibility of CFN.
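To make the construct-rather-than-train idea concrete, here is a minimal sketch of a generic constructive one-hidden-layer network: the inner weights are fixed up front and only the output layer is obtained in closed form by least squares. The random inner weights and tanh activation are illustrative assumptions; this is not the paper's CFN algorithm.

```python
import numpy as np

def construct_fnn(X, y, n_hidden=64, seed=0):
    """Generic constructive one-hidden-layer network (illustrative only):
    hidden-layer parameters are fixed rather than trained, and the
    output weights are solved by least squares.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed inner weights
    b = rng.standard_normal(n_hidden)                # fixed biases
    H = np.tanh(X @ W + b)                           # constructed features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # closed-form outer layer
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(500, 1))
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(500)
    params = construct_fnn(X, y)
    rmse = np.sqrt(np.mean((predict(X, *params) - y) ** 2))
    print(f"train RMSE: {rmse:.3f}")
```

No gradient-based training occurs anywhere: the only fitting step is one linear least-squares solve, which is what makes constructions of this kind cheap and scalable.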