The alternating direction method of multipliers (ADMM) is a popular and efficient first-order method
that has recently found numerous applications, and the proximal ADMM is an important variant
of it. The main contributions of this paper are to propose and analyze a class of inertial proximal ADMMs, which unify the basic ideas of the inertial proximal point method and the proximal ADMM, for linearly constrained separable convex optimization. This class of methods is inertial in nature because, at each iteration, the proximal ADMM is applied to a point extrapolated from the current iterate in the direction of the last movement. The recently proposed inertial primal-dual algorithm [A. Chambolle and T. Pock, On the ergodic convergence rates of a first-order primal-dual algorithm, preprint, 2014, Algorithm 3] and the inertial linearized ADMM [C. Chen, S. Ma, and J. Yang, arXiv:1407.8238, eq. (3.23)] are covered as special cases.
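As a sketch of this extrapolation step, let $u^k$ collect the current primal and dual iterates and let $\alpha_k \ge 0$ be an extrapolation parameter (generic notation assumed here for illustration); each iteration first forms
\[
\bar{u}^k = u^k + \alpha_k\,(u^k - u^{k-1}),
\]
and then performs one proximal ADMM step from $\bar{u}^k$ rather than from $u^k$, so that $\alpha_k \equiv 0$ recovers the noninertial method.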
The proposed algorithmic framework is very general in the sense that the weighting matrices in the proximal terms are allowed to be merely positive semidefinite, rather than positive definite as required by existing methods of the same kind. By setting the two proximal terms to zero, we obtain an inertial variant of the classical ADMM, which, to the best of our knowledge, is new.
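For the model problem $\min_{x,y}\{f(x)+g(y) : Ax+By=b\}$ with penalty parameter $\beta>0$ (problem form and sign conventions assumed here for illustration), this inertial variant performs one plain ADMM sweep started at the extrapolated point $(\bar{x}^k,\bar{y}^k,\bar{\lambda}^k)$:
\[
\begin{aligned}
x^{k+1} &\in \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\beta}{2}\bigl\|Ax + B\bar{y}^k - b - \bar{\lambda}^k/\beta\bigr\|^2,\\
y^{k+1} &\in \operatorname*{arg\,min}_{y}\; g(y) + \tfrac{\beta}{2}\bigl\|Ax^{k+1} + By - b - \bar{\lambda}^k/\beta\bigr\|^2,\\
\lambda^{k+1} &= \bar{\lambda}^k - \beta\,\bigl(Ax^{k+1} + By^{k+1} - b\bigr).
\end{aligned}
\]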
We carry out a unified analysis for the entire class of methods under very mild assumptions. In particular, we establish convergence, as well as asymptotic $o(1/\sqrt{k})$ and nonasymptotic $O(1/\sqrt{k})$ rates of convergence, for the best primal function value and feasibility residuals, where $k$ denotes the iteration counter.
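In the notation above (again assumed only for illustration), these best-iterate rates can be read schematically as
\[
\min_{1\le i\le k}\bigl|f(x^i)+g(y^i)-f(x^\star)-g(y^\star)\bigr| = o\bigl(1/\sqrt{k}\bigr)
\quad\text{and}\quad
\min_{1\le i\le k}\bigl\|Ax^i+By^i-b\bigr\| = o\bigl(1/\sqrt{k}\bigr),
\]
where $(x^\star,y^\star)$ denotes any solution, together with nonasymptotic $O(1/\sqrt{k})$ bounds on the same quantities.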
Global convergence of the generated iterate sequence is established under an additional assumption. We
also present extensive experimental results on total variation–based image reconstruction problems
to illustrate the benefits of introducing the inertial extrapolation steps.