We study the well-posedness theory for the MHD boundary layer. The boundary layer is governed by Prandtl-type equations derived from the incompressible MHD system with the no-slip boundary condition on the velocity and the perfectly conducting condition on the magnetic field. Under the assumption that the initial tangential magnetic field is not zero, we establish the local-in-time existence and uniqueness of solutions for the nonlinear MHD boundary layer equations. In contrast to the well-posedness theory of the classical Prandtl equations, for which the monotonicity condition on the tangential velocity plays a crucial role, no such monotonicity condition is needed for the MHD boundary layer. This justifies, in rigorous mathematics, the physical understanding that the magnetic field has a stabilizing effect on the MHD boundary layer.
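For orientation, a minimal sketch of a commonly studied two-dimensional form of such Prandtl-type MHD boundary layer equations, with velocity field $(u, v)$ and magnetic field $(f, g)$; the precise scaling and notation here are our assumptions, not fixed by the abstract:

\[
\begin{cases}
\partial_t u + u\,\partial_x u + v\,\partial_y u = f\,\partial_x f + g\,\partial_y f + \partial_y^2 u - \partial_x p,\\[2pt]
\partial_t f + u\,\partial_x f + v\,\partial_y f = f\,\partial_x u + g\,\partial_y u + \kappa\,\partial_y^2 f,\\[2pt]
\partial_x u + \partial_y v = 0, \qquad \partial_x f + \partial_y g = 0,
\end{cases}
\]

with the no-slip condition $u|_{y=0} = v|_{y=0} = 0$ and the perfectly conducting condition $\partial_y f|_{y=0} = g|_{y=0} = 0$. The key assumption of the abstract is then that the initial tangential component $f|_{t=0}$ does not vanish.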
This article is concerned with feature screening and variable selection for varying coefficient models with ultrahigh-dimensional covariates. We propose a new feature screening procedure for these models based on the conditional correlation coefficient. We systematically study the theoretical properties of the proposed procedure and establish its sure screening property and ranking consistency. To enhance the finite-sample performance of the proposed procedure, we further develop an iterative feature screening procedure. Monte Carlo simulation studies are conducted to examine the performance of the proposed procedures. In practice, we advocate a two-stage approach for varying coefficient models. The two-stage approach consists of (a) reducing the ultrahigh dimensionality by using the proposed procedure and (b) applying regularization methods to the dimension-reduced varying coefficient models to make statistical inferences on the coefficient functions. We illustrate the proposed two-stage approach with a real data example. Supplementary materials for this article are available online.
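A minimal sketch of one such conditional-correlation screening step, assuming Nadaraya-Watson kernel estimates of the conditional moments given the index variable U; the function names (`nw_smooth`, `cond_corr_screen`), the Gaussian kernel, the bandwidth, and the screening size n/log(n) are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def nw_smooth(u, values, h):
    """Nadaraya-Watson estimate of E[values | U=u_i] at each observed u_i."""
    # Gaussian kernel weights, shape (n, n); rows normalized to sum to 1
    w = np.exp(-0.5 * ((u[:, None] - u[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

def cond_corr_screen(X, y, u, h=0.1, top_d=None):
    """Rank covariates by a sample analogue of E[rho^2(X_j, Y | U)]."""
    n, p = X.shape
    Ey = nw_smooth(u, y, h)
    var_y = np.maximum(nw_smooth(u, y**2, h) - Ey**2, 1e-12)
    scores = np.empty(p)
    for j in range(p):
        xj = X[:, j]
        Ex = nw_smooth(u, xj, h)
        var_x = np.maximum(nw_smooth(u, xj**2, h) - Ex**2, 1e-12)
        Exy = nw_smooth(u, xj * y, h)
        rho = (Exy - Ex * Ey) / np.sqrt(var_x * var_y)  # rho(X_j, Y | U=u_i)
        scores[j] = np.mean(rho**2)
    d = top_d or int(n / np.log(n))   # a common choice of screening size
    keep = np.argsort(scores)[::-1][:d]
    return keep, scores
```

The covariates in `keep` would then be passed to stage (b), i.e., a regularization method fitted on the dimension-reduced varying coefficient model.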
We consider the problem of minimizing the sum of a smooth function h with a bounded Hessian and a nonsmooth function. We assume that the latter function is the composition of a proper closed function P and a surjective linear map M. This problem is nonconvex in general and encompasses many important applications in engineering and machine learning. In this paper, we examine two types of splitting methods for solving this nonconvex optimization problem: the alternating direction method of multipliers and the proximal gradient algorithm. For the direct adaptation of the alternating direction method of multipliers, we show that, if the penalty parameter is chosen sufficiently large and the sequence generated has a cluster point, then that cluster point is a stationary point of the nonconvex problem. We also establish convergence of the whole sequence under the additional assumption that the functions h and P are semialgebraic. Furthermore, we give simple sufficient conditions that guarantee boundedness of the generated sequence. These conditions are satisfied for a wide range of applications, including the least squares problem with $\ell_{1/2}$ regularization. Finally, when M is the identity, so that the proximal gradient algorithm can be applied efficiently, we show that any cluster point is stationary under a slightly more flexible constant step-size rule than what is known in the literature for a nonconvex h. We illustrate our theoretical findings with a variety of applications such as signal denoising and sparse optimization problems.
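A minimal sketch of the proximal gradient iteration for the case where M is the identity, on a least squares loss. For concreteness we take P to be the nonconvex $\ell_0$ penalty, whose proximal map is simple hard thresholding; the abstract's $\ell_{1/2}$ example also admits a closed-form prox (half thresholding), but it is more involved. The function name, the conservative step size 1/L, and the toy data are illustrative assumptions, not the paper's code:

```python
import numpy as np

def prox_grad_l0(A, b, lam, iters=500):
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_0.

    h(x) = 0.5*||Ax - b||^2 is smooth with Hessian A^T A, so grad h
    is Lipschitz with constant L = ||A||_2^2; we use the step 1/L.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad h
    t = 1.0 / L                            # constant step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the smooth part
        z = x - t * g                      # forward (gradient) step
        # backward step: prox of t*lam*||.||_0 is hard thresholding,
        # keeping z_i whenever 0.5*z_i^2 > t*lam
        x = np.where(np.abs(z) > np.sqrt(2 * t * lam), z, 0.0)
    return x

# toy usage: recover a sparse signal from noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 300))
x_true = np.zeros(300); x_true[[3, 50, 200]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(np.flatnonzero(prox_grad_l0(A, b, lam=0.05)))
```

When M is not the identity, the prox of P composed with M generally has no closed form, which is what motivates the ADMM variant analyzed in the abstract.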
All SL(n) covariant vector valuations on convex polytopes in R^n are completely classified without any continuity assumptions. The moment vector turns out to be the only such valuation if n ≥ 3, while two new functionals show up in dimension two.
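For context, here is a minimal statement of the two notions involved (our formulation; both are standard in the valuation literature). A map $Z$ from convex polytopes in $\mathbb{R}^n$ to $\mathbb{R}^n$ is a valuation if

\[ Z(P) + Z(Q) = Z(P \cup Q) + Z(P \cap Q) \]

whenever $P \cup Q$ is convex, and it is SL(n) covariant if $Z(\phi P) = \phi\, Z(P)$ for every $\phi \in \mathrm{SL}(n)$. The moment vector of a polytope $P$ is

\[ m(P) = \int_P x \, \mathrm{d}x, \]

which is easily checked to be an SL(n) covariant valuation.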