Qian Lin (Harvard University), Xinran Li (Harvard University), Dongming Huang (Harvard University), Jun S. Liu (Harvard University)
Statistics Theory and Methods, mathscidoc:1701.333180
The central subspace of a pair of random variables $(y,x) \in \mathbb{R}^{p+1}$ is the minimal subspace $\mathcal{S}$ such that $y \perp \hspace{-2mm} \perp x\mid P_{\mathcal{S}}x$. In this paper, we consider the minimax rate of estimating the central subspace of the multiple index model $y=f(\beta_{1}^{\tau}x,\beta_{2}^{\tau}x,\ldots,\beta_{d}^{\tau}x,\epsilon)$ with at most $s$ active predictors, where $x \sim N(0,I_{p})$. We first introduce a large class of models indexed by the smallest nonzero eigenvalue $\lambda$ of $\mathrm{var}(\mathbb{E}[x|y])$, over which we show that an aggregated estimator based on the sliced inverse regression (SIR) procedure converges at the rate $d\wedge\big((sd+s\log(ep/s))/(n\lambda)\big)$. We then show that this rate is optimal in two scenarios: single index models, and multiple index models with fixed central dimension $d$ and fixed $\lambda$. Assuming a technical conjecture, we further show that this rate is optimal for multiple index models with bounded central-subspace dimension. We believe these (conditional) optimality results provide meaningful insights into general sufficient dimension reduction (SDR) problems in high dimensions.
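The SIR procedure referenced in the abstract can be illustrated with a minimal sketch: slice the response $y$, average $x$ within each slice to approximate $\mathbb{E}[x|y]$, and take the top eigenvectors of the covariance of the slice means. This is a generic textbook implementation assuming standardized predictors ($x \sim N(0,I_p)$, as in the abstract), not the aggregated estimator analyzed in the paper; the function name and parameters are illustrative.

```python
import numpy as np

def sir_directions(x, y, n_slices=10, d=1):
    """Basic sliced inverse regression (SIR) sketch.

    Slices y, averages x within each slice to approximate E[x | y],
    and returns the top-d eigenvectors of the weighted covariance of
    the slice means. Assumes x is standardized (e.g. x ~ N(0, I_p)).
    """
    n, p = x.shape
    order = np.argsort(y)                    # sort observations by response
    slices = np.array_split(order, n_slices) # roughly equal-sized slices
    m = np.zeros((p, p))
    for idx in slices:
        mu = x[idx].mean(axis=0)             # slice mean approximates E[x | y]
        m += (len(idx) / n) * np.outer(mu, mu)
    # eigenvectors of var(E[x|y]) with largest eigenvalues span the
    # estimated central subspace
    eigvals, eigvecs = np.linalg.eigh(m)     # ascending order
    return eigvecs[:, ::-1][:, :d]
```

For a single index model such as $y=(\beta^{\tau}x)^{3}+\epsilon$, the returned direction should align closely with $\beta$ once $n$ is moderately large.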
@inproceedings{qianon,
title={On the optimality of sliced inverse regression in high dimensions},
author={Qian Lin and Xinran Li and Dongming Huang and Jun S. Liu},
url={http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20170121192407657109115},
}
Qian Lin, Xinran Li, Dongming Huang, and Jun S. Liu. On the optimality of sliced inverse regression in high dimensions. http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20170121192407657109115.