Sparse Sliced Inverse Regression for High Dimensional Data

Qian Lin (Harvard University), Zhigen Zhao (Temple University), Jun S. Liu (Harvard University)

Statistics Theory and Methods mathscidoc:1701.333181

For multiple-index models, it has recently been shown that sliced inverse regression (SIR) is consistent for estimating the sufficient dimension reduction (SDR) subspace if and only if the dimension p and sample size n satisfy ρ = lim p/n = 0. Thus, when p is of the same or a higher order than n, additional assumptions such as sparsity must be imposed to ensure the consistency of SIR. By constructing artificial response variables from the top eigenvectors of the estimated conditional covariance matrix, \widehat{var(𝔼[x|y])}, we introduce a simple Lasso regression method to estimate the SDR subspace. The resulting algorithm, Lasso-SIR, is shown to be consistent and to achieve the optimal convergence rate under certain sparsity conditions when p is of order o(n^2 λ^2), where λ is the generalized signal-to-noise ratio. We also demonstrate the superior performance of Lasso-SIR over existing approaches via extensive numerical studies and several real data examples.
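The abstract's two-step recipe — estimate var(𝔼[x|y]) by slicing the response, then run a Lasso regression on an artificial response built from its top eigenvector — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact form of the artificial response in the paper may differ, and the choice of 10 slices, the Lasso penalty `alpha=0.05`, and the toy single-index model are all assumptions made here for demonstration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, H = 600, 1000, 10  # sample size, dimension, number of slices

# Toy sparse single-index model: y depends on x only through x @ beta,
# with beta supported on the first 5 coordinates.
beta = np.zeros(p)
beta[:5] = 1.0
X = rng.standard_normal((n, p))
y = (X @ beta) ** 3 + 0.5 * rng.standard_normal(n)

# --- SIR step: estimate Lambda = var(E[x|y]) by slicing on y ---
order = np.argsort(y)
slices = np.array_split(order, H)                      # H slices of y
slice_means = np.vstack([X[idx].mean(axis=0) for idx in slices])
weights = np.array([len(idx) / n for idx in slices])
Lam = (slice_means * weights[:, None]).T @ slice_means  # weighted covariance of slice means

# Top eigenpair of the estimated conditional covariance matrix
vals, vecs = np.linalg.eigh(Lam)
lam1, eta1 = vals[-1], vecs[:, -1]

# --- Lasso step: regress an artificial response on X ---
# One plausible construction (hypothetical form): each observation gets its
# slice mean projected onto eta1, rescaled by the leading eigenvalue.
y_tilde = np.empty(n)
for idx, m in zip(slices, slice_means):
    y_tilde[idx] = (m @ eta1) / lam1

fit = Lasso(alpha=0.05).fit(X, y_tilde)
support = np.flatnonzero(fit.coef_)
print("recovered support:", support)
```

With p larger than n, ordinary SIR cannot consistently estimate the direction, but the Lasso step exploits sparsity: the nonzero coefficients of the fit concentrate on the active coordinates of beta.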
Keywords: minimax rate, sliced inverse regression, Lasso
Uploaded 2017-01-21 by qianlin; 871 downloads.
@inproceedings{qiansparse,
  title={Sparse Sliced Inverse Regression for High Dimensional Data},
  author={Qian Lin and Zhigen Zhao and Jun S. Liu},
  url={http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20170121192811205941116},
}
Qian Lin, Zhigen Zhao, and Jun S. Liu. Sparse Sliced Inverse Regression for High Dimensional Data. http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20170121192811205941116.