In this paper we study the support recovery problem for single index models $Y = f(X^{\top}\beta, \varepsilon)$, where $f$ is an unknown link function, $X \sim N_p(0, \mathbb{I}_p)$, and $\beta$ is an $s$-sparse unit vector with $\beta_i \in \{\pm\frac{1}{\sqrt{s}}, 0\}$. In particular, we study the performance of two computationally inexpensive algorithms: (a) the diagonal thresholding sliced inverse regression (DT-SIR) introduced by Lin et al. (2015); and (b) a semi-definite programming (SDP) approach inspired by Amini & Wainwright (2008). When $s = O(p^{1-\delta})$ for some $\delta > 0$, we demonstrate that both procedures can succeed in recovering the support of $\beta$ as long as the rescaled sample size $\kappa = \frac{n}{s \log(p - s)}$ exceeds a certain critical threshold. Conversely, when $\kappa$ falls below a critical value, any algorithm fails to recover the support with probability at least $\frac{1}{2}$ asymptotically. In other words, we demonstrate that both DT-SIR and the SDP approach are optimal (up to a constant factor) for recovering the support of $\beta$ in terms of sample size. We provide extensive simulations, as well as a real-data application, to verify our theoretical observations.
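To make the setup concrete, here is a minimal sketch of the diagonal-thresholding SIR idea on simulated data. It assumes a linear link $f(t, \varepsilon) = t + \varepsilon$, a particular slice count, and (for simplicity) selects the $s$ largest diagonal entries in place of a tuned threshold; all numerical choices are illustrative, not prescriptions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the single index model Y = f(X^T beta, eps), here with a
# linear link f(t, e) = t + e (one illustrative choice of f).
n, p, s = 2000, 200, 5
beta = np.zeros(p)
support = rng.choice(p, size=s, replace=False)
beta[support] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)  # entries ±1/sqrt(s)

X = rng.standard_normal((n, p))          # X ~ N_p(0, I_p)
Y = X @ beta + 0.1 * rng.standard_normal(n)

# Rescaled sample size kappa = n / (s log(p - s)); here well above 1.
kappa = n / (s * np.log(p - s))

# DT-SIR sketch: sort on Y, split into H slices, average X within each
# slice, and form a crude estimate of Cov(E[X|Y]) from the slice means.
H = 10                                    # number of slices (tuning choice)
order = np.argsort(Y)
slices = np.array_split(order, H)
slice_means = np.stack([X[idx].mean(axis=0) for idx in slices])
Lambda_hat = slice_means.T @ slice_means / H
diag = np.diag(Lambda_hat)

# Diagonal entries on the support are inflated by the signal; keep the
# s largest (a thresholding rule would be used when s is unknown).
est_support = np.argsort(diag)[-s:]
print(kappa, sorted(est_support) == sorted(support))
```

With this sample size the diagonal gap between signal and noise coordinates is large, so the top-$s$ rule recovers the support; shrinking $n$ until $\kappa$ drops below the critical threshold makes recovery fail, which is the phase transition the abstract describes.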