# MathSciDoc: An Archive for Mathematicians

#### Statistics Theory and Methods

mathscidoc:2206.33002

Statistics, 53(6), 1251-1268, July 2019.
In deconvolution in R^d, d ≥ 1, with mixing density p (∈ \mathscr{P}) and kernel h, the mixture density f_p (∈ \mathscr{F}_p) is estimated with the minimum distance estimate (MDE) f_{\hat{p}_n}, having upper L1-error rate a_n, in probability or in risk; \hat{p}_n ∈ \mathscr{P}. In one application, \mathscr{P} consists of L1-separable densities in R with differences changing sign at most J times, and h(x − y) is totally positive. When h is known and p is \tilde{q}-smooth, vanishing outside a compact set in R^d, plug-in upper bounds are provided for the L2-error rate of \hat{p}_n and of its [s]-th mixed partial derivative \hat{p}^{(s)}_n, via ∥f_{\hat{p}_n} − f_p∥_1, with rates (log a_n^{−1})^{−N_1} and a_n^{N_2}, respectively, for h super-smooth and smooth; \tilde{q} ∈ R^+, [s] ≤ \tilde{q}, d ≥ 1, N_1 > 0, N_2 > 0. For a_n ∼ (log n)^ζ · n^{−δ}, the former rate is optimal for any δ > 0, and the latter misses the optimal rate by the factor (log n)^ξ when δ = 0.5; ζ > 0, ξ > 0. N_1 and N_2 appear in optimal rates and in lower error and risk bounds in the deconvolution literature.
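The central objects in the abstract can be illustrated numerically. The sketch below (not the paper's estimator; kernel, grids, and the two candidate mixing densities are illustrative assumptions) computes the mixture density f_p(x) = ∫ h(x − y) p(y) dy for a known Gaussian kernel h and compactly supported mixing densities, then evaluates the L1 distance ∥f_{p_1} − f_{p_2}∥_1 that drives the plug-in bounds:

```python
import numpy as np

def gaussian_kernel(x):
    # Known super-smooth kernel h; the standard Gaussian is an illustrative choice.
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def mixture_density(x_grid, p, y_grid):
    """f_p(x) = ∫ h(x - y) p(y) dy, approximated by a Riemann sum on y_grid."""
    dy = y_grid[1] - y_grid[0]
    hx = gaussian_kernel(x_grid[:, None] - y_grid[None, :])  # matrix h(x - y)
    return hx @ p(y_grid) * dy

def l1_distance(f1, f2, x_grid):
    """||f1 - f2||_1, approximated on x_grid."""
    dx = x_grid[1] - x_grid[0]
    return np.sum(np.abs(f1 - f2)) * dx

# Two candidate mixing densities vanishing outside a compact set, as in the paper's setting.
p_true = lambda y: np.where((y >= 0) & (y <= 1), 1.0, 0.0)      # Uniform(0, 1)
p_hat = lambda y: np.where((y >= 0) & (y <= 1), 2.0 * y, 0.0)   # Beta(2, 1)

y = np.linspace(-0.5, 1.5, 2001)
x = np.linspace(-6.0, 7.0, 4001)
f_true = mixture_density(x, p_true, y)
f_hat = mixture_density(x, p_hat, y)

# Convolution with h is an L1 contraction, so the mixture L1 error is at most
# ||p_true - p_hat||_1 = 0.5; the plug-in bounds run in the other direction,
# converting a small mixture L1 error into an L2 bound for the mixing density.
mix_l1 = l1_distance(f_true, f_hat, x)
```

Since h is smooth, the two mixtures are closer in L1 than the mixing densities themselves; the paper's contribution is quantifying how much of that contraction can be inverted, with logarithmic loss for super-smooth h and polynomial loss for smooth h.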
@article{yannis2019plug-in,
title={Plug-in L2-upper error bounds in deconvolution, for a mixing density estimate in Rd and for its derivatives, via the L1-error for the mixture},
author={Yannis G. Yatracos},
url={http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20220621170441441388431},
journal={Statistics},
volume={53},
number={6},
pages={1251-1268},
year={2019},
}

Yannis G. Yatracos. Plug-in L2-upper error bounds in deconvolution, for a mixing density estimate in Rd and for its derivatives, via the L1-error for the mixture. Statistics, 53(6), pp. 1251-1268, 2019. http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20220621170441441388431.