MathSciDoc: An Archive for Mathematicians

Machine Learning
mathscidoc:2206.41014

2022.6
When a sample, X = x, is observed from an intractable c.d.f. F_θ, or from a black box with input θ, an Approximate Bayesian Computation (ABC) method provides an approximate posterior, π_ϵ. A draw θ^∗ from the prior π is included in the support of π_ϵ when the matching distance ρ(S(x^∗), S(x)) ≤ ϵ, where x^∗ is a sample drawn from F_{θ^∗}, S is a summary statistic, and ϵ > 0. ABC concerns include: the use of only one sample, x^∗, for each θ^∗; the choices of S, ρ, and ϵ; and π_ϵ(θ^∗), which is determined by an arbitrary kernel, K(x, x^∗; ϵ), creating visual π_ϵ-artifacts. These concerns are addressed by the introduced Fiducial (F)-ABC for all θ^∗ drawn: M samples x^∗ are drawn from each F_{θ^∗}; a universal S is used, namely the empirical measure indexed by sets that activate its sufficiency for exchangeable observation vectors and have so far been neglected; a strong probability distance ρ is used, inherently connected with S and the matching step; light is thrown on the nature and value of ϵ; and π_ϵ is obtained from the proportions of x^∗ matching x. The F-ABC-for-all posterior is closer to Bayesian philosophy, which does not use θ^∗-exclusions. Under a few mild assumptions, π_ϵ converges to the posterior, π(θ|x), as ϵ ↓ 0, and rates of concentration of T(F_{θ^∗}) to T(F_θ) are obtained as n ↑ ∞, where T is a functional. Satisfactory F-ABC-for-all posteriors are depicted for parametric and data-generating models, including Tukey's (a, b, g, h)-model, a 5-parameter normal mixture, and a time series model. Various advantages of F-ABC for all over coarsened posteriors are presented for observations in R^d, d ≥ 1.
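The accept/reject mechanism the abstract describes can be sketched in code. The following is a minimal illustration, not the paper's exact algorithm: for each θ^∗ drawn from the prior, M datasets x^∗ are simulated from F_{θ^∗}, the empirical measure serves as the summary S, the Kolmogorov–Smirnov distance between empirical c.d.f.s stands in for the probability distance ρ (an assumption for illustration), and θ^∗ is weighted by the proportion of its M samples matching x within ϵ. All function names and parameter values here are hypothetical choices for a toy normal-mean model.

```python
import numpy as np

rng = np.random.default_rng(0)

def ecdf_on(grid, sample):
    """Empirical c.d.f. of `sample` evaluated on `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def f_abc(x, prior_sample, simulate, eps, M=50, n_draws=1000):
    """Sketch of an F-ABC-style posterior: weight each prior draw theta*
    by the proportion of its M simulated datasets x* whose empirical
    measures match x within eps (here via the KS distance)."""
    thetas, weights = [], []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        matches = 0
        for _ in range(M):
            xs = simulate(theta, len(x), rng)
            grid = np.sort(np.concatenate([x, xs]))
            # KS distance between the two empirical c.d.f.s as rho
            if np.max(np.abs(ecdf_on(grid, x) - ecdf_on(grid, xs))) <= eps:
                matches += 1
        if matches > 0:  # "for all theta* drawn" with positive match proportion
            thetas.append(theta)
            weights.append(matches / M)
    return np.array(thetas), np.array(weights)

# Toy run: observed data from N(1, 1), prior N(0, 4) on the mean
x_obs = rng.normal(1.0, 1.0, size=200)
thetas, w = f_abc(
    x_obs,
    prior_sample=lambda r: r.normal(0.0, 2.0),
    simulate=lambda th, n, r: r.normal(th, 1.0, size=n),
    eps=0.15,
)
post_mean = np.average(thetas, weights=w)
```

The weights (match proportions) play the role the abstract assigns to "the proportions of x^∗ matching x", replacing an arbitrary smoothing kernel K; the value of ϵ here is tied to the sampling fluctuation of the KS statistic at the given sample size.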
```
@inproceedings{yannis2022fiducial,
```