In a Data-Generating Experiment, the observed sample x has an intractable or unavailable c.d.f. F_θ, and the statistical nature of θ is unknown; θ is an element of a metric space (Θ, d_Θ). Matching estimates of θ are introduced, learned from the “best” matches of x with samples X^∗ drawn from F_{θ^∗}, θ^∗ ∈ Θ. Under mild conditions, these nonparametric estimates are uniformly consistent, and the upper bounds on their rates of convergence in probability share the same form, depending on the Kolmogorov entropies of an increasing sequence of sets covering Θ. When Θ ⊆ R^m and the observations are i.i.d., the upper bounds can be √(log n)/√n when m is known, and √(m_n log n)/√n when m is unknown; m ≥ 1, m_n ↑ ∞ at a desired rate. Upper bounds can also be obtained for dependent observations. These rates hold for observations in R^d, d ≥ 1, complementing recent results obtained for real, i.i.d. observations under stronger assumptions and using weak probability distances. In simulations, the Matching estimates are successful for a mixture of two normals and for Tukey’s (a, b, g, h) and (a, b, g, k) models. Advances in computing will allow more and faster comparisons, yielding improved Matching estimates for universal use in Machine Learning.
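The matching idea above can be sketched in code. This is a minimal illustration, not the paper’s procedure: it assumes an empirical 1-Wasserstein distance on sorted samples as the match criterion and a finite grid search over Θ, and it estimates the mean of a normal location model; all function names (`wasserstein1`, `matching_estimate`) are hypothetical.

```python
import random

def wasserstein1(xs, ys):
    # empirical 1-Wasserstein distance between equal-size samples:
    # mean absolute difference of the order statistics
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def matching_estimate(x, simulate, theta_grid, n_rep=5, seed=0):
    # for each candidate theta* in the grid, draw samples X* from F_{theta*}
    # and score theta* by its average distance to the observed sample x;
    # return the best-matching theta* (a grid-search stand-in for the
    # paper's search over an increasing sequence of sets covering Theta)
    rng = random.Random(seed)
    best_theta, best_score = None, float("inf")
    for theta in theta_grid:
        score = sum(
            wasserstein1(x, simulate(theta, len(x), rng))
            for _ in range(n_rep)
        ) / n_rep
        if score < best_score:
            best_theta, best_score = theta, score
    return best_theta

# illustration: recover the mean of a N(theta, 1) sample
rng = random.Random(42)
true_theta = 1.5
x = [rng.gauss(true_theta, 1.0) for _ in range(500)]
simulate = lambda theta, n, r: [r.gauss(theta, 1.0) for _ in range(n)]
grid = [i / 10 for i in range(0, 31)]  # candidates 0.0, 0.1, ..., 3.0
theta_hat = matching_estimate(x, simulate, grid)
```

Richer models, such as the normal mixture or the (a, b, g, k) family, fit the same template: only the `simulate` function and the grid over Θ change.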