We present a novel variation of the well-known infomax algorithm for blind source separation. Under natural gradient descent, the infomax algorithm converges to a stationary point of a limiting ordinary differential equation. However, due to the presence of saddle points or local minima of the corresponding likelihood function, the algorithm may be trapped near these bad stationary points for a long time, especially if the initial data are close to them. To speed up convergence, we propose adding a sequence of random perturbations to the infomax algorithm to shake the iterating sequence so that it is captured by a path descending to a more stable stationary point. We analyze the convergence of the randomly perturbed algorithm and illustrate its fast convergence through numerical examples on blind demixing of stochastic signals. The examples have analytical structures so that saddle points or local minima of the likelihood functions are explicit. The results may have implications for online learning algorithms in dissimilar problems.
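To make the perturbation scheme concrete, here is a minimal sketch of a natural-gradient infomax update with additive random perturbations whose magnitude decays over iterations. The tanh score function, the step size, and the 1/sqrt(t) decay schedule are illustrative assumptions, not the exact specification analyzed in the paper.

```python
import numpy as np

def perturbed_infomax(X, n_iter=2000, eta=0.01, sigma0=0.1, seed=0):
    """Natural-gradient infomax ICA with decaying additive perturbations (sketch)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    W = np.eye(d)
    for t in range(1, n_iter + 1):
        Y = W @ X
        G = np.tanh(Y)                              # score function (assumes super-Gaussian sources)
        grad = (np.eye(d) - (G @ Y.T) / n) @ W      # natural-gradient direction of the log-likelihood
        W = W + eta * grad + sigma0 / np.sqrt(t) * rng.standard_normal((d, d))  # perturbed update
    return W

# demo: demix two artificially mixed Laplace sources
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
W = perturbed_infomax(A @ S)
print((W @ A).round(2))   # should approach a scaled permutation matrix
```

The decaying noise lets early iterates escape the vicinity of saddle points while leaving the late-stage iteration essentially unperturbed.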
We consider diffusivity of random walks with transition probabilities depending on the number of consecutive traversals of the last traversed edge, the so-called senile reinforced random walk (SeRW). In one dimension, the walk is known to be sub-diffusive with the identity reinforcement function. We perturb the model by introducing a small probability \delta of escaping the last traversed edge at each step. The perturbed SeRW model is diffusive for any \delta, with enhanced diffusivity (\delta) in the small \delta regime. We further study stochastically perturbed SeRW models in which the last-edge escape probability is of the form \delta with the \delta's being independent random variables. Enhanced diffusivity in such models is logarithmically close to the so-called residual diffusivity (positive in the zero \delta limit), with diffusivity between \delta and \delta. Finally, we generalize our results to higher dimensions, where the unperturbed model is already diffusive. The enhanced diffusivity can be as much as \delta.
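The toy Monte Carlo sketch below illustrates the kind of mean-square-displacement computation used to measure diffusivity in the perturbed walk. The transition rule assumed here (re-traverse the last edge with probability n/(n+1) after n consecutive traversals, escape it with probability \delta) is an illustration only and may differ from the paper's exact model.

```python
import numpy as np

def diffusivity_proxy(delta, T=5000, n_walks=200, seed=0):
    """Crude estimate of E[X_T^2] / T for a 1D perturbed senile reinforced random walk.
    The transition rule below is an illustrative assumption, not the paper's exact model."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        last = rng.choice((-1, 1))   # direction of the last step taken
        x, n_cons = last, 1          # position and consecutive traversals of the last edge
        for _ in range(T - 1):
            if rng.random() < delta:
                step = last                                # perturbation: escape the last edge
            elif rng.random() < n_cons / (n_cons + 1.0):
                step = -last                               # re-traverse the last edge (identity reinforcement)
            else:
                step = last                                # otherwise leave the last edge
            n_cons = n_cons + 1 if step == -last else 1
            x, last = x + step, step
        total += x * x
    return total / (n_walks * T)

for d in (0.2, 0.1, 0.05):
    print(d, round(diffusivity_proxy(d), 3))
```

As \delta shrinks, the proxy decreases slowly, which is the qualitative signature of enhanced diffusivity in the small \delta regime.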
We study a system of semilinear hyperbolic equations passively advected by smooth, white-in-time random velocity fields. Such a system arises in modelling non-premixed isothermal turbulent flames under single-step kinetics of fuel and oxidizer. We derive closed equations for one-point and multi-point probability distribution functions (PDFs) and closed-form analytical formulae for the one-point PDF, as well as the two-point PDF under homogeneity and isotropy. Exact solution formulae allow us to analyse the ensemble-averaged fuel/oxidizer concentrations and the motion of their level curves. We recover the empirical formulae of combustion in the thin reaction zone limit and show that these approximate formulae can either underestimate or overestimate average concentrations when the reaction zone does not tend to zero. We show that the averaged reaction rate slows down locally in
We study the enhanced diffusivity in the so-called elephant random walk model with stops (ERWS) by including symmetric random walk steps with small probability \epsilon. For any \epsilon > 0, the large-time behavior transitions from sub-diffusive at \epsilon = 0 to diffusive in a wedge-shaped parameter regime, where the diffusivity is strictly above that of the unperturbed ERWS model in the \epsilon \to 0 limit. The perturbed ERWS model is shown to be solvable, with the first two moments and their asymptotics calculated exactly in both one and two space dimensions. The model provides a discrete analytical setting of the residual diffusion phenomenon known for passive scalar transport in chaotic flows (e.g. generated by time-periodic cellular flows and statistically sub-diffusive) as molecular diffusivity tends to zero.
This paper introduces an efficient approach to integrating non-local statistics into the higher-order Markov Random Fields (MRFs) framework. Motivated by the observation that many non-local statistics (e.g., shape priors, color distributions) can usually be represented by a small number of parameters, we reformulate the higher-order MRF model by introducing additional latent variables to represent the intrinsic dimensions of the higher-order cliques. The resulting new model, called NC-MRF, not only provides flexibility in representing the configurations of higher-order cliques, but also automatically decomposes the energy function into less coupled terms, allowing us to design an efficient algorithmic framework for maximum a posteriori (MAP) inference. Based on this novel modeling/inference framework, we achieve state-of-the-art solutions to the challenging problems of class-specific image segmentation and template-based 3D facial expression tracking, which demonstrates the potential of our approach.
Evolution occurs in populations of reproducing individuals. The structure of a population can affect which traits evolve [1, 2]. Understanding evolutionary game dynamics in structured populations remains difficult. Mathematical results are known for special structures in which all individuals have the same number of neighbours [3, 4, 5, 6, 7, 8]. The general case, in which the number of neighbours can vary, has remained open. For arbitrary selection intensity, the problem is in a computational complexity class that suggests there is no efficient algorithm [9]. Whether a simple solution for weak selection exists has remained unanswered. Here we provide a solution for weak selection that applies to any graph or network. Our method relies on calculating the coalescence times [10, 11] of random walks [12]. We evaluate large numbers of diverse population structures for their propensity to favour cooperation. We study how small
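As an illustration of the coalescence-time calculation on which the method relies, the sketch below solves the linear system for pairwise coalescence times of two coalescing random walks on a small graph, under the convention that at each time step one of the two walkers, chosen uniformly, moves. The full weak-selection condition for cooperation derived in the paper is not reproduced here.

```python
import numpy as np

def coalescence_times(A):
    """Pairwise coalescence times tau[i, j] for two coalescing random walks on a graph
    with adjacency matrix A, where at each step one walker (chosen uniformly) moves."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)               # simple random-walk transition matrix
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    idx = {pair: r for r, pair in enumerate(pairs)}
    M = np.eye(len(pairs))
    b = np.ones(len(pairs))
    for (i, j), r in idx.items():
        for k in range(n):
            if k != j:                                  # first walker moves i -> k
                M[r, idx[(k, j)]] -= 0.5 * P[i, k]
            if k != i:                                  # second walker moves j -> k
                M[r, idx[(i, k)]] -= 0.5 * P[j, k]
    sol = np.linalg.solve(M, b)
    tau = np.zeros((n, n))
    for (i, j), r in idx.items():
        tau[i, j] = sol[r]
    return tau

# example: star graph with a hub and three leaves (vertices have different degrees)
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(coalescence_times(A).round(3))
```

The star example is deliberately chosen with unequal numbers of neighbours, the general case the abstract refers to.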
When people in a society want to make inference about some parameter, each person would potentially want to use data collected by other people. Information (data) exchange in social contexts is usually costly, so to make sound statistical decisions, people need to compromise between the benefits and costs of information acquisition. Conflicts of interest and coordination problems then arise. Classical statistics does not consider people's interactions in the data-collection process. To address this gap, this work explores multi-agent Bayesian inference problems with a game-theoretic social network model. Bearing our interest in aggregate inference at the societal level, we propose a new concept, finite population learning, to address whether, with high probability, a large fraction of people can make good inferences. Serving as a foundation, this concept enables us to study the long-run trend of aggregate inference quality as the population grows.
Economists have historically measured the degree to which the market is surprised by an earnings announcement with the consensus forecast error, defined as the difference between the actual earnings and the consensus forecast. The consensus might be calculated using either the mean or the median of security analysts' forecasts. The premise of this measure is that the consensus forecast is a good proxy for the market's expectation of earnings. Hence the consensus forecast error captures how surprised the market is when earnings are announced. The consensus forecast error is a building block of a host of studies across finance, accounting and economics (see Kothari (2001) for a survey of event studies). For instance, in finance and accounting, it is used in event studies of how efficiently markets react to earnings announcements. Efficient market studies when it comes to bond or currency markets and macroeconomic
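For concreteness, the consensus forecast error described above amounts to the simple calculation below; the earnings figures in the example are made up purely for illustration.

```python
import numpy as np

def consensus_forecast_error(actual_eps, analyst_forecasts, consensus="median"):
    """Consensus forecast error: actual earnings minus the consensus
    (mean or median) of individual analyst forecasts."""
    f = np.asarray(analyst_forecasts, dtype=float)
    consensus_value = np.median(f) if consensus == "median" else np.mean(f)
    return actual_eps - consensus_value

# example: announced EPS of 1.32 against five (hypothetical) analyst forecasts
print(consensus_forecast_error(1.32, [1.20, 1.25, 1.28, 1.30, 1.40]))
```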
Using high-frequency data, we estimate the risk of a large portfolio with weights being the solution of an optimization problem subject to some linear inequality constraints. We propose a fully nonparametric approach as a benchmark, as well as a factor-based semiparametric approach with observable factors to attack the curse of dimensionality. We provide in-fill asymptotic distributions of the realized volatility estimators of the optimal portfolio, while taking into account the estimation error in the optimal portfolio weights as a result of the covariance matrix estimation. Our theoretical findings suggest that ignoring such an error leads to a first-order asymptotic bias which undermines the statistical inference. Such a bias is related to in-sample optimism in portfolio allocation. Our simulation results suggest satisfactory finite sample performance after bias correction, and that the factor-based approach becomes increasingly superior with a growing cross-sectional dimension. Empirically, using a large cross-section of high-frequency stock returns, we find our estimator successfully addresses the issue of in-sample optimism.
The multiple testing procedure plays an important role in detecting the presence of spatial signals for large-scale imaging data. Typically, the spatial signals are sparse but clustered. This paper provides empirical evidence that, for a range of commonly used control levels, the conventional FDR procedure can lack the ability to detect statistical significance, even if the p-values under the true null hypotheses are independent and uniformly distributed; more generally, ignoring the neighboring information of spatially structured data will tend to diminish the detection effectiveness of the FDR procedure. This paper first introduces a scalar quantity to characterize the extent to which the lack of identification phenomenon (LIP) of the FDR procedure occurs. Second, we propose a new multiple comparison procedure, called FDR_L, to accommodate the spatial information of neighboring p-values via a local aggregation of p-values. Theoretical properties of the FDR_L procedure are investigated under weak dependence of p-values. It is shown that the FDR_L procedure alleviates the LIP of the conventional FDR procedure, thus substantially facilitating the selection of more stringent control levels. Simulation evaluations indicate that the FDR_L procedure improves the detection sensitivity of the FDR procedure with little loss in detection specificity. The computational simplicity and detection effectiveness of the FDR_L procedure are illustrated through a real brain fMRI dataset.
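A schematic sketch of the local-aggregation idea is given below: each p-value on a spatial grid is replaced by the median of the p-values in its neighborhood before a step-up procedure is applied. This toy version uses the ordinary Benjamini-Hochberg thresholds on the aggregated values and omits the recalibrated null distribution that the actual FDR_L procedure requires, so it only conveys the structure of the method.

```python
import numpy as np
from scipy.ndimage import median_filter

def locally_aggregated_bh(p_map, alpha=0.05, size=3):
    """Toy neighborhood-aggregated FDR control on a 2D p-value map:
    median-aggregate p-values over a size x size window, then run Benjamini-Hochberg."""
    q = median_filter(p_map, size=size)          # local aggregation of p-values
    p = q.ravel()
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                     # reject the k smallest aggregated p-values
    return reject.reshape(p_map.shape)

# demo: a clustered signal block inside a noise field
rng = np.random.default_rng(0)
p_map = rng.uniform(size=(64, 64))
p_map[20:30, 20:30] = rng.uniform(0, 0.01, size=(10, 10))   # sparse but clustered signal
print(locally_aggregated_bh(p_map).sum(), "voxels rejected")
```

Aggregating over neighborhoods rewards clustered signals, which is exactly the spatial structure the abstract argues the conventional FDR procedure ignores.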
The problem of estimating the density function and the regression function involving errors-in-variables in time series is considered. Under appropriate conditions, it is shown that the rates obtained in Fan (1991) and Fan and Truong (1990) are also achievable in the context of dependent observations. Consequently, the results presented here extend our previous results for cross-sectional data to longitudinal ones. Abbreviated title: Measurement Errors in Time Series. AMS 1980 subject classifications: Primary 62G20; secondary 62G05, 62J99.
Measuring conditional dependence is an important topic in statistics with broad applications including graphical models. Under a factor model setting, a new conditional dependence measure based on projection is proposed. The corresponding conditional independence test is developed, with its asymptotic null distribution derived in settings where the number of factors may be high-dimensional. It is also shown that the new test controls the asymptotic significance level and can be calculated efficiently. A generic method for building dependency graphs without the Gaussian assumption using the new test is elaborated. Numerical results and real data analysis show the superiority of the new method.
In this note we include a correction to Equation (19) on page 840, which is a step in the proof of Theorem 4 of Fan et al. (2014). There is no change in the statement of Theorem 4, and the rest of the proof stays unchanged. Equation (19) on page 840 should be corrected as follows: we apply the coordinatewise mean-value theorem with respect to each coordinate (i.e., the j-th coordinate) to obtain that
Motivated by the sampling problems and heterogeneity issues common in high-dimensional big datasets, we consider a class of discordant additive index models. We propose method-of-moments based procedures for estimating the indices of such discordant additive index models in both low- and high-dimensional settings. Our estimators are based on factorizing certain moment tensors and are also applicable in the overcomplete setting, where the number of indices exceeds the dimensionality of the datasets. Furthermore, we provide rates of convergence of our estimators in both the high- and low-dimensional settings. Establishing such results requires deriving tensor operator norm concentration inequalities that might be of independent interest. Finally, we provide simulation results supporting our theory. Our contributions extend the applicability of tensor methods to novel models, in addition to making progress on understanding the theoretical properties of such tensor methods.
Several large volatility matrix estimation procedures have been recently developed for factor-based Itô processes whose integrated volatility matrix consists of low-rank and sparse matrices. Their performance depends on the accuracy of input volatility matrix estimators. When estimating co-volatilities based on high-frequency data, one of the crucial challenges is non-synchronization for illiquid assets, which makes their co-volatility estimators inaccurate. In this paper, we study how to estimate the large integrated volatility matrix without using co-volatilities of illiquid assets. Specifically, we treat the co-volatilities for illiquid assets as missing, and estimate the low-rank matrix using a matrix completion scheme with a structured missing pattern. To further regularize the sparse volatility matrix, we employ the principal orthogonal complement thresholding method (POET). We also investigate the asymptotic
High-dimensional linear regression has been intensively studied in the statistics community over the last two decades. For convenience of theoretical analysis, classical methods usually assume independent observations and sub-Gaussian-tailed errors. However, neither assumption holds in many real high-dimensional time-series data. Recently, [Sun, Zhou, Fan, 2019, J. Amer. Stat. Assoc., in press] proposed Adaptive Huber Regression (AHR) to address the issue of heavy-tailed errors. They discovered that the robustification parameter of the Huber loss should adapt to the sample size, the dimensionality, and the (1+\delta)-moments of the heavy-tailed errors. We make progress in a complementary direction and justify AHR for dependent observations. Specifically, we consider an important dependence structure---Markov dependence. Our results show that the Markov dependence affects the adaptation of the robustification parameter and the estimation of the regression coefficients in that the sample size should be discounted by a factor depending on the spectral gap of the underlying Markov chain.
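The sketch below fits a Huber regression whose robustification parameter grows with the sample size, in the spirit of AHR. The particular scaling tau ~ MAD(y) * (n / log d)^{1/(1+delta)} and the AR(1) design in the demo are illustrative assumptions, and the spectral-gap discount prescribed by the theory is not implemented.

```python
import numpy as np
from scipy.optimize import minimize

def adaptive_huber(X, y, delta=1.0):
    """Huber regression with a sample-size-adaptive robustification parameter (sketch).
    The scaling tau ~ MAD(y) * (n / log d)^{1/(1+delta)} is an illustrative choice only."""
    n, d = X.shape
    scale = np.median(np.abs(y - np.median(y)))                  # robust scale of the response
    tau = scale * (n / max(np.log(d), 1.0)) ** (1.0 / (1.0 + delta))

    def huber_loss(beta):
        r = y - X @ beta
        quad = np.abs(r) <= tau
        return np.sum(np.where(quad, 0.5 * r**2, tau * np.abs(r) - 0.5 * tau**2))

    res = minimize(huber_loss, x0=np.zeros(d), method="L-BFGS-B")
    return res.x

# demo: heavy-tailed (t_2) errors with a Markov-dependent (AR(1)) design
rng = np.random.default_rng(0)
n, d = 500, 5
X = np.zeros((n, d))
X[0] = rng.standard_normal(d)
for t in range(1, n):
    X[t] = 0.5 * X[t - 1] + rng.standard_normal(d)
beta_true = np.ones(d)
y = X @ beta_true + rng.standard_t(df=2, size=n)
print(adaptive_huber(X, y).round(2))
```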
The main purpose of this workshop was to assemble international leaders from statistics and machine learning to identify important research problems, to cross-fertilize between the disciplines, and ultimately to start coordinated research efforts toward better solutions. The workshop focused on discussing modern methods for the analysis of complex high-dimensional data, with applications to econometrics, finance, biomedicine, genomics, etc.
Markowitz (1952, 1959) laid down the ground-breaking work on mean-variance analysis without gross-exposure constraints. Under this framework, the theoretical optimal allocation vector can be different from the estimated one due to the intrinsic difficulty of estimating a large covariance matrix and return vector. This can result in adverse performance of portfolios selected on the basis of empirical data, due to the accumulation of estimation errors (Jagannathan and Ma, 2003; Fan, Fan and Lv, 2008). We address this problem by introducing gross-exposure constrained mean-variance portfolio selection. We show that, with the gross-exposure constraint, the theoretical optimal portfolios have performance similar to empirically selected ones based on estimated covariance matrices, and there is no noise accumulation effect from the estimation of covariance matrices. This gives theoretical justification to the empirical results
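A minimal sketch of the gross-exposure-constrained minimum-variance problem follows: minimize w' Sigma w subject to full investment (weights summing to one) and the gross-exposure bound ||w||_1 <= c. The generic SLSQP solver, the sample covariance input, and the choice c = 1.6 are illustrative assumptions; the paper's estimators and theory are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def gross_exposure_portfolio(Sigma, c=1.6):
    """Minimum-variance weights under the gross-exposure constraint ||w||_1 <= c
    and full investment 1'w = 1 (illustrative sketch using a generic solver)."""
    p = Sigma.shape[0]
    cons = [{"type": "eq",   "fun": lambda w: np.sum(w) - 1.0},
            {"type": "ineq", "fun": lambda w: c - np.sum(np.abs(w))}]
    res = minimize(lambda w: w @ Sigma @ w, x0=np.full(p, 1.0 / p),
                   method="SLSQP", constraints=cons)
    return res.x

# demo with a sample covariance matrix from simulated returns
rng = np.random.default_rng(0)
R = rng.standard_normal((250, 10)) @ np.diag(np.linspace(0.5, 1.5, 10))
Sigma_hat = np.cov(R, rowvar=False)
w = gross_exposure_portfolio(Sigma_hat, c=1.6)
print(np.abs(w).sum().round(3), w.round(3))   # gross exposure stays within the bound
```

Shrinking c toward 1 rules out short positions entirely, while letting c grow recovers the unconstrained Markowitz solution and its noise-accumulation problem.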
A bandwidth selection method is proposed for local linear regression. Our approach combines the ideas of optimal bandwidth selection in Hall et al. (1991) for kernel density estimation with the use of direct bias and variance expressions in Fan and Gijbels (1995) for local linear regression. We show that the bandwidth selector has an optimal relative rate of convergence of n^{-1/2}, with n the sample size.
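For context, here is a minimal sketch of the local linear estimator whose bandwidth the proposed selector targets; the Gaussian kernel and the two example bandwidths are arbitrary choices, and the selection rule itself is not implemented.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear regression estimate at x0 with a Gaussian kernel and bandwidth h
    (the bandwidth-selection rule of the paper is not reproduced here)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)                # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])        # local linear design
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)      # weighted least squares
    return beta[0]                                        # intercept = fitted value at x0

# demo: compare two bandwidths on a noisy sine curve
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 300))
y = np.sin(x) + 0.3 * rng.standard_normal(x.size)
grid = np.linspace(0.5, 5.5, 5)
for h in (0.1, 0.5):
    fit = [local_linear(g, x, y, h) for g in grid]
    print(h, np.round(fit, 2))
```

A small bandwidth tracks the noise (high variance), a large one oversmooths (high bias); the selector in the abstract is designed to balance these two effects at the optimal relative rate.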
There are few techniques available for testing whether or not a family of parametric time series models fits a set of data reasonably well without serious restrictions on the forms of alternative models. In this paper, we consider generalised likelihood ratio tests of whether the spectral density function of a stationary time series admits certain parametric forms. We propose a bias correction method for the generalised likelihood ratio test of Fan et al. (2001). In particular, our methods can be applied to test whether or not a residual series is white noise. Sampling properties of the proposed tests are established. A bootstrap approach is proposed for estimating the null distribution of the test statistics. Simulation studies investigate the accuracy of the proposed bootstrap estimate and compare the power of the various ways of constructing the generalised likelihood ratio tests, as well as some classical methods such as the Cramér-von Mises and Ljung-Box tests. Our results favour the newly proposed bias reduction method using the local likelihood estimator.