This paper introduces an efficient approach to integrating non-local statistics into the higher-order Markov Random Field (MRF) framework. Motivated by the observation that many non-local statistics (e.g., shape priors, color distributions) can usually be represented by a small number of parameters, we reformulate the higher-order MRF model by introducing additional latent variables to represent the intrinsic dimensions of the higher-order cliques. The resulting model, called NC-MRF, not only provides flexibility in representing the configurations of higher-order cliques, but also automatically decomposes the energy function into less coupled terms, allowing us to design an efficient algorithmic framework for maximum a posteriori (MAP) inference. Based on this novel modeling/inference framework, we achieve state-of-the-art solutions to the challenging problems of class-specific image segmentation and template-based 3D facial expression tracking, which demonstrates the potential of our approach.
Evolution occurs in populations of reproducing individuals. The structure of a population can affect which traits evolve [1, 2]. Understanding evolutionary game dynamics in structured populations remains difficult. Mathematical results are known for special structures in which all individuals have the same number of neighbours [3, 4, 5, 6, 7, 8]. The general case, in which the number of neighbours can vary, has remained open. For arbitrary selection intensity, the problem is in a computational complexity class that suggests there is no efficient algorithm [9]. Whether a simple solution for weak selection exists has remained unanswered. Here we provide a solution for weak selection that applies to any graph or network. Our method relies on calculating the coalescence times [10, 11] of random walks [12]. We evaluate large numbers of diverse population structures for their propensity to favour cooperation. We study how small
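The coalescence-time calculation underlying this method can be sketched for a small graph. The following is a minimal illustration, not the authors' implementation: it assumes the standard setup in which one of the two random walkers, chosen with probability 1/2, takes a step at each time unit, and solves the resulting linear system for the pairwise coalescence times.

```python
import numpy as np
from itertools import combinations

def coalescence_times(A):
    """Pairwise coalescence times of two random walkers on a graph.

    A: symmetric adjacency matrix. At each step, one of the two walkers
    (chosen with probability 1/2) moves to a uniformly random neighbour.
    Solves tau[i,j] = 1 + (1/2) sum_m P[i,m] tau[m,j]
                        + (1/2) sum_m P[j,m] tau[i,m],   tau[i,i] = 0.
    """
    A = np.asarray(A, float)
    n = len(A)
    P = A / A.sum(axis=1, keepdims=True)     # random-walk transition matrix
    pairs = list(combinations(range(n), 2))  # unordered pairs i < j
    idx = {pair: k for k, pair in enumerate(pairs)}
    key = lambda a, b: idx[(min(a, b), max(a, b))]
    M = np.eye(len(pairs))
    b = np.ones(len(pairs))
    for (i, j), k in idx.items():
        for m in range(n):
            # terms with tau[.,.] = 0 (walkers meet) are skipped
            if P[i, m] and m != j:
                M[k, key(m, j)] -= 0.5 * P[i, m]
            if P[j, m] and m != i:
                M[k, key(i, m)] -= 0.5 * P[j, m]
    t = np.linalg.solve(M, b)
    tau = np.zeros((n, n))
    for (i, j), k in idx.items():
        tau[i, j] = tau[j, i] = t[k]
    return tau
```

On the complete graph with three nodes, for example, symmetry forces all pairwise coalescence times to be equal, and the system gives two expected steps per pair.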
When people in a society want to make inference about some parameter, each person would potentially want to use data collected by other people. Information (data) exchange in social contexts is usually costly, so to make sound statistical decisions, people need to compromise between the benefits and costs of information acquisition. Conflicts of interest and coordination problems then arise. Classical statistics does not consider people's interaction in the data collection process. To address this gap, this work explores multi-agent Bayesian inference problems with a game-theoretic social network model. Bearing our interest in aggregate inference at the societal level, we propose a new concept, finite population learning, to address whether, with high probability, a large fraction of people can make good inferences. Serving as a foundation, this concept enables us to study the long-run trend of aggregate inference quality as the population grows.
Economists have historically measured the degree to which the market is surprised by an earnings announcement using the consensus forecast error, defined as the difference between the actual earnings and the consensus forecast. The consensus might be calculated using either the mean or the median of security analysts' forecasts. The premise of this measure is that the consensus forecast is a good proxy for the market's expectation of earnings; hence, the consensus forecast error captures how surprised the market is when earnings are announced. The consensus forecast error is a building block of a host of studies across finance, accounting, and economics (see Kothari (2001) for a survey of event studies). For instance, in finance and accounting, it is used in event studies of how efficiently markets react to earnings announcements. Efficient market studies when it comes to bond or currency markets and macroeconomic
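As a concrete illustration of the definition, a minimal computation of the consensus forecast error; the earnings figures below are made-up numbers, not data from any study.

```python
import statistics

def consensus_forecast_error(actual_eps, analyst_forecasts, center="median"):
    """Consensus forecast error: actual earnings minus the consensus,
    where the consensus is the mean or median of analysts' forecasts."""
    agg = statistics.median if center == "median" else statistics.mean
    return actual_eps - agg(analyst_forecasts)

# Hypothetical example: actual EPS of 1.10 against five analyst forecasts.
forecasts = [0.95, 1.00, 1.02, 1.05, 1.20]
surprise = consensus_forecast_error(1.10, forecasts)  # 1.10 - 1.02 = 0.08, a positive surprise
```

A positive error means the firm beat the consensus; the mean-based variant can differ noticeably when the forecast distribution is skewed by outliers.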
Using high-frequency data, we estimate the risk of a large portfolio with weights being the solution of an optimization problem subject to some linear inequality constraints. We propose a fully nonparametric approach as a benchmark, as well as a factor-based semiparametric approach with observable factors to attack the curse of dimensionality. We provide in-fill asymptotic distributions of the realized volatility estimators of the optimal portfolio, while taking into account the estimation error in the optimal portfolio weights as a result of the covariance matrix estimation. Our theoretical findings suggest that ignoring such an error leads to a first-order asymptotic bias which undermines the statistical inference. Such a bias is related to in-sample optimism in portfolio allocation. Our simulation results suggest satisfactory finite sample performance after bias correction, and that the factor-based approach becomes increasingly superior with a growing cross-sectional dimension. Empirically, using a large cross-section of high-frequency stock returns, we find our estimator successfully addresses the issue of in-sample optimism.
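A stylized sketch of the plug-in construction analyzed here, on synthetic data: a realized covariance matrix from intraday returns, global minimum-variance weights (constrained only to sum to one, for simplicity; the paper allows linear inequality constraints), and the resulting plug-in portfolio risk. This illustrates the quantity being studied, not the paper's bias-corrected estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_intraday, d = 390, 5                             # one day of 1-minute returns, 5 assets
R = rng.normal(scale=1e-3, size=(n_intraday, d))   # synthetic intraday returns

Sigma = R.T @ R                                    # realized covariance matrix
ones = np.ones(d)
w = np.linalg.solve(Sigma, ones)
w /= w @ ones                                      # minimum-variance weights, summing to one
plug_in_risk = w @ Sigma @ w                       # plug-in realized variance of the portfolio
```

Because the same covariance estimate is used both to form the weights and to assess their risk, the plug-in value equals 1 / (1'Σ̂⁻¹1), which tends to understate the true out-of-sample risk; this in-sample optimism is the first-order bias the abstract refers to.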
Multiple testing procedures play an important role in detecting the presence of spatial signals in large-scale imaging data. Typically, the spatial signals are sparse but clustered. This paper provides empirical evidence that, for a range of commonly used control levels, the conventional FDR procedure can lack the ability to detect statistical significance, even if the p-values under the true null hypotheses are independent and uniformly distributed; more generally, ignoring the neighboring information of spatially structured data will tend to diminish the detection effectiveness of the FDR procedure. This paper first introduces a scalar quantity to characterize the extent to which the lack of identification phenomenon (LIP) of the FDR procedure occurs. Second, we propose a new multiple comparison procedure, called FDR_L, to accommodate the spatial information of neighboring p-values via a local aggregation of p-values. Theoretical properties of the FDR_L procedure are investigated under weak dependence of p-values. It is shown that the FDR_L procedure alleviates the LIP of the conventional FDR procedure, thus substantially facilitating the selection of more stringent control levels. Simulation evaluations indicate that the FDR_L procedure improves the detection sensitivity of the FDR procedure with little loss in detection specificity. The computational simplicity and detection effectiveness of the FDR_L procedure are illustrated through a real brain fMRI dataset.
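The local-aggregation idea can be sketched as follows. This is a simplified one-dimensional illustration (median filtering of p-values followed by the standard Benjamini–Hochberg step-up), not the paper's exact procedure or its theoretical calibration.

```python
import numpy as np

def bh_reject(p, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: boolean rejection mask."""
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, bool)
    mask[order[:k]] = True                   # reject the k smallest p-values
    return mask

def locally_aggregated(p, half_width=1):
    """Replace each p-value with the median over its 1-D neighbourhood."""
    m = len(p)
    return np.array([np.median(p[max(0, i - half_width):i + half_width + 1])
                     for i in range(m)])

# Clustered signal embedded in uniform noise (synthetic illustration).
rng = np.random.default_rng(1)
p = rng.uniform(size=200)
p[80:100] = rng.uniform(0, 0.01, size=20)    # a spatial cluster of true signals
plain = bh_reject(p).sum()                   # rejections on raw p-values
smoothed = bh_reject(locally_aggregated(p)).sum()  # rejections after aggregation
```

The intuition is that an isolated small p-value among nulls is damped by its neighbours, while a clustered signal survives the median filter, which is why exploiting spatial structure can sharpen detection of clustered signals.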
The problem of estimating the density function and the regression function in the presence of errors-in-variables in time series is considered. Under appropriate conditions, it is shown that the rates obtained in Fan (1991) and Fan and Truong (1990) are also achievable in the context of dependent observations. Consequently, the results presented here extend our previous results for cross-sectional data to longitudinal data. Abbreviated title: Measurement Errors in Time Series. AMS 1980 subject classifications: Primary 62G20; secondary 62G05, 62J99.
Measuring conditional dependence is an important topic in statistics, with broad applications including graphical models. Under a factor model setting, a new conditional dependence measure based on projection is proposed. The corresponding conditional independence test is developed, and its asymptotic null distribution is derived in a regime where the number of factors can be high-dimensional. It is also shown that the new test controls the asymptotic significance level and can be calculated efficiently. A generic method for building dependency graphs without the Gaussian assumption using the new test is elaborated. Numerical results and real data analysis show the superiority of the new method.
In this note we include a correction for Equation (19) on page 840, which is a step in the proof of Theorem 4 of Fan et al. (2014). There is no change in the statement of Theorem 4, and the rest of the proof stays unchanged. Equation (19) on page 840 should be corrected as follows: we apply the coordinatewise mean-value theorem with respect to each coordinate (i.e., the jth coordinate) to obtain that