When people in a society want to make inferences about some parameter, each of them would potentially like to use data collected by others. Information (data) exchange in social contexts is usually costly, so to make sound statistical decisions, people need to trade off the benefits and costs of information acquisition. Conflicts of interest and coordination problems then arise. Classical statistics does not consider people's interaction in the data collection process. To fill this gap, this work explores multi-agent Bayesian inference problems with a game-theoretic social network model. With our interest in aggregate inference at the societal level, we propose a new concept, finite population learning, to address whether, with high probability, a large fraction of people can make good inferences. Serving as a foundation, this concept enables us to study the long-run trend of aggregate inference quality as the population grows.
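For concreteness, a definition in the spirit of finite population learning might be stated as follows; the tolerance parameters epsilon, delta, gamma and the "good inference" event are illustrative assumptions for this sketch, not the paper's exact definition.

% Hypothetical formalization (illustrative only): n agents hold estimates
% \hat\theta_1, ..., \hat\theta_n of a common parameter \theta.
\[
  \Pr\!\Bigl( \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\bigl\{ |\hat{\theta}_i - \theta| \le \epsilon \bigr\} \;\ge\; 1 - \delta \Bigr) \;\ge\; 1 - \gamma ,
\]
% i.e., with probability at least 1 - \gamma, at least a 1 - \delta fraction of
% the population makes an \epsilon-good inference. The long-run question is
% whether such a guarantee can be sustained as n grows.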
Economists have historically measured the degree to which the market is surprised by an earnings announcement by the consensus forecast error, defined as the difference between the actual earnings and the consensus forecast. The consensus may be calculated using either the mean or the median of security analysts' forecasts. The premise of this measure is that the consensus forecast is a good proxy for the market's expectation of earnings. Hence the consensus forecast error captures how surprised the market is when earnings are announced. The consensus forecast error is a building block of a host of studies across finance, accounting, and economics (see Kothari (2001) for a survey of event studies). For instance, in finance and accounting, it is used in event studies of how efficiently markets react to earnings announcements. Efficient-market studies of bond and currency markets similarly rely on consensus forecast errors for macroeconomic announcements.
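The computation itself is elementary; the sketch below shows the two consensus conventions side by side. The input numbers and variable names (analyst_forecasts, actual_eps) are hypothetical, purely for illustration.

# Sketch: consensus forecast error for one earnings announcement.
import statistics

analyst_forecasts = [1.02, 0.98, 1.10, 1.05, 0.95]  # analysts' EPS forecasts (hypothetical)
actual_eps = 1.12                                    # announced earnings per share (hypothetical)

# The consensus may be taken as either the mean or the median of the forecasts.
consensus_mean = statistics.mean(analyst_forecasts)
consensus_median = statistics.median(analyst_forecasts)

# Consensus forecast error: actual earnings minus the consensus forecast.
fe_mean = actual_eps - consensus_mean
fe_median = actual_eps - consensus_median
print(f"mean-based surprise:   {fe_mean:+.4f}")
print(f"median-based surprise: {fe_median:+.4f}")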
Using high-frequency data, we estimate the risk of a large portfolio whose weights solve an optimization problem subject to linear inequality constraints. We propose a fully nonparametric approach as a benchmark, as well as a factor-based semiparametric approach with observable factors to attack the curse of dimensionality. We provide in-fill asymptotic distributions of the realized volatility estimators of the optimal portfolio, while accounting for the estimation error in the optimal portfolio weights that arises from covariance matrix estimation. Our theoretical findings show that ignoring this error leads to a first-order asymptotic bias that undermines statistical inference. Such a bias is related to in-sample optimism in portfolio allocation. Our simulation results suggest satisfactory finite-sample performance after bias correction, and that the factor-based approach becomes increasingly superior as the cross-sectional dimension grows. Empirically, using a large cross-section of high-frequency stock returns, we find that our estimator successfully addresses the issue of in-sample optimism.
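The in-sample optimism effect can be seen in a stylized plug-in experiment: weights optimized against an estimated covariance matrix look less risky under that same estimate than under the true covariance. This is only a toy illustration; the paper's high-frequency estimators, constraint sets, and bias correction are considerably more involved.

# Toy illustration of in-sample optimism in constrained minimum-variance allocation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, n = 10, 60                                     # dimension and sample size (hypothetical)
A = rng.standard_normal((d, d))
sigma_true = A @ A.T / d + np.eye(d)              # "true" covariance matrix
returns = rng.multivariate_normal(np.zeros(d), sigma_true, size=n)
sigma_hat = np.cov(returns, rowvar=False)         # estimated covariance matrix

def port_var(w, sigma):
    return w @ sigma @ w

# Minimum-variance weights under linear constraints: sum(w) = 1, w >= 0.
cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
res = minimize(port_var, np.full(d, 1.0 / d), args=(sigma_hat,),
               bounds=[(0.0, 1.0)] * d, constraints=cons)
w_hat = res.x

print("plug-in (in-sample) variance:  ", port_var(w_hat, sigma_hat))
print("variance under true covariance:", port_var(w_hat, sigma_true))
# The plug-in value is typically smaller: ignoring estimation error in the
# weights biases the risk estimate downward (in-sample optimism).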
The multiple testing procedure plays an important role in detecting the presence of spatial signals for large-scale imaging data. Typically, the spatial signals are sparse but clustered. This paper provides empirical evidence that for a range of commonly used control levels, the conventional FDR procedure can lack the ability to detect statistical significance, even if the p-values under the true null hypotheses are independent and uniformly distributed; more generally, ignoring the neighboring information of spatially structured data tends to diminish the detection effectiveness of the FDR procedure. This paper first introduces a scalar quantity to characterize the extent to which the lack of identification phenomenon (LIP) of the FDR procedure occurs. Second, we propose a new multiple comparison procedure, called FDR_L, to accommodate the spatial information of neighboring p-values, via a local aggregation of p-values. Theoretical properties of the FDR_L procedure are investigated under weak dependence of p-values. It is shown that the FDR_L procedure alleviates the LIP of the FDR procedure, thus substantially facilitating the selection of more stringent control levels. Simulation evaluations indicate that the FDR_L procedure improves the detection sensitivity of the FDR procedure with little loss in detection specificity. The computational simplicity and detection effectiveness of the FDR_L procedure are illustrated through a real brain fMRI dataset.
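The local-aggregation idea behind FDR_L can be sketched under simplifying assumptions: each p-value on a 1-D grid is replaced by the median of the p-values in its immediate neighborhood, and a Benjamini-Hochberg-type step-up rule is then applied to the aggregated values. The paper's actual procedure additionally adjusts for the non-uniform null distribution of the aggregated p-values; that step, and the extension to 2-D/3-D image grids, are omitted here.

# Sketch of local aggregation of p-values followed by a BH-type step-up rule.
import numpy as np

rng = np.random.default_rng(1)
m = 500
p = rng.uniform(size=m)                       # nulls: independent Uniform(0,1) p-values
p[200:220] = rng.uniform(0, 1e-3, size=20)    # a clustered block of signals (hypothetical)

# Local aggregation: median over a window of 3 neighboring p-values.
p_pad = np.pad(p, 1, mode="edge")
p_local = np.median(np.stack([p_pad[:-2], p_pad[1:-1], p_pad[2:]]), axis=0)

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up rule; returns a boolean rejection mask."""
    order = np.argsort(pvals)
    thresh = alpha * np.arange(1, len(pvals) + 1) / len(pvals)
    below = pvals[order] <= thresh
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    mask = np.zeros(len(pvals), dtype=bool)
    mask[order[:k]] = True
    return mask

print("BH on raw p-values:      ", bh_reject(p).sum(), "rejections")
print("BH on locally aggregated:", bh_reject(p_local).sum(), "rejections")
# Aggregation pools evidence within a cluster, so the clustered signals become
# easier to pick up; calibrating the null distribution of the aggregated
# p-values (as the paper does) is required for valid FDR control.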