We develop a generic game platform for modelling real-world systems that comprise multiple intelligent cloud-computing pools and parallel queues of users competing for resources. Within the platform, the software structure is modelled as a blockchain. Each user is associated with a big-data arrival stream whose random dynamics are modelled by a triply stochastic renewal reward process (TSRRP). A user may be served simultaneously by multiple pools, while each pool, with its parallel servers, may serve multiple users at the same time via smart policies in the blockchain, e.g., a policy that myopically computes a Nash equilibrium point of a game-theoretic scheduling problem at each fixed time. To illustrate the effectiveness of our game platform, we model the performance measures of its internal data-flow dynamics (queue-length and workload processes) under our scheduling policies as reflecting diffusions with regime switching (RDRSs). Using the RDRS models, we prove that our myopic game-theoretic policy is an asymptotic Pareto minimal-dual-cost Nash equilibrium policy, globally over the whole time horizon, for a randomly evolving dynamic game problem. We also develop iterative schemes for simulating our multi-dimensional RDRS models, supported by numerical comparisons.
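To give a concrete sense of the kind of iterative scheme such RDRS simulation involves, the following is a minimal sketch, not the paper's actual algorithm: a one-dimensional Euler step for a reflecting diffusion whose drift and volatility are modulated by a two-state Markov chain. The drift b, volatility sigma, generator q, and the two-regime restriction are all illustrative assumptions.

```python
import numpy as np

def simulate_rdrs(T=10.0, dt=1e-3, b=(-0.5, 0.2), sigma=(1.0, 0.4),
                  q=((-1.0, 1.0), (2.0, -2.0)), x0=0.0, seed=0):
    """Euler scheme for a 1-D reflecting diffusion with regime switching:
    dX = b[r(t)] dt + sigma[r(t)] dW + dL, X >= 0, where r(t) is a
    two-state Markov chain with generator q and L is the local time at 0."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    r = 0  # initial regime
    for k in range(n):
        # regime switches over [t, t+dt) with probability approx -q[r][r] * dt
        if rng.random() < -q[r][r] * dt:
            r = 1 - r  # two-regime chain: jump to the other state
        # Euler-Maruyama step, then reflect at the origin (Skorokhod map)
        drift = b[r] * dt
        noise = sigma[r] * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = max(0.0, x[k] + drift + noise)
    return x
```

A multi-dimensional version would replace the scalar reflection with a Skorokhod map onto the orthant; the one-dimensional case is shown only to keep the iteration transparent.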
Herein, we use hybrid resampling to address (a) the long-standing problem of inference on change times and changed parameters in change-point ARX-GARCH models, and (b) the challenging problem of constructing valid confidence intervals, after variable selection under sparsity assumptions, for the parameters in linear regression models with high-dimensional stochastic regressors and asymptotically stationary noise. For the latter problem, we introduce consistent estimators of the selected parameters and a resampling approach that overcomes the inherent difficulties of post-selection confidence intervals. For the former problem, we use a sequential Monte Carlo method for the latent states (representing the change times and changed parameters) of a hidden Markov model. Asymptotic efficiency theory, simulations, and empirical studies demonstrate the advantages of the proposed methods.
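To make the sequential Monte Carlo step concrete, here is a minimal bootstrap particle filter sketch for a toy change-point model. The Gaussian observation density, the change probability p_change, and the N(0, 10) prior on the changed parameter are illustrative stand-ins for the ARX-GARCH likelihood and are not taken from the paper.

```python
import numpy as np

def bootstrap_pf(y, n_particles=1000, p_change=0.01, seed=0):
    """Bootstrap particle filter for a toy change-point model:
    theta_t = theta_{t-1} w.p. 1 - p_change, else theta_t ~ N(0, 10);
    y_t ~ N(theta_t, 1). Returns the filtered means of theta_t."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, np.sqrt(10.0), n_particles)
    means = []
    for obs in y:
        # propagate: each particle undergoes a change with probability p_change
        jump = rng.random(n_particles) < p_change
        theta = np.where(jump, rng.normal(0.0, np.sqrt(10.0), n_particles), theta)
        # weight by the Gaussian observation likelihood, record filtered mean
        w = np.exp(-0.5 * (obs - theta) ** 2)
        w /= w.sum()
        means.append(float(w @ theta))
        # multinomial resampling to avoid weight degeneracy
        theta = rng.choice(theta, size=n_particles, replace=True, p=w)
    return np.array(means)
```

In the actual change-point setting, the latent state would also carry the most recent change time, and the observation density would come from the ARX-GARCH recursion rather than a unit-variance Gaussian.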
Portfolio allocation with a gross-exposure constraint is an effective method for increasing the efficiency and stability of portfolio selection among a vast pool of assets, as demonstrated by Fan, Zhang, and Yu. The required high-dimensional volatility matrix can be estimated using high-frequency financial data. This enables us to better adapt to the local volatilities and local correlations among a vast number of assets and to significantly increase the sample size for estimating the volatility matrix. This article studies volatility matrix estimation using high-dimensional, high-frequency data from the perspective of portfolio selection. Specifically, we propose the use of pairwise-refresh time and all-refresh time methods, based on the concept of refresh time proposed by Barndorff-Nielsen, Hansen, Lunde, and Shephard, for the estimation of the vast covariance matrix, and we compare their merits in portfolio selection.
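The pairwise-refresh construction can be sketched as follows: for two assets observed at asynchronous tick times, each refresh time is the first instant at which both assets have a fresh observation, and prices are then sampled by previous-tick interpolation. This is a minimal illustration of the refresh-time idea; the function names and the simple realized-covariance estimator are assumptions for exposition, not the authors' exact procedure.

```python
import numpy as np

def refresh_times(t1, t2):
    """Pairwise refresh times: the first is when both assets have traded at
    least once; each subsequent one is the first time both have a fresh
    observation after the previous refresh time."""
    times, i, j = [], 0, 0
    tau = max(t1[0], t2[0])
    while True:
        times.append(tau)
        # advance each index strictly past the current refresh time
        while i < len(t1) and t1[i] <= tau:
            i += 1
        while j < len(t2) and t2[j] <= tau:
            j += 1
        if i == len(t1) or j == len(t2):
            return np.array(times)
        tau = max(t1[i], t2[j])

def realized_cov(t1, p1, t2, p2):
    """Realized covariance from log-prices sampled at pairwise refresh times,
    using previous-tick (last observed price at or before each time) sampling."""
    t1, t2 = np.asarray(t1), np.asarray(t2)
    taus = refresh_times(t1, t2)
    x = np.log(np.asarray(p1))[np.searchsorted(t1, taus, side="right") - 1]
    y = np.log(np.asarray(p2))[np.searchsorted(t2, taus, side="right") - 1]
    return float(np.diff(x) @ np.diff(y))
```

The pairwise variant applies this to each asset pair and assembles the covariance matrix entry by entry, retaining more data per pair than synchronizing all assets at once with all-refresh times.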