
# Modelling multivariate volatilities via conditionally uncorrelated components

**Article**

*in* Journal of the Royal Statistical Society Series B (Statistical Methodology) 70(4):679-702 · February 2008


DOI: 10.1111/j.1467-9868.2008.00654.x · Source: RePEc

**Abstract**

We propose to model multivariate volatility processes on the basis of the newly defined conditionally uncorrelated components (CUCs). The model provides a parsimonious representation for matrix-valued processes. It is flexible in the sense that each CUC may be fitted separately with any appropriate univariate volatility model. Computationally, it splits one high-dimensional optimization problem into several lower-dimensional subproblems. Consistency for the estimated CUCs has been established. A bootstrap method is proposed for testing the existence of CUCs. The methodology proposed is illustrated with both simulated and real data sets. Copyright (c) 2008 Royal Statistical Society.
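As a toy illustration of that computational split (a sketch only: the rotation `B`, the fixed GARCH(1,1) parameters, and the names `garch11_filter` / `cuc_style_fit` are all hypothetical, not the authors' estimator):

```python
import numpy as np

def garch11_filter(x, omega, alpha, beta):
    """Standard univariate GARCH(1,1) variance recursion
    h_t = omega + alpha * x_{t-1}^2 + beta * h_{t-1},
    initialised at the sample variance."""
    h = np.empty_like(x)
    h[0] = x.var()
    for t in range(1, len(x)):
        h[t] = omega + alpha * x[t - 1] ** 2 + beta * h[t - 1]
    return h

def cuc_style_fit(y, B, params):
    """Illustrative split: rotate T x d returns y by an orthogonal matrix B
    into candidate components, then filter each component's volatility
    separately -- one d-dimensional problem becomes d univariate ones."""
    x = y @ B
    H = np.column_stack([garch11_filter(x[:, i], *p)
                         for i, p in enumerate(params)])
    return x, H

rng = np.random.default_rng(0)
y = rng.standard_normal((500, 2))               # toy return series
theta = 0.3
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # an orthogonal rotation
x, H = cuc_style_fit(y, B, [(0.05, 0.1, 0.85), (0.05, 0.1, 0.85)])
```

In the actual methodology the rotation itself is estimated from the data; here it is fixed only to show how the volatility fitting decouples across components.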

- ... We call this class rotated ARCH (RARCH) models. The proposed transformation is related to work on the orthogonal GARCH (OGARCH) model of Alexander and Chibumba (1997) and Alexander (2001), and its extensions in van der Weide (2002), Lanne and Saikkonen (2007), Fan et al. (2008) and Boswijk and van der Weide (2011). The interest in these papers is to find orthogonal or unconditionally uncorrelated components in the raw returns which can then be modelled individually through univariate volatility models. In contrast, we utilize a related transformation enabling us to fit flexible multivariate models to the rotated returns using covariance targeting. (The model of Fan et al. (2008) differs in that the estimated components are also conditionally uncorrelated, or the least conditionally correlated if conditionally uncorrelated components do not exist.) We discuss the relation of our model to OGARCH models in Section 2.4. ......
In this subsection, we discuss how the RARCH class differs from the class of OGARCH models introduced in Alexander and Chibumba (1997), and further extended and refined in van der Weide (2002), Lanne and Saikkonen (2007), Fan et al. (2008) and Boswijk and van der Weide (2011) among others. Consider general linear transformations of the form $r_t = Z e_t$, where $Z$ is some invertible matrix. ... Article
- Mar 2014
- J ECONOMETRICS

This paper introduces a new class of multivariate volatility models which is easy to estimate using covariance targeting, even with rich dynamics. We call them rotated ARCH (RARCH) models. The basic structure is to rotate the returns and then to fit them using a BEKK-type parameterization of the time-varying covariance whose long-run covariance is the identity matrix. The extension to DCC-type parameterizations is given, introducing the rotated conditional correlation (RCC) model. Inference for these models is computationally attractive, and the asymptotics are standard. The techniques are illustrated using data on some DJIA stocks. - ... the conditional covariance matrix to be constant. All previous models use principal components analysis (PCA) to identify the set of underlying factors, which are unconditionally uncorrelated. However, to guarantee the diagonality of the conditional covariance matrix, an additional assumption is needed: the factors must be conditionally uncorrelated. Fan et al. (2008) show that this assumption could lead to serious errors in model fitting, and they propose to model multivariate volatilities using conditionally uncorrelated components (CUC-GARCH). In this paper we propose a new alternative for modelling multivariate volatilities as a linear combination of several univariate GARCH models. We introduce a ...... These models use a small number of factors compared to the number of observed financial time series, and transform the problem of estimating a multivariate GARCH model into a small number of univariate volatility models.
Furthermore, the GICA-GARCH model is related to the work proposed by Fan et al. (2008) that models multivariate volatilities through conditionally uncorrelated components. ... Article (full-text available)
- Jan 2009

We propose a new multivariate factor GARCH model, the GICA-GARCH model, where the data are assumed to be generated by a set of independent components (ICs). This model applies independent component analysis (ICA) to search for the conditionally heteroskedastic latent factors. We will use two ICA approaches to estimate the ICs. The first one estimates the components by maximizing their non-Gaussianity, and the second one exploits the temporal structure of the data. After estimating the ICs, we fit a univariate GARCH model to the volatility of each IC. Thus, the GICA-GARCH reduces the complexity of estimating a multivariate GARCH model by transforming it into a small number of univariate volatility models. We report some simulation experiments to show the ability of ICA to discover leading factors in a multivariate vector of financial data. An empirical application to the Madrid stock market will be presented, where we compare the forecasting accuracy of the GICA-GARCH model versus the orthogonal GARCH one. - ... The closely related model proposed by Vrontos et al. (2003) is also nested as a special case by imposing structure on the linear transformation. Recently, Fan et al. (2008) studied a general version of the model by relaxing the assumption of independent factors to conditionally uncorrelated factors. Note that one has considerable flexibility in specifying models for the factors. ...... $m$, i.e., the components of $y_t$ are conditionally uncorrelated. The original formulation of the GO-GARCH model involved the stronger assumption of independence of the components of $y_t$, but for the methods presented in the present paper, the conditional uncorrelatedness assumption (proposed by Fan et al. (2008)) suffices. The assumptions also imply that $y_t$ is a covariance-stationary process with mean 0 and unconditional variance $E(H_t) = I_m$. ...... assumed to follow a GARCH-type structure.
One possibility, as considered by van der Weide (2002), is to assume separate univariate GARCH(1,1) specifications $h_{it} = (1 - \alpha_i - \beta_i) + \alpha_i y_{i,t-1}^2 + \beta_i h_{i,t-1}$, with $\alpha_i, \beta_i \ge 0$, $\alpha_i + \beta_i < 1$, (5) which, under a suitable starting-value assumption on $h_{i0}$, implies independence of the components $y_{it}$. Fan et al. (2008) propose a more flexible structure, where $h_{it}$ may depend on $y_{j,t-k}$, $j \ne i$, $k \ge 1$. A simple extension of (5) is their extended GARCH(1,1) specification: ... Article
- Jul 2011
- J ECONOMETRICS

We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on eigenvectors of suitably defined sample autocorrelation matrices of squares and cross-products of returns. The method is numerically more attractive than likelihood-based estimation. Furthermore, the new method does not require strict assumptions on the volatility models of the factors, and therefore is less sensitive to model misspecification. We provide conditions for consistency of the estimator, and study its efficiency relative to maximum likelihood estimation using Monte Carlo simulations. The method is applied to European sector returns. - ... The GO-GARCH model was proposed by van der Weide (2002), as a generalization of the orthogonal GARCH model of Ding (1994) and Alexander (2001). Closely related models were proposed by Vrontos et al. (2003) and Fan et al. (2008). The starting point of these models is that an observed vector of returns can be expressed as a non-singular linear transformation of latent factors that are conditionally uncorrelated, and that have a GARCH-type conditional variance specification. ...... $m$, such that the components of $y_t$ are conditionally uncorrelated. The original formulation of the GO-GARCH model involved the stronger assumption of independence of the components of $y_t$, but for the methods presented in the present paper, the conditional uncorrelatedness assumption (proposed by Fan et al. (2008)) suffices. The assumptions also imply that $y_t$ is a covariance-stationary process with mean 0 and unconditional variance $E(H_t) = I_m$. ...... the conditional variance $\Sigma_t = Z H_t Z'$, and unconditional variance $\Sigma = Z Z'$. The conditional variances $h_{it}$ are assumed to follow a GARCH-type structure. One possibility, as considered by van der Weide (2002), is to assume separate univariate GARCH(1,1) specifications which, under a suitable starting-value assumption on $h_{i0}$, implies independence of the components $y_{it}$.
Fan et al. (2008) propose a more flexible structure, where $h_{it}$ may depend on $y_{j,t-k}$, $j \ne i$, $k \ge 1$. A simple extension of (3) is their extended GARCH(1,1) specification: ... Article

We propose two tests for the number of heteroskedastic factors in a generalized orthogonal GARCH (GO-GARCH) model. The first test is the Gaussian likelihood ratio test, the second is a reduced rank test applied to suitably defined autocovariance matrices. We characterize the asymptotic null distributions of the tests, and compare their finite sample size and power properties to an alternative test proposed by Lanne and Saikkonen (2007).
- ... They assume that the data conditional covariance matrix is generated by some underlying factors that follow univariate GARCH processes. Examples of this class of models are the orthogonal GARCH (O-GARCH) model (Alexander, 2001), the generalized orthogonal GARCH (GO-GARCH) model (van der Weide, 2002), the generalized orthogonal factor GARCH (GOF-GARCH) model (Lanne & Saikkonen, 2007), and the conditionally uncorrelated component GARCH (CUC-GARCH) model (Fan, Wang, & Yao, 2008). In addition, the full factor GARCH (FF-GARCH) model proposed by Vrontos, Dellaportas, and Politis (2003) and extended by Diamantopoulos and Vrontos (2010) to allow for multivariate Student-t distributions is also nested in the FACTOR-ARCH approach. ...... Engle's factor GARCH model assumes that t is a constant matrix that does not play any role in the model. The GICA-GARCH model is also related to several orthogonal models, such as the O-GARCH (Alexander, 2001), the GO-GARCH (van der Weide, 2002), the GOF-GARCH (Lanne & Saikkonen, 2007), and the CUC-GARCH (Fan et al., 2008). All of these models assume that the data are generated by a linear combination of several factors that follow univariate GARCH models. ...... Therefore, the GOF-GARCH model is also related to Engle's model, but, assuming that plays a specific role, it is the conditional covariance matrix of the homoskedastic components. Finally, the GICA-GARCH model is related to the work proposed by Fan et al. (2008) that models multivariate volatilities through conditionally uncorrelated components. Both the GICA-GARCH and CUC-GARCH models separate the estimation of the unobserved components from fitting a univariate GARCH model to each one of them, and they estimate the components by looking for an orthogonal matrix that is the solution of a non-linear optimization problem. ... Article
- Mar 2012
- INT J FORECASTING

We propose a new conditionally heteroskedastic factor model, the GICA-GARCH model, which combines independent component analysis (ICA) and multivariate GARCH (MGARCH) models. This model assumes that the data are generated by a set of underlying independent components (ICs) that capture the co-movements among the observations, which are assumed to be conditionally heteroskedastic. The GICA-GARCH model separates the estimation of the ICs from their fitting with a univariate ARMA-GARCH model. Here, we will use two ICA approaches to find the ICs: the first estimates the components, maximizing their non-Gaussianity, while the second exploits the temporal structure of the data. After estimating and identifying the common ICs, we fit a univariate GARCH model to each of them in order to estimate their univariate conditional variances. The GICA-GARCH model then provides a new framework for modelling the multivariate conditional heteroskedasticity in which we can explain and forecast the conditional covariances of the observations by modelling the univariate conditional variances of a few common ICs. We report some simulation experiments to show the ability of ICA to discover leading factors in a multivariate vector of financial data. Finally, we present an empirical application to the Madrid stock market, where we evaluate the forecasting performances of the GICA-GARCH and two additional factor GARCH models: the orthogonal GARCH and the conditionally uncorrelated components GARCH. - ... A restricted version of the model, where only a subset of the latent factors has a time-varying conditional variance, has been analysed recently by Lanne and Saikkonen (2007). The full factor model proposed by Vrontos et al. (2003), which imposes structure on the linear transformation, and the model of conditionally uncorrelated components (CUC) proposed by Fan et al. (2008), which relaxes the independence assumption to conditional uncorrelatedness, are also closely related.
There is also a vast collection of literature on latent factor models. ...... where $M = V' B V$. Boswijk and van der Weide (2006) derived conditions under which the probability limit of $\hat{M} = V' \hat{B} V$ is a diagonal matrix, which indicates that $V$ can be consistently estimated through PCA of $\hat{B}$. Fan et al. (2008) proposed an alternative method to estimate $V$ in the second step. Since conditional uncorrelatedness entails $E(r_{i,t} r_{j,t} \mid \mathcal{W}_{t-1}) = 0$, which is equivalent to ... Article
- Jan 2012
- J Time Anal

We propose a new estimation method for the factor loading matrix in modelling multivariate volatility processes. The key step of the method is based on weighted scatter estimators, and it does not involve optimizing any objective function. The method can therefore be easily applied to high-dimensional systems without running into computational problems. The estimation is proved to be consistent and the asymptotic distribution is derived. The method inherits robust properties in dealing with 'outlier' clusters generated by GARCH processes. Through both simulation and real-world case studies, we show that the method works well. - ... $\{y_t^2\}$ for $t = 1, \ldots, n$; see Fan, Wang and Yao (2008). We illustrate this idea by a real data example. ...... This is significant evidence to support the assertion that $\mathrm{Var}(x_t \mid \mathcal{F}_{t-1})$ is a diagonal matrix. For this example, the segmentation method leads to the conditionally uncorrelated components of Fan, Wang and Yao (2008). ... We extend principal component analysis (PCA) to second-order stationary vector time series in the sense that we seek a contemporaneous linear transformation for a $p$-variate time series such that the transformed series is segmented into several lower-dimensional subseries, and those subseries are uncorrelated with each other both contemporaneously and serially. Therefore those lower-dimensional series can be analysed separately as far as the linear dynamic structure is concerned. Technically it boils down to an eigenanalysis for a positive definite matrix. When $p$ is large, an additional step is required to perform a permutation in terms of either maximum cross-correlations or FDR based on multiple tests. The asymptotic theory is established for both fixed $p$ and diverging $p$ when the sample size $n$ tends to infinity.
Numerical experiments with both simulated and real data sets indicate that the proposed method is an effective initial step in analysing multiple time series data, which leads to substantial dimension reduction in modelling and forecasting high-dimensional linear dynamical structures. Unlike PCA for independent data, there is no guarantee that the required linear transformation exists. When it does not, the proposed method provides an approximate segmentation which leads to the advantages in, for example, forecasting for future values. The method can also be adapted to segment multiple volatility processes.
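The eigenanalysis at the heart of such a segmentation can be sketched as follows; this is an illustrative reading under assumptions (the particular positive semidefinite matrix built from lagged autocovariances and the name `segment_transform` are ours, not necessarily the authors' exact construction):

```python
import numpy as np

def segment_transform(y, lags=5):
    """Illustrative eigenanalysis step: pre-whiten the p-variate series,
    accumulate W = sum_k S_k S_k' over lag-k autocovariance matrices S_k
    (positive semidefinite by construction), and rotate by W's eigenvectors."""
    y = y - y.mean(axis=0)
    S0 = np.cov(y, rowvar=False)
    vals, vecs = np.linalg.eigh(S0)
    z = (y @ vecs) / np.sqrt(vals)        # whitened series, identity covariance
    n, p = z.shape
    W = np.zeros((p, p))
    for k in range(1, lags + 1):
        Sk = z[k:].T @ z[:-k] / (n - k)   # lag-k sample autocovariance
        W += Sk @ Sk.T
    _, A = np.linalg.eigh(W)              # eigenanalysis of a PSD matrix
    return z @ A                          # candidate segmented series

rng = np.random.default_rng(1)
y = rng.standard_normal((1000, 3))
s = segment_transform(y)
```

Because the transformation is an orthogonal rotation of whitened data, the output retains an identity contemporaneous covariance; the serial decorrelation across components is what the eigenvector ordering targets.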
- ... It should be mentioned that recently, an approach for modelling multivariate volatilities via conditionally uncorrelated components (CUCs) was proposed by Fan et al. (2005). The CUCs in their approach are actually the same as the conditionally uncorrelated factors in our CD-GARCH model. ...... Particularly, this performance index measures how close $W_1 W_2^{-1}$ is to the generalized permutation matrix. The smaller $P_{err}$ is, the closer $W_1 W_2^{-1}$ is to the generalized permutation matrix. Permutation and scaling of rows of $W_1$ and $W_2$ do not affect this measure. ...... Fan et al. (2005) gave the bootstrapping procedure for computing standard errors (or confidence sets) of the parameters in factor GARCH models. For DCC and factor-DCC models, the procedure is quite similar. In the bootstrap sampling procedure, we just need to obtain the standardized residuals as $H_t^{-1/2} \varepsilon_t$, to draw the standardized residuals by sampling ... Article
- Feb 2009
- QUANT FINANC

We report that, in the estimation of univariate GARCH or multivariate generalized orthogonal GARCH (GO-GARCH) models, maximizing the likelihood is equivalent to making the standardized residuals as independent as possible. Based on this, we propose three factor GARCH models in the framework of GO-GARCH: independent-factor GARCH exploits factors that are statistically as independent as possible; factors in best-factor GARCH have the largest autocorrelation in their squared values such that their volatilities could be forecast well by univariate GARCH; and factors in conditional-decorrelation GARCH are conditionally as uncorrelated as possible. A convenient two-step method for estimating these models is introduced. Since the extracted factors may still have weak conditional correlations, we further propose factor-DCC models as an extension to the above factor GARCH models with dynamic conditional correlation (DCC) modelling the remaining conditional correlations between factors. Experimental results for the Hong Kong stock market show that conditional-decorrelation GARCH and independent-factor GARCH have better generalization performance than the original GO-GARCH, and that conditional-decorrelation GARCH (among factor GARCH models) and its extension with DCC embedded (among factor-DCC models) behave best. - ... Several orthogonal factor models have also been proposed; these reduce the number of parameters by imposing a common dynamic structure on all elements of the volatility matrix. They include the K-factor GARCH model of Lin (1992), the full-factor GARCH model of Vrontos et al. (2003) , the orthogonal GARCH and the generalized orthogonal GARCH models of Alexander (2001) and van der Weide (2002), respectively, and the conditionally uncorrelated component (CUC) model of Fan et al. (2008). Principal component analysis (PCA) of spectral matrices is another approach to estimate the common factors within various factor models. ...Article
- Jul 2011
- J AM STAT ASSOC

We introduce dynamic orthogonal components (DOC) for multivariate time series and propose a procedure for estimating and testing the existence of DOCs for a given time series. We estimate the dynamic orthogonal components via a generalized decorrelation method that minimizes the linear and quadratic dependence across components and across time. Ljung-Box type statistics are then used to test the existence of dynamic orthogonal components. When DOCs exist, one can apply univariate analysis to build a model for each component. Those univariate models are then combined to obtain a multivariate model for the original time series. We demonstrate the usefulness of dynamic orthogonal components with two real examples and compare the proposed modeling method with other dimension reduction methods available in the literature, including principal component and independent component analyses. We also prove consistency and asymptotic normality of the proposed estimator under some regularity conditions. Some technical details are provided in online Supplementary Materials. - ... Alexander (2001) decorrelates the multivariate time series of returns via principal components (PC) and applies univariate GARCH modelling to each PC. Motivated by the fact that PCs are only unconditionally uncorrelated, Fan et al. (2008a) construct Conditionally Uncorrelated Components and model each one as univariate GARCH. Using a straightforward but elegant decomposition of the conditional covariance matrix into conditional standard deviations and correlations, Bollerslev (1990) models conditional correlations as constant and conditional variances as univariate GARCH processes, whereas Tse and Tsui (2002) and Engle (2002) introduce GARCH-type dynamics into the conditional correlation structure. ...Article
- Jan 2011

We propose a locally stationary linear model for the evolution of high-dimensional financial returns, where the time-varying volatility matrix is modelled as a piecewise constant function of time, with the number of jumps possibly increasing with the sample size. We show that the proposed model accurately reflects the typical stylised facts of multivariate returns. We propose a new wavelet-based technique for estimating the volatility matrix, which combines four essential ingredients: a Haar wavelet decomposition, variance stabilisation of the Haar coefficients via the Fisz transform prior to thresholding, a bias correction, and extra time-domain thresholding (soft or hard). Under the assumption of sparsity, we demonstrate the interval-wise consistency of the proposed estimators of the volatility matrix and its inverse in the operator norm, with rates which adapt to the features of the target matrix. We also propose a version of the estimators based on the polarisation identity, which permits a more precise derivation of the thresholds. Using the example of a stock index portfolio, we discuss practical selection of the parameters of our procedure. - ... To assess the fitness of the models, we use the Box-Pierce statistic test [11,15] to check the cross-product of the standardized residuals. Let $\hat{\varepsilon}_{ti}$ be the standardized residual for the $i$-th series, put ... Conference Paper
- Sep 2008

The time-varying correlations between multivariate financial time series have been intensively studied. For example, DCC and Block-DCC models have been proposed. In this paper, we present a novel Clustered DCC model which extends the previous models by incorporating clustering techniques. Instead of using the same parameters for all time series, a cluster structure is produced based on the autocorrelations of standardized residuals, in which clustered entries share the same dynamics. We compare and investigate different clustering methods using synthetic data. To verify the effectiveness of the whole proposed model, we conduct experiments on a set of Hong Kong stock daily returns, and the results outperform the original DCC GARCH model as well as the Block-DCC model. - ... Entries in boldface denote the best outcomes. Results are based on the same out-of-sample exercise as in Table 3. The related concept of conditionally uncorrelated components is discussed in Fan et al. (2008). An application of the popular iterative FastICA algorithm can be found in Broda and Paolella (2009), where the method is used in a portfolio allocation exercise to estimate the independent components (driven by generalized hyperbolic innovations) of the 30 constituents of the Dow Jones Industrial Average index. ... Article
- Jan 2012
- J ECONOMETRICS

A new model class for univariate asset returns is proposed which involves the use of mixtures of stable Paretian distributions, and readily lends itself to use in a multivariate context for portfolio selection. The model nests numerous ones currently in use, and is shown to outperform all its special cases. In particular, an extensive out-of-sample risk forecasting exercise for seven major FX and equity indices confirms the superiority of the general model compared to its special cases and other competitors. Estimation issues related to problems associated with mixture models are discussed, and a new, general method is proposed to successfully circumvent these. The model is straightforwardly extended to the multivariate setting by using an independent component analysis framework. The tractability of the relevant characteristic function then facilitates portfolio optimization using expected shortfall as the downside risk measure. - ... The diagonal BEKK model, where parameter matrices are assumed diagonal, provides some simplification over the full BEKK model. Several models have been proposed in the literature based on transformations of the returns (van der Weide 2002; Fan et al. 2008; Boswijk and van der Weide 2011). Noureldin et al. (2014) proposed the rotated BEKK (RBEKK) model, which applies the BEKK parametrisation with covariance targeting to rotated returns, aiming at higher-dimensional data. ... Article (full-text available)

The purpose of this paper is to develop Bayesian inference for matrix-variate dynamic linear models (MV-DLMs), in order to allow missing observations and variance intervention for any sub-vector or sub-matrix of the observation time series matrix. The established inverted Wishart distribution of the unknown covariance matrix of the observation innovations is criticized as restricted, because of its scalar degrees of freedom.
We propose generalizations of the inverted Wishart and matrix $t$ distributions, replacing the scalar degrees of freedom by a diagonal matrix of degrees of freedom. Some properties of the new distributions are discussed and the conjugacy of the matrix normal, inverted Wishart, and matrix $t$ distributions is extended to incorporate a matrix of degrees of freedom. The MV-DLM is then re-defined using the new distributions and modifications of the updating algorithm for missing observations are suggested. The problem of variance monitoring and variance intervention is discussed and it is proposed that the updating of the matrix of degrees of freedom offers the advantage to intervene selectively only to a number of variables.
- ... A convenient way to evaluate this measure is to plot $K(q)$ against $q$. Alternative measures of eigenvector agreement are available; for example, Fan et al. (2008) ... We consider the estimation of integrated covariance matrices of high-dimensional diffusion processes by using high-frequency data. We start by studying the most commonly used estimator, the realized covariance matrix (RCV). We show that in the high-dimensional case, when the dimension $p$ and the observation frequency $n$ grow at the same rate, the limiting empirical spectral distribution of the RCV depends on the covolatility processes not only through the underlying integrated covariance matrix $\Sigma$, but also on how the covolatility processes vary in time. In particular, for two high-dimensional diffusion processes with the same integrated covariance matrix, the empirical spectral distributions of their RCVs can be very different. Hence, in terms of making inference about the spectrum of the integrated covariance matrix, the RCV is in general *not* a good proxy to rely on in the high-dimensional case. We then propose an alternative estimator, the time-variation adjusted realized covariance matrix (TVARCV), for a class of diffusion processes. We show that the limiting empirical spectral distribution of our proposed estimator TVARCV depends solely on that of $\Sigma$ through a Marcenko-Pastur equation, and hence the TVARCV can be used to recover the empirical spectral distribution of $\Sigma$ by inverting the Marcenko-Pastur equation, which can then be used in further applications such as portfolio allocation and risk management.
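For context, the realized covariance matrix referenced above is simply the sum of outer products of intraday return vectors; a minimal sketch with toy simulated one-minute returns (the scale and series are illustrative only):

```python
import numpy as np

def realized_covariance(r):
    """RCV for one day: the sum over intraday intervals of r_t r_t',
    where r is an (n intervals) x (p assets) matrix of returns.
    Written as r'r, which equals sum_t r_t r_t'."""
    return r.T @ r

rng = np.random.default_rng(2)
r = 0.001 * rng.standard_normal((390, 3))   # e.g. 390 one-minute returns
rcv = realized_covariance(r)
```

By construction the RCV is symmetric and positive semidefinite; the point of the cited work is that its *spectrum* can nonetheless be a poor proxy for that of the integrated covariance matrix when $p$ and $n$ are comparable.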
- ... Such a specification greatly simplifies the parameter estimation and reduces the convergence difficulty of estimation algorithms found in most MGARCH models. Several alternative specifications have been proposed including, for example, Hyvarinen et al. (2001), van der Weide (2002) and Fan et al. (2008). However, most of these alternative specifications require more advanced development in computing power and high-dimensional optimization algorithms that are not commonly available. ... Article (full-text available)
- Jan 2014

Consistent robust estimation of the linkage matrix in the O-GARCH model is studied. Although robust estimation is a known method, it has rarely been applied in O-GARCH modeling due to the complicated distribution problems under the model specification. In this paper, we solve the distribution problems and use robust estimation, comparing it with the existing standard method. The results are very favorable in terms of both simulation and real data analysis. - ... Palandri [7] uses a sequential Cholesky decomposition to build a multivariate volatility model of 69 stock returns. Independent component models have also been used to simplify the modeling procedure, e.g., see [6]. ... Article
- Mar 2007

Correlations between asset returns are important in many financial applications. In recent years, multivariate volatility models have been used to describe the time-varying feature of the correlations. However, the curse of dimensionality quickly becomes an issue, as the number of correlations is $k(k-1)/2$ for $k$ assets. In this paper, we review some of the commonly used models for multivariate volatility and propose a simple approach that is parsimonious and satisfies the positive-definite constraints of the time-varying correlation matrix. Real examples are used to demonstrate the proposed model. - ... The modeling approach of Engle is motivated by pragmatic considerations, as the DCC is intended to scale well with the cross-sectional dimension of the time series. In related models, a factor structure is imposed on the volatilities and correlations; see, for example, Tsay (2005) and Fan, Wang, and Yao (2008). ... Article
- Mar 2010
- J BUS ECON STAT

We propose a new class of observation-driven time-varying parameter models for dynamic volatilities and correlations to handle time series from heavy-tailed distributions. The model adopts generalized autoregressive score dynamics to obtain a time-varying covariance matrix of the multivariate Student's t distribution. The key novelty of our proposed model concerns the weighting of lagged squared innovations for the estimation of future correlations and volatilities. When we account for heavy tails of distributions, we obtain estimates that are more robust to large innovations. The model also admits a representation as a time-varying heavy-tailed copula, which is particularly useful if the interest focuses on dependence structures. We provide an empirical illustration for a panel of daily global equity returns. - ... A new type of multivariate GARCH model was proposed by van der Weide (2002), which parameterizes large covariance matrices while leaving enough degrees of freedom to facilitate parameter estimation. Fan et al. (2008) also proposed a multivariate volatility process based on newly defined conditionally uncorrelated components (CUCs), which give a parsimonious representation for matrix-valued processes. The method decomposes the high-dimensional problem into several lower-dimensional ones. ...
- ... This drawback has been overcome by second-generation models, yet at the cost of imposing either parameter restrictions on the BEKK model, as for the case of the scalar BEKK model and the exponentially weighted moving average model introduced by J.P. Morgan (1996), or on the conditional correlation matrix, assumed time-invariant in the constant conditional correlation (CCC) model of Bollerslev (1990). Alternatively, restrictions have been imposed through factor structures, as in Engle and Gonzalez-Rivera (1991) and the orthogonal models of Alexander (2002), van der Weide (2002), Vrontos et al. (2003) and Fan et al. (2008). On the other hand, a different approach has been pursued by the most recent third generation of multivariate GARCH models, i.e., the dynamic conditional correlation models, grounded on a two-step estimation procedure involving the estimation of univariate GARCH models for the conditional variances in the first step and then the estimation of the conditional covariances in the second step. ...The paper introduces a new simple semiparametric estimator of the conditional variance-covariance and correlation matrix (SP-DCC). While sharing a similar sequential approach with existing dynamic conditional correlation (DCC) methods, SP-DCC has the advantage of not requiring the direct parameterization of the conditional covariance or correlation processes, therefore also avoiding any assumption on their long-run target. In the proposed framework, conditional variances are estimated by univariate GARCH models, for actual and suitably transformed series, in the first step; the latter are then nonlinearly combined in the second step, according to basic properties of the covariance and correlation operator, to yield nonparametric estimates of the various conditional covariances and correlations.
Moreover, in contrast to available DCC methods, SP-DCC allows straightforward estimation also in the non-simultaneous case, i.e., for the estimation of conditional cross-covariances and correlations displaced at any time horizon of interest. Finally, a simple ex-post procedure grounded on nonlinear shrinkage is proposed to ensure well-behaved conditional covariance and correlation matrices. Due to its sequential implementation and scant computational burden, SP-DCC is very simple to apply and suitable for the modeling of vast sets of conditionally heteroskedastic time series.
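The second-step covariance construction from transformed series rests on the polarization identity Cov(x, y) = [Var(x + y) - Var(x - y)]/4. A minimal sketch, not the SP-DCC implementation: an EWMA recursion stands in for the first-step univariate GARCH fits, and the names (`ewma_var`, `polarization_cov`) and the smoothing constant are illustrative.

```python
import numpy as np

def ewma_var(x, lam=0.94):
    """Exponentially weighted moving variance (RiskMetrics-style),
    standing in here for a fitted univariate GARCH model."""
    h = np.empty_like(x, dtype=float)
    h[0] = np.var(x)
    for t in range(1, len(x)):
        h[t] = lam * h[t - 1] + (1 - lam) * x[t - 1] ** 2
    return h

def polarization_cov(x, y, lam=0.94):
    """Conditional covariance via the polarization identity:
    Cov(x, y) = [Var(x + y) - Var(x - y)] / 4."""
    return (ewma_var(x + y, lam) - ewma_var(x - y, lam)) / 4.0

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = x.copy()                                  # perfectly dependent pair
cov = polarization_cov(x, y)
corr = cov / np.sqrt(ewma_var(x) * ewma_var(y))
print(round(corr[-1], 6))                     # -> 1.0 for y = x
```

With y = x the implied conditional correlation is identically 1, which serves as a quick sanity check on the identity; for general pairs the ratio is only guaranteed to lie in [-1, 1] after the kind of ex-post regularization the paper proposes.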
- ... Motivated by such observations, joint modelling of several financial returns has attracted considerable attention in the literature, and many models have been proposed for multivariate GARCH processes including the vectorized multivariate GARCH models (Bollerslev et al., 1988), the Baba-Engle-Kraft-Kroner (BEKK) model (Engle and Kroner, 1995), the constant conditional correlation (CCC) model (Bollerslev, 1990), the dynamic conditional correlation (DCC) model (Engle, 2002), the generalized orthogonal GARCH models (van der Weide, 2002), the full-factor multivariate GARCH models (Vrontos et al., 2003) and the conditionally uncorrelated components-based multivariate volatility processes (Fan et al., 2008); for a survey of recent approaches to multivariate GARCH modelling and inference, see Bauwens et al. (2006). The assumption that the underlying dynamics remain unchanged is rather restrictive considering that the fundamentals driving an economy, or the asset markets in particular, exhibit sudden changes or regime switches. These change-points (a.k.a. ...Article
- Jun 2017

An assumption in modelling financial risk is that the underlying asset returns are stationary. However, there is now strong evidence that multivariate financial time series entail changes not only in their within-series dependence structure, but also in the correlations among them. Failing to address these structural changes is more likely to result in a crude approximation of the risk measures adopted and, thereby, to affect the monitoring and managing of capital adequacy of financial institutions. For this reason, we propose a method for consistent detection of multiple change-points in a (possibly high) $N$-dimensional GARCH panel data set, where both the individual GARCH processes and their correlations are allowed to change. The method consists of two stages: i) the transformation of the $N$-dimensional GARCH processes into $N(N+1)/2$ `mean plus noise' series, and ii) the application of the Double CUSUM Binary Segmentation algorithm for simultaneous segmentation of the $N(N+1)/2$-dimensional transformation of the input data. We show the consistency of the proposed methodology in estimating both the total number and the locations of the change-points. Its good performance is demonstrated through an extensive simulation study and an application to a real dataset, where we show the importance of identifying the change-points prior to calculating Value-at-Risk under a stressed scenario. - ... Furthermore, based on the idea that co-movements in the market can be driven by a few components, factor models appear in the economic and financial literature as an alternative way to achieve dimension reduction and to tackle the curse of dimensionality. See, for instance, Fan et al. (2008), Pan et al. (2010), Matteson and Tsay (2011), García-Ferrer et al. (2012), Santos and Moura (2014), Matilainen et al. (2015) and Barigozzi and Hallin (2015) for some references. ...Technical ReportFull-text available
- Mar 2018

In this paper, we analyse the recent principal volatility components analysis procedure. The procedure overcomes several difficulties in modelling and forecasting the conditional covariance matrix in large dimensions arising from the curse of dimensionality. We show that outliers have a devastating effect on the construction of the principal volatility components and on the forecast of the conditional covariance matrix and consequently in economic and financial applications based on this forecast. We propose a robust procedure and analyse its finite sample properties by means of Monte Carlo experiments and also illustrate it using empirical data. The robust procedure outperforms the classical method in simulated and empirical data. - ... Furthermore, based on the idea that co-movements in the market can be driven by a few components, factor models appear in the economic and financial literature as an alternative way to achieve dimension reduction and to tackle the curse of dimensionality. See, for instance Fan et al. (2008), Pan et al. (2010), Matteson and Tsay (2011), García-Ferrer et al. (2012), Santos and Moura (2014), Matilainen et al. (2015) and Barigozzi and Hallin (2015) for some references. ...
- ... The maximization part of the third step is carried out by suitably parameterizing the joint likelihood in (19) as ...PreprintFull-text available
- Jun 2018

This paper proposes a three-step estimation strategy for dynamic conditional correlation models. In the first step, conditional variances for individual and aggregate series are estimated by means of QML equation by equation. In the second step, conditional covariances are estimated by means of the polarization identity, and conditional correlations are estimated by their usual normalization. In the third step, the two-step conditional covariance and correlation matrices are regularized by means of a new non-linear shrinkage procedure and used as starting values for the maximization of the joint likelihood of the model. This yields the final, third-step smoothed estimate of the conditional covariance and correlation matrices. Due to its scant computational burden, the proposed strategy allows the estimation of high-dimensional conditional covariance and correlation matrices. An application to the global minimum variance portfolio is also provided, confirming that SP-DCC is a simple and viable alternative to existing DCC models. - ... These authors provide various multivariate GARCH models for the conditional correlations across the financial markets. On the other hand, Fan et al. (2008) assumed that the multivariate financial time series is a linear combination of a set of conditionally uncorrelated components, which overcomes several drawbacks of the early models. All these studies provide a reason for testing conditional uncorrelatedness, especially since the acceptance of conditional uncorrelatedness leads to analyzing the multivariate GARCH model simply by estimating univariate GARCH models. ...We propose a nonparametric test for conditional uncorrelatedness in multiple-equation models such as seemingly unrelated regressions (SURs), multivariate volatility models, and vector autoregressions (VARs). Under the null hypothesis of conditional uncorrelatedness, the test statistic converges to the standard normal distribution asymptotically.
We also study the local power property of the test. Simulations show that the test behaves quite well in finite samples.
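The flavour of such a test can be conveyed with a crude moment-based stand-in (this is not the authors' nonparametric statistic; the function name and the simulated data are illustrative): under the null, the products of the standardized residuals form a zero-mean sequence, so their studentized sample mean should be roughly standard normal.

```python
import numpy as np

def cond_uncorr_tstat(z1, z2):
    """Crude moment check for (conditional) uncorrelatedness: under H0
    the products u_t = z1_t * z2_t have mean zero, so the studentized
    sample mean is approximately standard normal."""
    u = z1 * z2
    return np.sqrt(len(u)) * u.mean() / u.std(ddof=1)

rng = np.random.default_rng(1)
z1, z2 = rng.standard_normal(2000), rng.standard_normal(2000)
print(round(cond_uncorr_tstat(z1, z2), 2))                   # independent pair
print(round(cond_uncorr_tstat(z1, 0.6 * z1 + 0.8 * z2), 2))  # correlated pair
```

The first statistic stays within ordinary normal quantiles, while the second is far in the tail; the paper's statistic additionally conditions on past information rather than testing unconditional uncorrelatedness.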
- ... Furthermore, based on the idea that co-movements in the market can be driven by a few components, factor models appear in the economic and financial literature as an alternative way to achieve dimension reduction and to tackle the curse of dimensionality. See, for instance, Fan et al. (2008), Pan et al. (2010), Matteson and Tsay (2011), García-Ferrer et al. (2012), Santos and Moura (2014), Matilainen et al. (2015) and Barigozzi and Hallin (2015) for some references. ...
- ... The diagonal BEKK model, where the parameter matrices are assumed diagonal, provides some simplification over the full BEKK model. Several models have been proposed in the literature based on transformations of the returns (van der Weide 2002; Fan et al. 2008; Boswijk and van der Weide 2011). Noureldin et al. (2014) proposed the rotated BEKK (RBEKK) model, which applies the BEKK parametrisation with covariance targeting to rotated returns, aiming at higher-dimensional data. ...Bayesian inference is proposed for volatility models, targeting financial returns, which exhibit high kurtosis and slight skewness. Rotated GARCH models are considered which can accommodate the multivariate standard normal, Student t and generalized error distributions and their skewed versions. Inference on the model parameters and prediction of future volatilities and cross-correlations are addressed by Markov chain Monte Carlo inference. Bivariate simulated data are used to assess the performance of the method, while two sets of real data are used for illustration: the first is a trivariate data set of financial stock indices and the second is a higher-dimensional data set for which a portfolio allocation is performed.
- ... In the application, we set m = N. Other specifications belonging to this group are the generalized orthogonal GARCH model by van der Weide (2002), the full factor GARCH model by Vrontos, Dellaportas, and Politis (2003) and the conditionally uncorrelated components GARCH by Fan, Wang, and Yao (2008). However, these models are computationally challenging when the dimension is large. ...Article
- Jan 2010
- J APPL ECONOMET

This paper addresses the question of the selection of multivariate GARCH models in terms of variance matrix forecasting accuracy, with a particular focus on relatively large-scale problems. We consider 10 assets from NYSE and NASDAQ and compare 125 model-based one-step-ahead conditional variance forecasts over a period of 10 years using the model confidence set (MCS) and the Superior Predictive Ability (SPA) tests. Model performances are evaluated using four statistical loss functions which account for different types and degrees of asymmetry with respect to over/under predictions. When considering the full sample, MCS results are strongly driven by short periods of high market instability during which multivariate GARCH models appear to be inaccurate. Over relatively unstable periods, i.e., the dot-com bubble, the set of superior models is composed of more sophisticated specifications such as orthogonal and dynamic conditional correlation (DCC), both with leverage effect in the conditional variances. However, unlike the DCC models, our results show that the orthogonal specifications tend to underestimate the conditional variance. Over calm periods, a simple assumption like constant conditional correlation and symmetry in the conditional variances cannot be rejected. Finally, during the 2007-2008 financial crisis, accounting for non-stationarity in the conditional variance process generates superior forecasts. The SPA test suggests that, independently from the period, the best models do not provide significantly better forecasts than the DCC model of Engle (2002) with leverage in the conditional variances of the returns. - Article
- Jan 2010

This thesis develops methodology and asymptotic analysis for sparse estimators of the covariance matrix and the inverse covariance (concentration) matrix in high-dimensional settings. We propose estimators that are invariant to the ordering of the variables and estimators that exploit variable ordering. For the estimators that are invariant to the ordering of the variables, estimation is based on both lasso-type penalized normal likelihood and a new proposed class of generalized thresholding operators which combine thresholding with shrinkage applied to the entries of the sample covariance matrix. For both approaches we obtain explicit convergence rates in matrix norms that show the trade-off between the sparsity of the true model, dimension, and the sample size. In addition, we show that the generalized thresholding approach estimates true zeros as zeros with probability tending to 1, and is sign consistent for non-zero elements. We also derive a fast iterative algorithm for computing the penalized likelihood estimator. To exploit a natural ordering of the variables to estimate the covariance matrix, we propose a new regression interpretation of the Cholesky factor of the covariance matrix, as opposed to the well known regression interpretation of the Cholesky factor of the inverse covariance, which leads to a new class of regularized covariance estimators suitable for high-dimensional problems. We also establish theoretical connections between banding Cholesky factors of the covariance matrix and its inverse and constrained maximum likelihood estimation under the banding constraint. These covariance estimators are compared to other estimators on simulated data and on real data examples from gene microarray experiments and remote sensing. Lastly, we propose a procedure for constructing a sparse estimator of a multivariate regression coefficient matrix that accounts for correlation of the response variables. 
An efficient optimization algorithm and a fast approximation are developed and we show that the proposed method outperforms relevant competitors when the responses are highly correlated. We also apply the new method to a finance example on predicting asset returns. - Article
- Jan 2011
- COMPUT STAT DATA AN

The estimation of multivariate GARCH time series models is a difficult task, mainly due to the significant overparameterization exhibited by the problem and usually referred to as the "curse of dimensionality". For example, in the case of the VEC family, the number of parameters involved in the model grows as a polynomial of order four in the dimensionality of the problem. Moreover, these parameters are subject to convoluted nonlinear constraints necessary to ensure, for instance, the existence of stationary solutions and the positive semidefinite character of the conditional covariance matrices used in the model design. So far, this problem has been addressed in the literature only in low-dimensional cases with strong parsimony constraints. In this paper we propose a general formulation of the estimation problem in any dimension and develop a Bregman-proximal trust-region method for its solution. The Bregman-proximal approach allows us to handle the constraints in a very efficient and natural way by staying in the primal space, and the trust-region mechanism stabilizes and speeds up the scheme. Preliminary computational experiments are presented and confirm the very good performance of the proposed approach.
- Article
- Mar 2009
- ECONOMET J

In order to describe the co-movements in both conditional mean and conditional variance of high dimensional non-stationary time series by dimension reduction, we introduce the conditional heteroscedasticity with factor structure to the error correction model (ECM). The new model is called the error correction--volatility factor model (EC--VF). Some specification and estimation approaches are developed. In particular, the determination of the number of factors is discussed. Our setting is general in the sense that we impose neither i.i.d. assumption on idiosyncratic components in the factor structure nor independence between factors and idiosyncratic errors. We illustrate the proposed approach with a Monte Carlo simulation and a real data example. Copyright The Author(s). Journal compilation Royal Economic Society 2008 - Article
- Oct 2019

Designing modern seed processing machines requires a study of the regularities of technological processes, dynamics and conditions of operation. To determine the control parameters and their optimum values, it is necessary to use high-precision mathematical models of the technologies of processing the vegetable and melon seed mass. A method has been suggested for modelling the technology of processing the seed mass of vegetables and melons based on a nonlinear canonical decomposition of a random sequence of changes in the technological process parameters. The method can be used to determine the optimum values of the design and operation parameters of seed separating machines, and it allows obtaining mathematical models of technological processes for an arbitrary number of input parameters used to evaluate the characteristics of seeds, the degree of nonlinearity, and the peculiarities of stochastic connections. The method consists of the following stages: collection of statistical data; calculation of the parameters of the mathematical model; evaluation of the values of the parameters; calculation of the input parameters. The mathematical model of the processing technology does not impose any restrictions on the properties of the random sequence of input and output parameters (linearity, stationarity, monotonicity, scalarity, etc.). It allows taking into account the features of seed processing and, as a result, achieving the maximum quality of separation of vegetable and melon seeds. The method has been validated on an experimental installation of a separating machine, with the statistical data for calculating the model parameters obtained in the course of processing melons and cucumbers. The results of the experimental studies have confirmed the high accuracy of the suggested method, and the application of the suggested models reduces the average error in the determination of seed losses. - ArticleFull-text available
- Oct 2005

Volatility plays an important role in controlling and forecasting risks in various financial operations. For a univariate return series, volatility is often represented in terms of conditional variances or conditional standard deviations. Many statistical models have been developed for modelling univariate conditional variance processes. While univariate descriptions are useful and important, problems of risk assessment, asset allocation, hedging in futures markets and options pricing require a multivariate framework, since high volatilities are often observed in the same time periods across different assets. Statistically, this boils down to modelling the time-varying conditional variance and covariance matrices of a vector-valued time series. Section 2 below lists some existing statistical models for multivariate volatility processes. We refer to Bauwens, Laurent and Rombouts (2005) for a more detailed survey on this topic. We propose a new and ad hoc method with a numerical illustration in Section 3. We conclude in Section 4 with a brief summary. - Recently a flexible class of semiparametric copula-based multivariate GARCH models has been proposed to quantify multivariate risks, in which univariate GARCH models are used to capture the dynamics of individual financial series, and parametric copulas are used to model the contemporaneous dependence among GARCH residuals with nonparametric marginals. In this paper we address two questions regarding statistical inference for this class of models. (1) Under what mild sufficient conditions is the asymptotic distribution of the pseudo maximum likelihood estimator (MLE) of the residual copula parameter of Chen and Fan (2006a) justified? (2) How do we test the correct specification of a parametric copula for the GARCH residuals? In order to answer both questions rigorously, we establish a new weighted approximation for the empirical distributions of the GARCH residuals, which is of interest in its own right.
Simulation studies and data examples are provided to examine the finite sample performance of the pseudo MLE of the residual copula parameter and the proposed goodness-of-fit test.
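Several of the entries above treat univariate GARCH fits as the building blocks of a multivariate model (one fit per component or residual series). A minimal GARCH(1,1) conditional-variance filter can be sketched as follows; the function name and parameter values are purely illustrative:

```python
import numpy as np

def garch11_filter(r, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    h = np.empty_like(r, dtype=float)
    h[0] = omega / (1 - alpha - beta)      # unconditional variance as start-up
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(2)
omega, alpha, beta = 0.05, 0.08, 0.90
h, r = [omega / (1 - alpha - beta)], []
for _ in range(3000):                      # simulate a path, then re-filter it
    r.append(np.sqrt(h[-1]) * rng.standard_normal())
    h.append(omega + alpha * r[-1] ** 2 + beta * h[-1])
r = np.asarray(r)
h_hat = garch11_filter(r, omega, alpha, beta)
print(np.allclose(h_hat, h[:-1]))          # -> True: filter recovers the path
```

In the CUC framework of the article under discussion, a recursion of exactly this univariate kind is fitted separately to each estimated component, which is what splits the high-dimensional optimization into low-dimensional subproblems.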
- ArticleFull-text available
- Jan 2009

Glossary Definition of the Subject Introduction Properties of the GARCH(1,1) Model Estimation and Inference Testing for ARCH Asymmetry, Long Memory, GARCH-in-Mean Non- and Semi-parametric Models Multivariate GARCH Models Stochastic Volatility Aggregation Future Directions Bibliography - Article With the growth in the requirements of the risk management industry and the complexity of instruments that are used in finance, there has been a significant growth in the forms of multivariate GARCH models. These models now allow a significant number of dimensions to be considered rather than the relatively small number that used to be the case. This paper examines three multivariate GARCH models for modelling conditional correlation: the Dynamic Conditional Correlation GARCH model of Engle [2002], the Generalized Orthogonal GARCH model of Broda and Paolella [2008] and the Generalized Orthogonal GARCH model of Boswijk and van der Weide [2009]. Data from the Polish Stock Exchange are considered for ten companies. The results present high volatility in conditional correlation for both GO-GARCH models, whereas DCC seems more stable.
- Consistently determining the number of factors plays an important role in factor modelling for the volatility of multivariate time series. In this paper, the modelling is extended to handle the nonstationary time series scenario with conditional heteroscedasticity. A ridge-type ratio estimate and a BIC-type estimate are then proposed and proved to be consistent. Their finite sample performance is examined through simulations and the analysis of two data sets. An observation from the numerical studies is that, unlike the cases with stationary and homoscedastic sequences in the literature, the dimensionality blessing no longer holds for the ratio-based estimates, but still does for the BIC-type estimate.
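The ratio idea behind the first estimate can be illustrated in its simplest form. This sketch omits the ridge term of the proposed estimate, and the eigenvalue spectrum is made up: the estimated number of factors is the index at which the ratio of consecutive eigenvalues peaks.

```python
import numpy as np

def ratio_estimate(eigvals, kmax):
    """Plain eigenvalue-ratio estimate of the number of factors:
    argmax over i <= kmax of lambda_i / lambda_{i+1}."""
    lam = np.sort(eigvals)[::-1][: kmax + 1]   # top kmax+1 eigenvalues
    ratios = lam[:-1] / lam[1:]
    return int(np.argmax(ratios)) + 1          # 1-based factor count

# Toy spectrum: three dominant eigenvalues, then noise-level ones.
eigvals = np.array([9.0, 6.5, 4.0, 0.3, 0.25, 0.2, 0.18])
print(ratio_estimate(eigvals, kmax=5))         # -> 3
```

The ridge-type modification in the paper adds a small term to each eigenvalue before taking ratios, which stabilizes the estimate when trailing eigenvalues are near zero.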
- Article
- Oct 2015
- J BUS ECON STAT

Volatility, represented in the form of conditional heteroscedasticity, plays an important role in controlling and forecasting risks in various financial operations including asset pricing, portfolio allocation, and hedging futures. However, modeling and forecasting multi-dimensional conditional heteroscedasticity are technically challenging. As the volatilities of many financial assets are often driven by a few common and latent factors, we propose in this paper a dimension reduction method to model a multivariate volatility process and to estimate a lower-dimensional space, to be called the volatility space, within which the dynamics of the multivariate volatility process is confined. The new method is simple to use, as technically it boils down to an eigenanalysis for a non-negative definite matrix. Hence it is applicable to the cases when the number of assets concerned is in the order of thousands (using an ordinary PC/laptop). On the other hand, the model has the capability to cater for complex conditional heteroscedasticity behavior for multi-dimensional processes. Some asymptotic properties for the new method are established. We further illustrate the new method using both simulated and real data examples. - ArticleFull-text available
- Jan 2007

Forecasting temporal dependence in second-order moments of returns is a relevant problem in many contexts of financial econometrics. It is commonly accepted that financial volatilities move together over time across assets and markets. For this reason, in this paper we propose an approach based on the analysis of independent temporal components to model the multivariate volatility. We have assumed that the underlying factors or sources of the model are AR-APARCH processes with errors following the Meixner distribution. An application with two sets of real data shows the use of the model in the analysis of parallel financial series. - Preprint
- Jun 2018

In economic and business data, the covariance or correlation matrix of a random vector often fluctuates with time and exhibits seasonality. Widely-used approaches for estimating and forecasting the correlation matrix include multivariate GARCH and other models, which treat the correlation coefficients as endogenous variables and lack economic interpretation. Application of these models is often hindered by estimation and inference difficulties, and they are contingent on some impractical assumptions. In this paper we take a simple approach to modeling and forecasting correlation matrices that assumes the correlation is driven by some common factors. The correlation coefficients in our model are exogenous, which simplifies estimation and interpretation. Our model is applied to natural gas and power prices from 2008-2012 in Boston, MA, where the common factors include daily humidity and temperature. By simulations it is shown that this model appropriately captures the pattern of correlation between natural gas and power prices. Moreover, this model is able to simulate price spikes and mean-reversions. - ChapterFull-text available
- Feb 2019

The autoregressive conditional heteroscedasticity model and the generalized autoregressive conditional heteroscedasticity (GARCH) model are used to study the heteroscedasticity problem. These models can be applied to both univariate autoregressive integrated moving average and vector autoregressive moving average models. Great challenges in these extensions include the representations of the models and their estimation. This chapter introduces some useful representations of multivariate GARCH models and the estimation of these models. To simplify the interpretation and representation, one can also use factor models where the volatility process is assumed to be determined only by a small number of underlying common factors. The linkage matrix and the independent components are obtained by performing a principal component analysis on the series through the sample covariance matrix. There are many multivariate GARCH models. Because of its generality and feasibility, the chapter focuses on the estimation of the generalized orthogonal (GO)‐GARCH model. - Article
- Apr 2012
- QUANT FINANC

The complexity of multivariate time series models increases dramatically when the number of component series increases. This is a phenomenon observed in both low- and high-frequency financial data analysis. In this paper, we develop a regularization framework for multivariate time series models based on the penalized likelihood method. We show that, under certain conditions, the regularized estimators are sparse-consistent and satisfy an asymptotic normality. This framework provides a theoretical foundation for addressing the curse of dimensionality in multivariate econometric models. We illustrate the utility of our method by developing a sparse version of the full-factor multivariate GARCH model. We successfully apply this model to simulated data as well as the minute returns of the Dow Jones industrial average component stocks.
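The penalized-likelihood machinery above typically zeroes out small coefficients through the l1 proximal (soft-thresholding) operator. A minimal sketch; the function name and the coefficient matrix are illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(A, lam):
    """Proximal operator of the l1 penalty: shrink every entry towards
    zero by lam, and set entries with magnitude below lam exactly to zero."""
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

A = np.array([[0.90, 0.03], [-0.02, -0.70]])
print(soft_threshold(A, 0.05))   # small off-diagonal entries are zeroed out
```

Applied inside each optimization step, this operator is what produces the sparse parameter matrices — here, a sparse version of the full-factor GARCH loading structure — while the large entries are only mildly shrunk.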

- Article
- May 2000

This paper presents theoretical results in the formulation and estimation of multivariate generalized ARCH models within simultaneous equations systems. A new parameterization of the multivariate ARCH process is proposed and equivalence relations are discussed for the various ARCH parameterizations. Constraints sufficient to guarantee the positive definiteness of the conditional covariance matrices are developed, and necessary and sufficient conditions for covariance stationarity are presented. Identification and maximum likelihood estimation of the parameters in the simultaneous equations context are also covered. * This paper began as a synthesis of at least three UCSD Ph.D. dissertations on various aspects of multivariate ARCH modelling, by Yoshi Baba, Dennis Kraft and Ken Kroner. In fact, an early version of this paper was written by Baba, Engle, Kraft and Kroner, which led to the acronym (BEKK) used in this paper for the new parameterization presented. In the interests of continui... - Chapter
- May 2002

In the preceding chapters, the authors introduced several different estimation principles and algorithms for independent component analysis (ICA). In this chapter, they provide an overview of these methods. First, they show that all these estimation principles are intimately connected, and the main choices are between cumulant-based vs. negentropy/likelihood-based estimation methods, and between one-unit vs. multi-unit methods. They compare the algorithms experimentally, and show that the main choice here is between on-line (adaptive) gradient algorithms vs. fast batch fixed-point algorithms. At the end of the chapter, they provide a short summary of basic ICA estimation. - Book
- Jan 2003

Introduction.- Stationary Time Series.- Smoothing in Time Series.- ARMA Modeling and Forecasting.- Parametric Nonlinear Time Series Models.- Nonparametric Models.- Hypothesis Testing.- Continuous Time Models in Finance.- Nonlinear Prediction. - Article
- Jan 2000

We propose a new method to predict time series using the technique of Independent Component Analysis (ICA) as a preprocessing tool. If certain assumptions hold, we show that ICA can be used to transform a set of time series into another set that is easier to predict. These assumptions are not unrealistic for many real-world time series, including financial time series. We have tested this approach on two sets of data: artificial toy data and financial time series. Simulations with a set of foreign exchange rate time series suggest that these can be predicted more accurately using the ICA preprocessing.
- Article
- Dec 1981
- J AM STAT ASSOC

An approach to the modeling and analysis of multiple time series is proposed. Properties of a class of vector autoregressive moving average models are discussed. Modeling procedures consisting of tentative specification, estimation, and diagnostic checking are outlined and illustrated by three real examples.
- Article
- Jul 1983
- J Time Anal

Squared-residual autocorrelations have been found useful in detecting non-linear types of statistical dependence in the residuals of fitted autoregressive-moving average (ARMA) models [cf. C. W. J. Granger and A. P. Andersen, An introduction to bilinear time series models. (1978; Zbl 0379.62074)]. In this note it is shown that the normalized squared-residual autocorrelations are asymptotically unit multivariate normal. The results of a simulation experiment confirming the small-sample validity of the proposed tests are reported.
- Book
- Jun 2001

A comprehensive introduction to ICA for students and practitioners. Independent Component Analysis (ICA) is one of the most exciting new topics in fields such as neural networks, advanced statistics, and signal processing. This is the first book to provide a comprehensive introduction to this new technique complete with the fundamental mathematical background needed to understand and utilize it. It offers a general overview of the basics of ICA, important solutions and algorithms, and in-depth coverage of new applications in image processing, telecommunications, audio signal processing, and more. Independent Component Analysis is divided into four sections that cover: general mathematical concepts utilized in the book; the basic ICA model and its solution; various extensions of the basic ICA model; and real-world applications for ICA models. Authors Hyvarinen, Karhunen, and Oja are well known for their contributions to the development of ICA and here cover all the relevant theory, new algorithms, and applications in various fields. Researchers, students, and practitioners from a variety of disciplines will find this accessible volume both helpful and informative.
- Book
- Jan 1996

This book provides an account of weak convergence theory and empirical processes and their applications to a wide variety of problems in statistics. The first part of the book presents a thorough account of stochastic convergence in its various forms. Part 2 brings together the theory of empirical processes in a form accessible to statisticians and probabilists. In Part 3, the authors cover a range of topics which demonstrate the applicability of the theory to important questions such as: limit theorems in asymptotic statistics; measures of goodness of fit; the bootstrap; and semiparametric estimation. Most of the sections conclude with "problems and complements". Some of these are exercises to help the reader's understanding of the material whereas others are intended to supplement the text.
- Article
- Apr 1998
- STAT SINICA

In the present paper we examine the strict stationarity and the existence of higher-order moments for the GARCH(p,q) model under general and tractable assumptions.
- ArticleFull-text available
- Oct 2005

Volatility plays an important role in controlling and forecasting risks in various financial operations. For a univariate return series, volatility is often represented in terms of conditional variances or conditional standard deviations. Many statistical models have been developed for modelling univariate conditional variance processes. While univariate descriptions are useful and important, problems of risk assessment, asset allocation, hedging in futures markets and options pricing require a multivariate framework, since high volatilities are often observed in the same time periods across different assets. Statistically this boils down to modelling time-varying conditional variance and covariance matrices of a vector-valued time series. Section 2 below lists some existing statistical models for multivariate volatility processes. We refer to Bauwens, Laurent and Rombouts (2005) for a more detailed survey on this topic. We propose a new and ad hoc method with a numerical illustration in Section 3. We conclude in Section 4 with a brief summary.
- Article
- May 2001

A new representation of the diagonal Vech model is given using the Hadamard product. Sufficient conditions on parameter matrices are provided to ensure the positive definiteness of covariance matrices from the new representation. Based on this, some new and simple models are discussed. A set of diagnostic tests for multivariate ARCH models is proposed. The tests are able to detect various model misspecifications by examining the orthogonality of the squared normalized residuals. A small Monte-Carlo study is carried out to check the small sample performance of the test. An empirical example is also given as guidance for model estimation and selection in the multivariate framework. For the specific data set considered, it is found that the simple one and two parameter models and the constant conditional correlation model perform fairly well.
- Article
- Nov 1999
- J Time Anal

In this paper we consider several tests for model misspecification after a multivariate conditional heteroscedasticity model has been fitted. We examine the performance of the recent test due to Ling and Li (J. Time Ser. Anal. 18 (1997), 447–64), the Box–Pierce test and the residual-based F test using Monte Carlo methods. We find that there are situations in which the Ling–Li test has very weak power. The residual-based diagnostics demonstrate significant under-rejection under the null. In contrast, the Box–Pierce test based on the cross-products of the standardized residuals often provides a useful diagnostic that has reliable empirical size as well as good power against the alternatives considered.
- Article
- Jun 1994
- J THEOR PROBAB

This paper gives sufficient conditions for the weak convergence to Gaussian processes of empirical processes and U-processes from stationary β-mixing sequences indexed by classes of functions, under the mixing-rate condition k^{p/(p-2)} (log k)^{2(p-1)/(p-2)} β_k → 0 as k → ∞. In the case that the functions in the V-C subgraph class are uniformly bounded, we obtain uniform central limit theorems for the empirical process and the U-process, provided that the decay rate of the mixing coefficients satisfies β_k = O(k^{-r}) for some r > 1. These conditions are almost minimal.
- Article
- Jul 1997
- STOCH PROC APPL

Bahadur-Kiefer approximations for generalized quantile processes as defined in Einmahl and Mason (1992) are given which generalize results for the classical one-dimensional quantile processes. As an application we consider the special case of the volume process of minimum volume sets in classes of subsets of the d-dimensional Euclidean space. Minimum volume sets can be used as estimators of level sets of a density and might be useful in cluster analysis. The volume of minimum volume sets itself can be used for robust estimation of scale. Consistency results and rates of convergence for minimum volume sets are given. Rates of convergence of minimum volume sets can be used to obtain Bahadur-Kiefer approximations for the corresponding volume process and vice versa. A generalization of the minimum volume approach to non-i.i.d. problems like regression and spectral analysis of time series is discussed.
- Article
- Oct 2006
- ECON LETT

We adapt the Lagrange multiplier (LM) principle to test for noncausality in variance of financial returns. The new test is compared with a Portmanteau statistic [Cheung, Y.W., Ng, L.K., 1996. A causality in variance test and its application to financial market prices. Journal of Econometrics 72, 33–48.]. A Monte Carlo study reveals superior power of the LM test.
- Article
- Mar 2006
- J ECONOMETRICS

We propose a new model for the variances of multiple time series, the regime switching dynamic correlation. We decompose the covariances into correlations and standard deviations and the correlation matrix follows a regime switching model; it is constant within a regime but different across regimes. The transitions between the regimes are governed by a Markov chain. This model does not suffer from a curse of dimensionality and it allows analytic computation of multi-step-ahead conditional expectations of the variance matrix when combined with the ARMACH model (Taylor (Modelling Financial Time Series. Wiley, New York) and Schwert (J. Finance 44(5) (1989) 1115)) for the standard deviations. We also present an empirical application which illustrates that our model can have a better fit of the data than the dynamic conditional correlation model proposed by Engle (J. Business Econ. Statist. 20(3) (2002) 339).
- This paper develops a test for causality in variance. The test is based on the residual cross-correlation function (CCF) and is robust to distributional assumptions. Asymptotic normal and asymptotic χ2 statistics are derived under the null hypothesis of no causality in variance. Monte Carlo results indicate that the proposed CCF test has good empirical size and power properties. Two empirical examples illustrate that the causality test yields useful information on the temporal dynamics and the interaction between two time series.
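The CCF approach described above can be sketched in a few lines: cross-correlate the squared standardized residuals of two series at positive lags and compare a portmanteau-type sum to a chi-square reference. The data below are hypothetical independent series (so the sample CCF should be near zero), and the helper `ccf` is an illustrative implementation, not code from the cited paper.

```python
import numpy as np

# Hypothetical squared standardized residuals of two unrelated series.
rng = np.random.default_rng(3)
u = rng.normal(size=2000) ** 2  # squared standardized residuals, series 1
v = rng.normal(size=2000) ** 2  # squared standardized residuals, series 2

def ccf(x, y, lag):
    """Sample cross-correlation between x_t and y_{t-lag}, for lag >= 1."""
    x = x - x.mean()
    y = y - y.mean()
    n = len(x)
    cov = np.sum(x[lag:] * y[:n - lag]) / n
    return cov / (x.std() * y.std())

# CCF of the squared residuals at lags 1..5.
r = np.array([ccf(u, v, k) for k in range(1, 6)])

# Under the null of no causality in variance, sqrt(n) * r_k is roughly
# standard normal, so n * sum(r_k^2) acts as a portmanteau statistic
# that is approximately chi-square with 5 degrees of freedom here.
S = len(u) * np.sum(r ** 2)
```

With independent series and n = 2000, each sample cross-correlation stays close to zero, so S remains small relative to chi-square critical values.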
- Article
- Jul 1996
- J ECONOMETRICS

This paper extends the work by Ding, Granger, and Engle (1993) and further examines the long memory property for various speculative returns. The long memory property found for S&P 500 returns is also found to exist for four other different speculative returns. One significant difference is that for foreign exchange rate returns this property is strongest at a power transformation other than the d = 1 found for stock returns. The theoretical autocorrelation functions for various GARCH(1, 1) models are also derived and found to be exponentially decreasing, which is rather different from the sample autocorrelation function for the real data. A general class of long memory models that has no memory in returns themselves but long memory in absolute returns and their power transformations is proposed. The issue of estimation and simulation for this class of model is discussed. The Monte Carlo simulation shows that the theoretical model can mimic the stylized empirical facts strikingly well.
- Conference Paper
- Jan 1998

- Article
- Aug 1997
- INT J NEURAL SYST

This paper discusses the application of a modern signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). This can be viewed as a factorization of the portfolio since joint probabilities become simple products in the coordinate system of the ICs. We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent but large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. Independent component analysis is a potentially powerful method of analyzing and understanding driving mechanisms in financial markets. There are further promising applications to risk management since ICA focuses on higher-order statistics.
- We present a dynamic stochastic general equilibrium (DSGE) New Keynesian model with indivisible labor and a dual labor market: a Walrasian one where wages are fully flexible and a unionized one characterized by real wage rigidity. We show that the negative effect of a productivity shock on inflation and the positive effect of a cost-push shock are crucially determined by the proportion of firms that belong to the unionized sector. The larger this number, the larger are these effects.
Consequently, the larger the union coverage, the larger should be the optimal response of the nominal interest rate to exogenous productivity and cost-push shocks. The optimal inflation and output gap volatility increases as the number of the unionized firms in the economy increases.
- ArticleFull-text available
- Oct 2001

In this paper, we develop the theoretical and empirical properties of a new class of multivariate GARCH models capable of estimating large time-varying covariance matrices, Dynamic Conditional Correlation Multivariate GARCH. We show that the problem of multivariate conditional variance estimation can be simplified by estimating univariate GARCH models for each asset, and then, using transformed residuals resulting from the first stage, estimating a conditional correlation estimator. The standard errors for the first stage parameters remain consistent, and only the standard errors for the correlation parameters need be modified. We use the model to estimate the conditional covariance of up to 100 assets using S&P 500 Sector Indices and Dow Jones Industrial Average stocks, and conduct specification tests of the estimator using an industry standard benchmark for volatility models. This new estimator demonstrates very strong performance especially considering ease of implementation of the estimator.
- Article
- Sep 2002
- ECONOMET J

In this paper we introduce a bootstrap procedure to test parameter restrictions in vector autoregressive models which is robust in cases of conditionally heteroskedastic error terms. The adopted wild bootstrap method does not require any parametric specification of the volatility process and takes contemporaneous error correlation implicitly into account. Via a Monte Carlo investigation empirical size and power properties of the new method are illustrated. We compare the bootstrap approach with standard procedures either ignoring heteroskedasticity or adopting a heteroskedasticity consistent estimation of the relevant covariance matrices in the spirit of the White correction. In terms of empirical size the proposed method clearly outperforms competing approaches without paying any price in terms of size adjusted power. We apply the alternative tests to investigate the potential of causal relationships linking daily prices of natural gas and crude oil. Unlike standard inference ignoring time varying error variances, heteroskedasticity consistent test procedures do not deliver any evidence in favor of short run causality between the two series.
- Article
- Jan 2003
- Econometrica

This paper constructs a two-country (Home and Foreign) general equilibrium model of Schumpeterian growth without scale effects. The scale effects property is removed by introducing two distinct specifications in the knowledge production function: the permanent effect on growth (PEG) specification, which allows policy effects on long-run growth; and the temporary effects on growth (TEG) specification, which generates semi-endogenous long-run economic growth. In the present model, the direction of the effect of the size of innovations on the pattern of trade and Home’s relative wage depends on the way in which the scale effects property is removed. Under the PEG specification, changes in the size of innovations increase Home’s comparative advantage and its relative wage, while under the TEG specification, an increase in the size of innovations increases Home’s relative wage but with an ambiguous effect on its comparative advantage.
- Article (translated from the Russian; includes a bibliography)
- Classical empirical process theory for Vapnik-Cervonenkis classes deals mainly with sequences of independent variables. This paper extends the theory to stationary sequences of dependent variables. It establishes rates of convergence for $\beta$-mixing and $\phi$-mixing empirical processes indexed by classes of functions. The method of proof depends on a coupling of the dependent sequence with sequences of independent blocks, to which the classical theory can be applied. A uniform $O(n^{-s/(1+s)})$ rate of convergence over V-C classes is established for sequences whose mixing coefficients decay slightly faster than $O(n^{-s})$.
- Article
- Mar 1993
- ANN STAT

In this paper two bootstrap procedures are considered for the estimation of the distribution of linear contrasts and of F-test statistics in high dimensional linear models. An asymptotic approach will be chosen where the dimension p of the model may increase for sample size $n\rightarrow\infty$. The range of validity will be compared for the normal approximation and for the bootstrap procedures. Furthermore, it will be argued that the rates of convergence are different for the bootstrap procedures in this asymptotic framework. This is in contrast to the usual asymptotic approach where p is fixed.
- ArticleFull-text available
- Jan 2005

- Article
- Feb 1988
- J POLIT ECON

The capital asset pricing model provides a theoretical structure for the pricing of assets with uncertain returns. The premium to induce risk-averse investors to bear risk is proportional to the nondiversifiable risk, which is measured by the covariance of the asset return with the market portfolio return. In this paper, a multivariate, generalized-autoregressive, conditional, heteroscedastic process is estimated for returns to bills, bonds, and stocks where the expected return is proportional to the conditional covariance of each return with that of a fully diversified or market portfolio. It is found that the conditional covariances are quite variable over time and are a significant determinant of the time-varying risk premia. The implied betas are also time varying and forecastable. Copyright 1988 by University of Chicago Press.
- Article
- Aug 1990
- Rev Econ Stat

A multivariate time series model with time varying conditional variances and covariances, but constant conditional correlations is proposed. In a multivariate regression framework, the model is readily interpreted as an extension of the Seemingly Unrelated Regression (SUR) model allowing for heteroskedasticity. Parameterizing each of the conditional variances as a univariate Generalized Autoregressive Conditional Heteroskedastic (GARCH) process, the descriptive validity of the model is illustrated for a set of five nominal European U.S. dollar exchange rates following the inception of the European Monetary System (EMS). When compared to the pre-EMS free float period, the comovements between the currencies are found to be significantly higher over the later period. Copyright 1990 by MIT Press.
- Article
- May 2003
- J APPL ECONOMET

This paper surveys the most important developments in multivariate ARCH-type modelling. It reviews the model specifications, the inference methods, and the main areas of application of these models in financial econometrics.
- Article
- Feb 2000

This paper studies a broad class of nonnegative ARCH(∞) models. Sufficient conditions for the existence of a stationary solution are established and an explicit representation of the solution as a Volterra type series is found. Under our assumptions, the covariance function can decay slowly like a power function, falling just short of the long memory structure. A moving average representation in martingale differences is established, and the central limit theorem is proved.
- Article
- Dec 2003
- Biometrika

Hall & Yao (2003) showed that, for ARCH/GARCH, i.e. autoregressive conditional heteroscedastic/generalised autoregressive conditional heteroscedastic, models with heavy-tailed errors, the conventional maximum quasilikelihood estimator suffers from complex limit distributions and slow convergence rates. In this paper three types of absolute deviations estimator have been examined, and the one based on logarithmic transformation turns out to be particularly appealing. We have shown that this estimator is asymptotically normal and unbiased. Furthermore it enjoys the standard convergence rate of n^{1/2} regardless of whether the errors are heavy-tailed or not. Simulation lends further support to our theoretical results. Copyright Biometrika Trust 2003, Oxford University Press.
- ArticleFull-text available
- Jan 1988

Matching university places to students is not as clear cut or as straightforward as it ought to be. By investigating the matching algorithm used by the German central clearinghouse for university admissions in medicine and related subjects, we show that a procedure designed to give an advantage to students with excellent school grades actually harms them. The reason is that the three-step process employed by the clearinghouse is a complicated mechanism in which many students fail to grasp the strategic aspects involved. The mechanism is based on quotas and consists of three procedures that are administered sequentially, one for each quota. Using the complete data set of the central clearinghouse, we show that the matching can be improved for around 20% of the excellent students while making a relatively small percentage of all other students worse off.
- The second alternative has been proposed by Andersen et al. (2003). In this case, a daily measure of variances and covariances is computed as an aggregate measure from intraday returns. More specifically, a daily realized variance for day t is computed as the sum of the squared intraday equidistant returns for the given trading day and a daily realized covariance is obtained by summing the products of intraday returns. Once such daily measures have been obtained, they can be modelled, e.g. for a prediction purpose. A nice feature of this approach is that unlike MGARCH and multivariate stochastic volatility models, the N(N − 1)/2 covariance components of the conditional variance matrix (or, rather, the components of its Choleski decomposition) can be forecasted independently, using as many univariate models. As shown by Andersen et al. (2003), although the use of the realized covariance matrix facilitates rigorous measurement of conditional volatility in much higher dimensions than is feasible with MGARCH and multivariate SV models, it does not allow the dimensionality to become arbitrarily large.
Indeed, to ensure the positive definiteness of the realized covariance matrix, the number of assets (N) cannot exceed the number of intraday returns for each trading day. The main drawback is that intraday data remain relatively costly and are not readily available for all assets. Furthermore, a large amount of data handling and computer programming is usually needed to retrieve the intraday returns from the raw data files supplied by the exchanges or data vendors. On the contrary, working with daily data is relatively simple and the data are broadly available.
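The realized covariance construction described above amounts to summing outer products of intraday return vectors. A minimal numpy sketch, using hypothetical data (78 five-minute returns per day for N = 3 assets):

```python
import numpy as np

# Hypothetical intraday returns: one trading day, 78 bars, 3 assets.
rng = np.random.default_rng(0)
intraday = rng.normal(scale=0.001, size=(78, 3))

# Daily realized covariance matrix: the sum over intraday bars of the
# outer products r_j r_j'. Diagonal entries are realized variances
# (sums of squared returns); off-diagonal entries are realized
# covariances (sums of cross products of returns).
realized_cov = intraday.T @ intraday

# As noted in the text, positive definiteness requires more intraday
# returns than assets -- here 78 > 3, so all eigenvalues are positive.
eigenvalues = np.linalg.eigvalsh(realized_cov)
```

With fewer intraday bars than assets, `realized_cov` would be rank-deficient, which is exactly the dimensionality constraint the passage describes.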
- Article
- Sep 2002
- J Appl Econometrics

Multivariate GARCH specifications are typically determined by means of practical considerations such as the ease of estimation, which often results in a serious loss of generality. A new type of multivariate GARCH model is proposed, in which potentially large covariance matrices can be parameterized with a fairly large degree of freedom while estimation of the parameters remains feasible. The model can be seen as a natural generalization of the O-GARCH model, while it is nested in the more general BEKK model. In order to avoid convergence difficulties of estimation algorithms, we propose to exploit unconditional information first, so that the number of parameters that need to be estimated by means of conditional information is more than halved. Both artificial and empirical examples are included to illustrate the model. Copyright © 2002 John Wiley & Sons, Ltd.
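The factor structure underlying the O-GARCH family the abstract generalizes can be sketched as follows: returns are a fixed linear map Z of factors whose conditional variances follow univariate GARCH-type dynamics, so the implied conditional covariance is H_t = Z diag(h_t) Z'. Both Z and h_t below are assumed values for illustration, not estimates from any model.

```python
import numpy as np

# Hypothetical linkage (mixing) matrix mapping factors to returns.
Z = np.array([[1.0, 0.3],
              [0.2, 1.0]])

# Hypothetical factor conditional variances at time t (e.g. produced by
# univariate GARCH recursions fitted factor by factor).
h_t = np.array([1.5e-4, 0.8e-4])

# Implied conditional covariance matrix of the returns at time t.
H_t = Z @ np.diag(h_t) @ Z.T

# H_t is symmetric positive definite whenever Z is invertible and all
# factor variances are strictly positive.
eig = np.linalg.eigvalsh(H_t)
```

Updating h_t period by period while holding Z fixed is what keeps estimation feasible: the high dimensional covariance dynamics reduce to as many univariate recursions as there are factors.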