
Portfolio Value-at-Risk Estimation with a Time-varying Copula

Approach: An Illustration of Model Risk

Yi-Hsuan Chen ∗
Chung-Hua University

Anthony H. Tu
National Chengchi University

Abstract

The conventional portfolio value-at-risk (PVaR) estimation method commonly used in


current practice exhibits considerable biases due to model specification errors. This paper
attempts to improve PVaR estimation by relaxing the conventional assumption of normal joint
distribution and developing an empirical model of time-varying PVaR conditional on
time-varying dependences among portfolio components. Specifically, single-parameter
conditional copulas and copula mixture models are used to form a joint distribution. Using a
sample period covering 1 January 2004 to 29 October 2007, the PVaR estimates for optimal
hedged portfolios are computed using the various copula models. Backtesting diagnostics
indicate that the copula-based PVaR outperforms the conventional PVaR estimator at the 99%
and the 95% confidence levels. To reduce the model risk, our results indicate that using a
smaller nominal coverage probability (say, 95% instead of 99%) is preferable.

JEL classification: G11; G16


Keywords: Copula; Value-at-risk; Hedge ratios; Backtests; Subprime market crash


Corresponding author. Department of Finance, Chung-Hua University, Hsinchu, Taiwan.
Tel: 886-3-5186057; Fax: 886-3-5186054; e-mail: cathy1107@gmail.com

1. Introduction

Value-at-risk (VaR) has become one of the most popular tools of risk measurement as risk management has gained dramatically increased exposure in both academia and practical investment. The potential estimation biases and model risk in VaR models have been widely discussed in previous studies (Jorion, 1996; Kupiec, 1999; Rich, 2003; Miller and Liu, 2006; Brooks and Persand, 2002). Firms may maintain either insufficient risk capital reserves to absorb large financial shocks or excessive risk capital that reduces capital management efficiency. Citigroup, Merrill Lynch, Lehman Brothers, and Morgan Stanley are prominent examples of major financial institutions that were hit hard after the recent outbreak of the U.S. subprime crisis.

The first purpose of this paper is to assess the potential loss of accuracy from estimation error when calculating VaR. This study uses portfolio value-at-risk (PVaR) estimation to illustrate that model risk is attributable to the inappropriate use of the correlation coefficient and the normal joint distribution in PVaR estimation. The correlation coefficient measures only the "degree" or "level" of dependence, which reflects the overall strength of the relation. It fails, however, to model the "structure" of dependence, which describes the manner in which two assets are correlated. In addition, it is not robust for heavy-tailed distributions and is not adequate for non-linear relationships.
Jorion (1996) first indicated that VaR estimates are themselves affected by sampling variation, or "estimation risk." Brooks and Persand (BP) (2002) investigated a number of statistical modeling issues in the context of determining market-based capital risk requirements. They highlighted several potential pitfalls in commonly applied methodologies and concluded that model risk can be serious in VaR calculation. They found that when the actual data are fat-tailed, using critical values from a normal distribution in conjunction with a parametric approach can lead to a substantially less accurate VaR estimate than using a nonparametric approach. However, the above analysis considers univariate distributions only.

The second purpose of this paper is to reexamine whether a more accurate VaR estimate can be derived by using the fifth percentile (5%) instead of the first percentile (1%), currently adopted by the Basle Committee, under alternative copula-based joint distributions. BP found that model risk, measured by standard errors, can be more severe at the first percentile of the normal return distribution. They suggested that the closer the quantiles are to the mean of the distribution, the more accurately they can be estimated. Thus, to ensure that virtually all probable losses are covered, the use of a smaller coverage rate (say, 95% instead of 99%) combined with a larger multiplier is preferable. Similar to Christoffersen and Goncalves (CG) (2005), our paper uses the bootstrap resampling technique to quantify ex ante the magnitude of the estimation error (model risk) by constructing confidence intervals around the PVaR point estimate.1

VaR applications are somewhat limited for hedged portfolios because the PVaR model cannot be stated in closed form and can only be approximated with complex computational algorithms. Miller and Liu (2006) criticized current PVaR approaches. First, the assumption of a normal joint distribution is unrealistic, though convenient to use. Second, although nonparametric approaches are not subject to the criticism of distributional assumptions, tail probability estimates based on the empirical distribution function may be poor because the observed outcomes in the tails of leptokurtic distributions are typically sparse. Third, PVaR estimation based solely on models of the return distribution tails rather than the entire return distribution may incur significant information loss. Finally, significant nonlinear correlations between the portfolio components are evident.

Although multivariate extreme value theory (EVT) models may help mitigate the problems mentioned above, the models used in current practice may exhibit undesirable properties and may not avoid substantial PVaR estimation biases. First, multivariate EVT models do not retain the marginal distribution properties of the univariate return series. Second, and more importantly, as indicated by Smith (2000), multivariate EVT models may not be rich enough to encompass all forms of tail behavior.

1 CG use a bootstrap resampling technique to take into account the parameter estimation error of the portfolio variance in calculating VaR. Our paper considers the potential estimation error in the PVaR estimator due to the inappropriate use of joint distributions.

Copulas enable the modeler to construct flexible multivariate distributions exhibiting rich patterns of tail behavior, ranging from tail independence to tail dependence, and different kinds of asymmetry. They are an alternative to correlation in the modeling of financial risks (Embrechts et al., 1999). This paper employs single-parameter conditional copulas to represent the dependence between index futures and spot returns, conditional upon the historical information provided by previous pairs of index futures and spot returns. The parameter of the conditional copula, which is time-varying, depends upon the conditioning information.2 In addition, we model the dependence structure as a mixture of different copulas with parameters that change over time. Using the copula mixture, parametric or nonparametric marginals with quite different tail shapes, initially estimated separately, can be combined into a joint risk distribution that preserves the original characteristics of the marginals (Rosenberg and Schuermann, 2006).3

2 The general theory of copulas is described by Joe (1997) and Nelsen (1999), and finance applications are emphasized by Cherubini et al. (2004). Important conditional copula theory has been developed and applied to financial market data by Patton (2006a, b).

3 Mixtures of copulas are copulas. See Nelsen (1999) for details.

Recently, Miller and Liu (2006) used a copula-mixed distribution (CMX) approach to estimate the joint distribution of log returns for the Taiwan Stock Exchange (TSE) index and its corresponding futures contracts on the SGX and TAIFEX. This approach converges in probability to the true marginal return distribution but is based on weaker assumptions. PVaR estimates for various hedged portfolios are computed from the fitted CMX model, and backtesting diagnostics show that the CMX outperforms the alternative PVaR estimators. Unfortunately, Miller and Liu (2006) employed a static Gaussian copula to estimate the PVaR of a hedged portfolio. Using the Gaussian copula alone may be subject to potential estimation bias because the joint distribution may not be normal and the true dependence structures between portfolio components are time-varying.

Thus, this paper attempts to improve PVaR estimation by relaxing the conventional assumption of a normal joint distribution and developing an empirical model of time-varying PVaR conditional on time-varying dependencies between portfolio components. Our conditional PVaR model addresses the criticisms above by applying a time-varying copula approach to measure dynamic dependencies between portfolio components. Our study makes several contributions. First, by allowing for time variation, we avoid underestimating PVaR and facilitate risk management decisions. The second contribution is to show how a time-varying
copula model can be applied to PVaR estimation. In effect, a copula-based measure can

specify both the structure and the degree of dependence, which takes the non-linear

property into account and allows a more comprehensive understanding as well.

The third contribution of this paper is related to Rosenberg and Schuermann

(2006). They claim that during a period of financial collapse, portfolios are subject to

at least two of three types of risk: market, credit, and operational risk. The

distributional shape of each risk type varies considerably. They use the copula mixture

method to construct a joint risk distribution and compare it with several conventional

approaches to computing risk. Their hybrid copula approach is surprisingly accurate.

We borrow their idea and extend it to PVaR estimation to support the argument from

Rosenberg and Schuermann (2006) that integrated risk management requires an

approach that aggregates different risk types.

Based on the results of three backtests (the unconditional and conditional coverage tests, the dynamic quantile test, and the distribution and tail forecast test), this study demonstrates that the copula-based PVaR model, compared with the conventional GARCH model (which is based on a constant correlation coefficient), exhibits superior performance at both significance levels (95% and 99%). From the bootstrapping evidence, we also illustrate BP's finding that the closer the quantiles are to the mean of the distribution, the smaller the estimation error, for both the copula-based and the conventional models. In other words, the PVaR can be estimated more accurately by choosing a smaller coverage rate. This implies that, for conventional risk management models, using a smaller nominal coverage probability (say, 95% instead of 99%) helps to ensure that virtually all probable losses are covered. Choosing an extremely large coverage probability, say 99%, exacerbates the model risk caused by the model's inability to take tail dependence and the nature of time variation into account.

The paper is organized as follows. Section 2 presents the time-varying copula methodology and the portfolio VaR model. Section 3 describes the data and the empirical results. Section 4 performs the backtests of the various PVaR versions and compares their relative performance. Section 5 concludes.

2. Methodology

PVaR estimation methods such as the variance-covariance method, Monte Carlo simulation, historical simulation, the RiskMetrics method, Jorion's method, and extreme value theory (EVT) are well established. The basic properties and limitations of these methods are discussed in the earlier literature. Recently, the copula method has received attention because of its capability to model the contemporaneous dependencies between portfolio asset returns. The method is increasing in popularity because it can analyze dependence structures among portfolio components beyond linear correlations. It also provides a higher degree of flexibility in estimation by separating the marginal and joint distributions.

Cherubini and Luciano (2001) used copula functions to evaluate tail probabilities and market risk (VaR) trade-offs at a given confidence level, dropping the joint normality assumption on returns. Ané and Kharoubi (2003) presented a new approach to parametrically modeling the dependence structure of stock index returns through a copula representation. They also investigated the relative importance of the marginal distributions and the dependence structure in VaR estimation, and found that a misspecification of the dependence structure can account for up to 20% of the error in portfolio VaR estimates. Miller and Liu (2006) employed a Gaussian copula to estimate the PVaR of a hedged portfolio. However, all of these studies are limited to a static copula model and may be subject to potential estimation bias because the dependence structures between portfolio components are time-varying. Therefore, a time-varying specification must be established to capture the dynamics of the dependence structures. In a time-varying copula setting, the dependence parameters in the copula function are modeled as a dynamic process conditional on currently available information. This allows a non-linear, time-dependent relationship. The PVaR is therefore estimated conditional on previously estimated time-varying dependencies.

2.1. The conditional copula model

We assume that the marginal distribution for each portfolio asset return (the index and its corresponding futures) is characterized by a GJR-GARCH(1,1)-AR(1)-t model, since the asymmetric information impact is a well-known effect for financial assets.4 Let $R_{i,t}$ and $h_{i,t}^2$ denote asset $i$'s return (spot ($s$) or futures ($f$)) and its conditional variance for period $t$, respectively, and let $\Omega_{t-1}$ denote the previous information set. The GJR-GARCH(1,1)-AR(1)-t model for asset return $i$ is defined by:5

$$R_{i,t} = \mu_i + \phi_i R_{i,t-1} + \varepsilon_{i,t} \qquad (1a)$$

$$h_{i,t}^2 = \omega_i + \beta_i h_{i,t-1}^2 + \alpha_{i,1}\,\varepsilon_{i,t-1}^2 + \alpha_{i,2}\, s_{i,t-1}\,\varepsilon_{i,t-1}^2 \qquad (1b)$$

$$z_{i,t}\,|\,\Omega_{t-1} = \sqrt{\frac{\nu_i}{h_{i,t}^2(\nu_i - 2)}}\;\varepsilon_{i,t}, \qquad z_{i,t} \sim iid\; t_{\nu_i}, \qquad i \in \{s, f\} \qquad (1c)$$

with $s_{i,t-1} = 1$ when $\varepsilon_{i,t-1}$ is negative and $s_{i,t-1} = 0$ otherwise; $\nu_i$ is the degree of freedom.

4 The conditional densities of equity index returns are leptokurtic, and their variances are asymmetric functions of previous returns (Nelson, 1991; Engle and Ng, 1993; Glosten et al., 1993).

5 In Enders (2004), Eq. (1c) can alternatively be expressed as $\varepsilon_{i,t}\,|\,\Omega_{t-1} \sim t_{\nu_i}(0, h_{i,t}^2)$.
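As a concrete illustration of Eqs. (1a)-(1c), the sketch below fits an AR(1)-GJR-GARCH(1,1) model with Student-t errors to one return series and maps the standardized residuals into the uniform variables used later by the copula. It is a minimal sketch assuming the Python arch and scipy packages are available; the input series and its scaling are placeholders, not the paper's data handling.

```python
import numpy as np
from arch import arch_model          # assumed available; AR(1) mean, GJR-GARCH(1,1)-t variance
from scipy.stats import t as student_t

def fit_marginal(returns):
    """Fit Eqs. (1a)-(1c) and return the fitted model plus the PIT uniforms u_t."""
    am = arch_model(returns, mean="AR", lags=1,
                    vol="GARCH", p=1, o=1, q=1, dist="t")
    res = am.fit(disp="off")
    nu = res.params["nu"]                        # Student-t degrees of freedom
    z = res.resid / res.conditional_volatility   # epsilon_t / h_t
    # Eq. (1c): rescale so z_t is a standard t_nu variate, then apply its CDF
    u = student_t.cdf(z * np.sqrt(nu / (nu - 2.0)), df=nu)
    return res, u[~np.isnan(u)]

# Hypothetical usage with daily log returns in percent:
# res_s, u = fit_marginal(100 * np.diff(np.log(spot_prices)))
# res_f, v = fit_marginal(100 * np.diff(np.log(futures_prices)))
```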

Assume that the conditional cumulative distribution functions of $z_s$ and $z_f$ are $F_{s,t}(z_{s,t}|\Omega_{t-1})$ and $F_{f,t}(z_{f,t}|\Omega_{t-1})$, respectively. The conditional copula function, denoted $C_t(u_t, v_t|\Omega_{t-1})$, is defined on the two time-varying cumulative distribution functions of the random variables $u_t = F_{s,t}(z_{s,t}|\Omega_{t-1})$ and $v_t = F_{f,t}(z_{f,t}|\Omega_{t-1})$. Let $\Phi_t$ be the bivariate conditional cumulative distribution function of $z_{s,t}$ and $z_{f,t}$. Using Sklar's theorem, we have

$$\Phi_t(z_{s,t}, z_{f,t}\,|\,\Omega_{t-1}) = C_t(u_t, v_t\,|\,\Omega_{t-1}) = C_t\big(F_{s,t}(z_{s,t}|\Omega_{t-1}),\, F_{f,t}(z_{f,t}|\Omega_{t-1})\,\big|\,\Omega_{t-1}\big) \qquad (2)$$

The bivariate conditional density function of $z_{s,t}$ and $z_{f,t}$ can be constructed as the product of the copula density and the two marginal conditional densities, denoted $f_{s,t}$ and $f_{f,t}$, respectively:

$$\varphi_t(z_{s,t}, z_{f,t}\,|\,\Omega_{t-1}) = c_t\big(F_{s,t}(z_{s,t}|\Omega_{t-1}),\, F_{f,t}(z_{f,t}|\Omega_{t-1})\,\big|\,\Omega_{t-1}\big) \times f_{s,t}(z_{s,t}|\Omega_{t-1}) \times f_{f,t}(z_{f,t}|\Omega_{t-1}) \qquad (3)$$

where $c_t(u_t, v_t|\Omega_{t-1}) = \partial^2 C_t(u_t, v_t|\Omega_{t-1})/\partial u_t\,\partial v_t$, $f_{s,t}(z_{s,t}|\Omega_{t-1})$ is the conditional density of $z_{s,t}$, and $f_{f,t}(z_{f,t}|\Omega_{t-1})$ is the conditional density of $z_{f,t}$.

Cherubini et al. (2004) claimed that copula functions with upper (or lower) tail dependence are well suited to VaR applications, and a time-varying copula is quite capable of calculating portfolio VaR. This study employs the Gaussian, the Gumbel, and the Clayton copula for specification and calibration. The Gaussian copula is generally viewed as the benchmark for comparison, while the Gumbel and the Clayton copula are used to capture upper and lower tail dependence, respectively.

The Clayton copula is especially prevalent because the evidence indicates that equity

returns exhibit more joint negative extremes than joint positive extremes, leading to

the observation that stocks tend to crash together but not boom together (Poon et al.,

2004; Longin and Solnik, 2001; Bae et al., 2003).

2.2. Bivariate copula density

The conditional Gaussian copula density is the joint density of the standard uniform variables $(u_t, v_t)$ when the underlying random variables are bivariate normal with time-varying correlation $\rho_t$. Let $x_t = \Phi^{-1}(u_t)$ and $y_t = \Phi^{-1}(v_t)$, where $\Phi^{-1}(\cdot)$ denotes the inverse of the cumulative distribution function of the standard normal distribution. The density of the time-varying Gaussian copula is

$$c_t^{Gau}(u_t, v_t\,|\,\rho_t) = \frac{1}{\sqrt{1-\rho_t^2}}\,\exp\!\left(\frac{2\rho_t x_t y_t - x_t^2 - y_t^2}{2(1-\rho_t^2)} + \frac{x_t^2 + y_t^2}{2}\right) \qquad (4)$$

The Gumbel and the Clayton copula can efficiently capture the tail dependence arising from extreme observations caused by asymmetry. Writing $A_t = (-\ln u_t)^{\delta_t} + (-\ln v_t)^{\delta_t}$, the density of the time-varying Gumbel copula is

$$c_t^{Gum}(u_t, v_t\,|\,\delta_t) = \frac{(-\ln u_t)^{\delta_t-1}(-\ln v_t)^{\delta_t-1}}{u_t\, v_t}\,\exp\!\left(-A_t^{1/\delta_t}\right)\left[A_t^{(2-2\delta_t)/\delta_t} + (\delta_t - 1)\,A_t^{(1-2\delta_t)/\delta_t}\right] \qquad (5)$$

where $\delta_t \in [1, \infty)$ measures the degree of dependence between $u_t$ and $v_t$: $\delta_t = 1$ implies an independent relationship and $\delta_t \to \infty$ represents perfect dependence. The density of the time-varying Clayton copula is

$$c_t^{Clay}(u_t, v_t\,|\,\theta_t) = (\theta_t + 1)\left(u_t^{-\theta_t} + v_t^{-\theta_t} - 1\right)^{-\frac{2\theta_t+1}{\theta_t}} u_t^{-\theta_t - 1}\, v_t^{-\theta_t - 1} \qquad (6)$$

where $\theta_t \in [0, \infty)$ measures the degree of dependence between $u_t$ and $v_t$: $\theta_t = 0$ implies an independent relationship and $\theta_t \to \infty$ represents perfect dependence.
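For reference, the following sketch evaluates the three copula densities in Eqs. (4)-(6) directly from their closed forms. It is a plain NumPy/SciPy illustration rather than the authors' estimation code; the argument names simply mirror the notation above.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u, v, rho):
    """Eq. (4): bivariate Gaussian copula density with correlation rho."""
    x, y = norm.ppf(u), norm.ppf(v)
    return np.exp((2 * rho * x * y - x**2 - y**2) / (2 * (1 - rho**2))
                  + (x**2 + y**2) / 2) / np.sqrt(1 - rho**2)

def gumbel_copula_density(u, v, delta):
    """Eq. (5): Gumbel copula density, delta >= 1 (upper-tail dependence)."""
    lu, lv = -np.log(u), -np.log(v)
    A = lu**delta + lv**delta
    return ((lu * lv)**(delta - 1) / (u * v) * np.exp(-A**(1 / delta))
            * (A**(2 / delta - 2) + (delta - 1) * A**(1 / delta - 2)))

def clayton_copula_density(u, v, theta):
    """Eq. (6): Clayton copula density, theta > 0 (lower-tail dependence)."""
    return ((theta + 1) * (u**-theta + v**-theta - 1)**(-(2 * theta + 1) / theta)
            * u**(-theta - 1) * v**(-theta - 1))
```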

2.3. A Hybrid method: The mixture of copulas

To find the copula that best estimates the PVaR, we consider several possible mixtures of different copulas. As indicated by Rosenberg and Schuermann (2006), integrated risk management requires a method, such as a mixture of copulas, that can incorporate realistic marginal distributions. Combining realistic marginal distributions enables us to capture the essential empirical features of the various risks (market, credit, and operational), such as skewness and fat tails, while allowing for a rich dependence structure.

Regardless of the initial source of risk in a financial collapse (such as the outbreak of the subprime markets), all portfolios composed of an index and its corresponding futures are subject to at least two of three types of risk: market, credit, and operational risk.6 The distributional shape of each risk type varies considerably. Market risk typically generates portfolio value distributions that are nearly symmetric and often approximated as normal. Credit risk, and especially operational risk, generate more skewed distributions because of occasional extreme losses.

Hu (2006) pointed out that empirical applications have so far been limited to individual copulas; however, no single copula applies to all situations. A mixture model is therefore better able to generate dependence structures that do not belong to any one existing copula family. By carefully choosing the component copulas in the mixture, a model that is simple yet flexible enough to generate most dependence patterns in financial data can be constructed.7 Going beyond Hu's study, we propose a "time-varying mixture copula" (or conditional mixture copula) to generate more flexible dependence structures than existing copula families. Our time-varying mixture copula comprises a conditional Gaussian copula, a conditional Gumbel copula, and a conditional Clayton copula, so as to capture all possible dependence structures. The mixture copula is defined as

$$C_t^{Mix}(u_t, v_t\,|\,\rho_t, \delta_t, \theta_t) = w_t^{Clay}\, C_t^{Clay}(u_t, v_t\,|\,\theta_t) + w_t^{Gum}\, C_t^{Gum}(u_t, v_t\,|\,\delta_t) + \left(1 - w_t^{Clay} - w_t^{Gum}\right) C_t^{Gau}(u_t, v_t\,|\,\rho_t) \qquad (7)$$

where $w_t^{Clay}$ is the time-varying weight of the conditional Clayton copula and $w_t^{Gum}$ is the time-varying weight of the conditional Gumbel copula, with $w_t^{Clay}, w_t^{Gum} \in [0,1]$ and $w_t^{Clay} + w_t^{Gum} \le 1$. These weights, referred to as shape parameters,8 reflect the structure of dependence and capture changes in tail dependence. For instance, after an increase in $w_t^{Clay}$, the copula assigns more probability mass to the left tail. Compared with the models of Hu (2006), Li (2000), and Lai et al. (2007), our mixture copula model is not restricted to static weights but extends to a time-varying version that flexibly captures the dynamics of the dependence structures. Although Rodriguez (2007) modeled weights that change over time according to a Markov switching process, only two states, a high-variance and a low-variance state, are allowed, and the remaining parameters of the mixture copula are assumed constant.9

6 According to the definition of the Basle Committee, operational risk is defined as losses due to the failure of internal processes, people, and systems, or from external events.

7 Hu (2006) suggests some implications for risk management. First, the use of multivariate normality and correlation coefficients to measure dependence may significantly underestimate the downside risk, whereas the risk computed using a mixed copula is much more realistic. Second, in risk measurement, the valuation model should include both the structure and the degree of dependence.

8 Hu (2006) defined these weights as shape parameters that reflect dependence structures.

9 A further restriction is that two-step estimation cannot be used, because the copula and marginal parameters change simultaneously according to the Markov switching process, which burdens the estimation process.
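Because a convex combination of copulas is itself a copula (footnote 3), the mixture density is the same weighted combination of the component densities. The sketch below writes down an illustrative log-likelihood for the weights in Eq. (7) and maximizes it for a static pair of weights; the paper's weights are time-varying, so this is only the static special case, and the optimizer setup and starting values are assumptions. The component density functions are those defined in the Section 2.2 sketch.

```python
import numpy as np
from scipy.optimize import minimize

def mixture_density(u, v, rho, delta, theta, w_clay, w_gum):
    """Density of the Eq. (7) mixture: weighted Clayton + Gumbel + Gaussian densities."""
    return (w_clay * clayton_copula_density(u, v, theta)
            + w_gum * gumbel_copula_density(u, v, delta)
            + (1 - w_clay - w_gum) * gaussian_copula_density(u, v, rho))

def fit_static_weights(u, v, rho, delta, theta):
    """Static-weight MLE of (w_clay, w_gum) given the fitted dependence paths (illustrative)."""
    def neg_loglik(w):
        return -np.sum(np.log(mixture_density(u, v, rho, delta, theta, w[0], w[1])))
    cons = ({"type": "ineq", "fun": lambda w: 1.0 - w[0] - w[1]},)  # w_clay + w_gum <= 1
    res = minimize(neg_loglik, x0=[1/3, 1/3], bounds=[(0, 1), (0, 1)],
                   constraints=cons, method="SLSQP")
    return res.x
```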

2.4. Parameterizing time-series variation in the conditional copula

A portfolio with time-invariant dependence among its components seems unrealistic. Recently, conditional copulas with time-varying dependence parameters have become prevalent in the literature (Patton, 2006a, b; Bartram et al., 2007; Jondeau and Rockinger, 2006; Rodriguez, 2007). Following Patton (2006a) and Bartram et al. (2007), we assume that the dependence parameter is determined by previous information, such as its own lagged value and the historical absolute difference between the cumulative probabilities of the portfolio asset returns.10

A conditional dependence parameter can be modeled as an AR(1)-like process, because autoregressive parameters beyond lag one are rarely significantly different from zero (Bartram et al., 2007;11 Samitas et al., 2007). The dependence process of the Gaussian copula is therefore

$$\rho_t = \Lambda\big(\beta\rho_{t-1} + \omega + \gamma\,|u_{t-1} - v_{t-1}|\big) \qquad (8)$$

The conditional dependence, $\rho_t$, depends on its previous value, $\rho_{t-1}$, and on the historical absolute difference, $|u_{t-1} - v_{t-1}|$. This formulation captures both the persistence and the variation in the dependence process. $\Lambda(x) = (1-e^{-x})/(1+e^{-x}) = \tanh(x/2)$ is the modified logistic transformation that keeps $\rho_t$ in $(-1, 1)$ at all times (Patton, 2006a). The time-varying dependence processes for the Gumbel and the Clayton copula are described by Eq. (9) and (10), respectively:

$$\delta_t = \beta_U\,\delta_{t-1} + \omega + \gamma\,|u_{t-1} - v_{t-1}| \qquad (9)$$

$$\theta_t = \beta_L\,\theta_{t-1} + \omega + \gamma\,|u_{t-1} - v_{t-1}| \qquad (10)$$

where $\delta_t \in [1, \infty)$ measures the degree of dependence in the Gumbel copula and has a lower bound of 1, indicating an independent relationship, whereas $\theta_t \in [-1, \infty)$ measures the degree of dependence in the Clayton copula. Boundaries on the parameters are imposed in the estimation software.

10 There are different ways of capturing possible time variation in a conditional copula. This paper assumes that the functional form of the copula remains fixed over the sample, whereas the parameters vary according to an evolution equation, as in Patton (2006a).

11 Bartram et al. (2007) assumed that the time-varying dependence process follows an AR(2) model.
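A sketch of the evolution equations (8)-(10): given estimated parameters (ω, β, γ) and the lagged uniforms, the dependence path is built recursively. The starting value is a placeholder, and the clipping of δ_t and θ_t stands in for the parameter boundaries that the text says are imposed in the estimation software.

```python
import numpy as np

def modified_logistic(x):
    """Lambda(x) = (1 - e^{-x}) / (1 + e^{-x}) = tanh(x/2); keeps rho_t in (-1, 1)."""
    return np.tanh(x / 2.0)

def dependence_path(u, v, omega, beta, gamma, kind="gaussian", start=0.5):
    """Recursions (8)-(10) for the Gaussian (rho_t), Gumbel (delta_t), Clayton (theta_t)."""
    T = len(u)
    path = np.empty(T)
    path[0] = start                                   # placeholder initial value
    for t in range(1, T):
        drive = beta * path[t - 1] + omega + gamma * abs(u[t - 1] - v[t - 1])
        if kind == "gaussian":
            path[t] = modified_logistic(drive)        # Eq. (8)
        elif kind == "gumbel":
            path[t] = max(drive, 1.0)                 # Eq. (9): delta_t >= 1
        else:
            path[t] = max(drive, 1e-6)                # Eq. (10): theta_t kept positive here
    return path
```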

2.5. Data and PVaR estimation with the time-varying copula

The Monte Carlo simulation is widely used to generate draws from stochastic

models. In particular, the copula framework makes it easy to simulate portfolio

returns from a general multivariate distribution (Meneguzzo and Vecchiato, 2004).

Given a chosen copula function and its estimated time-varying parameter in the

previous section, this study can generate multivariate random variables {𝑢𝑢𝑡𝑡 , 𝑣𝑣𝑡𝑡 }. For

each copula function, we generate 200 pairs of {𝑢𝑢𝑡𝑡 , 𝑣𝑣𝑡𝑡 |𝜌𝜌𝑡𝑡 , 𝛿𝛿𝑡𝑡 , 𝜃𝜃𝑡𝑡 } conditional on

dynamic dependence coefficients 𝜌𝜌𝑡𝑡 , 𝛿𝛿𝑡𝑡 , 𝑜𝑜𝑜𝑜 𝜃𝜃𝑡𝑡 . Therefore, at time t, conditional joint

distributions such as 𝑐𝑐(𝑢𝑢𝑡𝑡 , 𝑣𝑣𝑡𝑡 |𝜌𝜌𝑡𝑡 ), 𝑐𝑐(𝑢𝑢𝑡𝑡 , 𝑣𝑣𝑡𝑡 | 𝛿𝛿𝑡𝑡 ) and 𝑐𝑐(𝑢𝑢𝑡𝑡 , 𝑣𝑣𝑡𝑡 | 𝜃𝜃𝑡𝑡 ) can be obtained.

The next step is to convert conditional uniform random variables

{𝑢𝑢𝑡𝑡 , 𝑣𝑣𝑡𝑡 |𝜌𝜌𝑡𝑡 , 𝛿𝛿𝑡𝑡 , 𝜃𝜃𝑡𝑡 }, generated from conditional joint distributions, to portfolio
17
component returns by constructing empirical distributions for each sample day. We

use historical data from the previous 60 and 90 trading days and roll it over. Thus,

given the estimated conditional joint distribution of asset returns, replicated samples

can be drawn for the portfolio components.
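The sketch below illustrates this step for the Gaussian copula: draw correlated uniforms conditional on ρ_t and map them to returns through the empirical quantile function of the previous 60 (or 90) trading days. Conditional sampling for the Gumbel and Clayton copulas follows the same pattern with their own simulators; the 200-draw setting matches the text, while the variable names are placeholders.

```python
import numpy as np
from scipy.stats import norm

def simulate_gaussian_copula(rho_t, n_draws=200, rng=None):
    """Draw n_draws pairs (u, v) from a Gaussian copula with correlation rho_t."""
    rng = rng or np.random.default_rng()
    cov = np.array([[1.0, rho_t], [rho_t, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n_draws)
    return norm.cdf(z[:, 0]), norm.cdf(z[:, 1])

def uniforms_to_returns(u, historical_returns):
    """Invert the empirical CDF built from the rolling 60- or 90-day window."""
    return np.quantile(historical_returns, u)

# Example for one sample day (hypothetical inputs):
# u, v = simulate_gaussian_copula(rho_t=0.9)
# spot_sims = uniforms_to_returns(u, spot_window_60d)
# fut_sims  = uniforms_to_returns(v, fut_window_60d)
```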

To demonstrate the application of the time-varying copula in PVaR estimation, we construct a hedged portfolio composed of the S&P 500 index and its index futures. The sample period covers 1 January 2004 to 29 October 2007, including the outbreak of the U.S. subprime market collapse from August to October 2007. In total, 998 daily observations on the index and index futures are obtained.

The minimum variance hedge ratio (MVHR), which is the ratio of futures

contracts to a specific spot position that minimizes the variance of hedged portfolio

returns, has been broadly used as a futures hedging strategy. Following Johnson (1960)

and Stein (1961), the MVHR is calculated as the ratio of covariance between spot and

futures returns to the variance of futures returns. Early research utilized regression

models to estimate time-invariant MVHRs. More recently, Bivariate GARCH

(BGARCH) models have been adopted to estimate time-varying variances and covariances and subsequently to generate dynamic MVHRs.12 Specifically, the conditional variance-covariance matrix of the residual series $(\varepsilon_{s,t}, \varepsilon_{f,t})$ from (1a) is denoted by

$$Var\big(\varepsilon_{s,t}, \varepsilon_{f,t}\,|\,\Omega_{t-1}\big) = \begin{pmatrix} h_{s,t}^2 & h_{sf,t} \\ h_{sf,t} & h_{f,t}^2 \end{pmatrix}.$$

12 Dynamic MVHRs often outperform those estimated from regression models (Baillie and Myers, 1991; Kroner and Sultan, 1993; Brooks et al., 2002), among many others.

Assume that the optimal weight of this hedged portfolio is its conditional minimum-variance hedge ratio, so that all variables remain in a time-varying form. The optimal conditional minimum-variance hedge ratio, $\{H_t^*\,|\,\rho_t\}$, can then be defined as

$$H_t^* = \frac{\hat h_{sf,t}}{\hat h_{f,t}^2} = \rho_t\,\frac{\hat h_{s,t}}{\hat h_{f,t}} \qquad (11)$$

with

$$\rho_t = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} z_{s,t}\, z_{f,t}\;\varphi_t\big(z_{s,t}, z_{f,t}\,|\,\Omega_{t-1}\big)\; dz_{s,t}\, dz_{f,t},$$

where $h_{s,t}^2$ and $h_{f,t}^2$ are the conditional variances of the index and the index futures, respectively, and $\rho_t$, estimated from the bivariate conditional Gaussian density function described in Section 2.4, is the copula-based conditional correlation between the index and the index futures.13 Therefore, the distributions of portfolio returns, $\{p_t\,|\,H_t^*\}$, conditional on the dynamic hedge ratios, can be obtained. Finally, the $\alpha$ quantile of the conditional portfolio distribution is used as the conditional PVaR estimate. Accordingly, a PVaR conditional on the time-varying dependence between portfolio components is estimated for each sample day.

13 The dynamic hedge ratio in this study is similar to that of Hsu et al. (2008), who found that the hedging effectiveness of a time-varying copula-based model is better than that of conventional static models and even of other dynamic hedging models such as CCC-GARCH and DCC-GARCH.
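Putting the pieces together for one sample day, the sketch below forms the hedged portfolio return as the spot return minus H*_t times the futures return, with H*_t from Eq. (11), and reads off the α quantile of the simulated distribution as the conditional PVaR. The hedged-return definition and the example numbers are assumptions for illustration only.

```python
import numpy as np

def conditional_pvar(spot_sims, fut_sims, rho_t, h_s, h_f, alpha=0.01):
    """Conditional PVaR of the hedged portfolio for one sample day.

    spot_sims, fut_sims : simulated component returns drawn via the copula
    rho_t               : copula-based conditional correlation
    h_s, h_f            : conditional volatilities of spot and futures from Eq. (1b)
    """
    hedge_ratio = rho_t * h_s / h_f               # Eq. (11): H*_t = rho_t * h_s / h_f
    portfolio = spot_sims - hedge_ratio * fut_sims
    return np.quantile(portfolio, alpha)          # alpha quantile = conditional PVaR

# Hypothetical usage:
# pvar_99 = conditional_pvar(spot_sims, fut_sims, rho_t=0.9, h_s=1.1, h_f=1.2, alpha=0.01)
```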

3. Empirical results of PVaR estimation

3.1. Estimation results of the time-varying copula models

Table 1 reports the summary statistics for the S&P 500 index returns and its

futures returns. Table 2 shows the estimated parameters of the marginal distributions

characterized by a GJR-GARCH(1,1)-AR(1)-t model given by Eq. (1). As Table 2

indicates, all parameters are significant at least at the 5 percent level. Furthermore, the residual series pass the goodness-of-fit test.14

[Insert Table 1 here]

[Insert Table 2 here]

The Inference Functions for Margins (IFM) method is implemented by estimating the marginal distribution parameters prior to the copula function parameters to enhance estimation efficiency. Given the estimated marginal distributions, the parameters of the time-varying correlation in the Gaussian copula are calibrated and reported in Panel A of Table 3.15 In Eq. (8), the parameter β captures the degree of persistence in the dependence process and γ captures the adjustment in the dependence process. Panels B and C of Table 3 report the estimated parameters of the time-varying asymmetric dependences specified by the Gumbel and the Clayton copula, respectively; Eq. (9) and (10) are their time-varying dependence processes. For each sample day, there are therefore three types of time-varying dependences, specified by the Gaussian, the Gumbel, and the Clayton copula.

14 We test whether the transformed series are Unif(0,1) using the Kolmogorov-Smirnov test (see the appendix in Patton (2006a)).

15 Appendix A describes the parameter estimation of the conditional copula.

[Insert Table 3 here]

Table 4 reports the summary statistics of the weight estimates of the conditional mixture copula. These weights are estimated by MLE according to Eq. (7). Panel A reports the weight estimates over the entire sample period, while Panel B focuses on the period of the U.S. subprime market crash from August to October 2007. The weight estimates for the conditional Clayton copula are generally higher than those for the conditional Gumbel and the conditional Gaussian copula, indicating that the conditional mixture copula allocates more weight to left-tail dependence, reflecting the fact that markets are more likely to crash together than to boom together, especially during the U.S. subprime market crash. Figure 1 depicts the time series of the time-varying weight estimates of the conditional mixture copula.

[Insert Table 4 here]

[Insert Figure 1 here]

3.2. Statistical results of conditional PVaR estimates

Implementing the Monte Carlo simulation generates replicated samples for the

index and futures returns given the estimated parameters of the time-varying

dependence models in each sample day. As the optimal weight of a hedge portfolio is

assumed to be its conditional minimum-variance hedge ratio in Eq. (11), conditional

distributions for portfolio returns are generated for each sample day. Furthermore, the 1% and 5% quantiles of the conditional portfolio distributions are used to estimate the conditional PVaR. Table 5 summarizes the statistics of the conditional PVaR across the sample period. D60 and D90 are the rolling horizons, indicating that the empirical distributions are constructed using historical data from the previous 60 and 90 trading days, respectively.16 Accordingly, 998 empirical distributions across the sample period can be obtained. As Table 5 shows, regardless of the significance level

or the rolling horizon, the conditional PVaR estimates specified by the Clayton copula

are the most severe for each statistic, whereas the Gaussian copula produces the most

tolerant estimates.

A violation occurs when the actual portfolio return is worse than the PVaR estimate. The number of violations measures the frequency of violation, and the mean violation refers to the loss in excess of the PVaR estimate. The frequency of violations and the magnitude of exceedances are both largest for the Gaussian copula. Because the portfolio assets exhibit asymmetric dependence, especially lower-tail dependence, the stricter PVaR estimate of the Clayton copula should be used. Table 5 shows that the conditional PVaR estimates from the Clayton copula exhibit a lower frequency of violations and smaller exceedances than the other copulas. In particular, the number of violations for the mixture copula is the lowest, and its statistics are more modest than the others.

16 The empirical distributions are constructed from the previous 60 and 90 trading days and are used to convert uniform variables from the marginal distributions into simulated index and futures returns.

[Insert Table 5 here]

For comparison, Figure 2 shows the time series plots of the conditional PVaR

estimates with different significance levels (99% and 95%) and different rolling

horizons (60 and 90 trading days). In general, the conditional PVaR estimates from

the Clayton copula are more severe than others, as is quite evident during the U.S.

subprime market crash period. Note that the time series for the PVaR estimates of the

mixture copula are also less volatile.

[Insert Figure 2 here]


4. Do Copulas Improve the Estimation of PVaR? The Backtests

4.1. Backtests

This section applies three backtest methods to examine the accuracy of the PVaR estimates and their sensitivity to the various alternative copula models: (1) the unconditional and conditional coverage tests, (2) the dynamic quantile test (Engle and Manganelli, 2004), and (3) the distribution and tail forecast test (Berkowitz, 2001). The unconditional and conditional coverage tests are statistical backtests based on the frequency of exceedances. The dynamic quantile test is a statistical backtest based on the whole distribution of exceedances, while the distribution and tail forecast test is based on the whole distribution of portfolio returns. The latter two tests use more information than the frequency-based tests and are generally more reliable. Moreover, we compare the performance of our conditional PVaR model with that of the conventional GARCH model, in which a constant correlation coefficient is assumed.

4.1.1 Unconditional and conditional coverage tests

The traditional approach to validating such interval forecasts is to compare the targeted violation rate ($\alpha$) with the observed violation rate. The Kupiec (1995) LR test assesses the equality between the proportion of VaR violations and the expected $\alpha$ level (referred to as "unconditional coverage" in Christoffersen (1998)). However, with the small samples involved, unconditional coverage tests have low power against the alternative hypotheses (Kupiec, 1995; Christoffersen, 1998; Berkowitz, 2001). To backtest the VaR results, we use the tests developed by Christoffersen (1998), which enable us to test the coverage and independence hypotheses at the same time. More precisely, this study uses the conditional coverage test to determine not only whether the PVaR is violated $\alpha$ percent of the time, but also whether the violations are independent and identically distributed (i.i.d.) over time.

Using the same notation as Christoffersen (1998), we define the indicator sequence of VaR violations as $\{I_t\}$, where $I_t$ is a dummy variable equal to 1 if a VaR violation occurs at time $t$ and 0 otherwise. If $\pi_{i,j}$ is the transition probability for two successive $I_t$ dummy variables, i.e., $\pi_{i,j} = P(I_t = j\,|\,I_{t-1} = i)$, then the likelihood function for the sequence of $I_t$ is

$$L_1 = \pi_{0,0}^{\,n_{0,0}}\,\pi_{0,1}^{\,n_{0,1}}\,\pi_{1,0}^{\,n_{1,0}}\,\pi_{1,1}^{\,n_{1,1}}$$

where $n_{i,j}$ is the number of observations with value $i$ followed by value $j$. The maximized likelihood function is then

$$\hat L_1 = \left(\frac{n_{0,0}}{n_{0,0}+n_{0,1}}\right)^{n_{0,0}}\left(\frac{n_{0,1}}{n_{0,0}+n_{0,1}}\right)^{n_{0,1}}\left(\frac{n_{1,0}}{n_{1,0}+n_{1,1}}\right)^{n_{1,0}}\left(\frac{n_{1,1}}{n_{1,0}+n_{1,1}}\right)^{n_{1,1}}$$

If the $\{I_t\}$ sequence (the sequence of VaR violations) is (first-order) independent, then $\pi_{0,0} = 1-\pi$, $\pi_{0,1} = \pi$, $\pi_{1,0} = 1-\pi$, and $\pi_{1,1} = \pi$. This gives the likelihood under the null of first-order independence:

$$L_2 = (1-\pi)^{\,n_{0,0}+n_{1,0}}\,\pi^{\,n_{0,1}+n_{1,1}}$$

which is then estimated as

$$\hat L_2 = \left(1 - \frac{n_{0,1}+n_{1,1}}{n_{0,0}+n_{1,0}+n_{0,1}+n_{1,1}}\right)^{n_{0,0}+n_{1,0}}\left(\frac{n_{0,1}+n_{1,1}}{n_{0,0}+n_{1,0}+n_{0,1}+n_{1,1}}\right)^{n_{0,1}+n_{1,1}}$$

The LR test statistic for (first-order) independence of the VaR violations is

$$LR_{ind} = -2\left(\ln\hat L_2 - \ln\hat L_1\right) \sim \chi^2(1)$$

If we condition on the first observation in the test for unconditional coverage, the LR statistic for conditional coverage (i.e., the joint hypothesis of unconditional coverage and independence) is

$$LR_{cc} = LR_{uc} + LR_{ind} \sim \chi^2(2)$$

where $LR_{uc}$ is the LR statistic for unconditional coverage (computed above for the Kupiec test). See Christoffersen (1998) for additional details.
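As an illustration of these formulas, the sketch below computes LR_uc, LR_ind, and LR_cc from a 0/1 violation series. It follows the standard Kupiec (1995) and Christoffersen (1998) statistics; the guarding of empty transition counts is a practical assumption.

```python
import numpy as np
from scipy.stats import chi2

def _ll(p, n0, n1):
    """Bernoulli log-likelihood with n0 zeros and n1 ones, safe for empty counts."""
    out = 0.0
    if n0 > 0:
        out += n0 * np.log(1.0 - p)
    if n1 > 0:
        out += n1 * np.log(p)
    return out

def coverage_tests(hits, alpha):
    """Kupiec LR_uc, Christoffersen LR_ind and LR_cc for a 0/1 violation sequence."""
    hits = np.asarray(hits, dtype=int)
    T, x = len(hits), int(hits.sum())
    lr_uc = -2.0 * (_ll(alpha, T - x, x) - _ll(x / T, T - x, x))
    n = np.zeros((2, 2))                               # transition counts n_ij
    for prev, curr in zip(hits[:-1], hits[1:]):
        n[prev, curr] += 1
    pi01 = n[0, 1] / max(n[0, 0] + n[0, 1], 1)
    pi11 = n[1, 1] / max(n[1, 0] + n[1, 1], 1)
    pi = (n[0, 1] + n[1, 1]) / n.sum()
    log_l1 = _ll(pi01, n[0, 0], n[0, 1]) + _ll(pi11, n[1, 0], n[1, 1])
    log_l2 = _ll(pi, n[0, 0] + n[1, 0], n[0, 1] + n[1, 1])
    lr_ind = -2.0 * (log_l2 - log_l1)
    return {"LR_uc": (lr_uc, chi2.sf(lr_uc, 1)),
            "LR_ind": (lr_ind, chi2.sf(lr_ind, 1)),
            "LR_cc": (lr_uc + lr_ind, chi2.sf(lr_uc + lr_ind, 2))}
```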

4.1.2 Dynamic quantile test

Christoffersen (1998) assumes that $\{I_t\}$ is a binary first-order Markov chain, implying that only information at $t-1$ is used. By incorporating all available previous information, Engle and Manganelli (2004) provide the dynamic quantile (DQ) test to correct for this insufficiency in the conditional coverage test of Christoffersen (1998). The test model is built by regressing $\{I_t\}$ on its past values and on the current VaR estimate, that is,

$$I_t = \alpha_0 + \sum_{i=1}^{p}\beta_i I_{t-i} + \beta_{p+1}\, VaR_t + u_t$$

where, under the null hypothesis, $\alpha_0 = \alpha$, the target violation rate, and $\beta_i = 0$, $i = 1, \ldots, p+1$. In vector notation, we have

$$\mathbf{H} - \alpha\mathbf{1} = \mathbf{X}\boldsymbol{\beta} + \mathbf{u}$$

where $\mathbf{1}$ is a vector of ones. A good model should produce a sequence of unbiased and uncorrelated $I_t$ variables, and the regressors should have no explanatory power, that is, $H_0: \boldsymbol{\beta} = \mathbf{0}$. This regression model is estimated by ordinary least squares (OLS) and yields the test statistic

$$DQ = \frac{\hat{\boldsymbol{\beta}}_{LS}'\,\mathbf{X}'\mathbf{X}\,\hat{\boldsymbol{\beta}}_{LS}}{\alpha(1-\alpha)} \sim \chi^2_{p+2}$$

where $\hat{\boldsymbol{\beta}}_{LS}$ is the OLS estimate of $\boldsymbol{\beta}$. The DQ test statistic has an asymptotic chi-square distribution with $p+2$ degrees of freedom. For the empirical application in this study, the regressor matrix $\mathbf{X}$ contains a constant, four lagged hits $I_{t-1}, \ldots, I_{t-4}$, and the contemporaneous PVaR estimate.
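A compact version of the DQ regression is sketched below: the de-meaned hit sequence is regressed on a constant, four lagged hits, and the contemporaneous PVaR, and the chi-square statistic with p + 2 degrees of freedom is formed. This is an illustrative OLS implementation, not the authors' code.

```python
import numpy as np
from scipy.stats import chi2

def dq_test(hits, pvar, alpha, p=4):
    """Engle-Manganelli dynamic quantile test with p lagged hits plus the current PVaR."""
    hits = np.asarray(hits, dtype=float)
    pvar = np.asarray(pvar, dtype=float)
    y = hits[p:] - alpha                                             # H - alpha*1
    X = np.column_stack([np.ones(len(y))]
                        + [hits[p - i:-i] for i in range(1, p + 1)]  # I_{t-1}, ..., I_{t-p}
                        + [pvar[p:]])                                # contemporaneous PVaR
    beta = np.linalg.lstsq(X, y, rcond=None)[0]                      # OLS estimate
    dq = beta @ X.T @ X @ beta / (alpha * (1 - alpha))
    return dq, chi2.sf(dq, p + 2)                                    # chi-square, p+2 d.o.f.
```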

4.1.3 Distribution and tail forecast test

Berkowitz (2001) criticized the conditional coverage test for requiring a large sample size and for treating violations as an iid Bernoulli sequence.17 He claimed that density evaluation methods make use of the full distribution of outcomes and thus extract a greater amount of information from the available data. Rather than focusing on rare violations, it is possible to transform all the realizations into a series of independent and identically distributed random variables. This idea is implemented by applying the probability integral transformation,

$$x_t = \int_{-\infty}^{y_t}\hat f(u)\,du = \hat F(y_t),$$

where $y_t$ is the ex post portfolio return realization and $\hat f(\cdot)$ is the ex ante forecasted portfolio density. If the underlying PVaR model is valid, then its $\{x_t\}$ series should be iid and distributed uniformly on (0,1).

17 The key problem, as argued by Berkowitz (2001), is that Bernoulli variables take on only two values (0 and 1) and take on the value 1 very rarely.

Berkowitz (2001) proposed a likelihood ratio test to evaluate the density forecast.18 For this test, the $\{x_t\}$ series is transformed using the inverse of the standard normal cumulative distribution function, that is, $z_t = \Phi^{-1}(x_t)$. If the PVaR model is correctly specified, the $\{z_t\}$ series should be iid N(0,1). This hypothesis can be tested against an alternative specification, such as

$$z_t - \mu = \rho(z_{t-1} - \mu) + \varepsilon_t,$$

where the parameters $\mu$ and $\rho$ are, respectively, the conditional mean and the AR(1) coefficient of the $\{z_t\}$ series, and $\varepsilon_t$ is a normal random variable with mean zero and variance $\sigma^2$. Under the null hypothesis, $\mu = 0$, $\rho = 0$, and $var(\varepsilon_t) = 1$. The LR statistic is

$$LR_{dist} = 2\left[L(\mu, \rho, \sigma^2) - L(0,0,1)\right]$$

The test statistic is asymptotically distributed $\chi^2(3)$.

18 Berkowitz (2001) claimed that his likelihood ratio test is more powerful than that of Christoffersen (1998) because it evaluates the entire density forecast rather than a scalar or interval.

Berkowitz (2001) also allows the user to evaluate tail forecasts, since risk managers are interested primarily in an accurate description of large losses, that is, tail behavior. By considering a tail that is defined by the user, any observations that do not fall in the tail are intentionally truncated. Letting the desired cutoff point be $PVaR = \Phi^{-1}(\alpha)$, we choose $PVaR = -1.64$ and $PVaR = -2.33$ to focus on the 5% and 1% lower tails, respectively. The log-likelihood function for joint estimation is then

$$L(\mu,\sigma\,|\,\mathbf{Z}) = \sum_{z_t < PVaR} \ln\!\left[\frac{1}{\sigma}\,\phi\!\left(\frac{z_t-\mu}{\sigma}\right)\right] + \sum_{z_t \ge PVaR} \ln\!\left[1 - \Phi\!\left(\frac{PVaR-\mu}{\sigma}\right)\right]$$

A test statistic can therefore be based on the likelihood of a censored normal, which contains only the observations falling in the tail. This test naturally provides a middle ground between the full-distribution approach and the conditional coverage approach. The relevant test statistic is

$$LR_{tail} = 2\left[L(\mu, \sigma^2) - L(0,1)\right],$$

which is asymptotically distributed $\chi^2(2)$.
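The censored-normal tail test can be coded directly from the log-likelihood above. The sketch below estimates (μ, σ) by maximizing the censored likelihood for a chosen cutoff and forms LR_tail against the N(0,1) null; the optimizer and starting values are assumptions.

```python
import numpy as np
from scipy.stats import norm, chi2
from scipy.optimize import minimize

def berkowitz_tail_test(z, cutoff=-1.64):
    """LR_tail from the censored-normal likelihood of Berkowitz (2001).

    z      : PIT series transformed by the inverse normal CDF, z_t = Phi^{-1}(x_t)
    cutoff : Phi^{-1}(alpha), e.g. -1.64 for the 5% tail or -2.33 for the 1% tail
    """
    z = np.asarray(z, dtype=float)
    tail, n_out = z[z < cutoff], np.sum(z >= cutoff)

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                        # keep sigma positive
        ll_tail = np.sum(norm.logpdf(tail, loc=mu, scale=sigma))
        ll_out = n_out * np.log(1.0 - norm.cdf((cutoff - mu) / sigma))
        return -(ll_tail + ll_out)

    res = minimize(neg_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
    lr_tail = 2.0 * (neg_loglik([0.0, 0.0]) - res.fun)   # 2[L(mu, sigma) - L(0, 1)]
    return lr_tail, chi2.sf(lr_tail, 2)
```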

4.2. Backtest results

4.2.1 The unconditional and conditional coverage test results

The conventional approach to validating estimated models is to compare the

targeted violation rate to the observed violation rate. The first column in Table 6

reports the actual violation rates. The actual violation rates of our proposed models are, on average, less than their targeted violation rates of 5% and 1%. Irrespective of the rolling horizon and the significance level, the violation rate of the mixture copula is the lowest, whereas the violation rates of the conventional GARCH model approach 5% at the 95% significance level and 2% at the 99% significance level. Column two reports the unconditional likelihood ratio (LR) statistics for the null hypothesis at the 5% and 1% violation rates, respectively. The p-value, shown in parentheses, indicates whether the observed violation rate differs significantly from the targeted violation rate. We find that the conventional GARCH model is invalid under the backtesting criteria at the 1% desired violation rate. In column three, the independence test fails to reject the null hypothesis, indicating that there is no clustering effect. Column four reports the results of the conditional coverage test. It is evident that the copula-based PVaR model outperforms the conventional GARCH model under the backtesting criteria at the 1% desired violation rate, regardless of the rolling horizon. Although rejections arise at the 5% violation rate for the proposed models, this does not imply that they have failed, because the frequency of violations is less than the desired five percent.19 Under the backtesting criteria at the 5% desired violation rate, the conventional GARCH model is acceptable, but it becomes invalid if we choose an extreme violation rate. Generally speaking, the copula-based PVaR model, compared with the conventional GARCH model, exhibits superior performance under extreme significance levels (say, 99%).

19 This finding is consistent with that of Miller and Liu (2006).

[Insert Table 6 here]

Figure 3 displays the time series of actual portfolio returns and corresponding

conditional PVaR estimates. The conditional PVaR estimates appear to track the actual portfolio returns well. In addition, violations rarely occur during the U.S. subprime market crash from August to October 2007. In Figure 4, we demonstrate the degree of underestimation or overestimation of the conventional GARCH model by comparing its estimates with those from the various conditional copula models. A positive difference means that the conventional GARCH model underestimates the PVaR, while a negative difference indicates that it overestimates the PVaR. As can be seen, the conventional GARCH estimates are generally too low, especially those at the 99% significance level.

[Insert Figure 3 here]

[Insert Figure 4 here]

4.2.2 Dynamic quantile test results

Table 7 presents the results of evaluating the PVaR estimates using the dynamic quantile test. At the 1% quantile, the results are largely consistent with those of the conditional coverage test. At the 5% quantile, however, the time-varying copula models exhibit superior performance relative to the benchmark model, with the exception of the Gaussian copula. Most test statistics generated from the time-varying copula models do not reject the null hypothesis. In particular, the time-varying Clayton copula delivers the best performance at the 1% quantile, having the smallest test statistics.

[Insert Table 7 here]

4.2.3 Distribution and tail forecast test results

Table 8 reports the results of the distribution and tail forecast tests. If we focus on the interior of the distribution, most models predict the portfolio distribution well, except for the Clayton copula model. If we turn to the 5% and 1% lower tails, all copula models perform well relative to the benchmark model. This finding indicates that the time-varying copula models have a superior ability to specify the tail of the distribution. Additionally, the success of the time-varying copula models in PVaR estimation can be attributed to their superiority in dealing with changes in the assets' tail dependence.

[Insert Table 8 here]

4.3. Implication for model risk

A popular issue in risk management practice concerns the coverage rate that

should be required by the minimum capital risk requirements (MCRRs). To ensure

adequate coverage, the Basle Committee chose to focus on the first percentile (1%) of

return distributions. In other words, risk managers are required to hold sufficient

capital to absorb all but 1 percent of expected losses, rather than the 5 percent level

used by RiskMetrics in the J.P. Morgan (1996) approach.20

Brooks and Persand (BP) (2002) sought to determine whether a more accurate VaR estimate could be derived by using the fifth percentile instead of the first percentile of the return distribution (together with an appropriately enlarged scaling factor).21 They found that when the actual data are fat-tailed, using critical values from a normal distribution in conjunction with a parametric approach can lead to a substantially less accurate VaR estimate than using a nonparametric approach. In particular, the model risk measured by standard errors can be more severe at the first percentile of the normal return distribution.22 Additionally, they suggested that the closer the quantiles are to the mean of the distribution (say, the normal), the more accurately they can be estimated. Therefore, if a regulator has the desirable objective of ensuring that virtually all probable losses can be covered, the use of a smaller nominal coverage probability (say, 95% instead of 99%) combined with a larger multiplier is preferable.

20 The actual coverage rate is even greater than 99% due to a scaling factor of at least 3.

21 An alternative viewpoint is that, given a limited amount of data, the farther out in the tails the cutoff is set, the fewer observations are available to estimate the required parameters and the larger the standard errors around those estimates.

22 Kupiec (1995) showed that for a normal distribution, the standard error of the first percentile is about 50 percent larger than that of the fifth percentile, and it is doubled if the distribution of market returns is fat-tailed.

According to the conditional coverage test results presented in Table 6, the copula-based PVaR model performs better than the traditional model only under the backtesting criteria at the 1% (not the 5%) desired violation rate; however, in the dynamic quantile test (Table 7) and the tail forecast test (Table 8), our proposed copula models outperform the benchmark model at all coverage rates. In addition, both the dynamic quantile test and the distribution forecast test are more powerful and robust than the conditional coverage test. Engle and Manganelli (2004) argue that the greatest difficulty in calculating PVaR lies in specifying the distribution of the portfolio. Existing work suggests that parameter uncertainty (estimation risk) is of second-order importance compared with other sources of inaccurate forecasts, such as model misspecification (Chatfield, 1993; Diebold et al., 1998; Berkowitz, 2001). Our findings coincide with these arguments: model risk plays an important role, and this risk may be attributed mainly to the model's inability to specify tail dependence and to take the nature of time variation into account. These results also support BP's viewpoint that a smaller nominal coverage probability (95% instead of 99%) is suggested for conventional risk management models to avoid the model risk arising from an inappropriate use of correlation measures.

4.4. Bootstrap: PVaR interval estimation

An interval estimate is often more useful and informative than just a point

estimate (Efron and Tibshirani, 1993). Jorion (1996) suggested that VaR should be

reported with confidence intervals. Christoffersen and Goncalves (2005) claim that

accurate confidence intervals reported along with the VaR point estimate will facilitate

the use of VaR in active portfolio management. Most importantly, the intervals widen

substantially as one moves to more extreme quantiles because fewer observations are

involved. Christoffersen and Goncalves (2005) illustrate that VaR confidence intervals arise from the estimation error in the GARCH model. To account for the possibility of estimation error in our copula models, the 90% confidence intervals around the point estimates of the time-varying PVaR are computed by bootstrapping the data.23 Appendix B reports the bootstrap algorithm for the copula-based risk measure.

23 Because the analytic standard error of a PVaR estimator is too difficult to calculate, the bootstrap technique is used instead. The bootstrap method estimates the sampling distribution of a test statistic on the basis of the observed data. Its advantage is that it proceeds as if the sample were the population for the purpose of estimating the sampling distribution. Unlike the Monte Carlo method, it does not require knowledge of the underlying data-generating process.

Table 9 shows the bootstrap results. The lower limit is the average lower bound of the interval across sample days, while the upper limit is the average upper bound of the interval across sample days. The upper limits of the PVaR estimates from the GARCH model are generally higher than those from the time-varying copula models, implying that the PVaR estimates from the GARCH model tend to be underestimated even when the estimation error problem is considered.

The uncertainty of the PVaR point estimate arising from estimation risk needs to be identified. Following Christoffersen and Goncalves (2005), we apply a bootstrap method to calculate confidence intervals around the PVaR point estimate. Besides demonstrating the estimation error of the GARCH model, we also quantify the estimation risk (as measured by the length of the confidence interval) of the various copula-based models and find that the copula-based models outperform the conventional GARCH model. In Table 9, we also find that the estimation risk at the first-percentile coverage rate is higher (around double) than that at the fifth-percentile coverage rate, for both the various copula-based models and the conventional GARCH model. This result is consistent with the findings of BP and Kupiec (1995).24 Figure 5 presents the plot of the 90% bootstrapped confidence intervals for the various PVaR models.

24 The results of BP and Kupiec (1995) are for the normal distribution only.

We also highlight that the upper limit of the PVaR interval estimate may fail the backtest even when its point estimate passes. Taking estimation error into account, we conduct the backtests again for the upper limit of the PVaR interval estimates and report the results in Table 10. Apparently, the conventional GARCH model has the greatest violation rate. Under the backtesting criteria at the 5% desired violation rate, the conventional GARCH model is clearly invalid based on the unconditional, independence, and conditional coverage tests. Our proposed copula models, however, remain acceptable even with the estimation error problem. If we turn to the backtesting criteria at the 1% desired violation rate, the Gaussian copula, the Gumbel copula, and the GARCH models fail the backtest, whereas the Clayton and the mixture copula models remain valid, even with estimation error.25

[Insert Table 9 here]

[Insert Table 10 here]

[Insert Figure 5 here]

5. Conclusion

The conventional portfolio value-at-risk (PVaR) estimation method commonly

used in current practice exhibits considerable biases due to model specification errors.

25 For the other two backtests (the dynamic quantile test and the distribution and tail forecast test), the results using the upper limit of the PVaR interval remain the same.

This study uses PVaR estimation to illustrate that the model risk is attributable to the

inappropriate use of correlation coefficient and normal joint distribution. We also

attempt to improve the PVaR estimation and thus reduce model risk by relaxing the

conventional assumption of normal joint distribution and developing an empirical

model of time-varying PVaR conditional on time-varying dependencies between

portfolio components. To demonstrate the dynamic hedging version of the

time-varying PVaR comparisons, both single-parameter conditional copulas and

copula mixture models are applied to form a flexible joint distribution.

PVaR estimates for the optimal hedged portfolios are computed from various

copula models, and backtesting diagnostics indicate that the copula-based PVaR

outperforms the conventional PVaR estimator at the 99% and 95% significance levels.

Our results imply that model risk may be more severe under the nominal coverage

probability of 99 percent, as compared with that of 95 percent. In other words, the

PVaR point estimate with 99 percent coverage rate is quite uncertain due to estimation

risk. The copula-based model is acceptable even with estimation risk, whereas the

conventional GARCH model is absolutely invalid.

Our findings have significant implications for regulators. First, the benefit of

applying the copula model to PVaR estimation can be identified even after

considering model risk. Second, to reduce the model risk, PVaR estimation using a

smaller nominal coverage rate (say, 95% instead of 99%) is preferable.

References
Ané, T. and C. Kharoubi, 2003. Dependence structure and risk measure. Journal of
Business 76, 411-438.

Bae, K. H., Karolyi, G. A., Stulz, R. M., 2003. A new approach to measuring financial
contagion. Review of Financial Studies 16 (3), 717-763.

Baillie, R., Myers, R., 1991. Bivariate GARCH estimation of the optimal commodity
futures Hedge. Journal of Applied Econometrics 6, 109-124.

Bartram, S. M., Taylor, S. J., Wang, Y. H., 2007. The Euro and European financial
market dependence. Journal of Banking and Finance 31, 1461-1481.

Berkowitz, J., 2001. Testing density forecasts with application to risk management.
Journal of Business and Economic Statistics 19, 465-474.

Brooks, C., Henry, O., Persand, G., 2002. The effect of asymmetries on optimal hedge
ratios. Journal of Business 75, 333-352.

Brooks, C., Persand, G., 2002. Model choice and value-at-risk performance. Financial
Analysts Journal, September/October, 87-97.

Chatfield, C., 1993. Calculating interval forecasts. Journal of Business and Economic
Statistics 11, 121-135.

Cherubini, U., Luciano, E., 2001. Value-at-risk trade-off and capital allocation with
copulas. Economic Notes 30(2).

Cherubini, U., Luciano, E., Vecchiato, W., 2004. Copula Methods in Finance, John Wiley
& Sons, Ltd.

Christoffersen, P. F., 1998. Evaluating interval forecasts. International Economic


Review 39(4), 841-862.

Christoffersen, P., Goncalves, S., 2005. Estimation risk in financial risk management.
Journal of Risk 7(3), 1-28.

Diebold, F.X., Gunther, T.A., Tay, A.S., 1998. Evaluating density forecasts.
International Economic Review 39(4), 863-883.

Efron, B., Tibshirani, R.J., 1993. An introduction to the bootstrap. Chapman & Hall,
New York.

Embrechts, P., A. J. McNeil, and D. Straumann, 1999. Correlation: Pitfalls and
alternatives. Risk 12, 69-71.

Enders, W., 2004. Applied Econometric Time Series, 2nd ed., Wiley.

Engle, R.F., Manganelli, S., 2004. CAViaR: Conditional autoregressive value at risk by
regression quantiles. Journal of Business & Economic Statistics 22 (4), 367-381.

Engle, R.F., Ng, V.K., 1993. Measuring and testing the impact of news on volatility.
Journal of Finance 48, 1749-1778.

Glosten, L. R., R. Jagannathan, and D. Runkle, 1993. Relationship between the
expected value and the volatility of the nominal excess return on stocks. Journal of
Finance 48, 1779-1801.

Hsu, C. C., Wang, Y. H., Tseng, C. P., 2008. Dynamic hedging with futures: A
copula-based GARCH model. Journal of Futures Markets 28, 1095-1116.

Hu, L., 2006. Dependence patterns across financial markets: A mixed copula approach.
Applied Financial Economics 16, 717-729.

J.P. Morgan, 1996. Riskmetrics-Technical Document. 4th ed. New York: J.P. Morgan
and Reuters. (Available at www.riskmetrics.com/rmcovv.html)

Johnson, L., 1960. The theory of hedging and speculation in commodity futures.
Review of Economic Studies 27, 139-151.

Jondeau, E., Rockinger, M., 2006. The Copula-GARCH model of conditional
dependencies: An international stock market application. Journal of International
Money and Finance 25(5), 827-853.

Joe, H., 1997. Multivariate models and dependence concepts, Chapman & Hall,
London.

Jorion, P., 1996. Risk²: Measuring the risk in value at risk. Financial Analysts Journal,
November/December, 47-56.

Kroner, K. F., Sultan, J., 1993. Time varying distribution and dynamic hedging with
foreign currency futures. Journal of Financial and Quantitative Analysis 28, 535-551.

Kupiec, P., 1995. Techniques for verifying the accuracy of risk measurement models.
Journal of Derivatives Winter, 73-84.

Kupiec, P. H. 1999. Risk capital and VAR. Journal of Derivatives 7, 41-52.

Lai, Y., Chen, C. W. S., Gerlach, R., 2007. Optimal dynamic hedging using
copula-threshold-GARCH models. Working paper, Feng Chia University.

Li, D. X., 2000. On default correlation: A copula function approach. The Journal of
Fixed Income 9(4), 43-54.

Longin, F., Solnik, B., 2001. Extreme correlation of international equity markets.
Journal of Finance 56 (2), 649-676.

Meneguzzo, D., Vecchiato, W., 2004. Copula sensitivity in collateralized debt
obligations and basket default swaps. Journal of Futures Markets 24(1), 37-70.

Miller, D. J., Liu, W. H., 2006. Improved estimation of portfolio value-at-risk under
copula models with mixed marginals. Journal of Futures Markets 26(10), 997-1018.

Nelson, D.B., 1991. Conditional heteroskedasticity in asset returns: A new approach.
Econometrica 59, 347-370.

Nelsen, R. B., 1999. An introduction to copulas, Springer, New York.

Patton, A. J., 2006a. Modelling asymmetric exchange rate dependence. International
Economic Review 47(2), 527-556.

Patton, A. J., 2006b. Estimation of multivariate models for time series of possibly
different lengths. Journal of Applied Econometrics 21, 147-173.

Poon, S. H., Rockinger, M., Tawn, J., 2004. Extreme value dependence in financial
markets: Diagnostics, models, and financial implications. The Review of Financial
Studies 17(2), 581-610.

Rich, D., 2003. Second generation VAR and risk-adjusted return on capital. Journal of
Derivatives 10, 51-61.

Rodriguez, J. C., 2007. Measuring financial contagion: A copula approach. Journal of
Empirical Finance 14, 401-423.

Rosenberg, J. V., and T. Schuermann, 2006. A general approach to integrated risk
management with skewed, fat-tailed risks. Journal of Financial Economics 79,
569-614.

Samitas, A., Kenourgios, D., Paltalidis, N., 2007. Financial crises and stock market
dependence. Working paper.

Smith, R. L., 2000. Measuring risk with extreme value theory. In P. Embrechts (ed.),
Extremes and integrated risk management, London: Risk Books.

Stein, J. L., 1961. The simultaneous determination of spot and futures prices. American
Economic Review 51, 1012-1025.

Appendix A: Parameter Estimation of Conditional Copula

At time t, the log-likelihood function can be derived by taking the logarithm of

Eq. (3):

$$\log \varphi_t = \log c_t + \log f_{s,t} + \log f_{f,t} \qquad (A1)$$

Let the parameters in $f_{s,t}$ and $f_{f,t}$ be denoted as $\theta_s$ and $\theta_f$, respectively, and the other parameters in $c_t$ be denoted as $\theta_c$. These parameters can be estimated by

maximizing the following log-likelihood function:

$$L_{s,f}(\theta) = L_s(\theta_s) + L_f(\theta_f) + L_c(\theta_c) \qquad (A2)$$

where $\theta = (\theta_s, \theta_f, \theta_c)$ and $L_k$ represents the sum of the log-likelihood function values across observations of variable $k$.

Since the dimension of the estimation problem may be quite large, it is quite difficult, in practice, to achieve simultaneous maximization of $L_{s,f}(\theta)$ for all

parameters. To effectively solve this problem, we follow the two-stage estimation

procedure adopted by Patton (2006a, b) and Bartram et al. (2007).

In the first stage, the parameters of the marginal distribution are estimated from

the univariate time series by

$$\hat{\theta}_s \equiv \arg\max \sum_{t=1}^{T} \log f_{s,t}\!\left(z_{s,t} \mid \Omega_{t-1}; \theta_s\right),$$
$$\hat{\theta}_f \equiv \arg\max \sum_{t=1}^{T} \log f_{f,t}\!\left(z_{f,t} \mid \Omega_{t-1}; \theta_f\right). \qquad (A3)$$

The second stage then estimates the dependence parameters as

$$\hat{\theta}_c \equiv \arg\max \sum_{t=1}^{T} \log c_t\!\left(u_t, v_t \mid \Omega_{t-1}; \hat{\theta}_s, \hat{\theta}_f, \theta_c\right). \qquad (A4)$$

Patton (2006a) shows that the two-stage ML estimates $\hat{\theta} = (\hat{\theta}_s; \hat{\theta}_f; \hat{\theta}_c)$ are asymptotically as efficient as one-stage ML estimates. The variance-covariance matrix of $\hat{\theta}$ must be obtained from numerical derivatives. We have only been able to obtain satisfactory first derivatives, from which the fully efficient two-stage estimator $(n\hat{V})^{-1}$ of the variance-covariance matrix is given by
$$\hat{V} = n^{-1} \sum_{t=1}^{n} \hat{\phi}_t \hat{\phi}_t',$$
where the score vector $\hat{\phi}_t = \partial \log \varphi_t / \partial \theta$ is evaluated at $\theta = \hat{\theta}$.
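For concreteness, the following Python sketch illustrates the two-stage (inference-functions-for-margins) estimation idea described above. It is a simplified illustration rather than the estimation code used in the paper: the marginals are fitted as plain Student-t distributions instead of the GJR-GARCH(1,1)-AR-t specification of Eq. (1), the copula is a constant-parameter Gaussian copula rather than the time-varying copulas of Eqs. (8)-(10), and all function and variable names are hypothetical.

```python
# Two-stage (IFM) estimation sketch: fit the marginals first (Eq. A3),
# then fit the copula parameter on the probability-integral transforms (Eq. A4).
# Simplifying assumptions: Student-t marginals, constant Gaussian copula.
import numpy as np
from scipy import stats, optimize

def fit_marginal_t(returns):
    """Stage 1: ML fit of a Student-t marginal; returns (df, loc, scale)."""
    return stats.t.fit(returns)

def gaussian_copula_negloglik(rho, u, v):
    """Negative log-likelihood of a Gaussian copula with correlation rho."""
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    r2 = rho ** 2
    ll = (-0.5 * np.log(1.0 - r2)
          + (2.0 * rho * x * y - r2 * (x ** 2 + y ** 2)) / (2.0 * (1.0 - r2)))
    return -np.sum(ll)

def two_stage_fit(spot_ret, futures_ret):
    # Stage 1: marginal parameters theta_s and theta_f.
    theta_s = fit_marginal_t(spot_ret)
    theta_f = fit_marginal_t(futures_ret)
    # Probability-integral transforms u_t, v_t feeding the copula.
    u = stats.t.cdf(spot_ret, *theta_s)
    v = stats.t.cdf(futures_ret, *theta_f)
    # Stage 2: copula parameter theta_c given the fitted marginals.
    res = optimize.minimize_scalar(gaussian_copula_negloglik, args=(u, v),
                                   bounds=(-0.99, 0.99), method="bounded")
    return theta_s, theta_f, res.x
```

The same two-stage logic carries over to the time-varying copulas used in the paper; only the copula log-likelihood and its parameter vector change.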

Appendix B: Bootstrap Algorithm for Copula-based Risk Measures

Step 1 Generate a sample of $T \times 200$ bootstrapped portfolio returns $\{P_t^* : t = 1, \ldots, T\}$

by resampling with replacement from the conditional portfolio returns generated from

time-varying copula models.

Step 2 Compute the estimates of PVaR on the bootstrap sample

$$PVaR_t^{*\alpha} = Q_\alpha\!\left(\{P_j^*\}_{j = t \times 200 + 1}^{\,t \times 200 + 200}\right), \qquad t = 0, \ldots, T-1$$

Step 3 Repeat Steps 1 and 2 a large number of times, say 1000 times, and obtain a sequence of bootstrapped copula-based risk measures, $\{PVaR_t^{*\alpha(i)} : i = 1, \ldots, 1000\}$.

Step 4 The $100(1-\alpha)\%$ bootstrap prediction interval for $PVaR_t^{\alpha}$ is given by
$$\left[\, Q_{\alpha/2}\!\left(\{PVaR_t^{*\alpha(i)}\}_{i=1}^{1000}\right),\; Q_{1-\alpha/2}\!\left(\{PVaR_t^{*\alpha(i)}\}_{i=1}^{1000}\right) \right],$$
where $Q_\alpha(\cdot)$ is the $\alpha$ quantile of the empirical distribution of $\{PVaR_t^{*\alpha(i)}\}$.
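A minimal Python sketch of Steps 1-4 is given below. It assumes the simulated conditional portfolio returns are stored day by day in a one-dimensional array with 200 draws per trading day; the array name, the default of 1,000 bootstrap replications, and the fixed seed are illustrative assumptions. Following the steps above, the same α is used both for the PVaR quantile and for the width of the prediction interval.

```python
# Bootstrap prediction intervals for daily copula-based PVaR (Appendix B sketch).
import numpy as np

def bootstrap_pvar_interval(portfolio_returns, alpha=0.05, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    n_days = portfolio_returns.size // 200
    draws = portfolio_returns[: n_days * 200].reshape(n_days, 200)  # day t in row t
    boot_pvar = np.empty((n_boot, n_days))
    for i in range(n_boot):
        # Step 1: resample with replacement within each day's 200 simulated returns.
        idx = rng.integers(0, 200, size=(n_days, 200))
        resampled = np.take_along_axis(draws, idx, axis=1)
        # Step 2: recompute the daily PVaR (alpha quantile) on the bootstrap sample.
        boot_pvar[i] = np.quantile(resampled, alpha, axis=1)
    # Steps 3-4: after n_boot repetitions, take the alpha/2 and 1-alpha/2 quantiles
    # of the bootstrapped PVaR estimates as the prediction interval for each day.
    lower = np.quantile(boot_pvar, alpha / 2.0, axis=0)
    upper = np.quantile(boot_pvar, 1.0 - alpha / 2.0, axis=0)
    return lower, upper
```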

Table 1 Summary statistics

This table shows summary statistics for the percentage log returns of the S&P 500 index and S&P 500
index futures. The sample period covers 1 January 2004 to 29 October 2007. 998 daily observations for
the index and index futures are collected.

                 Mean      Standard Deviation   Skewness   Kurtosis
Index            0.03317   0.70894              -0.29684   1.78907
Index futures    0.03365   0.71083              -0.44807   2.07703

Table 2 Estimated parameters for GJR-GARCH(1,1)-AR-t marginal distributions

This table reports the estimated parameters of the marginal distributions for index and futures returns.
They are assumed to be characterized by a GJR-GARCH(1,1)-AR-t model given by Eq. (1). The symbol * denotes significance at the 5% level.

                 AR(1)      GARCH constant   Lagged variance   Lagged residual   Asymmetric residual   Degree of freedom^a
Index            -0.0664*   0.0090*          0.9535*           -0.0397           0.1054*               7.5638
Index futures    -0.0450    0.0056*          0.9740*           -0.0455*          0.0968*               6.1457
^a Degree of freedom is one of the parameters to be estimated in the Student's t distribution.

Table 3 Estimated parameters of time-varying copula functions

This table shows the estimated parameters of time-varying dependencies in the chosen copulas. The
time-varying dependence models in Eq. (8), (9), (10) are estimated and calibrated. The parameter β
captures the degree of persistence in the dependence and γ captures the adjustment in the dependence
process. LLF(c) is the maximum copula component of the log-likelihood function. The symbol *
denotes significance at the 5% level.

β ω γ LLF(c)
Panel A: Gaussian copula
0.94990* 0.71938* -0.96763* 907.88
Panel B: Gumbel copula
0.94743* 0.25587 -0.95841* 826.09
Panel C: Clayton copula
0.93638* 0.30843* -0.94724* 729.68

Table 4 Summary statistics of weight estimates of conditional mixture copulas

This table summarizes minimum, maximum, mean, 25% quantile, median, 75% quantile, and standard
deviation of weight estimates for conditional mixture copulas. These weights are estimated by MLE
according to Eq. (7). Panel A reports the weight estimates across the entire sample period, while Panel
B focuses on the period of the U.S. subprime market crash from August to October 2007.

Weight Estimates      Min       Max       Mean      Q1        Q2        Q3        Standard Deviation
Panel A: Full Sample Period
Gaussian weight 0.09289 0.15944 0.09825 0.09339 0.09520 0.09976 0.00791
Gumbel weight 0.28972 0.55507 0.45063 0.42249 0.45234 0.47990 0.04310
Clayton weight 0.28549 0.55452 0.45112 0.42508 0.45476 0.48210 0.04340
Panel B: Period of the crash of U.S. subprime market
Gaussian weight 0.09290 0.13586 0.10001 0.09413 0.09696 0.10119 0.00876
Gumbel weight 0.31329 0.53994 0.44279 0.40657 0.44641 0.48139 0.05069
Clayton weight 0.33759 0.55585 0.45720 0.42533 0.46055 0.49498 0.04888

Table 5 Summary statistics of conditional PVaR estimates

This table summarizes the minimum, maximum, mean, 25% quantile, median, and 75% quantile of the
conditional PVaR estimates across sample period. The number of violations measures the frequency of
violation, and mean violation refers to the loss in excess of the PVaR estimate. D60 and D90 are the
rolling horizons, which indicate that the empirical distributions are constructed using historical data
from the previous 60 and 90 trading days, respectively. Also, SL0.05 and SL0.01 denote that the
conditional PVaR is estimated at the 95% and 99% significance levels, respectively. 998 daily
conditional PVaR estimates are summarized below.

Copula PVaR    Min    Max    Mean    Q1    Q2    Q3    Standard Deviation    Number of Violations    Mean Violation
Panel A: D90_SL0.05
Gaussian -0.00764 -0.00138 -0.00339 -0.00372 -0.00328 -0.00288 0.00092 24 -0.00186
Gumbel -0.00777 -0.00128 -0.00336 -0.00377 -0.00319 -0.00272 0.00106 21 -0.00169
Clayton -0.01229 -0.00189 -0.00436 -0.00497 -0.00413 -0.00354 0.00124 19 -0.00135
Mixture -0.00933 -0.00176 -0.00382 -0.00418 -0.00367 -0.00328 0.00091 18 -0.00161
Panel B: D90_SL0.01
Gaussian -0.01820 -0.00189 -0.00554 -0.0061 -0.00515 -0.00442 0.00184 8 -0.00185
Gumbel -0.02513 -0.00209 -0.00634 -0.00700 -0.00584 -0.00482 0.00259 4 -0.00222
Clayton -0.01914 -0.00352 -0.00856 -0.00976 -0.00813 -0.00680 0.00258 3 -0.00147
Mixture -0.01690 -0.00331 -0.00726 -0.00803 -0.00693 -0.00601 0.00191 3 -0.00186
Panel C: D60_SL0.05
Gaussian -0.00813 -0.00137 -0.00347 -0.00383 -0.00333 -0.00280 0.00107 27 -0.00156
Gumbel -0.00982 -0.00127 -0.00344 -0.00389 -0.00321 -0.00267 0.00121 22 -0.00164
Clayton -0.01358 -0.00175 -0.00444 -0.00505 -0.00417 -0.00346 0.00144 14 -0.00165
Mixture -0.01021 -0.00176 -0.00390 -0.00437 -0.00368 -0.00318 0.00107 13 -0.00217
Panel D: D60_SL0.01
Gaussian -0.02067 -0.00197 -0.00569 -0.00640 -0.00528 -0.00436 0.00208 8 -0.00246
Gumbel -0.02486 -0.00204 -0.00641 -0.00722 -0.00594 -0.00463 0.00277 4 -0.00212
Clayton -0.01993 -0.00361 -0.00840 -0.00975 -0.00785 -0.00654 0.00275 3 -0.00183
Mixture -0.01567 -0.00307 -0.00724 -0.00814 -0.00686 -0.00569 0.00217 3 -0.00229

Table 6 Unconditional and conditional coverage tests

Our backtest comprises the unconditional coverage, independence, and conditional coverage tests, which take into account not only whether violations occur but also whether they are independent and identically distributed over time. Likelihood ratio statistics are reported for each test, and the p-values are displayed in parentheses. The symbol * denotes significance at the 5% level.
Violation Unconditional Independence Conditional
Rate Coverage (LRuc) (LRind) Coverage (LRcc)
Panel A: D90_SL0.05
Gaussian 0.02405 7.54228* 0.11650 7.67994*
(0.00603) (0.73286) (0.02149)
Gumbel 0.02104 9.69396* 0.39248 10.10492*
(0.00185) (0.53100) (0.00639)
Clayton 0.01904 11.33691* 0.34766 11.70127*
(0.00076) (0.55544) (0.00288)
Mixture 0.01804 12.22717* 0.22654 9.12764*
(0.00047) (0.63410) (0.01042)
Conventional 0.05711 0.57255 0.02101 0.64560
GARCH (0.47924) (0.88475) (0.72411)
Panel B: D90_SL0.01
Gaussian 0.00802 0.18488 0.05620 0.24808
(0.66721) (0.81260) (0.88335)
Gumbel 0.00401 2.03328 0.01399 2.05077
(0.15389) (0.90584) (0.35866)
Clayton 0.00301 2.95206 0.00786 2.96254
(0.08588) (0.92935) (0.22735)
Mixture 0.00301 2.95206 0.00786 2.96254
(0.08588) (0.92935) (0.22735)
Conventional 0.02605 7.82245* 0.60475 8.45015*
GARCH (0.00516) (0.43677) (0.01462)
Panel C: D60_SL0.05
Gaussian 0.02705 5.72521* 0.65285 6.40189*
(0.01672) (0.41910) (0.04072)
Gumbel 0.02204 8.93692* 0.19167 9.14796*
(0.00279) (0.66153) (0.01032)
Clayton 0.01403 16.31016* 0.17318 16.49562*
(0.00005) (0.67730) (0.00026)
Mixture 0.01303 17.47862* 0.14917 17.63918*
(0.00003) (0.69933) (0.00015)
Conventional 0.05311 0.86370 0.73701 0.87079
GARCH (0.35270) (0.39062) (0.64701)
Panel D: D60_SL0.01
Gaussian 0.00802 0.18488 0.05620 0.24808
(0.66721) (0.81260) (0.88335)
Gumbel 0.00401 2.03328 0.01399 2.05077
(0.15389) (0.90584) (0.35866)
Clayton 0.00301 2.95206 0.00786 2.96254
(0.08577) (0.92935) (0.22735)
Mixture 0.00301 2.95206 0.00786 2.96254
(0.08577) (0.92935) (0.22735)
Conventional 0.02305 5.44529* 0.47177 5.93733*
GARCH (0.01956) (0.49298) (0.05000)
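For reference, the likelihood-ratio statistics reported in Table 6 can be computed along the following lines. The sketch below implements the Kupiec (1995) unconditional coverage test and the Christoffersen (1998) independence and conditional coverage tests, assuming only a 0/1 violation indicator series and the nominal violation rate; the function and variable names are illustrative rather than the code used to produce the table.

```python
# Unconditional coverage, independence and conditional coverage LR backtests.
import numpy as np
from scipy.stats import chi2
from scipy.special import xlogy  # xlogy(0, 0) = 0, which handles empty cells

def coverage_tests(violations, p):
    """Return (LRuc, LRind, LRcc) and their asymptotic p-values."""
    v = np.asarray(violations, dtype=int)
    n, x = v.size, int(v.sum())
    pi_hat = x / n
    # Unconditional coverage: observed violation rate versus the nominal rate p.
    lr_uc = -2.0 * (xlogy(n - x, 1 - p) + xlogy(x, p)
                    - xlogy(n - x, 1 - pi_hat) - xlogy(x, pi_hat))
    # First-order Markov transition counts for the independence test.
    n00 = int(np.sum((v[:-1] == 0) & (v[1:] == 0)))
    n01 = int(np.sum((v[:-1] == 0) & (v[1:] == 1)))
    n10 = int(np.sum((v[:-1] == 1) & (v[1:] == 0)))
    n11 = int(np.sum((v[:-1] == 1) & (v[1:] == 1)))
    pi01 = n01 / (n00 + n01) if n00 + n01 else 0.0
    pi11 = n11 / (n10 + n11) if n10 + n11 else 0.0
    pi1 = (n01 + n11) / (n00 + n01 + n10 + n11)
    lr_ind = -2.0 * (xlogy(n00 + n10, 1 - pi1) + xlogy(n01 + n11, pi1)
                     - xlogy(n00, 1 - pi01) - xlogy(n01, pi01)
                     - xlogy(n10, 1 - pi11) - xlogy(n11, pi11))
    lr_cc = lr_uc + lr_ind
    p_values = (chi2.sf(lr_uc, 1), chi2.sf(lr_ind, 1), chi2.sf(lr_cc, 2))
    return (lr_uc, lr_ind, lr_cc), p_values
```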

Table 7 Dynamic quantile test

This table presents the results of the PVaR estimate evaluation using the dynamic quantile (DQ) test proposed by Engle and Manganelli (2004). The DQ statistics and their asymptotic p-values for the alternative PVaR models under the 1% and 5% significance levels are reported. The DQ statistics are asymptotically distributed as χ²(6). The cells in bold font indicate rejection of the null hypothesis of correct PVaR estimates at the 5% significance level.

DQ statistic      p-value
Panel A: D90_SL0.05
Gaussian 13.22377 0.03961
Gumbel 10.34363 0.11104
Clayton 11.57783 0.07207
Mixture 11.26154 0.08062
Conventional GARCH 77.50367 0.00000
Panel B: D90_SL0.01
Gaussian 7.02197 0.31881
Gumbel 1.65849 0.94828
Clayton 1.49195 0.96002
Mixture 1.70745 0.94454
Conventional GARCH 109.1632 0.00000
Panel C: D60_SL0.05
Gaussian 16.51582 0.01123
Gumbel 11.84296 0.06556
Clayton 5.55835 0.47443
Mixture 11.71187 0.06871
Conventional GARCH 77.50367 0.00000
Panel D: D60_SL0.01
Gaussian 9.42000 0.15129
Gumbel 1.99757 0.91992
Clayton 1.29356 0.97201
Mixture 2.04019 0.91596
Conventional GARCH 109.16320 0.00000

Table 8 Distribution and tail forecast test

This table reports the results of the distribution and tail forecast tests developed by Berkowitz (2001). The LRdist test statistic evaluates the interior of the distribution, while LR5%tail and LR1%tail evaluate the 5% and 1% lower tails of the distribution, respectively. Their asymptotic p-values are shown in parentheses. The cells in bold font indicate rejection of the null hypothesis of correct PVaR estimates at the 5% significance level.

LRdist LR5%tail LR1%tail


Gaussian 3.09180 5.81881 1.70678
(0.37768) (0.0545) (0.42596)
Gumbel 2.81602 3.85452 0.56714
(0.42086) (0.14554) (0.75308)
Clayton 13.34834 4.16241 3.62677
(0.00394) (0.12477) (0.16310)
Mixture 4.66271 3.97142 2.41872
(0.19822) (0.13728) (0.29838)
Conventional GARCH 1.11626 32.46121 26.82990
(0.77314) (0.00000) (0.00000)

Table 9 Bootstrapped interval properties for the conditional PVaR models

This table shows the results of the bootstrap. Lower limit is the average of the lower bound of the interval across sample days, while upper limit is the average of the upper bound of the interval across sample days. Length is defined as the difference between the upper limit and the lower limit.
Lower Limit      Upper Limit      Length
Panel A: D90_SL0.05
Gaussian -0.00404 -0.00264 0.00140
Gumbel -0.00414 -0.00245 0.00169
Clayton -0.00559 -0.00290 0.00269
Mixture -0.00478 -0.00267 0.00211
Conventional GARCH -0.00316 -0.00189 0.00127
Panel B: D90_SL0.01
Gaussian -0.00668 -0.00379 0.00289
Gumbel -0.00798 -0.00369 0.00429
Clayton -0.01057 -0.00548 0.00509
Mixture -0.00901 -0.00450 0.00451
Conventional GARCH -0.00452 -0.00257 0.00195
Panel C: D60_SL0.05
Gaussian -0.00413 -0.00270 0.00143
Gumbel -0.00423 -0.00250 0.00173
Clayton -0.00565 -0.00302 0.00263
Mixture -0.00486 -0.00275 0.00211
Conventional GARCH -0.00327 -0.00172 0.00155
Panel D: D60_SL0.01
Gaussian -0.00687 -0.00387 0.00300
Gumbel -0.00803 -0.00387 0.00416
Clayton -0.01024 -0.00551 0.00473
Mixture -0.00890 -0.00461 0.00429
Conventional GARCH -0.00432 -0.00255 0.00177

Table 10 Backtests of the upper limit of conditional PVaR model

The backtest comprising the unconditional coverage, independence, and conditional coverage tests is conducted on the upper limit of the PVaR interval estimates. Likelihood ratio statistics are reported for each test, and the p-values are displayed in parentheses. The symbol * denotes significance at the 5% level.
Violation Unconditional Independence Conditional
Rate Coverage (LRuc) (LRind) Coverage (LRcc)
Panel A: D90_SL0.05
Gaussian 0.06012 0.88023 0.46737 0.97803
(0.34814) (0.49420) (0.61323)
Gumbel 0.06513 1.91498 1.71047 3.68397
(0.16642) (0.19093) (0.15850)
Clayton 0.05311 0.08637 0.73701 0.87079
(0.76893) (0.39062) (0.64701)
Mixture 0.05311 0.08637 0.73701 0.87079
(0.76893) (0.39062) (0.64701)
Conventional 0.10621 22.11063* 4.71424* 23.24240*
GARCH (0.00001) (0.02991) (0.00001)
Panel B: D90_SL0.01
Gaussian 0.03607 17.81594* 0.11861 15.12604*
(0.00001) (0.73056) (0.00051)
Gumbel 0.04409 27.66562* 0.64352 23.04649*
(0.00001) (0.42245) (0.00001)
Clayton 0.01303 0.36601 0.86830 1.24570
(0.54519) (0.35143) (0.53641)
Mixture 0.02004 3.41700 0.28938 3.72397
(0.06453) (0.59062) (0.15537)
Conventional 0.05511 43.33641* 0.01028 38.49426*
GARCH (0.00001) (0.91924) (0.00001)
Panel C: D60_SL0.05
Gaussian 0.05812 0.57255 1.09694 1.72152
(0.44925) (0.29495) (0.42284)
Gumbel 0.06413 1.67859 0.71246 0.12846
(0.19511) (0.39863) (0.93779)
Clayton 0.04709 0.07850 0.39401 0.51444
(0.77934) (0.53020) (0.77320)
Mixture 0.05511 0.23097 0.87303 1.15326
(0.63081) (0.35012) (0.56179)
Conventional 0.11924 32.05510* 5.33564* 19.09897*
GARCH (0.00001) (0.02089) (0.00001)
Panel D: D60_SL0.01
Gaussian 0.02906 10.50900* 0.46580 7.96872*
(0.00119) (0.49493) (0.01860)
Gumbel 0.03607 17.81594* 0.69016 15.69760*
(0.00001) (0.40616) (0.00039)
Clayton 0.01603 1.34672 2.02871 0.15858
(0.24586) (0.15435) (0.92377)
Mixture 0.01503 0.95960 0.65661 1.62937
(0.32729) (0.41776) (0.44278)
Conventional 0.05611 44.86769* 0.94503 45.86291*
GARCH (0.00001) (0.33099) (0.00001)

Figure 1 Time-varying weight estimates of conditional mixture copulas

Figure 1 depicts the time series of the weight estimates of conditional mixture copulas across the
sample period. The Gaussian weight estimates are displayed as a red line. The weight estimates of the
Gumbel and Clayton copulas are represented by green and blue lines, respectively.

[Line chart: time-varying Gaussian, Gumbel and Clayton weight estimates, January 2004 to October 2007.]
Figure 2 Comparisons between conditional Gaussian, Gumbel, Clayton and Mixture PVaR estimates

Conditional Gaussian (red line), Gumbel (green line), Clayton (blue line) and Mixture (black line) PVaR estimates are compared under a 90- or 60-day rolling horizon and a 95% or 99% significance level.

[Four panels: D90_SL0.05, D90_SL0.01, D60_SL0.05, D60_SL0.01; conditional PVaR estimates plotted over January 2004 to October 2007.]
Figure 3 Comparisons between realized portfolio return and conditional PVaR estimates

Time series of daily portfolio returns from January 2004 to October 2007 (solid lines) are plotted with the conditional Gaussian, Gumbel, Clayton and Mixture PVaR estimates (dotted lines).
[Panels A-D (D90_SL0.05, D90_SL0.01, D60_SL0.05, D60_SL0.01), each with four sub-panels: conditional Gaussian, Gumbel, Clayton and Mixture PVaR estimates plotted against realized portfolio returns.]
Figure 4 Differences between conditional PVaR estimates and the conventional GARCH estimates.

Conditional Gaussian (red line), Gumbel (green line), Clayton (blue line) and Mixture (black line) PVaR estimates are compared with the conventional GARCH estimates under a 90- or 60-day rolling horizon and a 95% or 99% significance level. A positive difference means that the conventional GARCH estimates are underestimated, while a negative difference indicates that they are overestimated.

[Four panels: D90_SL0.05, D90_SL0.01, D60_SL0.05, D60_SL0.01; differences between copula-based and conventional GARCH PVaR estimates plotted over January 2004 to October 2007.]
Figure 5 Bootstrapped confidence intervals for various PVaR models.

The bootstrapped 90% confidence intervals around the point estimates of time-varying PVaR are depicted in this figure. The estimation risk of the first-percentile coverage rate, measured by the length of the confidence interval, is higher (around double) than that of the fifth-percentile coverage rate under the various copula-based models and the conventional GARCH model.

[Four panels: D90_SL0.05, D90_SL0.01, D60_SL0.05, D60_SL0.01; bootstrapped confidence intervals for the Gaussian, Gumbel, Clayton, Mixture and conventional GARCH PVaR models.]
