Yi-Hsuan Chen ∗
Chung-Hua University
Anthony H. Tu
National Chengchi University
Abstract
∗ Corresponding author. Department of Finance, Chung-Hua University, Hsinchu, Taiwan. Tel: 886-3-5186057; Fax: 886-3-5186054; e-mail: cathy1107@gmail.com
1. Introduction
Value-at-risk (VaR) has become one of the most popular tools of risk management in both academia and practical investment. The potential estimation biases and model risk in VaR models have been discussed extensively in previous studies (Jorion, 1996; Kupiec, 1999; Brooks and Persand, 2002; Rich, 2003; Miller and Liu, 2006). Firms may maintain
either insufficient risk capital reserves to absorb large financial shocks or excessive risk capital that reduces capital management efficiency. Citigroup, Merrill Lynch, Lehman Brothers, and Morgan Stanley are good examples of major financial institutions that crashed after the recent outbreak of the U.S. subprime crisis.
The first purpose of this paper is to assess the potential loss of accuracy from
estimation error when calculating VaR. This study uses portfolio value-at-risk (PVaR) estimation to illustrate that model risk is attributable to the inappropriate use of the linear correlation coefficient, which reflects only the overall strength of the relation. It fails to model the "structure" of dependence, that is, the manner in which two assets are correlated. In addition, linear correlation is not robust for heavy-tailed distributions and is not adequate for non-linear relationships.
Jorion (1996) first indicated that VaR estimates are themselves affected by sampling variation, or "estimation risk". Brooks and Persand (BP) (2002) investigated applied methodologies and concluded that model risk can be serious in VaR calculation. They found that when the actual data is fat-tailed, using critical values from the normal distribution can lead to seriously inaccurate VaR estimates.
The second purpose of this paper is to reexamine whether a more accurate VaR estimate could be derived by using the fifth percentile (5%) instead of the first percentile (1%) when PVaR is estimated from copula-based joint distributions. BP found that the model risk measured by standard errors could be more severe for the first percentile of the normal return distribution. They suggested that the closer the quantiles are to the mean of the distribution, the more accurately they can be estimated. Thus, to ensure that virtually all probable losses are covered, the use of a smaller coverage rate (say, 95% instead of 99%) combined with a larger multiplier is preferable. Following Christoffersen and Goncalves (CG) (2005),1 our paper uses the bootstrap resampling technique to quantify ex ante the magnitude of the estimation error (model risk) by constructing confidence intervals around the PVaR point estimates.
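As a concrete illustration of the bootstrap idea, the following sketch resamples a return history and re-estimates the 5% quantile on each resample, forming a percentile confidence interval around the VaR point estimate. The return series, resample count, and interval level are hypothetical illustrative choices, not the paper's settings.

```python
import random

def bootstrap_var_ci(returns, alpha=0.05, n_boot=2000, level=0.90, seed=1):
    """Percentile-bootstrap confidence interval for an alpha-quantile VaR point estimate."""
    rng = random.Random(seed)
    n = len(returns)
    stats = []
    for _ in range(n_boot):
        # resample the return history with replacement and re-estimate the quantile
        sample = sorted(rng.choice(returns) for _ in range(n))
        stats.append(sample[int(alpha * n)])
    stats.sort()
    lo = stats[int((1.0 - level) / 2.0 * n_boot)]
    hi = stats[int((1.0 + level) / 2.0 * n_boot)]
    return lo, hi

# Hypothetical daily returns, for illustration only.
gen = random.Random(7)
rets = [gen.gauss(0.0, 0.01) for _ in range(250)]
ci_lo, ci_hi = bootstrap_var_ci(rets, alpha=0.05)
```

The width of the interval is exactly the "magnitude of the estimation error" the text refers to: a wide interval signals that the point estimate alone is unreliable.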
VaR applications are somewhat limited in hedged portfolios because the PVaR
model cannot be stated in a closed form and can only be approximated with complex
computational algorithms. Miller and Liu (2006) criticized current PVaR approaches on several grounds. Second, though nonparametric approaches are not subject to the criticism of misspecification, their estimates of the distribution function may be poor because the observed outcomes in the tails of leptokurtic distributions are typically sparse. Third, PVaR estimation based solely on models of the return distribution tails rather than the entire return distribution may incur substantial bias.
Although multivariate extreme value theory (EVT) models may help mitigate the
problems mentioned above, the models used in current practice may exhibit
undesirable properties and may not avoid substantial PVaR estimation biases. First,
multivariate EVT models do not retain the marginal distribution properties of the
1. CG uses a bootstrap resampling technique to take into account parameter estimation error of the portfolio variance in calculating VaR. Our paper considers the potential estimation error in the PVaR estimator due to the inappropriate use of joint distributions.
univariate return series. Second, and more importantly, as indicated by Smith (2000), multivariate EVT models may not be rich enough to encompass all forms of tail behavior.
In contrast, copula functions are capable of exhibiting rich patterns of tail behavior, ranging from tail independence to tail dependence, and have become a popular tool in the modeling of financial risks (Embrechts et al., 1999).2 This paper employs conditional copulas fitted to each previous pair of index futures and spot returns, with the parameters of the conditional copula changing over time. Using the copula mixture,3 parametric or nonparametric marginals with quite different tail shapes, initially estimated separately, can then be combined in a joint risk distribution that preserves the original characteristics of the marginals.
2. The general theory of copulas is described by Joe (1997) and Nelsen (1999), and finance applications are emphasized by Cherubini et al. (2004). Important conditional theory has been developed and applied to financial market data by Patton (2006a, b).
3. Mixtures of copulas are copulas. See Nelsen (1999) for details.
Recently, Miller and Liu (2006) used a copula-mixed distribution (CMX)
approach to estimate the joint distribution of log returns for the Taiwan Stock
Exchange (TSE) index and its corresponding futures contract on SGX and TAIFEX.
This approach converges in probability to the true marginal return distribution, but is
based on weaker assumptions. PVaR estimates for various hedged portfolios are
computed from the fitted CMX model, and backtesting diagnostics show that CMX
outperforms the alternative PVaR estimators. Unfortunately, Miller and Liu (2006)
employed a static Gaussian copula to estimate the PVaR of a hedged portfolio. Using the Gaussian copula alone may be subject to potential estimation bias, because the joint distribution may not be normal and the Gaussian copula may not capture the true dependence structures between portfolio components. Our conditional PVaR model easily passes the associated criticisms by allowing for time-varying and asymmetric dependence between portfolio components. Our study makes several contributions. First, with a time-varying mixture
copula model can be applied to PVaR estimation. In effect, a copula-based measure can specify both the structure and the degree of dependence, which takes non-linear dependence into account. Second, our paper adapts the copula mixture idea of Rosenberg and Schuermann (2006). They claim that during a period of financial collapse, portfolios are subject to at least two of three types of risk: market, credit, and operational risk. The distributional shape of each risk type varies considerably. They use the copula mixture method to construct a joint risk distribution and compare it with several conventional approaches.
We borrow their idea and extend it to PVaR estimation to support the argument from BP (2002). Third, through a battery of backtests (the conditional coverage test, the dynamic quantile test, and the distribution and tail forecast tests), this study demonstrates that the copula-based PVaR model outperforms the conventional GARCH model under all significance levels (95% and 99%). From the bootstrapping evidence, we
also illustrate BP’s finding that the closer the quantiles are to the mean of the
distribution, the smaller the estimation error will be for both copula-based and
conventional models. In other words, the PVaR can be estimated more accurately by choosing a smaller coverage rate. For conventional risk management models, this implies that using a smaller nominal coverage probability (say, 95% instead of 99%) helps to ensure that virtually all probable losses are covered. Choosing an extremely large coverage probability, say 99%, aggravates the model risk caused by a model's inability to take tail dependence and the nature of time variation into account.
The remainder of this paper is organized as follows. Section 2 presents the methodology and the portfolio VaR model. Section 3 describes the data and the empirical results. Section 4 performs the backtests of the various PVaR versions, and Section 5 concludes.
2. Methodology
Conventional approaches to VaR estimation, including extreme value theory (EVT), have been well established. The basic properties and limitations of these methods are discussed in the earlier literature. Recently, the copula method has attracted growing attention.
Cherubini and Luciano (2001) used copula functions to evaluate tail probabilities
and market risk (VaR) trade-offs at a given confidence level, dropping the joint
normality assumption on returns. Ané and Kharoubi (2003) presented a new approach and showed that misspecification of the dependence structure can account for up to 20% of the error in
portfolio VaR estimates. Miller and Liu (2006) employed a Gaussian copula to
estimate the PVaR of a hedged portfolio. However, all of these studies are limited to a
static copula model and may be subject to potential estimation bias, because the dependence structure is assumed constant over time.
We assume that the marginal distribution for each portfolio asset return (index spot or futures) follows an asymmetric GJR-GARCH model, since the asymmetric information impact is a well-known effect for financial assets.4 Let R_{i,t} and h²_{i,t} denote asset i's return (spot (s) or futures (f)) and its conditional variance for period t, respectively, and let Ω_{t−1} denote the previous information set.
z_{i,t} | Ω_{t−1} = [ ν_i / ((ν_i − 2) h²_{i,t}) ]^{1/2} ε_{i,t},   z_{i,t} ~ iid t_{ν_i}   (1c)
4. The conditional densities of equity index returns are leptokurtic, and their variances are asymmetric functions of previous returns (Nelson, 1991; Engle and Ng, 1993; Glosten et al., 1993).
5. In Enders (2004), Eq. (1c) can alternatively be expressed as ε_{i,t} | Ω_{t−1} ~ t_{ν_i}(0, h²_{i,t}).
for i ∈ {s, f}, with s_{i,t−1} = 1 when ε_{i,t−1} is negative and s_{i,t−1} = 0 otherwise, and where ν_i is the degrees of freedom.
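The standardization in Eq. (1c) can be sketched as follows. The residuals, conditional variances, and degrees of freedom below are hypothetical values, not estimates from the paper's data.

```python
import math

def standardize_t(eps, h2, nu):
    """Eq. (1c): z_t = eps_t * sqrt(nu / ((nu - 2) * h2_t)), where h2_t is the
    conditional variance, so that z_t follows a standard Student-t with nu d.o.f."""
    return [e * math.sqrt(nu / ((nu - 2.0) * h)) for e, h in zip(eps, h2)]

# Hypothetical GARCH residuals and conditional variances, for illustration only.
eps = [0.012, -0.008, 0.021, -0.015]
h2 = [1.0e-4, 1.2e-4, 0.9e-4, 1.1e-4]
z = standardize_t(eps, h2, nu=6.0)
```

Relative to the usual ε/√h standardization, the extra factor √(ν/(ν − 2)) restores the variance ν/(ν − 2) of a standard t variate.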
Assume that the conditional cumulative distribution functions of z_s and z_f are F_{s,t}(z_{s,t} | Ω_{t−1}) and F_{f,t}(z_{f,t} | Ω_{t−1}), respectively. The conditional copula function, denoted C_t(u_t, v_t | Ω_{t−1}), is defined over the two time-varying probability integral transforms u_t = F_{s,t}(z_{s,t} | Ω_{t−1}) and v_t = F_{f,t}(z_{f,t} | Ω_{t−1}). Let Φ_t be the bivariate conditional cumulative distribution function of z_{s,t} and z_{f,t}. The bivariate conditional density function of z_{s,t} and z_{f,t} can then be constructed as the product of the copula density and the two marginal conditional densities:

φ_t(z_{s,t}, z_{f,t} | Ω_{t−1}) = c_t(F_{s,t}(z_{s,t} | Ω_{t−1}), F_{f,t}(z_{f,t} | Ω_{t−1}) | Ω_{t−1}) × f_{s,t}(z_{s,t} | Ω_{t−1}) × f_{f,t}(z_{f,t} | Ω_{t−1})   (3)
where c_t(u_t, v_t | Ω_{t−1}) = ∂²C_t(u_t, v_t | Ω_{t−1}) / ∂u_t ∂v_t, f_{s,t}(z_{s,t} | Ω_{t−1}) is the conditional density of z_{s,t}, and f_{f,t}(z_{f,t} | Ω_{t−1}) is the conditional density of z_{f,t}.
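Eq. (3) can be checked numerically in the Gaussian special case: the bivariate normal density factors exactly into the Gaussian copula density times the two standard normal marginal densities. A minimal standard-library sketch; the evaluation point and correlation are arbitrary:

```python
import math

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def binorm_pdf(x, y, rho):
    """Bivariate standard normal density with correlation rho."""
    q = (x * x - 2.0 * rho * x * y + y * y) / (1.0 - rho * rho)
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(1.0 - rho * rho))

def gauss_copula_density(x, y, rho):
    """Gaussian copula density at (u, v) = (Phi(x), Phi(y)), written via normal scores."""
    num = rho * rho * (x * x + y * y) - 2.0 * rho * x * y
    return math.exp(-num / (2.0 * (1.0 - rho * rho))) / math.sqrt(1.0 - rho * rho)

# Sklar decomposition check (Eq. (3)): joint density = copula density * marginals.
x, y, rho = 0.7, -1.2, 0.5
lhs = binorm_pdf(x, y, rho)
rhs = gauss_copula_density(x, y, rho) * norm_pdf(x) * norm_pdf(y)
```

The identity holds at every point; this is Sklar's theorem in density form for the Gaussian family.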
Cherubini et al. (2004) claimed that copula functions with upper (or lower) tail dependence are quite capable of calculating portfolio VaR. This study employs the Gaussian, the
Gumbel and the Clayton copula for specification and calibration. The Gaussian copula
is generally viewed as a benchmark for comparison, while the Gumbel and the
Clayton copula are used to capture the upper and lower tail dependence, respectively.
The Clayton copula is especially prevalent because the evidence indicates that equity returns exhibit more joint negative extremes than joint positive extremes, leading to the observation that stocks tend to crash together but not boom together (Poon et al., 2004).
The conditional Gaussian copula function links the standard uniform variables (u_t, v_t) through normal scores that are bivariate normal with time-varying correlation ρ_t. Let x_t = Φ^{−1}(u_t) and y_t = Φ^{−1}(v_t), where Φ^{−1}(·) denotes the inverse of the cumulative distribution function of the standard normal distribution.
The Gumbel and the Clayton copula can efficiently capture the tail dependence arising from extreme observations caused by asymmetry. With A_t = (−ln u_t)^{δ_t} + (−ln v_t)^{δ_t} and C_t^{Gum}(u_t, v_t | δ_t) = exp{−A_t^{1/δ_t}}, the density of the conditional Gumbel copula is

c_t^{Gum}(u_t, v_t | δ_t) = C_t^{Gum}(u_t, v_t | δ_t) (u_t v_t)^{−1} (ln u_t ln v_t)^{δ_t − 1} { A_t^{2(1 − δ_t)/δ_t} + (δ_t − 1) A_t^{(1 − 2δ_t)/δ_t} }   (5)

where δ_t ∈ [1, ∞) measures the degree of dependence between u_t and v_t; δ_t = 1 corresponds to independence.
The density of the conditional Clayton copula is

c_t^{Clay}(u_t, v_t | θ_t) = (θ_t + 1) (u_t^{−θ_t} + v_t^{−θ_t} − 1)^{−(2θ_t + 1)/θ_t} u_t^{−θ_t − 1} v_t^{−θ_t − 1}   (6)

where θ_t ∈ [0, ∞) measures the degree of dependence between u_t and v_t; θ_t = 0 corresponds to independence.
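Eq. (6), together with the standard conditional-inverse sampler for the Clayton copula, can be sketched as follows; the parameter value θ = 2 is hypothetical.

```python
import random

def clayton_density(u, v, theta):
    """Clayton copula density, Eq. (6)."""
    s = u ** (-theta) + v ** (-theta) - 1.0
    return (theta + 1.0) * s ** (-(2.0 * theta + 1.0) / theta) * (u * v) ** (-theta - 1.0)

def clayton_sample(theta, n, seed=0):
    """Draw (u, v) pairs by inverting the conditional CDF C(v | u) at a uniform w."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u, w = rng.random(), rng.random()
        v = ((w ** (-theta / (theta + 1.0)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
        out.append((u, v))
    return out

pairs = clayton_sample(theta=2.0, n=500)
```

Lower-tail dependence shows up as clustering of the simulated pairs near (0, 0), which is exactly the "crash together" feature the text emphasizes.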
To find the copula that best estimates the PVaR, we consider candidates that can accommodate the salient features of each risk type (market, credit, and operational), such as skewness and fat tails, while allowing for a rich dependence structure.
Regardless of the initial risk source in a financial collapse (such as the outbreak of the U.S. subprime crisis), hedged portfolios of index spot and futures are subject to at least two of three types of risk: market, credit, and operational risk.6 The distributional shape of each risk type varies considerably. Market risk typically generates portfolio value distributions that are nearly symmetric and often fat-tailed.
Hu (2006) pointed out that empirical applications so far have been limited to individual copulas; however, no single copula applies to all situations. Thus, a mixture model is better able to generate dependence structures that do not belong to one particular existing copula family. By carefully choosing the component copulas in the mixture, a model that is simple yet flexible enough to generate most dependence patterns in financial data can be constructed.7 Going beyond Hu's static setting, our time-varying mixture can generate still more flexible dependence structures than existing copula families.
6. According to the definition of the Basle Committee, operational risk is defined as losses due to failure of internal processes, people, and systems, or from external events.
7. Hu (2006) suggests some implications for risk management. First, the use of multivariate normality and correlation coefficients to measure dependence may significantly underestimate the downside risk, while that computed using a mixed copula is much more realistic. Second, in risk measurement, the valuation model should include both the structure and the degree of dependence.
Our mixture model combines a conditional Gaussian copula, a conditional Gumbel copula, and a conditional Clayton copula to capture all possible patterns of dependence:
C_t^{Mix}(u_t, v_t) = w_t^{Clay} C_t^{Clay}(u_t, v_t | θ_t) + w_t^{Gum} C_t^{Gum}(u_t, v_t | δ_t) + (1 − w_t^{Clay} − w_t^{Gum}) C_t^{Gau}(u_t, v_t | ρ_t)   (7)
where w_t^{Clay} is the time-varying weight of the conditional Clayton copula, w_t^{Clay}, w_t^{Gum} ∈ [0, 1], and w_t^{Clay} + w_t^{Gum} ≤ 1. These weights, named shape parameters,8 determine the structure of dependence. For instance, after an increase in w_t^{Clay}, the copula assigns more
probability mass to the left tail. Compared to the models of Hu (2006), Li (2000), and
Lai et al. (2007), our mixture copula model is not restricted to static weights as theirs are, and can therefore generate time-varying dependence structures. Even though Rodriguez (2007) modeled the weights as changing over time according to a Markov switching process, only two states, a high-variance and a low-variance state, are allowed, and the remaining parameters of the mixture copula are assumed constant.9
8. Hu (2006) defined these weights as shape parameters to reflect dependence structures.
9. The other restriction is that two-step estimation cannot be used because copula and marginal parameters change simultaneously according to a Markov switching process, which burdens the estimation process.
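Since differentiation is linear, the density of the mixture in Eq. (7) is the same convex combination of the three component densities, which is what enters the likelihood when the weights are estimated by MLE. A standard-library sketch; the parameter values and weights are hypothetical, and the normal quantile uses plain bisection, adequate for illustration only:

```python
import math

SQRT2 = math.sqrt(2.0)

def norm_ppf(p):
    """Standard normal quantile by bisection on Phi (illustrative, not production-grade)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / SQRT2)) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def c_gaussian(u, v, rho):
    """Gaussian copula density with correlation rho."""
    x, y = norm_ppf(u), norm_ppf(v)
    num = rho * rho * (x * x + y * y) - 2.0 * rho * x * y
    return math.exp(-num / (2.0 * (1.0 - rho * rho))) / math.sqrt(1.0 - rho * rho)

def c_gumbel(u, v, delta):
    """Gumbel copula density, Eq. (5)."""
    a = (-math.log(u)) ** delta + (-math.log(v)) ** delta
    cdf = math.exp(-a ** (1.0 / delta))
    lead = cdf / (u * v) * (math.log(u) * math.log(v)) ** (delta - 1.0)
    return lead * (a ** (2.0 * (1.0 - delta) / delta)
                   + (delta - 1.0) * a ** ((1.0 - 2.0 * delta) / delta))

def c_clayton(u, v, theta):
    """Clayton copula density, Eq. (6)."""
    s = u ** (-theta) + v ** (-theta) - 1.0
    return (theta + 1.0) * s ** (-(2.0 * theta + 1.0) / theta) * (u * v) ** (-theta - 1.0)

def c_mixture(u, v, w_clay, w_gum, theta, delta, rho):
    """Mixture density corresponding to Eq. (7): convex combination of components."""
    assert 0.0 <= w_clay and 0.0 <= w_gum and w_clay + w_gum <= 1.0
    return (w_clay * c_clayton(u, v, theta)
            + w_gum * c_gumbel(u, v, delta)
            + (1.0 - w_clay - w_gum) * c_gaussian(u, v, rho))

# Hypothetical parameter values and weights, for illustration only.
dens = c_mixture(0.3, 0.6, w_clay=0.4, w_gum=0.3, theta=2.0, delta=1.5, rho=0.5)
```

At δ = 1 (Gumbel) and ρ = 0 (Gaussian) each component reduces to the independence copula, a convenient correctness check.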
2.4. Parameterizing time-series variation in the conditional copula
Several studies allow for time variation in the conditional copula (Bartram et al., 2007; Jondeau and Rockinger, 2006; Rodriguez, 2007). Following Patton (2006a) and Bartram et al. (2007), we assume that the dependence parameter evolves with its own lag and the recent history of the transformed returns.10 We restrict the dependence process to one autoregressive lag because autoregressive parameters beyond lag one are rarely different from zero (Bartram et al.,11 2007; Samitas et al., 2007). The dependence process of a Gaussian copula is therefore:
The conditional dependence, ρ_t, depends on its previous value, ρ_{t−1}, and on the historical absolute difference |u_{t−1} − v_{t−1}|. This formulation captures both the persistence and the variation of the dependence process.
10. There are different ways of capturing possible time variation in a conditional copula. This paper assumes that the functional form of the copula remains fixed over the sample whereas the parameters vary according to some evolution equation, as in Patton (2006a).
11. Bartram et al. (2007) assumed that the time-varying dependence process follows an AR(2) model.
where Λ(x) = (1 − e^{−x})/(1 + e^{−x}) = tanh(x/2) is the modified logistic transformation, which keeps ρ_t in (−1, 1) at all times (Patton, 2006a). Time-varying dependence processes for the Gumbel copula and the Clayton copula are described in Eq. (9) and (10), respectively.
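The modified logistic transformation and a one-lag evolution of the Gaussian-copula dependence parameter can be sketched as follows. Because the exact right-hand side of Eq. (8) is not reproduced above, the functional form and the coefficients in this sketch are illustrative assumptions in the spirit of Patton (2006a).

```python
import math

def mod_logistic(x):
    """Modified logistic: Lambda(x) = (1 - e^{-x})/(1 + e^{-x}) = tanh(x/2); keeps rho in (-1, 1)."""
    return math.tanh(x / 2.0)

def rho_path(omega, beta, gamma, uv_pairs, rho0=0.0):
    """Assumed one-lag evolution: rho_t = Lambda(omega + beta*rho_{t-1} + gamma*|u_{t-1} - v_{t-1}|)."""
    rhos, rho = [], rho0
    for u, v in uv_pairs:
        rho = mod_logistic(omega + beta * rho + gamma * abs(u - v))
        rhos.append(rho)
    return rhos

# Hypothetical coefficients and PIT pairs, for illustration only.
path = rho_path(omega=0.5, beta=1.8, gamma=-2.0, uv_pairs=[(0.2, 0.8), (0.5, 0.55), (0.9, 0.1)])
```

However large the linear forcing term becomes, the transformation guarantees an admissible correlation at every step.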
where δ_t ∈ [1, ∞) measures the degree of dependence in the Gumbel copula and has a corresponding transformation that keeps it in its admissible range.
Monte Carlo simulation is widely used to generate draws from stochastic processes. Given a chosen copula function and its estimated time-varying parameter from the previous section, this study can generate multivariate random variables {u_t, v_t}. For each copula function, we generate 200 pairs of {u_t, v_t | ρ_t, δ_t, θ_t} conditional on the dynamic dependence coefficient ρ_t, δ_t, or θ_t. Therefore, at time t, conditional joint distributions such as c(u_t, v_t | ρ_t), c(u_t, v_t | δ_t), and c(u_t, v_t | θ_t) can be obtained. We then convert the pairs {u_t, v_t | ρ_t, δ_t, θ_t}, generated from the conditional joint distributions, to portfolio
component returns by constructing empirical distributions for each sample day. We
use historical data from the previous 60 and 90 trading days and roll it over. Thus,
given the estimated conditional joint distribution of asset returns, replicated samples of the portfolio component returns can be obtained for each sample day.
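The conversion of a simulated uniform draw into a return via the rolling empirical distribution can be sketched as follows; a five-observation window is used purely for illustration (the paper rolls 60- and 90-day windows).

```python
def empirical_quantile(window, u):
    """Map a uniform draw u into a return via the empirical quantile of a rolling
    window of historical returns (type-1 inverse CDF: smallest order statistic
    whose empirical CDF value is at least u)."""
    xs = sorted(window)
    k = min(int(u * len(xs)), len(xs) - 1)
    return xs[k]

# Hypothetical five-day return window, for illustration only.
window = [0.010, -0.020, 0.005, -0.001, 0.015]
r = empirical_quantile(window, 0.42)
```

Repeating this for each simulated (u_t, v_t) pair yields replicated spot and futures returns whose marginals match the rolling historical distribution while the copula supplies the dependence.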
To implement the models empirically, we constructed a hedged portfolio comprised of the S&P 500 index and its index
futures. The sample period covers 1 January 2004 to 29 October 2007, including the
period of the outbreak of the U.S. subprime market collapse from August to October
2007. In total, 998 daily observations for the index and index futures are obtained.
The minimum variance hedge ratio (MVHR), which is the ratio of futures
contracts to a specific spot position that minimizes the variance of hedged portfolio
returns, has been broadly used as a futures hedging strategy. Following Johnson (1960)
and Stein (1961), the MVHR is calculated as the ratio of covariance between spot and
futures returns to the variance of futures returns. Early research utilized regression methods to estimate a static MVHR, whereas more recent studies model the conditional covariance and subsequently generate dynamic MVHRs.12 Specifically, the
12. Dynamic MVHRs often outperform those estimated from regression models (Baillie and Myers, 1991; Kroner and Sultan, 1993; Brooks et al., 2002), among many others.
conditional variance-covariance matrix of the residual series (ε_{s,t}, ε_{f,t}) from (1a) is denoted by

Var(ε_{s,t}, ε_{f,t} | Ω_{t−1}) = ( h²_{s,t}   h_{sf,t}
                                   h_{sf,t}   h²_{f,t} ).
Assume that the optimal weight of this hedged portfolio is its conditional minimum-variance hedge ratio, so that all quantities remain in time-varying form. The optimal conditional minimum-variance hedge ratio, {H*_t | ρ_t}, can then be defined as
H*_t = ĥ_{sf,t} / ĥ²_{f,t} = ρ_t (ĥ_{s,t} / ĥ_{f,t})   (11)
and

ρ_t = ∫_{−∞}^{∞} ∫_{−∞}^{∞} z_{s,t} z_{f,t} φ_t(z_{s,t}, z_{f,t} | Ω_{t−1}) dz_{s,t} dz_{f,t},
where ĥ²_{s,t} and ĥ²_{f,t} are the conditional variances of the index and index futures, respectively.
𝜌𝜌𝑡𝑡 , estimated from the bivariate conditional Gaussian density function described in
Section 2.4, is the copula-based conditional correlation between index and index
futures.13 Therefore, the distributions of portfolio returns, {p_t | H*_t}, conditional on the dynamic hedge ratios, can be obtained. Finally, the α quantile of the conditional portfolio return distribution is used to estimate the PVaR.
13. The dynamic hedge ratio in this study is similar to that of Hsu et al. (2008). They found that the hedge effectiveness of a time-varying copula-based model is better than that of conventional static models and even other dynamic hedge models such as CCC-GARCH and DCC-GARCH.
In this way, a PVaR conditional on the time-varying dependence between portfolio components is estimated for each sample day.
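Given ρ_t and the conditional standard deviations, the hedge ratio of Eq. (11) and the PVaR quantile of the hedged-portfolio returns can be sketched as follows; all numerical inputs are hypothetical.

```python
def hedge_ratio(rho, sd_s, sd_f):
    """Eq. (11): H*_t = rho_t * (h_s,t / h_f,t), with sd_s, sd_f the conditional
    standard deviations of spot and futures returns."""
    return rho * sd_s / sd_f

def pvar(sim_pairs, H, alpha):
    """alpha-quantile of simulated hedged-portfolio returns p = r_s - H * r_f."""
    p = sorted(rs - H * rf for rs, rf in sim_pairs)
    return p[int(alpha * len(p))]

# Hypothetical correlation, std. devs, and four simulated (spot, futures) return pairs.
H = hedge_ratio(rho=0.9, sd_s=0.012, sd_f=0.010)
sims = [(-0.03, -0.025), (0.01, 0.012), (-0.005, -0.004), (0.02, 0.018)]
var5 = pvar(sims, H, alpha=0.05)
```

In practice the quantile would be taken over the full set of simulated draws (200 pairs per day in the paper), not four.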
3. Empirical results of PVaR estimation
Table 1 reports the summary statistics for the S&P 500 index returns and its futures returns. Table 2 shows the estimated parameters of the marginal distributions. As Table 2 indicates, all parameters are significant at least at the 5 percent level. Furthermore, the probability-transformed series pass the Kolmogorov-Smirnov specification test,14 which helps to enhance estimation efficiency. Given that the marginal distributions are estimated, the parameter estimates of the conditional copulas are reported in Panel A of Table 3.15 In Eq. (8), the parameter β captures the degree of
14. We test whether the transformed series are Unif(0,1) using the Kolmogorov-Smirnov test (see the appendix in Patton (2006a)).
15. Appendix A describes parameter estimation of the conditional copula.
asymmetric dependences specified by the Gumbel and the Clayton copula,
respectively. Eq. (9) and (10) are their time-varying dependence processes. For each
sample day, there are three types of time-varying dependences specified by the
mixture copulas. These weights are estimated by MLE according to Eq. (7). Panel A
reports the weight estimates across the entire sample period, while Panel B focuses on
the period of the U.S. subprime market crash from August to October 2007. The
statistics of the weight estimates for the conditional Clayton copula are generally higher than those for the conditional Gumbel and the conditional Gaussian copula, indicating that the conditional mixture copulas allocate more weight to left tail dependence, reflecting the fact that markets are more likely to crash together than to boom together, especially during the U.S. subprime market crash. Figure 1 depicts a time series plot of the estimated weights.
Implementing the Monte Carlo simulation generates replicated samples for the
index and futures returns given the estimated parameters of the time-varying
dependence models in each sample day. As the optimal weight of a hedged portfolio is given by its conditional hedge ratio, conditional distributions for portfolio returns are generated for each sample day. Furthermore, the 1% and 5% quantiles of the conditional portfolio distributions are used to estimate the
conditional PVaR. Table 5 summarizes the statistics of the conditional PVaR across the sample period. D60 and D90 are the rolling horizons, indicating that the empirical distributions are constructed using historical data from the previous 60 and 90 trading days, respectively.16 In this way, conditional PVaR estimates across the whole sample period can be obtained. As Table 5 shows, regardless of the significance level
or the rolling horizon, the conditional PVaR estimates specified by the Clayton copula
are the most severe for each statistic, whereas the Gaussian copula produces the most
tolerant estimates.
A violation occurs if the actual portfolio return is worse than the PVaR estimate. The number of violations measures the frequency of violation, and the mean violation refers to the loss in excess of the PVaR estimate. The frequency of violation and the magnitude of exceedances are both largest for the Gaussian copula. As the portfolio
16. The empirical distributions constructed from the previous 60 and 90 trading days are used to convert the uniform draws into returns.
assets exhibit asymmetric dependence, especially for lower tail dependence, the
stricter PVaR estimate of the Clayton copula should be used. Table 5 shows that
conditional PVaR estimates from the Clayton copula exhibit a lower frequency of
violation and magnitude of exceedances than the other copulas. In particular, the number of violations for the mixture copula is the lowest, and its statistics are more moderate than the others'.
For comparison, Figure 2 shows the time series plots of the conditional PVaR
estimates with different significance levels (99% and 95%) and different rolling
horizons (60 and 90 trading days). In general, the conditional PVaR estimates from
the Clayton copula are more severe than others, as is quite evident during the U.S.
subprime market crash period. Note that the time series for the PVaR estimates of the
4. Do Copulas Improve the Estimation of PVaR? The Backtests
4.1. Backtests
This section proposes three backtest methods to examine the accuracy of the PVaR estimates: (1) the unconditional and conditional coverage tests, (2) the dynamic quantile test (Engle and Manganelli, 2004), and (3) the distribution and tail forecast tests (Berkowitz, 2001). Unconditional and conditional coverage tests are statistical backtests based on the distribution of exceedances, while the distribution and tail forecast tests are based on the whole distribution of portfolio returns. The latter two tests use more information than the frequency-based tests and are generally more reliable. Moreover,
we compare the performance of our conditional PVaR model with that of the conventional GARCH model. The unconditional coverage test compares the targeted violation rate (α) to the observed violation rate. The Kupiec (1995) LR test assesses the equality between the proportion of VaR violations and the expected rate α. However, with the small samples involved, unconditional coverage tests may have low power against the alternative hypotheses (Kupiec, 1995; Christoffersen, 1998; Berkowitz, 2001).
To backtest the VaR results, we use the powerful tests developed by Christoffersen (1998), which enable us to test the coverage and independence hypotheses at the same time. More precisely, this study uses the conditional coverage test to determine whether the PVaR violations not only occur α percent of the time but are also independently distributed. Denote the sequence of VaR violations as {I_t}, where I_t is a dummy variable equal to 1 if a VaR violation occurs at time t and 0 otherwise. If π_{i,j} is the transition probability π_{i,j} = P(I_t = j | I_{t−1} = i), then the likelihood function for the sequence of I_t is equal to:
L₁ = π_{0,0}^{n_{0,0}} π_{0,1}^{n_{0,1}} π_{1,0}^{n_{1,0}} π_{1,1}^{n_{1,1}},

where n_{i,j} is the number of observations in which a value j follows a value i.
Under the null hypothesis of independence, π_{0,0} = 1 − π, π_{0,1} = π, π_{1,0} = 1 − π, and π_{1,1} = π. This gives the restricted likelihood L₀.
The LR test statistic for (first-order) independence in the VaR violations is LR_ind = −2(ln L₀ − ln L₁), which is asymptotically χ²(1). If we condition on the first observation, the test statistic for conditional coverage is LR_cc = LR_uc + LR_ind, asymptotically χ²(2), where LR_uc is the LR statistic for unconditional coverage computed above.
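The two LR statistics can be sketched as follows; the hit sequence is hypothetical, and the zero-count guard in the independence statistic is a simplification for empty transition cells.

```python
import math

def lr_uc(hits, alpha):
    """Kupiec (1995) unconditional coverage LR; chi-square(1) under H0."""
    n1 = sum(hits)
    n0 = len(hits) - n1
    pi = n1 / len(hits)
    if pi in (0.0, 1.0):
        return float("nan")
    l0 = n0 * math.log(1.0 - alpha) + n1 * math.log(alpha)   # restricted: rate = alpha
    l1 = n0 * math.log(1.0 - pi) + n1 * math.log(pi)         # unrestricted: rate = pi-hat
    return -2.0 * (l0 - l1)

def lr_ind(hits):
    """Christoffersen (1998) first-order independence LR; chi-square(1) under H0."""
    n = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for a, b in zip(hits, hits[1:]):
        n[(a, b)] += 1
    p01 = n[(0, 1)] / max(n[(0, 0)] + n[(0, 1)], 1)
    p11 = n[(1, 1)] / max(n[(1, 0)] + n[(1, 1)], 1)
    p = (n[(0, 1)] + n[(1, 1)]) / max(sum(n.values()), 1)
    def ll(p0, p1):
        out = 0.0
        for (i, j), c in n.items():
            q = (p1 if i else p0) if j else 1.0 - (p1 if i else p0)
            out += c * math.log(q) if q > 0.0 else 0.0
        return out
    return -2.0 * (ll(p, p) - ll(p01, p11))

# Hypothetical violation sequence: 5 violations in 100 days.
hits = [0] * 95 + [1] * 5
lr_cc = lr_uc(hits, alpha=0.05) + lr_ind(hits)
```

With the observed rate exactly at α the unconditional statistic is zero, while the clustering of the five violations at the end of the sample drives the independence component.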
Because violations may cluster and depend on previous information, Engle and Manganelli (2004) provide the dynamic quantile (DQ) test to correct for the insufficiency of the conditional coverage test of Christoffersen (1998). The testing model is built by regressing {I_t} on its past values and the current VaR forecast:
I_t = α₀ + Σ_{i=1}^{p} β_i I_{t−i} + β_{p+1} VaR_t + u_t
where, under the null hypothesis, α₀ = α (the target violation rate) and β_i = 0 for all i. In matrix form, the model can be written as

I − α1 = Xβ + u
where 1 is a vector of ones. A good model should produce a sequence of unbiased and
uncorrelated 𝐼𝐼𝑡𝑡 variables, and the regressors should have no explanatory power, that
is H0 : 𝛃𝛃 = 𝟎𝟎. This regression model is estimated by ordinary least squares (OLS) and
the DQ test statistic is

DQ = β̂′_{LS} X′X β̂_{LS} / (α(1 − α)) ~ χ²_{p+2}
where β̂_{LS} is the OLS estimate of β. The DQ test statistic has an asymptotic chi-square distribution with p + 2 degrees of freedom. In the application in this study, the regressor matrix X contains a constant, four lagged hits, and the contemporaneous PVaR forecast.
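The DQ statistic with p = 1 lag can be sketched as follows (standard library only; the hit sequence and VaR forecasts are hypothetical, and the unpivoted Gaussian elimination is adequate only for well-conditioned illustrative data).

```python
def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(i, k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

def dq_stat(hits, var_forecasts, alpha, p=1):
    """Engle-Manganelli DQ statistic: regress the demeaned hit sequence on a
    constant, p lagged hits and the current VaR forecast; under H0 the statistic
    is asymptotically chi-square with p + 2 degrees of freedom."""
    X, y = [], []
    for t in range(p, len(hits)):
        X.append([1.0] + [float(hits[t - i]) for i in range(1, p + 1)] + [var_forecasts[t]])
        y.append(hits[t] - alpha)
    beta = ols(X, y)
    fitted = [sum(b * xi for b, xi in zip(beta, row)) for row in X]
    # DQ = beta' X'X beta / (alpha (1 - alpha)) = ||X beta||^2 / (alpha (1 - alpha))
    return sum(f * f for f in fitted) / (alpha * (1.0 - alpha))

# Hypothetical hit sequence and VaR forecasts, for illustration only.
hits = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
vars_ = [0.010, 0.012, 0.011, 0.013, 0.010, 0.012, 0.011, 0.013, 0.012, 0.010, 0.011, 0.012]
dq = dq_stat(hits, vars_, alpha=0.25)
```

A good model yields small fitted values in the hit regression and hence a small DQ statistic relative to the χ²_{p+2} critical value.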
Berkowitz (2001) criticized the conditional coverage test for requiring a large sample size and for regarding violations as an iid Bernoulli sequence.17 He, however,
claimed that density evaluation methods make use of the full distribution of outcomes
and thus extract a greater amount of information from the available data. Rather than
focusing on rare violations, it is possible to transform all the realizations into a series
17. The key problem argued by Berkowitz (2001) is that Bernoulli variables take on only two values (0 and 1) and take on the value 1 very rarely.
of independent and identically distributed random variables. This idea is implemented through the probability integral transform x_t = ∫_{−∞}^{y_t} f̂(u) du, where y_t is the ex post portfolio return realization and f̂(·) is the ex ante forecasted portfolio density. If the underlying PVaR model is valid, then its {x_t} series should
be distributed iid Uniform(0,1), regardless of the underlying density forecast.18 For this test, the {x_t} series is transformed using the inverse of the standard normal cumulative distribution function, that is, z_t = Φ^{−1}(x_t). If the PVaR
model is correctly specified, the {z_t} series should be iid N(0,1). This hypothesis can be tested against a first-order autoregressive alternative, z_t − μ = ρ(z_{t−1} − μ) + ε_t, where the parameters μ and ρ are, respectively, the conditional mean and the AR(1) coefficient of the {z_t} series, and ε_t is a normal random variable with mean zero and variance σ². Under the null hypothesis, μ = 0, ρ = 0, and σ² = 1.
Berkowitz (2001) explicitly allows the user to evaluate the tail forecast alone, since risk managers are primarily interested in an accurate description of large losses, that is, tail behavior. Given a tail defined by the user, any observations that do not fall in the tail are intentionally truncated. Letting the desired cutoff point be PVaR = Φ^{−1}(α), we choose PVaR = −1.64 and PVaR = −2.33 to focus on the 5% and 1% lower tails, respectively. The log-likelihood is then
L(μ, σ | Z) = Σ_{z_t < PVaR} ln[ (1/σ) φ((z_t − μ)/σ) ] + Σ_{z_t ≥ PVaR} ln[ 1 − Φ((PVaR − μ)/σ) ]
which contains only observations falling in the tail. This test naturally provides a middle ground between the full-distribution approach and the conditional coverage tests.
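The censored tail log-likelihood can be sketched as follows; the transformed series below is hypothetical.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_logpdf(x):
    """Log of the standard normal density."""
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

def tail_loglik(z, mu, sigma, cutoff):
    """Berkowitz (2001) censored tail log-likelihood: observations below the cutoff
    enter through the normal density; the rest contribute only the censoring
    probability ln(1 - Phi((cutoff - mu)/sigma))."""
    ll = 0.0
    for zt in z:
        if zt < cutoff:
            ll += norm_logpdf((zt - mu) / sigma) - math.log(sigma)
        else:
            ll += math.log(1.0 - norm_cdf((cutoff - mu) / sigma))
    return ll

# Hypothetical PIT-transformed portfolio returns, for illustration only.
z = [-2.5, -0.3, 0.8, -1.9, 1.2, -2.4]
ll0 = tail_loglik(z, mu=0.0, sigma=1.0, cutoff=-1.64)
```

An LR test compares this value at the null (μ = 0, σ = 1) with the value at the maximized parameters; only the tail observations carry information about the parameters.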
The unconditional coverage backtest compares the targeted violation rate to the observed violation rate. The first column in Table 6
18. Berkowitz (2001) claimed that his likelihood ratio test is more powerful than that of Christoffersen (1998) because it evaluates the entire density, rather than a scalar or interval.
reports the actual violation rates. The actual violation rates of our proposed model are
less than their targeted violation rates, 5% and 1%, on average. Irrespective of the
rolling horizon and significance level, the violation rate for the Mixture copula is the
lowest, whereas the violation rates of the conventional GARCH model approach 5% and 1%. The second column reports the unconditional likelihood ratio (LR) statistics for the null hypothesis at the 5% and 1% violation rates, respectively. The p-value, shown in parentheses, indicates whether the
observed violation rate is significantly different from the targeted violation rate. Our
empirical results find that the conventional GARCH model is invalid at the extreme violation rate. In column three, the independence test fails to reject the null hypothesis, indicating that there is no clustering effect. Column four reports the results of the conditional coverage test. It is
evident that the copula-based PVaR model outperforms the conventional GARCH
model under backtesting criteria at the 1% desired violation rate regardless of the
rolling horizon. Although rejections arise at the 5% violation rate for the proposed models, this does not imply that they have failed, because the frequency of violations is less than the desired five percent.19 Under backtesting criteria at the 5% desired violation rate, the conventional GARCH model is acceptable, but it becomes invalid if
19. This finding is consistent with that of Miller and Liu (2006).
we choose an extreme violation rate. Generally speaking, the copula-based PVaR model is the more reliable choice.
Figure 3 displays the time series of actual portfolio returns and corresponding
conditional PVaR estimates. The conditional PVaR estimate appears to track actual
portfolio return well. In addition, violations rarely occur during the U.S. subprime
market crash from August to October 2007. In Figure 4, we demonstrate the degree of underestimation or overestimation of the conventional GARCH model by comparing its estimates with those from the various conditional copula models. A positive difference means that the conventional GARCH estimates are underestimated, while a negative difference indicates that they are overestimated.
Table 7 presents the results of the PVaR estimates evaluation using the dynamic
quantile test. Under the 1% quantile, the results are mainly consistent with those of
the conditional coverage test. Under the 5% quantile level, however, the time-varying copula models exhibit superior performance relative to the benchmark model, except for the Gaussian copula. Most test statistics generated from the time-varying copula models do not reject the null hypothesis. In particular, the time-varying Clayton copula presents the best performance at the 1% quantile due to its smallest test statistics.
Table 8 reports the results of distribution and tail forecast test. If we focus on the
interior of the distribution, most models predict the portfolio distribution well, except
for the Clayton copula model. If we turn to the 5% and 1% lower tails, all copula models are relatively outstanding in comparison with the benchmark model. This finding indicates that the time-varying copula models have a superior ability to forecast tail risk. The advantage of such models in PVaR estimation can be attributed to their superiority in dealing with changes in dependence over time.
A popular issue in risk management practice concerns the coverage rate that should be used. To ensure adequate coverage, the Basle Committee chose to focus on the first percentile (1%) of return distributions. In other words, risk managers are required to hold sufficient capital to absorb all but 1 percent of expected losses, rather than the 5 percent level.20
Brooks and Persand (BP) (2002) sought to determine whether a more accurate VaR estimate could be derived by using the fifth percentile instead of the first percentile (in conjunction with a larger multiplication factor).21 They found that when the actual data is fat-tailed, using critical values from the normal distribution leads to inaccurate VaR estimates. In particular, the model risk measured by standard errors could be more severe for the first percentile of the normal return distribution.22 Additionally, they suggested that the closer the quantiles are to the mean of the distribution (say, normal), the more
20 The actual coverage rate is even greater than 99% due to a scaling factor of at least 3.
21 An alternative viewpoint indicates that, given a limited amount of data, the farther out in the tails the
cutoff is set, the fewer observations are available to estimate the required parameters and the larger the
standard errors are around those estimates.
22 Kupiec (1995) showed that for a normal distribution, the standard error of the first percentile is about
50 percent larger than that of the fifth percentile, and it doubles if the distributions of market returns
are fat-tailed.
accurately they can be estimated. Therefore, if a regulator has the desirable objective
of ensuring that virtually all probable losses can be covered, the use of a smaller
nominal coverage probability (say, 95% instead of 99%) combined with a larger
multiplier is preferable.
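The extra sampling variability of more extreme percentiles is easy to see by simulation. The sketch below is our own illustration (not from the paper): it compares the Monte Carlo standard errors of the empirical 1% and 5% quantiles of i.i.d. normal samples.

```python
import numpy as np

# Monte Carlo comparison of the sampling variability of the empirical 1% and
# 5% quantile estimators for i.i.d. standard normal samples.
rng = np.random.default_rng(42)
n_obs, n_rep = 1000, 2000
samples = rng.standard_normal((n_rep, n_obs))
q01 = np.quantile(samples, 0.01, axis=1)   # n_rep estimates of the 1% quantile
q05 = np.quantile(samples, 0.05, axis=1)   # n_rep estimates of the 5% quantile
ratio = q01.std() / q05.std()              # > 1: the 1% quantile is noisier
```

The ratio comes out well above one, in line with Kupiec's observation; its exact value depends on the quantile estimator and sample size used.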
Although the conditional coverage test shows that the copula-based PVaR model
performs better than the traditional model only under backtesting criteria at the 1%
(not the 5%) desired violation rate, the results of the dynamic quantile test (Table 7)
and tail forecast test (Table 8) indicate that our proposed copula models outperform
the benchmark model at all coverage rates. In addition, both the dynamic quantile test
and the distribution forecast test are more powerful and robust than the conditional
coverage test. Engle and Manganelli (2004) argue that tests based solely on the
frequency of violations may lack power against many alternatives. Our findings
coincide with the above arguments that model risk plays an important role, and this
risk may be mainly attributed to a model's inability to specify tail dependence and to
take the nature of time variation into account. Also, these results support BP's
suggestion that a smaller nominal coverage probability combined with a larger
multiplier be used in conventional risk management models to avoid model risk.
An interval estimate is often more useful and informative than just a point
estimate (Efron and Tibshirani, 1993). Jorion (1996) suggested that VaR should be
reported with confidence intervals. Christoffersen and Goncalves (2005) claim that
accurate confidence intervals reported along with the VaR point estimate will facilitate
the use of VaR in active portfolio management. Most importantly, the intervals widen
substantially as one moves to more extreme quantiles, because fewer observations are
available to estimate them; such intervals arise from the estimation error of the
GARCH model. To gauge the estimation error of our copula models, 90% confidence
intervals for the PVaR estimates are constructed by bootstrapping the data23.
Appendix B reports the bootstrap algorithm for copula-based risk measures.
23 Because the analytic standard error of a PVaR estimator is too difficult to calculate, the bootstrap
technique is used instead. The bootstrap method estimates the sampling distribution of a statistic on the
basis of the observed data. The advantage of the bootstrap method is that it proceeds as if the sample
were the population for the purpose of estimating the sampling distribution. Unlike the Monte Carlo
method, it does not need to know the underlying data generation process.
Table 9 shows the bootstrap results. The lower limit is the average of the lower
bound of the interval across sample days, while the upper limit is the average of the
upper bound of the interval across sample days. The upper limits of PVaR estimates
from the GARCH model are generally higher than those from the time-varying
copula models, implying that PVaR estimates from the GARCH model tend to be
underestimated.
The uncertainty of the PVaR point estimate arising from estimation risk needs to
be considered. Besides demonstrating estimation error from the GARCH model, we
also quantify the estimation risk (as measured by the length of the confidence interval)
from various copula-based models and find that the copula-based models outperform
the conventional GARCH model. In Table 9, we also find that the estimation risk of
the first-percentile coverage rate is higher (around double) than that of the
fifth-percentile coverage rate under the various copula-based models and the
conventional GARCH model. This result is consistent with the findings of BP and
Kupiec (1995)24. Figure 5 presents the plot of 90% bootstrapped confidence intervals
for the various PVaR models.
24 The results of BP and Kupiec (1995) are for the normal distribution only.
We also attempt to highlight that the upper limit of the PVaR interval estimates
may fail the backtest even when the point estimate passes it. Taking estimation
error into account, we conduct the backtest again for the upper limit of the PVaR
interval estimates and report the results in Table 10. Apparently, the conventional
GARCH model has the greatest violation rate. Under backtesting criteria at the 5%
desired violation rate, it fails the unconditional, independence and conditional
coverage tests. Our proposed copula models, however, are acceptable even with the
estimation error problem. If we turn to backtesting criteria at the 1% desired violation
rate, the Gaussian, the Gumbel copula and the GARCH models fail the backtest,
whereas the Clayton and the mixture copula models remain acceptable25.
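For reference, the unconditional coverage statistic underlying these backtests can be sketched as follows. This is our own implementation of the standard Kupiec (1995) likelihood-ratio test, not the authors' code; the function and argument names are assumptions.

```python
import numpy as np
from scipy import stats

def lr_uc(n_violations, n_obs, q=0.05):
    """Kupiec (1995) unconditional coverage LR statistic and p-value.

    H0: the true violation probability equals the nominal rate q.
    The statistic is asymptotically chi-squared with 1 degree of freedom.
    """
    pi = n_violations / n_obs
    ll_null = n_violations * np.log(q) + (n_obs - n_violations) * np.log(1 - q)
    if pi in (0.0, 1.0):
        ll_alt = 0.0          # the degenerate log terms vanish
    else:
        ll_alt = (n_violations * np.log(pi)
                  + (n_obs - n_violations) * np.log(1 - pi))
    lr = -2.0 * (ll_null - ll_alt)
    return lr, stats.chi2.sf(lr, df=1)
```

A violation count matching the nominal rate yields a statistic near zero, while a clearly excessive count rejects at the usual 3.84 critical value.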
5. Conclusion
Value-at-risk estimation as used in current practice exhibits considerable biases due to model specification errors.
25 As for the other two backtests (dynamic quantile and distribution and tail forecast), the results of
using the upper limit of the PVaR interval remain the same.
This study uses PVaR estimation to illustrate that the model risk is attributable to the
inappropriate use of the dependence structure, and we attempt to improve the PVaR
estimation and thus reduce model risk by relaxing the restrictive assumptions of
conventional models. PVaR estimates for the optimal hedged portfolios are computed
from various copula models, and backtesting diagnostics indicate that the copula-based
PVaR estimator outperforms the conventional PVaR estimator at the 99% and 95%
significance levels. Our results imply that model risk may be more severe under the
99% nominal coverage rate, and that the PVaR point estimate with 99 percent coverage
is quite uncertain due to estimation risk. The copula-based model is acceptable even
with estimation risk, whereas the conventional GARCH model is not.
Our findings have significant implications for regulators. First, the benefit of
applying the copula model to PVaR estimation can be identified even after
considering model risk. Second, to reduce the model risk, PVaR estimation using a
smaller nominal coverage probability combined with a larger multiplier is preferable.
References
Ané, T. and C. Kharoubi, 2003. Dependence structure and risk measure. Journal of
Business 76, 411-438.
Bae, K. H., Karolyi, G. A., Stulz, R. M., 2003. A new approach to measuring financial
contagion. Review of Financial Studies 16 (3), 717-763.
Baillie, R., Myers, R., 1991. Bivariate GARCH estimation of the optimal commodity
futures Hedge. Journal of Applied Econometrics 6, 109-124.
Bartram, S. M., Taylor, S. J., Wang, Y. H., 2007. The Euro and European financial
market dependence. Journal of Banking and Finance 31, 1461-1481.
Berkowitz, J., 2001. Testing density forecasts with application to risk management.
Journal of Business and Economic Statistics 19, 465-474.
Brooks, C., Henry, O., Persand, G., 2002. The effect of asymmetries on optimal hedge
ratios. Journal of Business 75, 333-352.
Brooks, C., Persand, G., 2002. Model choice and value-at-risk performance. Financial
Analysts Journal, September/October, 87-97.
Chatfield, C., 1993. Calculating interval forecasts. Journal of Business and Economic
Statistics 11, 121-135.
Cherubini, U., Luciano, E., 2001. Value-at-risk trade-off and capital allocation with
copulas. Economic Notes 30(2).
Cherubini, U., Luciano, E., Vecchiato, W. 2004. Copula Method in Finance, John Wiley
& Sons, Ltd.
Christoffersen, P., Goncalves, S., 2005. Estimation risk in financial risk management.
Journal of Risk 7(3), 1-28.
Diebold, F.X., Gunther, T.A., Tay, A.S., 1998. Evaluating density forecasts.
International Economic Review 39(4), 863-883.
Efron, B., Tibshirani, R.J., 1993. An introduction to the bootstrap. Chapman & Hall,
New York.
Enders, W., 2004. Applied Econometric Time Series, 2nd ed., Wiley.
Engle, R.F., Manganelli, S., 2004. CAViaR: conditional autoregressive value at risk by
regression quantiles. Journal of Business & Economic Statistics 22(4), 367-381.
Engle, R.F., Ng, V.K., 1993. Measuring and testing the impact of news on volatility.
Journal of Finance 48, 1749-1778.
Hsu, C. C., Wang, Y. H., Tseng, C. P., 2008. Dynamic hedging with futures: A
copula-based GARCH model. Journal of Futures Markets 28, 1095-1116.
Hu, L., 2006. Dependence patterns across financial markets: A mixed copula approach.
Applied Financial Economics 16, 717-729.
J.P. Morgan, 1996. Riskmetrics-Technical Document. 4th ed. New York: J.P. Morgan
and Reuters. (Available at www.riskmetrics.com/rmcovv.html)
Johnson, L., 1960. The theory of hedging and speculation in commodity futures.
Review of Economic Studies 27, 139-151.
Joe, H., 1997. Multivariate models and dependence concepts. Chapman & Hall,
London.
Jorion, P., 1996. Risk2: Measuring the risk in value at risk. Financial Analysts Journal,
November/December, 47-56.
Kroner, K. F., Sultan, J., 1993. Time varying distribution and dynamic hedging with
foreign currency futures. Journal of Financial Quantitative Analysis 28, 535-551.
Kupiec, P., 1995. Techniques for verifying the accuracy of risk measurement models.
Journal of Derivatives Winter, 73-84.
Lai, Y., Chen, C. W. S., Gerlach, R., 2007. Optimal dynamic hedging using
copula-threshold-GARCH models. Working paper, Feng Chia University.
Li, D. X., 2000. On default correlation: A copula function approach. The Journal of
Fixed Income 9(4), 43-54.
Longin, F., Solnik, B., 2001. Extreme correlation of international equity markets.
Journal of Finance 56(2), 649-676.
Miller, D. J., Liu, W. H., 2006. Improved estimation of portfolio value-at-risk under
copula models with mixed marginals. Journal of Futures Markets 26(10), 997-1018.
Patton, A. J., 2006a. Modelling asymmetric exchange rate dependence. International
Economic Review 47(2), 527-556.
Patton, A. J., 2006b. Estimation of multivariate models for time series of possibly
different lengths. Journal of Applied Econometrics 21, 147-173.
Poon, S. H., Rockinger, M., Tawn, J., 2004. Extreme value dependence in financial
markets: Diagnostics, models, and financial implications. The Review of Financial
Studies 17(2), 581-610.
Rich, D., 2003. Second generation VAR and risk-adjusted return on capital. Journal of
Derivatives 10, 51-61.
Samitas, A., Kenourgios, D., Paltalidis, N., 2007. Financial crises and stock market
dependence. Working paper.
Smith, R. L., 2000. Measuring risk with extreme value theory. In P. Embrechts
(edited), Extremes and integrated risk management, London: Risk Books.
Stein, J. L., 1961. The simultaneous determination of spot and futures prices. American
Economic Review 51, 1012-1025.
Appendix A: Parameter Estimation of Conditional Copula
Recall the conditional joint density of spot and futures returns given by Eq. (3).
Let the parameters in $f_{s,t}$ and $f_{f,t}$ be denoted $\theta_s$ and $\theta_f$, respectively, and
the other parameters in $c_t$ be denoted $\theta_c$. These parameters can be estimated by
maximum likelihood, where $\theta = (\theta_s, \theta_f, \theta_c)$ and $L_k$ represents the sum of the
log-likelihood function of component $k$. Since the dimension of the estimation
problem may be quite large, it is quite difficult to maximize the full likelihood in one
step, so the parameters are estimated in two stages. In the first stage, the parameters of
the marginal distributions are estimated from

$\hat{\theta}_s \equiv \arg\max_{\theta_s} \sum_{t=1}^{T} \log f_{s,t}\big(z_{s,t} \mid \Omega_{t-1}; \theta_s\big)$,  (A2)

$\hat{\theta}_f \equiv \arg\max_{\theta_f} \sum_{t=1}^{T} \log f_{f,t}\big(z_{f,t} \mid \Omega_{t-1}; \theta_f\big)$.  (A3)

In the second stage, given $\hat{\theta}_s$ and $\hat{\theta}_f$, the copula parameter is estimated from

$\hat{\theta}_c \equiv \arg\max_{\theta_c} \sum_{t=1}^{T} \log c_t\big(u_t, v_t \mid \Omega_{t-1}; \hat{\theta}_s, \hat{\theta}_f, \theta_c\big)$.  (A4)
Patton (2006a) shows that the two-stage ML estimates $\hat{\theta} = (\hat{\theta}_s; \hat{\theta}_f; \hat{\theta}_c)$ are
consistent and asymptotically normal, although the covariance matrix of $\hat{\theta}$ must be
obtained from numerical derivatives. We have only been able to obtain satisfactory
first derivatives, from which the fully efficient two-stage estimator can be constructed,
where the score vector $\hat{\phi}_t = \partial \log \varphi_t / \partial \theta$ is evaluated at $\theta = \hat{\theta}$.
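The two-stage procedure can be sketched in a few lines of code. The example below is our own illustration, using normal margins and a Gaussian copula for brevity rather than the paper's GJR-GARCH(1,1)-AR-t margins; all variable names are assumptions.

```python
import numpy as np
from scipy import stats, optimize

# Simulate bivariate normal data with a known dependence parameter.
rng = np.random.default_rng(1)
true_rho = 0.6
z = rng.multivariate_normal([0.0, 0.0],
                            [[1.0, true_rho], [true_rho, 1.0]], size=2000)

# Stage 1: fit each marginal separately by maximum likelihood.
mu_s, sd_s = stats.norm.fit(z[:, 0])
mu_f, sd_f = stats.norm.fit(z[:, 1])

# Probability-integral transform with the fitted marginal CDFs.
u = stats.norm.cdf(z[:, 0], mu_s, sd_s)
v = stats.norm.cdf(z[:, 1], mu_f, sd_f)

# Stage 2: maximize the Gaussian-copula log-likelihood over rho,
# holding the stage-1 marginal estimates fixed.
x, y = stats.norm.ppf(u), stats.norm.ppf(v)

def neg_loglik(rho):
    r2 = rho * rho
    ll = (-0.5 * np.log(1.0 - r2)
          - (r2 * (x**2 + y**2) - 2.0 * rho * x * y) / (2.0 * (1.0 - r2)))
    return -np.sum(ll)

rho_hat = optimize.minimize_scalar(neg_loglik,
                                   bounds=(-0.99, 0.99),
                                   method="bounded").x
```

With enough data the second-stage estimate recovers the dependence parameter closely, which is the practical appeal of the estimator: each stage is a low-dimensional optimization.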
Appendix B: Bootstrap Algorithm for Copula-based Risk Measures
by resampling with replacement from the conditional portfolio returns generated from
the fitted copula model.
Step 3. Repeat Steps 1 and 2 a large number of times, say 1000 times, to obtain a
sequence of bootstrapped copula-based risk measures, $\{PVaR_t^{*\alpha(i)} : i = 1, \ldots, 1000\}$.
The bootstrapped $(1-\alpha)$ confidence interval is then

$\big[\, Q_{\alpha/2}\big(\{PVaR_t^{*\alpha(i)}\}_{i=1}^{1000}\big),\; Q_{1-\alpha/2}\big(\{PVaR_t^{*\alpha(i)}\}_{i=1}^{1000}\big) \,\big]$,

where $Q_{\alpha}(\cdot)$ is the $\alpha$ quantile of the empirical distribution of $\{PVaR_t^{*\alpha(i)}\}$.
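The percentile-bootstrap interval described above can be implemented as a short routine. This is a sketch under our own naming conventions, not the authors' code; `port_returns` stands for the simulated conditional portfolio returns for one day.

```python
import numpy as np

def bootstrap_pvar_interval(port_returns, coverage=0.05, alpha=0.10,
                            n_boot=1000, seed=0):
    """Percentile-bootstrap confidence interval for a PVaR point estimate.

    coverage : quantile defining the PVaR (0.05 or 0.01 in the paper)
    alpha    : 1 - confidence level (0.10 gives a 90% interval)
    """
    rng = np.random.default_rng(seed)
    n = len(port_returns)
    boot = np.array([
        np.quantile(rng.choice(port_returns, size=n, replace=True), coverage)
        for _ in range(n_boot)
    ])
    # Empirical alpha/2 and 1-alpha/2 quantiles of the bootstrap distribution.
    return np.quantile(boot, alpha / 2), np.quantile(boot, 1 - alpha / 2)
```

The returned pair corresponds to the lower and upper limits reported in Table 9; the interval straddles the sample PVaR point estimate by construction.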
Table 1 Summary statistics
This table shows summary statistics for the percentage log returns of the S&P 500 index and S&P 500
index futures. The sample period covers 1 January 2004 to 29 October 2007. 998 daily observations for
the index and index futures are collected.
Table 2 Estimated parameters of the marginal distributions
This table reports the estimated parameters of the marginal distributions for index and futures returns,
which are assumed to follow the GJR-GARCH(1,1)-AR-t model given by Eq. (1). The symbol *
denotes significance at the 5% level.
Table 3 Estimated parameters of time-varying copula functions
This table shows the estimated parameters of the time-varying dependencies in the chosen copulas. The
time-varying dependence models in Eqs. (8), (9), and (10) are estimated and calibrated. The parameter β
captures the degree of persistence in the dependence, and γ captures the adjustment in the dependence
process. LLF(c) is the maximized copula component of the log-likelihood function. The symbol *
denotes significance at the 5% level.
β ω γ LLF(c)
Panel A: Gaussian copula
0.94990* 0.71938* -0.96763* 907.88
Panel B: Gumbel copula
0.94743* 0.25587 -0.95841* 826.09
Panel C: Clayton copula
0.93638* 0.30843* -0.94724* 729.68
Table 4 Summary statistics of weight estimates for conditional mixture copulas
This table summarizes the minimum, maximum, mean, 25% quantile, median, 75% quantile, and standard
deviation of the weight estimates for the conditional mixture copulas. These weights are estimated by MLE
according to Eq. (7). Panel A reports the weight estimates across the entire sample period, while Panel
B focuses on the period of the U.S. subprime market crash from August to October 2007.
Table 5 Summary statistics of conditional PVaR estimates
This table summarizes the minimum, maximum, mean, 25% quantile, median, and 75% quantile of the
conditional PVaR estimates across the sample period. The number of violations measures the frequency
of violations, and the mean violation refers to the average loss in excess of the PVaR estimate. D60 and
D90 are the rolling horizons, indicating that the empirical distributions are constructed using historical
data from the previous 60 and 90 trading days, respectively. Also, SL0.05 and SL0.01 denote that the
conditional PVaR is estimated at the 95% and 99% significance level, respectively. 998 daily
conditional PVaR estimates are summarized below.
Table 6 Unconditional and conditional coverage tests
Our backtest comprises the unconditional coverage, independence and conditional coverage tests,
which take into account not only whether violations occur but also whether they are independent and
identically distributed over time. Likelihood ratio statistics are reported for each test, and the p-values
are displayed in parentheses. The symbol * denotes significance at the 5% level.
Violation Rate    Unconditional Coverage (LRuc)    Independence (LRind)    Conditional Coverage (LRcc)
Panel A: D90_SL0.05
Gaussian 0.02405 7.54228* 0.11650 7.67994*
(0.00603) (0.73286) (0.02149)
Gumbel 0.02104 9.69396* 0.39248 10.10492*
(0.00185) (0.53100) (0.00639)
Clayton 0.01904 11.33691* 0.34766 11.70127*
(0.00076) (0.55544) (0.00288)
Mixture 0.01804 12.22717* 0.22654 9.12764*
(0.00047) (0.63410) (0.01042)
Conventional 0.05711 0.57255 0.02101 0.64560
GARCH (0.47924) (0.88475) (0.72411)
Panel B: D90_SL0.01
Gaussian 0.00802 0.18488 0.05620 0.24808
(0.66721) (0.81260) (0.88335)
Gumbel 0.00401 2.03328 0.01399 2.05077
(0.15389) (0.90584) (0.35866)
Clayton 0.00301 2.95206 0.00786 2.96254
(0.08588) (0.92935) (0.22735)
Mixture 0.00301 2.95206 0.00786 2.96254
(0.08588) (0.92935) (0.22735)
Conventional 0.02605 7.82245* 0.60475 8.45015*
GARCH (0.00516) (0.43677) (0.01462)
Panel C: D60_SL0.05
Gaussian 0.02705 5.72521* 0.65285 6.40189*
(0.01672) (0.41910) (0.04072)
Gumbel 0.02204 8.93692* 0.19167 9.14796*
(0.00279) (0.66153) (0.01032)
Clayton 0.01403 16.31016* 0.17318 16.49562*
(0.00005) (0.67730) (0.00026)
Mixture 0.01303 17.47862* 0.14917 17.63918*
(0.00003) (0.69933) (0.00015)
Conventional 0.05311 0.86370 0.73701 0.87079
GARCH (0.35270) (0.39062) (0.64701)
Panel D: D60_SL0.01
Gaussian 0.00802 0.18488 0.05620 0.24808
(0.66721) (0.81260) (0.88335)
Gumbel 0.00401 2.03328 0.01399 2.05077
(0.15389) (0.90584) (0.35866)
Clayton 0.00301 2.95206 0.00786 2.96254
(0.08577) (0.92935) (0.22735)
Mixture 0.00301 2.95206 0.00786 2.96254
(0.08577) (0.92935) (0.22735)
Conventional 0.02305 5.44529* 0.47177 5.93733*
GARCH (0.01956) (0.49298) (0.05000)
Table 7 Dynamic quantile test
This table presents the results of the PVaR estimates evaluation using the dynamic quantile (DQ) test
proposed by Engle and Manganelli (2004). The DQ statistics and their asymptotic p-values for the
alternative PVaR models under the 1% and 5% significance levels are reported. The DQ statistics are
asymptotically distributed as χ2(6). The cells in bold font indicate rejection of the null hypothesis of
correct PVaR estimates at the 5% significance level.
DQ statistics    p-value
Panel A: D90_SL0.05
Gaussian 13.22377 0.03961
Gumbel 10.34363 0.11104
Clayton 11.57783 0.07207
Mixture 11.26154 0.08062
Conventional GARCH 77.50367 0.00000
Panel B: D90_SL0.01
Gaussian 7.02197 0.31881
Gumbel 1.65849 0.94828
Clayton 1.49195 0.96002
Mixture 1.70745 0.94454
Conventional GARCH 109.1632 0.00000
Panel C: D60_SL0.05
Gaussian 16.51582 0.01123
Gumbel 11.84296 0.06556
Clayton 5.55835 0.47443
Mixture 11.71187 0.06871
Conventional GARCH 77.50367 0.00000
Panel D: D60_SL0.01
Gaussian 9.42000 0.15129
Gumbel 1.99757 0.91992
Clayton 1.29356 0.97201
Mixture 2.04019 0.91596
Conventional GARCH 109.16320 0.00000
Table 8 Distribution and tail forecast test
This table reports the results of the distribution and tail forecast tests developed by Berkowitz (2001).
The LRdist test statistic evaluates the interior of the distribution, while LR5%tail and LR1%tail evaluate the
5% and 1% lower tails of the distribution, respectively. Their asymptotic p-values are shown in
parentheses. The cells in bold font indicate rejection of the null hypothesis of correct PVaR estimates at
the 5% significance level.
Table 9 Bootstrapped interval properties for the conditional PVaR models
This table shows the bootstrap results. The lower limit is the average of the lower bound of the interval
across sample days, while the upper limit is the average of the upper bound of the interval across
sample days. Length is defined as the difference between the upper limit and the lower limit.
Lower Limit    Upper Limit    Length
Panel A: D90_SL0.05
Gaussian -0.00404 -0.00264 0.00140
Gumbel -0.00414 -0.00245 0.00169
Clayton -0.00559 -0.00290 0.00269
Mixture -0.00478 -0.00267 0.00211
Conventional GARCH -0.00316 -0.00189 0.00127
Panel B: D90_SL0.01
Gaussian -0.00668 -0.00379 0.00289
Gumbel -0.00798 -0.00369 0.00429
Clayton -0.01057 -0.00548 0.00509
Mixture -0.00901 -0.00450 0.00451
Conventional GARCH -0.00452 -0.00257 0.00195
Panel C: D60_SL0.05
Gaussian -0.00413 -0.00270 0.00143
Gumbel -0.00423 -0.00250 0.00173
Clayton -0.00565 -0.00302 0.00263
Mixture -0.00486 -0.00275 0.00211
Conventional GARCH -0.00327 -0.00172 0.00155
Panel D: D60_SL0.01
Gaussian -0.00687 -0.00387 0.00300
Gumbel -0.00803 -0.00387 0.00416
Clayton -0.01024 -0.00551 0.00473
Mixture -0.00890 -0.00461 0.00429
Conventional GARCH -0.00432 -0.00255 0.00177
Table 10 Backtests of the upper limit of conditional PVaR model
The backtest, which comprises the unconditional coverage, independence and conditional coverage
tests, is conducted for the upper limit of the PVaR interval estimates. Likelihood ratio statistics are
reported for each test, and the p-values are displayed in parentheses. The symbol * denotes significance
at the 5% level.
Violation Rate    Unconditional Coverage (LRuc)    Independence (LRind)    Conditional Coverage (LRcc)
Panel A: D90_SL0.05
Gaussian 0.06012 0.88023 0.46737 0.97803
(0.34814) (0.49420) (0.61323)
Gumbel 0.06513 1.91498 1.71047 3.68397
(0.16642) (0.19093) (0.15850)
Clayton 0.05311 0.08637 0.73701 0.87079
(0.76893) (0.39062) (0.64701)
Mixture 0.05311 0.08637 0.73701 0.87079
(0.76893) (0.39062) (0.64701)
Conventional 0.10621 22.11063* 4.71424* 23.24240*
GARCH (0.00001) (0.02991) (0.00001)
Panel B: D90_SL0.01
Gaussian 0.03607 17.81594* 0.11861 15.12604*
(0.00001) (0.73056) (0.00051)
Gumbel 0.04409 27.66562* 0.64352 23.04649*
(0.00001) (0.42245) (0.00001)
Clayton 0.01303 0.36601 0.86830 1.24570
(0.54519) (0.35143) (0.53641)
Mixture 0.02004 3.41700 0.28938 3.72397
(0.06453) (0.59062) (0.15537)
Conventional 0.05511 43.33641* 0.01028 38.49426*
GARCH (0.00001) (0.91924) (0.00001)
Panel C: D60_SL0.05
Gaussian 0.05812 0.57255 1.09694 1.72152
(0.44925) (0.29495) (0.42284)
Gumbel 0.06413 1.67859 0.71246 0.12846
(0.19511) (0.39863) (0.93779)
Clayton 0.04709 0.07850 0.39401 0.51444
(0.77934) (0.53020) (0.77320)
Mixture 0.05511 0.23097 0.87303 1.15326
(0.63081) (0.35012) (0.56179)
Conventional 0.11924 32.05510* 5.33564* 19.09897*
GARCH (0.00001) (0.02089) (0.00001)
Panel D: D60_SL0.01
Gaussian 0.02906 10.50900* 0.46580 7.96872*
(0.00119) (0.49493) (0.01860)
Gumbel 0.03607 17.81594* 0.69016 15.69760*
(0.00001) (0.40616) (0.00039)
Clayton 0.01603 1.34672 2.02871 0.15858
(0.24586) (0.15435) (0.92377)
Mixture 0.01503 0.95960 0.65661 1.62937
(0.32729) (0.41776) (0.44278)
Conventional 0.05611 44.86769* 0.94503 45.86291*
GARCH (0.00001) (0.33099) (0.00001)
Figure 1 Time-varying weight estimates of conditional mixture copulas
Figure 1 depicts the time series of the weight estimates of conditional mixture copulas across the
sample period. The Gaussian weight estimates are displayed as a red line. The weight estimates of the
Gumbel and Clayton copulas are represented by green and blue lines, respectively.
Figure 2 Comparisons between conditional Gaussian, Gumbel, Clayton and Mixture PVaR estimates
Conditional Gaussian (red line), Gumbel (green line), Clayton (blue line) and Mixture (black line) PVaR estimates are compared under 90- and 60-day rolling horizons at the 95% and 99%
significance levels.
[Four panels: D90_SL0.05, D90_SL0.01, D60_SL0.05, D60_SL0.01]
Figure 3 Comparisons between realized portfolio return and conditional PVaR estimates
Time series of daily portfolio returns from January 2004 to October 2007 (solid lines) are plotted with the conditional Gaussian, Gumbel, Clayton and Mixture PVaR estimates (dotted
lines).
[Panels A-D: D90_SL0.05, D90_SL0.01, D60_SL0.05, D60_SL0.01; each shows the conditional Gaussian, Gumbel, Clayton and Mixture PVaR estimates]
Figure 4 Differences between conditional PVaR estimates and the conventional GARCH estimates.
Conditional Gaussian (red line), Gumbel (green line), Clayton (blue line) and Mixture (black line) PVaR estimates are compared with the conventional GARCH estimates under 90- and
60-day rolling horizons at the 95% and 99% significance levels. A positive difference means that the conventional GARCH estimates are underestimated, while a negative difference
indicates that they are overestimated.
[Four panels: D90_SL0.05, D90_SL0.01, D60_SL0.05, D60_SL0.01]
Figure 5 Bootstrapped confidence intervals for various PVaR models.
The bootstrapped 90% confidence intervals around the point estimates of time-varying PVaR are depicted in this figure. The estimation risk, measured by the length of the confidence
interval, of the first-percentile coverage rate is higher (around double) than that of the fifth-percentile coverage rate under the various copula-based models and the conventional
GARCH model.
[Four panels: D90_SL0.05, D90_SL0.01, D60_SL0.05, D60_SL0.01; models on the x-axis: Gaussian, Gumbel, Clayton, Mixture, GARCH]