
Covariance and Correlation

© Robert J. Serfling. Not for reproduction or distribution.

We have seen how to summarize a data-based relative frequency distribution by measures of location and spread, such as the sample mean and sample variance. Likewise, we have seen how to summarize the probability distribution of a random variable X by similar measures of location and spread, the mean and variance parameters. Now we ask,

For a pair of random variables X and Y having joint probability distribution

    p(x, y) = P(X = x, Y = y),

how might we summarize the distribution?

For location features of a joint distribution, we simply use the means μ_X and μ_Y of the corresponding marginal distributions for X and Y. Likewise, for spread features we use σ_X² and σ_Y². For joint distributions, however, we can go further and explore a further type of feature: the manner in which X and Y are interrelated or manifest dependence. For example, consider the joint distribution given by the following table for p(x, y):

                Y
    p(x, y)     0     1
    X     0    .7    .1
          1    .1    .1

We see that there is indeed some dependence here: if a pair (X, Y ) is selected
at random according to this distribution, the probability that (X, Y ) = (0, 0) is
selected is .70, whereas the product of the probabilities that X = 0 and Y = 0 is
.8 × .8 = .64 ≠ .70. So the events X = 0 and Y = 0 are dependent events. But
we can go further, asking: How might we characterize the extent or quantity of this
dependence feature?
There are in fact a variety of possible ways to formulate a suitable measure of
dependence. We shall consider here one very useful approach: covariance.

Covariance
One way that X and Y can exhibit dependence is to vary together; i.e., the distribution p(x, y) might attach relatively high probability to pairs (x, y) for which the deviation of x above its mean, x − μ_X, and the deviation of y above its mean, y − μ_Y, are either both positive or both negative and relatively large in magnitude. Thus, for example, the information that a pair (x, y) had an x with positive deviation x − μ_X would suggest that, unless something unusual had occurred, the y of the given pair also had a positive deviation above its mean. A natural numerical measure which takes account of this type of information is the sum of terms

(1)    Σ_x Σ_y (x − μ_X)(y − μ_Y) p(x, y).

For the kind of dependence just described, this sum would tend to be dominated by
large positive terms.
Another way that X and Y could exhibit dependence is to vary oppositely, in which case pairs (x, y) such that one of x − μ_X and y − μ_Y is positive and the other negative would receive relatively high probability. In this case the sum (1) would tend to be dominated by negative terms.
Consequently, the sum (1) tends to indicate something about the kind of dependence between X and Y. We call it the covariance of X and Y and use the following notation and representations:

    Cov(X, Y) = σ_XY = Σ_x Σ_y (x − μ_X)(y − μ_Y) p(x, y) = E[(X − μ_X)(Y − μ_Y)].

It is easily checked that an equivalent formula for computing the covariance is:

    σ_XY = E(XY) − E(X)E(Y).

For the example of p(x, y) considered above, we find:

    μ_X = μ_Y = .2,    E(XY) = .1,    σ_XY = .06.

Does this indicate a strong relationship between X and Y? What kind of relationship?
How can we tell whether a particular value for σ_XY is meaningfully large or not? We'll return to these questions below, but for now let us explore some other aspects
of covariance.
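As a quick numerical check, here is a minimal Python sketch (the dictionary encoding of p(x, y) and the variable names are illustrative choices, not anything prescribed in these notes) that reproduces these values directly from the table:

```python
# Joint distribution p(x, y) from the 2x2 table above.
p = {(0, 0): 0.7, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.1}

# Marginal means mu_X and mu_Y.
mu_x = sum(x * prob for (x, y), prob in p.items())
mu_y = sum(y * prob for (x, y), prob in p.items())

# E(XY) and the covariance sigma_XY = E(XY) - E(X)E(Y).
e_xy = sum(x * y * prob for (x, y), prob in p.items())
cov_xy = e_xy - mu_x * mu_y

print(mu_x, mu_y)  # 0.2 0.2
print(e_xy)        # 0.1
print(cov_xy)      # 0.06, up to floating-point rounding
```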
Independence of X and Y implies Covariance = 0. This important fact is seen
(for the discrete case) as follows. If X and Y are independent, then their joint
distribution factors into the product of marginals. Using this, we have
    σ_XY = Σ_x Σ_y (x − μ_X)(y − μ_Y) p(x, y)
         = Σ_x Σ_y (x − μ_X)(y − μ_Y) p_X(x) p_Y(y)
         = [ Σ_x (x − μ_X) p_X(x) ] [ Σ_y (y − μ_Y) p_Y(y) ]
         = [μ_X − μ_X][μ_Y − μ_Y]
         = 0 · 0 = 0.

Of course, we should want any reasonable measure of dependence to reduce to the value 0 in the case of an absence of dependence.
For some measures of dependence which have been proposed in the literature, the
converse holds as well: if the measure has value 0, then the variables are independent.
However, for the covariance measure, this converse is not true.
EXAMPLE. Consider the joint probability distribution

                    Y
    p(x, y)    −1     0     1
    X     0     0   1/3     0
          1   1/3     0   1/3

Note that for this distribution we have a very strong relationship between X and Y: ignoring pairs (x, y) with 0 probability, we have

    X = Y².

On the other hand,

    σ_XY = E(XY) − E(X)E(Y) = E(Y³) − E(X) · 0 = E(Y³) = 0

(using E(Y) = 0 and E(Y³) = 0, since the distribution of Y is symmetric about 0). Thus the covariance measure fails to detect the dependence structure.
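A similar sketch (again with an illustrative dictionary encoding of the table) confirms numerically that the covariance comes out to 0 even though X = Y² exactly:

```python
# Joint distribution for the example above, in which X = Y^2 exactly.
p = {(0, -1): 0.0, (0, 0): 1/3, (0, 1): 0.0,
     (1, -1): 1/3, (1, 0): 0.0, (1, 1): 1/3}

mu_x = sum(x * prob for (x, y), prob in p.items())      # E(X) = 2/3
mu_y = sum(y * prob for (x, y), prob in p.items())      # E(Y) = 0
e_xy = sum(x * y * prob for (x, y), prob in p.items())  # E(XY) = E(Y^3) = 0

cov_xy = e_xy - mu_x * mu_y
print(cov_xy)  # 0.0: the covariance misses the purely nonlinear dependence
```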


Preliminary Conclusions about Covariance:

a) Independence of X and Y implies σ_XY = 0;

b) σ_XY = 0 does not necessarily indicate independence;

c) σ_XY ≠ 0 indicates some kind of dependence is present (what kind?).

In order to be able to interpret the meaning of a nonzero covariance value, we ask

    How big can σ_XY be in value?

It turns out that the covariance always lies between two limits which may be expressed in terms of the variances of X and Y:

    −σ_X σ_Y ≤ σ_XY ≤ +σ_X σ_Y.

(This follows from the Cauchy-Schwarz inequality: for any two r.v.s W and Z, |E(WZ)| ≤ [E(W²)]^{1/2} [E(Z²)]^{1/2}, with equality if and only if W and Z are proportional.)
Moreover, the covariance can attain one of these limits only in the case that X − μ_X and Y − μ_Y are proportional: i.e., for some constant c,

    X − μ_X = c(Y − μ_Y),

i.e.,

    Y = (1/c) X + (μ_Y − μ_X/c),

i.e., X and Y satisfy a linear relationship, Y = aX + b for some choice of a and b.

Interpretation of covariance. The above considerations lead to an interpretation of covariance: σ_XY measures the degree of linear relationship between X and Y. (This is why σ_XY = 0 for the example in which X = Y² with Y symmetric about 0. This is a purely quadratic relationship, quite nonlinear.)
Thus we see that covariance measures a particular kind of dependence, the degree of linear relationship. We can assess the strength of a covariance measure by comparing its magnitude with the largest possible value, σ_X σ_Y. If σ_XY attains this magnitude, then the variables X and Y have a purely linear relationship. If σ_XY = 0, however, then either the relationship between X and Y is of some nonlinear type, or else the variables are independent. If σ_XY lies between these values in magnitude, then we conclude that X and Y have a relationship which is a mixture of linear and other components.
A somewhat undesirable aspect of the covariance measure is that its value
changes if we transform the variables involved to other units. For example, if the
variables are X and Y and we transform to new variables

    X* = cX,    Y* = dY,

note that

    Cov(X*, Y*) = E[(X* − E(X*))(Y* − E(Y*))]
                = E[c(X − E(X)) · d(Y − E(Y))]
                = cd Cov(X, Y).

Thus, counter to our intuition that dependence relationships should not be altered
by simply rescaling the variables, the covariance measure indeed is affected.
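To see this rescaling effect concretely, here is a brief sketch in the same style as before; the scale factors c and d below are arbitrary illustrative values, and the cov helper is a hypothetical convenience rather than a routine from the notes:

```python
# Rescaling X* = c X, Y* = d Y in the 2x2 example; c and d are arbitrary
# illustrative scale factors (e.g. a change of measurement units).
p = {(0, 0): 0.7, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.1}
c, d = 100.0, 2.2

def cov(p, f, g):
    """Covariance of f(X, Y) and g(X, Y) under the joint distribution p."""
    e_f = sum(f(x, y) * prob for (x, y), prob in p.items())
    e_g = sum(g(x, y) * prob for (x, y), prob in p.items())
    e_fg = sum(f(x, y) * g(x, y) * prob for (x, y), prob in p.items())
    return e_fg - e_f * e_g

cov_xy = cov(p, lambda x, y: x, lambda x, y: y)
cov_star = cov(p, lambda x, y: c * x, lambda x, y: d * y)
print(cov_xy)              # ~0.06
print(cov_star)            # ~13.2 = c * d * 0.06
print(cov_star / (c * d))  # back to ~0.06
```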
It is preferable to have a dependence measure which is not sensitive to irrelevant
details such as units of measurement. Next we see how to convert to an equivalent
measure which is dimensionless in this sense.

Correlation
Recall that the upper and lower limits for the possible values which the covariance
can take are given in terms of the variances, which also change with rescaling of
variables. Consequently, in order to decide whether a particular value for the covariance is big or small, we need to assess it relative to the variances of the
two variables. One way to do this is to divide the covariance by the product of the
standard deviations of the variables, producing the quantity

    ρ_XY = E[ ((X − μ_X)/σ_X) ((Y − μ_Y)/σ_Y) ] = Cov(X, Y) / (√Var(X) · √Var(Y)) = σ_XY / (σ_X σ_Y),

which we call the correlation of X and Y. It is easily seen (check) that this measure does not change when we rescale the variables X and Y to X* = cX, Y* = dY

as considered above. Also, the upper and lower limits we saw for the covariance
translate to the following limits for the value of the correlation:

    −1 ≤ ρ_XY ≤ 1.

Thus we can judge the strength of the dependence by how close the correlation measure comes to either of its extreme values +1 or −1 (and away from the value 0 that suggests an absence of dependence). However, keep in mind that, like the covariance, the correlation is especially sensitive to a certain kind of dependence, namely linear dependence, and can have a small value (near 0) even when there is strong dependence of other kinds. (We saw in the previous lecture an example where the covariance, and hence the correlation, is 0 even when there is strong dependence, but of a nonlinear kind.)
Let us now illustrate the use of the correlation parameter to judge the degree
of (linear) dependence in the following probability distribution which we considered
above:
                Y
    p(x, y)     0     1
    X     0    .7    .1
          1    .1    .1
We had calculated its covariance to be σ_XY = .06, but at that time we did not assess how meaningful this value is. Now we do so, by converting to correlation. First finding the variances σ_X² = .16 = σ_Y², we then compute

    ρ_XY = .06 / (√.16 · √.16) = .375,

which we might interpret as moderate (not close to 0, but not close to 1, either).
Moreover, note that a positive value of correlation is obtained in this example,
indicating that deviations of X from its mean tend to be associated with deviations
of Y from its mean in the same direction.
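This calculation can be checked numerically as well; the following sketch (same illustrative encoding of the table as before) computes the variances and the correlation directly:

```python
# Correlation for the 2x2 example table.
p = {(0, 0): 0.7, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.1}

mu_x = sum(x * prob for (x, y), prob in p.items())
mu_y = sum(y * prob for (x, y), prob in p.items())
var_x = sum((x - mu_x) ** 2 * prob for (x, y), prob in p.items())
var_y = sum((y - mu_y) ** 2 * prob for (x, y), prob in p.items())
cov_xy = sum((x - mu_x) * (y - mu_y) * prob for (x, y), prob in p.items())

rho_xy = cov_xy / (var_x ** 0.5 * var_y ** 0.5)
print(var_x, var_y)  # ~0.16 ~0.16
print(rho_xy)        # ~0.375
```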
In what cases does correlation attain the value +1? −1? Suppose that Y is related to X exactly as a linear transformation of X: Y = a + bX, for some choice of a, b. Then we have the following analysis:

    E(XY) = E[X(a + bX)] = aE(X) + bE(X²)

and so

    σ_XY = E(XY) − E(X)E(Y) = [aE(X) + bE(X²)] − E(X)E(a + bX)
         = [aE(X) + bE(X²)] − aE(X) − b[E(X)]²
         = b(E(X²) − [E(X)]²)
         = bσ_X²

and thus, finally (using σ_Y = |b| σ_X),

    ρ_XY = bσ_X² / (σ_X σ_Y) = bσ_X² / (σ_X · |b| σ_X) = b / |b|,

which we see takes value +1 if b > 0 and value −1 if b < 0. It can be shown, also, that the values +1 and −1 are attainable only in the case of exactly linear relationships. This is the basis for characterizing correlation as a measure of the degree of linear relationship between X and Y. Despite this intuitive appeal of the correlation measure, note that it really doesn't leave us with a precise meaning in the case of a value intermediate between 0 and 1.
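As a sanity check on this conclusion, the following sketch uses the marginal of X from the earlier example together with arbitrary illustrative values of a and b, and recovers ρ_XY = b/|b|:

```python
# Correlation of X with Y = a + b X, using the marginal of X from the
# earlier example; a and b are arbitrary illustrative choices.
px = {0: 0.8, 1: 0.2}
a, b = 3.0, -5.0

mu_x = sum(x * prob for x, prob in px.items())
var_x = sum((x - mu_x) ** 2 * prob for x, prob in px.items())

mu_y = a + b * mu_x
cov_xy = sum((x - mu_x) * (a + b * x - mu_y) * prob for x, prob in px.items())
var_y = sum((a + b * x - mu_y) ** 2 * prob for x, prob in px.items())

rho_xy = cov_xy / (var_x ** 0.5 * var_y ** 0.5)
print(rho_xy)  # ~-1.0 since b < 0; with b > 0 it would be ~+1.0
```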
Despite any shortcomings the correlation measure may have, it has very wide
application. This is based on

- its fairly successful intuitive appeal as a measure of dependence,

- the central importance of confirming (or disproving) linear relationships,

- its application to calculating variances of linear combinations of r.v.s,

- its application in analyzing and interpreting regression models.

As a final illustration, let us consider the special case that Y is defined to be just X itself, i.e., Y = X. This is a special case of the linear transformation with a = 0 and b = 1, so we have ρ_XY = 1, i.e.,

    Corr(X, X) = 1.

Likewise, we can talk of the covariance of a r.v. X with itself, and we readily find
that it reduces to the variance of X:

Cov(X, X) = Var(X) .

(To see this immediately, just go back and check the definition of covariance.)
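Both identities are easy to confirm numerically too; here is a short check in the style of the earlier sketches, again using the illustrative marginal of X from the example:

```python
# Numerical check of Cov(X, X) = Var(X) and Corr(X, X) = 1.
px = {0: 0.8, 1: 0.2}

mu_x = sum(x * prob for x, prob in px.items())
var_x = sum((x - mu_x) ** 2 * prob for x, prob in px.items())
cov_xx = sum((x - mu_x) * (x - mu_x) * prob for x, prob in px.items())

print(var_x, cov_xx)                           # the two agree: ~0.16 ~0.16
print(cov_xx / (var_x ** 0.5 * var_x ** 0.5))  # ~1.0
```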
