\(\newcommand{\var}{\text{var}}\) \(\newcommand{\sd}{\text{sd}}\) \(\newcommand{\cov}{\text{cov}}\) \(\newcommand{\cor}{\text{cor}}\) \(\newcommand{\mse}{\text{mse}}\) \(\renewcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\bs}{\boldsymbol}\)

5. Covariance and Correlation

Recall that by taking the expected value of various transformations of a random variable, we can measure many interesting characteristics of the distribution of the variable. In this section, we will study an expected value that measures a special type of relationship between two real-valued variables. This relationship is very important both in probability and statistics.

Basic Theory

Definitions

As usual, our starting point is a random experiment with probability measure \(\P\) on an underlying sample space. Unless otherwise noted, we assume that all expected values mentioned in this section exist. Suppose now that \(X\) and \(Y\) are real-valued random variables for the experiment with means \(\E(X)\), \(\E(Y)\) and variances \(\var(X)\), \(\var(Y)\), respectively. The covariance of \((X, Y)\) is defined by \[ \cov(X, Y) = \E\left([X - \E(X)][Y - \E(Y)]\right) \] and (assuming the variances are positive, so that the random variables really are random) the correlation of \( (X, Y)\) is defined by \[ \cor(X, Y) = \frac{\cov(X, Y)}{\sd(X) \sd(Y)} \] Correlation is a scaled version of covariance; note that the two parameters always have the same sign (positive, negative, or 0). When the sign is positive, the variables are said to be positively correlated; when the sign is negative, the variables are said to be negatively correlated; and when the sign is 0, the variables are said to be uncorrelated. Note also that correlation is dimensionless, since the numerator and denominator have the same physical units, namely the product of the units of \(X\) and \(Y\).
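As a quick numerical illustration, the following Python sketch estimates covariance and correlation from simulated data; the particular construction \(Y = X + \text{noise}\) is just an assumed example, chosen so that the variables are positively correlated.

```python
import numpy as np

# A minimal sketch: estimate cov(X, Y) and cor(X, Y) by simulation.
# The construction Y = X + noise is an assumed example.
rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=n)
Y = X + rng.normal(size=n)

cov_XY = np.mean((X - X.mean()) * (Y - Y.mean()))   # E([X - E(X)][Y - E(Y)])
cor_XY = cov_XY / (X.std() * Y.std())               # covariance scaled by the sds

print(cov_XY)   # should be close to 1 for this construction
print(cor_XY)   # should be close to 1/sqrt(2), about 0.707
```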

As these terms suggest, covariance and correlation measure a certain kind of dependence between the variables. One of our goals is a deep understanding of this dependence. As a start, note that \((\E(X), \E(Y))\) is the center of the joint distribution of \((X, Y)\), and the vertical and horizontal lines through this point separate \(\R^2\) into four quadrants. The function \((x, y) \mapsto [x - \E(X)][y - \E(Y)]\) is positive on the first and third of these quadrants and negative on the second and fourth.

Covariance graphic
A joint distribution with \( \left(\E(X), \E(Y)\right) \) as the center of mass

Properties of Covariance

The following theorems give some basic properties of covariance. The main tool that we will need is the fact that expected value is a linear operation. Other important properties will be derived below, in the subsection on the best linear predictor. As usual, be sure to try the proofs yourself before reading the ones in the text.

Our first result is a formula that is better than the definition for computational purposes.

\(\cov(X, Y) = \E(X Y) - \E(X) \, \E(Y)\).

Proof:

Let \( \mu = \E(X) \) and \( \nu = \E(Y) \). Then

\[ \cov(X, Y) = \E[(X - \mu)(Y - \nu)] = \E(X Y - \mu Y - \nu X + \mu \nu) = \E(X Y) - \mu \E(Y) - \nu \E(X) + \mu \nu = \E(X Y) - \mu \nu \]

By the previous result, we see that \(X\) and \(Y\) are uncorrelated if and only if \(\E(X Y) = \E(X) \E(Y)\). In particular, if \(X\) and \(Y\) are independent, then they are uncorrelated. However, the converse fails with a passion: an exercise below gives an example of two variables that are functionally related (the strongest form of dependence), yet uncorrelated. The computational exercises give other examples of dependent yet uncorrelated variables also. Note also that if one of the variables has mean 0, then the covariance is simply the expected product.

Trivially, covariance is a symmetric operation.

\(\cov(X, Y) = \cov(Y, X)\).

As the name suggests, covariance generalizes variance.

\(\cov(X, X) = \var(X)\).

Proof:

Let \( \mu = \E(X) \). Then \( \cov(X, X) = \E[(X - \mu)^2] = \var(X) \).

Covariance is a linear operation in the first argument, if the second argument is fixed.

If \(X\), \(Y\), \(Z\) are real-valued random variables for the experiment, and \(c\) is a constant, then

  1. \(\cov(X + Y, Z) = \cov(X, Z) + \cov(Y, Z)\)
  2. \(\cov(c X, Y) = c \cov(X, Y)\)
Proof:
  1. We use the computational formula above: \[ \begin{align} \cov(X + Y, Z) & = \E[(X + Y) Z] - \E(X + Y) \E(Z) = \E(X Z + Y Z) - [\E(X) + \E(Y)] \E(Z) \\ & = [\E(X Z) - \E(X) \E(Z)] + [\E(Y Z) - \E(Y) \E(Z)] = \cov(X, Z) + \cov(Y, Z) \end{align} \]
  2. Similarly, \[ \cov(c X, Y) = \E(c X Y) - \E(c X) \E(Y) = c \E(X Y) - c \E(X) \E(Y) = c [\E(X Y) - \E(X) \E(Y)] = c \cov(X, Y) \]

By symmetry, covariance is also a linear operation in the second argument, with the first argument fixed. Thus, the covariance operator is bilinear. The general version of this property is given in the following theorem.

Suppose that \((X_1, X_2, \ldots, X_n)\) and \((Y_1, Y_2, \ldots, Y_m)\) are sequences of real-valued random variables for the experiment, and that \((a_1, a_2, \ldots, a_n)\) and \((b_1, b_2, \ldots, b_m)\) are constants. Then \[ \cov\left(\sum_{i=1}^n a_i \, X_i, \sum_{j=1}^m b_j \, Y_j\right) = \sum_{i=1}^n \sum_{j=1}^m a_i \, b_j \, \cov(X_i, Y_j) \]

The following result shows how covariance is changed under a linear transformation of one of the variables. This is an important special case of the basic properties.

If \( a, \; b \in \R \) then \(\cov(a + bX, Y) = b \, \cov(X, Y)\).

Proof:

A constant is independent of any random variable. Hence \( \cov(a + b X, Y) = \cov(a, Y) + b \, \cov(X, Y) = b \, \cov(X, Y) \).
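The bilinearity properties above are easy to check numerically. The sketch below uses simulated variables whose particular distributions are assumed purely for illustration.

```python
import numpy as np

# Numerical check of the covariance properties above, on assumed example variables.
rng = np.random.default_rng(1)
n = 200_000
X = rng.exponential(size=n)
Y = rng.normal(size=n) + 0.5 * X
Z = rng.uniform(size=n) + 0.2 * X

def cov(u, v):
    return np.mean((u - u.mean()) * (v - v.mean()))

a, b, c = 3.0, -2.0, 5.0
print(cov(X + Y, Z), cov(X, Z) + cov(Y, Z))   # additivity in the first argument
print(cov(c * X, Y), c * cov(X, Y))           # scaling in the first argument
print(cov(a + b * X, Y), b * cov(X, Y))       # constants drop out, the slope factors out
```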

Properties of Correlation

Next we will establish some basic properties of correlation. Most of these follow easily from the corresponding properties of covariance above. We assume that \(\var(X) \gt 0\) and \(\var(Y) \gt 0\), so that the random variables really are random and hence the correlation is well defined.

The correlation between \(X\) and \(Y\) is the covariance of the corresponding standard scores: \[ \cor(X, Y) = \cov\left(\frac{X - \E(X)}{\sd(X)}, \frac{Y - \E(Y)}{\sd(Y)}\right) = \E\left(\frac{X - \E(X)}{\sd(X)} \frac{Y - \E(Y)}{\sd(Y)}\right) \]

Proof:

From the definitions and the linearity of expected value, \[ \cor(X, Y) = \frac{\cov(X, Y)}{\sd(X) \sd(Y)} = \frac{\E([X - \E(X)][Y - \E(Y)])}{\sd(X) \sd(Y)} = \E\left(\frac{X - \E(X)}{\sd(X)} \frac{Y - \E(Y)}{\sd(Y)}\right) \] Since the standard scores have mean 0, this is also the covariance of the standard scores.

This shows again that correlation is dimensionless, since of course, the standard scores are dimensionless. Also, correlation is symmetric:

\(\cor(X, Y) = \cor(Y, X)\).

Under a linear transformation of one of the variables, the correlation is unchanged if the slope is positive and changes sign if the slope is negative:

If \(a, \; b \in \R\) and \( b \ne 0 \) then

  1. \(\cor(a + b X, Y) = \cor(X, Y)\) if \(b \gt 0\)
  2. \(\cor(a + b X, Y) = - \cor(X, Y)\) if \(b \lt 0\)
Proof:

Let \( Z \) denote the standard score of \( X \). If \( b \gt 0 \), the standard score of \( a + b X \) is also \( Z \). If \( b \lt 0 \), the standard score of \( a + b X \) is \( -Z \). Hence the result follows from the result above for standard scores.

This result reinforces the fact that correlation is a standardized measure of association, since multiplying the variable by a positive constant is equivalent to a change of scale, and adding a constant to a variable is equivalent to a change of location. For example, in the Challenger data, the underlying variables are temperature at the time of launch (in degrees Fahrenheit) and O-ring erosion (in millimeters). The correlation between these two variables is of fundamental importance. If we decide to measure temperature in degrees Celsius and O-ring erosion in inches, the correlation is unchanged.

The most important properties of covariance and correlation will emerge from our study of the best linear predictor below.

The Variance of a Sum

We will now show that the variance of a sum of variables is the sum of the pairwise covariances. This result is very useful since many random variables with common distributions can be written as sums of simpler random variables (see in particular the binomial distribution and hypergeometric distribution below).

If \((X_1, X_2, \ldots, X_n)\) is a sequence of real-valued random variables for the experiment, then \[ \var\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sum_{j=1}^n \cov(X_i, X_j) = \sum_{i=1}^n \var(X_i) + 2 \sum_{\{\{i, j\}: \; i \lt j\}} \cov(X_i, X_j) \]

Proof:

From the result above that covariance generalizes variance, and the bilinearity of covariance, \[ \var\left(\sum_{i=1}^n X_i\right) = \cov\left(\sum_{i=1}^n X_i, \sum_{j=1}^n X_j\right) = \sum_{i=1}^n \sum_{j=1}^n \cov(X_i, X_j) \] The second expression follows since \( \cov(X_i, X_i) = \var(X_i) \) for each \( i \) and \( \cov(X_i, X_j) = \cov(X_j, X_i) \) for \( i \ne j \).

Note that the variance of a sum can be greater than, smaller than, or equal to the sum of the variances, depending on the sign of the covariance terms with \(i \ne j\). As a special case of the previous result, when \(n = 2\), we have \[ \var(X + Y) = \var(X) + \var(Y) + 2 \, \cov(X, Y) \]
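Here is a quick simulation check of the \(n = 2\) identity; the distributions of \(X\) and \(Y\) below are assumed for illustration only.

```python
import numpy as np

# Check var(X + Y) = var(X) + var(Y) + 2 cov(X, Y) on simulated data.
rng = np.random.default_rng(2)
n = 200_000
X = rng.normal(size=n)
Y = 0.8 * X + rng.normal(size=n)             # correlated with X by construction

cov_XY = np.mean((X - X.mean()) * (Y - Y.mean()))
print(np.var(X + Y))
print(np.var(X) + np.var(Y) + 2 * cov_XY)    # the two numbers should agree closely
```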

If \((X_1, X_2, \ldots, X_n)\) is a sequence of pairwise uncorrelated, real-valued random variables then \[ \var\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \var(X_i) \]

Proof:

This follows immediately from the previous theorem, since \( \cov(X_i, X_j) = 0 \) for \( i \ne j \).

Note that the last result holds, in particular, if the random variables are independent.

If \(X\) and \(Y\) are real-valued random variables then \(\var(X + Y) + \var(X - Y) = 2 \, [\var(X) + \var(Y)]\).

Proof:

From the \(n = 2\) special case above, \[ \var(X + Y) = \var(X) + \var(Y) + 2 \cov(X, Y) \] Similarly, \[ \var(X - Y) = \var(X) + \var(-Y) + 2 \cov(X, - Y) = \var(X) + \var(Y) - 2 \cov(X, Y) \] Adding gives the result.

If \(X\) and \(Y\) are real-valued random variables with \(\var(X) = \var(Y)\) then \(X + Y\) and \(X - Y\) are uncorrelated.

Proof:

From the bilinear and symmetry properties, \( \cov(X + Y, X - Y) = \cov(X, X) - \cov(X, Y) + \cov(Y, X) - \cov(Y, Y) = \var(X) - \var(Y) \), which is 0 since \( \var(X) = \var(Y) \).

Random Samples

In the following exercises, suppose that \((X_1, X_2, \ldots)\) is a sequence of independent, real-valued random variables with a common distribution that has mean \(\mu\) and standard deviation \(\sigma \gt 0\). In statistical terms, the variables form a random sample from the common distribution.

Let \(Y_n = \sum_{i=1}^n X_i\).

  1. \(\E\left(Y_n\right) = n \, \mu\)
  2. \(\var\left(Y_n\right) = n \, \sigma^2\)
Proof:
  1. This result follows from the additive property of expected value.
  2. This result follows from the additive property of variance for independent variables.

Let \(M_n = Y_n \big/ n = \frac{1}{n} \sum_{i=1}^n X_i\). Thus, \(M_n\) is the sample mean.

  1. \(\E\left(M_n\right) = \mu\)
  2. \(\var\left(M_n\right) = \sigma^2 / n\)
  3. \(\var\left(M_n\right) \to 0\) as \(n \to \infty\)
  4. \(\P\left(\left|M_n - \mu\right| \gt \epsilon\right) \to 0\) as \(n \to \infty\) for every \(\epsilon \gt 0\).
Proof:
  1. This result follows from part (a) of the previous theorem and the scaling property of expected value.
  2. This result follows from part (b) of the previous theorem and the scaling property of variance.
  3. This result is an immediate consequence of (b).
  4. This result follows from (c) and Chebyshev's inequality: \( \P\left(\left|M_n - \mu\right| \gt \epsilon\right) \le \var(M_n) \big/ \epsilon^2 \to 0 \) as \( n \to \infty \)

Part (c) of the last exercise means that \(M_n \to \mu\) as \(n \to \infty\) in mean square. Part (d) means that \(M_n \to \mu\) as \(n \to \infty\) in probability. These are both versions of the weak law of large numbers, one of the fundamental theorems of probability.
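The weak law is easy to see in simulation. The sketch below uses the exponential distribution with \(\mu = \sigma = 1\) as an assumed sampling distribution; any distribution with finite variance behaves the same way.

```python
import numpy as np

# Weak law of large numbers: var(M_n) = sigma^2 / n shrinks, so M_n concentrates
# around mu. The exponential(1) distribution (mu = sigma = 1) is an assumed example.
rng = np.random.default_rng(3)
mu, sigma = 1.0, 1.0
for n in [10, 100, 1_000, 10_000]:
    M = rng.exponential(size=(1_000, n)).mean(axis=1)   # 1000 replicates of M_n
    print(n,
          M.var(),                            # empirical var(M_n)
          sigma**2 / n,                       # theoretical var(M_n)
          np.mean(np.abs(M - mu) > 0.1))      # estimate of P(|M_n - mu| > 0.1)
```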

The standard score of the sum \( Y_n \) and the standard score of the sample mean \( M_n \) are the same: \[ Z_n = \frac{Y_n - n \, \mu}{\sqrt{n} \, \sigma} = \frac{M_n - \mu}{\sigma / \sqrt{n}} \]

  1. \(\E(Z_n) = 0\)
  2. \(\var(Z_n) = 1\)
Proof:

The equality of the standard score of \( Y_n \) and of \( M_n \) is a result of simple algebra. But recall more generally that the standard score of a variable is unchanged by a linear transformation of the variable with positive slope (a location-scale transformation of the distribution). Of course, parts (a) and (b) are true for any standard score.

The central limit theorem, the other fundamental theorem of probability, states that the distribution of \(Z_n\) converges to the standard normal distribution as \(n \to \infty\).

Events

Suppose that \(A\) and \(B\) are events in a random experiment. The covariance and correlation of \(A\) and \(B\) are defined to be the covariance and correlation, respectively, of their indicator random variables \(\bs{1}_A\) and \(\bs{1}_B\).

If \(A\) and \(B\) are events then

  1. \(\cov(A, B) = \P(A \cap B) - \P(A) \P(B)\)
  2. \(\cor(A, B) = [\P(A \cap B) - \P(A) \P(B)] \big/ \sqrt{\P(A)[1 - \P(A)] \P(B)[1 - \P(B)]}\)
Proof:

Recall that if \( X \) is an indicator variable with \( \P(X = 1) = p \), then \( \E(X) = p \) and \( \var(X) = p (1 - p) \). Also, if \( X \) and \( Y \) are indicator variables then \( X Y \) is an indicator variable and \( \P(X Y = 1) = \P(X = 1, Y = 1) \). The results then follow from the definitions.

In particular, note that \(A\) and \(B\) are positively correlated, negatively correlated, or independent, respectively (as defined in the section on conditional probability) if and only if the indicator variables of \(A\) and \(B\) are positively correlated, negatively correlated, or uncorrelated, as defined in this section.
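Here is a small simulation of the covariance and correlation of two events; the events \(A = \{\text{even score}\}\) and \(B = \{\text{score} \le 3\}\) on a fair die roll are an assumed example.

```python
import numpy as np

# Covariance and correlation of events via their indicator variables.
# Assumed example: on a fair die roll, A = {even score}, B = {score <= 3}.
rng = np.random.default_rng(4)
rolls = rng.integers(1, 7, size=500_000)
IA = (rolls % 2 == 0).astype(float)    # indicator of A, P(A) = 1/2
IB = (rolls <= 3).astype(float)        # indicator of B, P(B) = 1/2

cov_AB = np.mean(IA * IB) - IA.mean() * IB.mean()
cor_AB = cov_AB / np.sqrt(IA.var() * IB.var())
print(cov_AB, cor_AB)   # theory: P(A ∩ B) - P(A)P(B) = 1/6 - 1/4 = -1/12, and -1/3
```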

If \(A\) and \(B\) are events then

  1. \(\cov(A, B^c) = - \cov(A, B)\)
  2. \(\cov(A^c, B^c) = \cov(A, B)\)
Proof:

These results follow from the linearity of covariance and the fact that \( \bs{1}_{A^c} = 1 - \bs{1}_A \).

If \(A \subseteq B\) then

  1. \(\cov(A, B) = \P(A)[1 - \P(B)]\)
  2. \(\cor(A, B) = \sqrt{\frac{\P(A)[1 - \P(B)]}{\P(B)[1 - \P(A)]}}\)
Proof:

These results follow from the theorem above on events, since \( A \cap B = A \).

The Best Linear Predictor

What linear function of \(X\) is closest to \(Y\) in the sense of minimizing mean square error? The question is fundamentally important in the case where random variable \(X\) (the predictor variable) is observable and random variable \(Y\) (the response variable) is not. The linear function can be used to estimate \(Y\) from an observed value of \(X\). Moreover, the solution will show that covariance and correlation measure the linear relationship between \(X\) and \(Y\). To avoid trivial cases, let us assume that \(\var(X) \gt 0\) and \(\var(Y) \gt 0\), so that the random variables really are random.

Linear Predictor Graphic
The distribution regression line

Let \(\mse(a, b)\) denote the mean square error when \(a + b \, X\) is used as an estimator of \(Y\), as a function of the parameters \(a, \; b \in \R\): \[ \mse(a, b) = \E\left([Y - (a + b \, X)]^2 \right) \]

\(\mse(a, b)\) is minimized when \[ b = \frac{\cov(X, Y)}{\var(X)}, \quad a = \E(Y) - \frac{\cov(X, Y)}{\var(X)} \E(X) \]

Proof:

In the definition of the mean square error function, expanding the square and using the linearity of expected value gives \[ \mse(a, b) = \E(Y^2) - 2 \: b \: \E(X \: Y) - 2 \: a \: \E(Y) + b^2 \: \E(X^2) + 2 \: a \: b \: \E(X) + a^2 \] Thus, the graph of \( \mse \) is a paraboloid opening upward. Specifically, setting the first derivatives of \( \mse \) to 0 we have \[ \begin{align} -2 \E(Y) + 2 b \E(X) + 2 a & = 0 \\ -2 \E(X Y) + 2 b \E(X^2) + 2 a \E(X) & = 0 \end{align} \] Solving the first equation for \( a \) gives \( a = \E(Y) - b \E(X) \). Substituting this into the second equation and solving gives \( b = \cov(X, Y) \big/ \var(X) \). Finally, the second derivative matrix is \[ \left[ \begin{matrix} 2 & 2 \E(X) \\ 2 \E(X) & 2 \E(X^2) \end{matrix} \right] \] The diagonal entries are positive and the determinant is \( 4 \var(X) \gt 0 \) so the matrix is positive definite. It follows that the minimum of \( \mse \) occurs at the single critical point.

Thus, the best linear predictor of \(Y\) given \(X\) is the random variable \(L(Y \mid X)\) given by \[ L(Y \mid X) = \E(Y) + \frac{\cov(X, Y)}{\var(X)} [X - \E(X)] \]
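The formulas for the coefficients can be checked against an ordinary least-squares fit on simulated data. The particular model generating \((X, Y)\) below is an assumed example.

```python
import numpy as np

# The coefficients of L(Y | X) from the formulas above agree with a least-squares
# line fitted to simulated data. The generating model here is an assumed example.
rng = np.random.default_rng(5)
n = 200_000
X = rng.uniform(size=n)
Y = 2.0 + 3.0 * X + rng.normal(scale=0.5, size=n)

cov_XY = np.mean((X - X.mean()) * (Y - Y.mean()))
b = cov_XY / X.var()           # slope: cov(X, Y) / var(X)
a = Y.mean() - b * X.mean()    # intercept: E(Y) - b E(X)
print(b, a)                    # close to 3 and 2
print(np.polyfit(X, Y, 1))     # least-squares [slope, intercept], essentially the same
```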

The minimum value of the mean square error function \(\mse\) is \[ \E\left([Y - L(Y \mid X)]^2 \right) = \var(Y)\left[1 - \cor^2(X, Y)\right] \]

Proof:

This follows from substituting \( a = \E(Y) - \E(X) \cov(X, Y) \big/ \var(X) \) and \( b = \cov(X, Y) \big/ \var(X) \) into \( \mse(a, b) \) and simplifying.

Our solution to the best linear predictor problem yields important properties of covariance and correlation.

Correlation satisfies the following properties:

  1. \(-1 \le \cor(X, Y) \le 1\)
  2. \(-\sd(X) \: \sd(Y) \le \cov(X, Y) \le \sd(X) \: \sd(Y)\)
  3. \(\cor(X, Y) = 1\) if and only if \(Y = a + b \: X\) with probability 1, for some constants \(a\) and \(b \gt 0\).
  4. \(\cor(X, Y) = - 1\) if and only if \(Y = a + b \: X\) with probability 1, for some constants \(a\) and \(b \lt 0\).
Proof:

Since mean square error is nonnegative, it follows from the mean square error formula above that \(\cor^2(X, Y) \le 1\). This gives parts (a) and (b). For parts (c) and (d), note that if \(\cor^2(X, Y) = 1\) then \(Y = L(Y \mid X)\) with probability 1, and that the slope in \( L(Y \mid X) \) has the same sign as \( \cor(X, Y) \).

The last two theorems show clearly that \(\cov(X, Y)\) and \(\cor(X, Y)\) measure the linear association between \(X\) and \(Y\).

Recall from our previous discussion of variance that the best constant predictor of \(Y\), in the sense of minimizing mean square error, is \(\E(Y)\) and the minimum value of the mean square error for this predictor is \(\var(Y)\). Thus, the difference between the variance of \(Y\) and the mean square error above for \( L(Y \mid X) \) is the reduction in the variance of \(Y\) when the linear term in \(X\) is added to the predictor:

\(\var(Y) - \E\left([Y - L(Y \mid X)]^2\right) = \var(Y) \, \cor^2(X, Y)\).

Thus \(\cor^2(X, Y)\) is the proportion of reduction in \(\var(Y)\) when \(X\) is included as a predictor variable. This quantity is called the (distribution) coefficient of determination. Now let

\[ L(Y \mid X = x) = \E(Y) + \frac{\cov(X, Y)}{\var(X)}[x - \E(X)], \quad x \in \R \]

The function \(x \mapsto L(Y \mid X = x)\) is known as the distribution regression function for \(Y\) given \(X\), and its graph is known as the distribution regression line. Note that the regression line passes through \((\E(X), \E(Y))\), the center of the joint distribution.

\(\E[L(Y \mid X)] = \E(Y)\).

Proof:

From the linearity of expected value,

\[ \E[L(Y \mid X)] = \E(Y) + \frac{\cov(X, Y)}{\var(X)}[\E(X) - \E(X)] = \E(Y) \]

However, the choice of predictor variable and response variable is crucial.

The regression line for \(Y\) given \(X\) and the regression line for \(X\) given \(Y\) are not the same line, except in the trivial case where the variables are perfectly correlated. However, the coefficient of determination is the same, regardless of which variable is the predictor and which is the response.

Proof:

The two regression lines are

\[ \begin{align} y - \E(Y) & = \frac{\cov(X, Y)}{\var(X)}[x - \E(X)] \\ x - \E(X) & = \frac{\cov(X, Y)}{\var(Y)}[y - \E(Y)] \end{align} \]

The two lines are the same if and only if \( \cov^2(X, Y) = \var(X) \var(Y) \). But this is equivalent to \( \cor^2(X, Y) = 1 \).

Suppose that \(A\) and \(B\) are events in a random experiment with \(0 \lt \P(A) \lt 1\) and \(0 \lt \P(B) \lt 1\). Then

  1. \(\cor(A, B) = 1\) if and only if \(\P(A \setminus B) + \P(B \setminus A) = 0\). (That is, \(A\) and \(B\) are equivalent events.)
  2. \(\cor(A, B) = - 1\) if and only if \(\P(A \setminus B^c) + \P(B^c \setminus A) = 0\). (That is, \(A\) and \(B^c\) are equivalent events.)

The concept of best linear predictor is more powerful than might first appear, because it can be applied to transformations of the variables. Specifically, suppose that \(X\) and \(Y\) are random variables for our experiment, taking values in general spaces \(S\) and \(T\), respectively. Suppose also that \(g\) and \(h\) are real-valued functions defined on \(S\) and \(T\), respectively. We can find \(L[h(Y) \mid g(X)]\), the linear function of \(g(X)\) that is closest to \(h(Y)\) in the mean square sense. The results of this subsection apply, of course, with \(g(X)\) replacing \(X\) and \(h(Y)\) replacing \(Y\).

Suppose that \(Z\) is another real-valued random variable for the experiment and that \(c\) is a constant. Then

  1. \(L(Y + Z \mid X) = L(Y \mid X) + L(Z \mid X)\)
  2. \(L(c \, Y \mid X) = c \, L(Y \mid X)\)
Proof:

These results follow easily from the linearity of expected value and covariance.

  1. \[ \begin{align} L(Y + Z \mid X) & = \E(Y + Z) + \frac{\cov(X, Y + Z)}{\var(X)}[X - \E(X)] \\ &= \left(\E(Y) + \frac{\cov(X, Y)}{\var(X)} [X - \E(X)]\right) + \left(\E(Z) + \frac{\cov(X, Z)}{\var(X)}[X - \E(X)]\right) \\ & = L(Y \mid X) + L(Z \mid X) \end{align} \]
  2. \[ L(c Y \mid X) = \E(c Y) + \frac{\cov(X, cY)}{\var(X)}[X - \E(X)] = c \E(Y) + c \frac{\cov(X, Y)}{\var(X)}[X - \E(X)] = c L(Y \mid X) \]

There are several extensions and generalizations of the ideas in this subsection.

Examples and Applications

Uniform Distributions

Suppose that \(X\) is uniformly distributed on the interval \([-1, 1]\) and \(Y = X^2\). Then \(X\) and \(Y\) are uncorrelated even though \(Y\) is a function of \(X\) (the strongest form of dependence).

Proof:

Note that \( \E(X) = 0 \), \( \E(Y) = \E(X^2) = 1/3 \), and \( \E(X Y) = \E(X^3) = 0 \). Hence \( \cov(X, Y) = \E(X Y) - \E(X) \E(Y) = 0 \).
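A short simulation confirms the computation:

```python
import numpy as np

# X uniform on [-1, 1] and Y = X^2: uncorrelated, yet Y is a function of X.
rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=500_000)
Y = X ** 2

cov_XY = np.mean(X * Y) - X.mean() * Y.mean()
print(cov_XY)              # close to 0, in agreement with the proof
print(Y.mean(), 1 / 3)     # E(Y) = E(X^2) = 1/3
```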

Suppose that \((X, Y)\) is uniformly distributed on the region \(S \subseteq \R^2\). Find \(\cov(X, Y)\) and \(\cor(X, Y)\) and determine whether the variables are independent in each of the following cases:

  1. \(S = [a, b] \times [c, d]\) where \(a \lt b\) and \(c \lt d\), so \( S \) is a rectangle.
  2. \(S = \{(x, y) \in \R^2: -a \le y \le x \le a\}\) where \(a \gt 0\), so \( S \) is a triangle.
  3. \(S = \{(x, y) \in \R^2: x^2 + y^2 \le r^2\}\) where \(r \gt 0\), so \( S \) is a circle.
Answer:
  1. \( \cov(X, Y) = 0\), \(\cor(X, Y) = 0\). \( X \) and \( Y \) are independent.
  2. \(\cov(X, Y) = \frac{a^2}{9}\), \(\cor(X, Y) = \frac{1}{2}\). \( X \) and \( Y \) are dependent.
  3. \(\cov(X, Y) = 0\), \(\cor(X, Y) = 0\). \( X \) and \( Y \) are dependent.

In the bivariate uniform experiment, select each of the regions below in turn. For each region, run the simulation 2000 times and note the value of the correlation and the shape of the cloud of points in the scatterplot. Compare with the results in the last exercise.

  1. Square
  2. Triangle
  3. Circle

Suppose that \(X\) is uniformly distributed on the interval \((0, 1)\) and that given \(X = x \in (0, 1)\), \(Y\) is uniformly distributed on the interval \((0, x)\). Find each of the following:

  1. \(\cov(X, Y)\)
  2. \(\cor(X, Y)\)
  3. \(L(Y \mid X)\)
  4. \(L(X \mid Y)\)
Answer:
  1. \(\frac{1}{24}\)
  2. \(\sqrt{\frac{3}{7}}\)
  3. \(\frac{1}{2} X\)
  4. \(\frac{2}{7} + \frac{6}{7} Y\)
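The answers can be checked by simulating the two-stage model directly:

```python
import numpy as np

# Simulate X uniform on (0, 1), then Y uniform on (0, X), and compare with the
# exact answers above.
rng = np.random.default_rng(7)
n = 1_000_000
X = rng.uniform(size=n)
Y = rng.uniform(size=n) * X            # given X = x, Y is uniform on (0, x)

cov_XY = np.mean(X * Y) - X.mean() * Y.mean()
cor_XY = cov_XY / np.sqrt(X.var() * Y.var())
print(cov_XY, 1 / 24)                  # simulated vs exact covariance
print(cor_XY, np.sqrt(3 / 7))          # simulated vs exact correlation
print(np.polyfit(X, Y, 1))             # [slope, intercept] close to [1/2, 0] for L(Y | X)
```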

Dice

Recall that a standard die is a six-sided die. A fair die is one in which the faces are equally likely. An ace-six flat die is a standard die in which faces 1 and 6 have probability \(\frac{1}{4}\) each, and faces 2, 3, 4, and 5 have probability \(\frac{1}{8}\) each.

A pair of standard, fair dice are thrown and the scores \((X_1, X_2)\) recorded. Let \(Y = X_1 + X_2\) denote the sum of the scores, \(U = \min\{X_1, X_2\}\) the minimum score, and \(V = \max\{X_1, X_2\}\) the maximum score. Find the covariance and correlation of each of the following pairs of variables:

  1. \((X_1, X_2)\)
  2. \((X_1, Y)\)
  3. \((X_1, U)\)
  4. \((U, V)\)
  5. \((U, Y)\)
Answer:
  1. \(0\), \(0\)
  2. \(\frac{35}{12}\), \(\frac{1}{\sqrt{2}} = 0.7071\)
  3. \(\frac{35}{24}\), \(0.6082\)
  4. \(\frac{1225}{1296}\), \(\frac{1225}{2555} \approx 0.4795\)
  5. \(\frac{35}{12}\), \(0.8601\)

Suppose that \(n\) fair dice are thrown. Find the mean and variance of each of the following variables:

  1. \( Y_n \), the sum of the scores.
  2. \( M_n \), the average of the scores.
Answer:
  1. \(\E\left(Y_n\right) = \frac{7}{2} n\), \(\var\left(Y_n\right) = \frac{35}{12} n\)
  2. \(\E\left(M_n\right) = \frac{7}{2}\), \(\var\left(M_n\right) = \frac{35}{12 n}\)

In the dice experiment, select fair dice, and select the following random variables. In each case, increase the number of dice and observe the size and location of the probability density function and the mean \( \pm \) standard deviation bar. With \(n = 20\) dice, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.

  1. The sum of the scores.
  2. The average of the scores.

Repeat the computational exercise above for ace-six flat dice.

Answer:
  1. \(n \frac{7}{2}\), \(n \frac{15}{4}\)
  2. \(\frac{7}{2}\), \(\frac{15}{4 n}\)

Repeat the simulation exercise above for ace-six flat dice.

A pair of fair dice are thrown and the scores \((X_1, X_2)\) recorded. Let \(Y = X_1 + X_2\) denote the sum of the scores, \(U = \min\{X_1, X_2\}\) the minimum score, and \(V = \max\{X_1, X_2\}\) the maximum score. Find each of the following:

  1. \(L(Y \mid X_1)\)
  2. \(L(U \mid X_1)\)
  3. \(L(V \mid X_1)\)
Answer:
  1. \(\frac{7}{2} + X_1\)
  2. \(\frac{7}{9} + \frac{1}{2} X_1\)
  3. \(\frac{49}{18} + \frac{1}{2} X_1\)

Bernoulli Trials

Recall that a Bernoulli trials process is a sequence \(\boldsymbol{X} = (X_1, X_2, \ldots)\) of independent, identically distributed indicator random variables. In the usual language of reliability, \(X_i\) denotes the outcome of trial \(i\), where 1 denotes success and 0 denotes failure. The probability of success \(p = \P(X_i = 1)\) is the basic parameter of the process. The process is named for Jacob Bernoulli. A separate chapter on the Bernoulli Trials explores this process in detail.

The number of successes in the first \(n\) trials is \(Y = \sum_{i=1}^n X_i\). Recall that this random variable has the binomial distribution with parameters \(n\) and \(p\), which has probability density function

\[ f(y) = \binom{n}{y} p^y (1 - p)^{n - y}, \quad y \in \{0, 1, \ldots, n\} \]

The mean and variance of \(Y\) are

  1. \(\E(Y) = n p\)
  2. \(\var(Y) = n p (1 - p)\)
Proof:

These results could be derived from the PDF of \( Y \), of course, but a derivation based on the sum of IID variables is much better. Recall that \( \E(X_i) = p \) and \( \var(X_i) = p (1 - p) \), so the results follow immediately from the theorem above on random samples.
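The representation as a sum of indicator variables is also convenient for simulation; the parameter values below are assumed for illustration.

```python
import numpy as np

# Number of successes in n Bernoulli trials as a sum of indicators.
# The parameters n = 20, p = 0.3 are assumed example values.
rng = np.random.default_rng(8)
n, p = 20, 0.3
trials = rng.random((100_000, n)) < p     # 100000 runs of n Bernoulli trials
Y = trials.sum(axis=1)                    # binomial(n, p) counts

print(Y.mean(), n * p)                    # E(Y) = n p
print(Y.var(), n * p * (1 - p))           # var(Y) = n p (1 - p)
```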

In the binomial coin experiment, select the number of heads. Vary \(n\) and \(p\) and note the shape of the probability density function and the size and location of the mean \( \pm \) standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.

The proportion of successes in the first \(n\) trials is \(M = Y / n\). This random variable is sometimes used as a statistical estimator of the parameter \(p\), when the parameter is unknown.

The mean and variance of \(M\) are

  1. \(\E(M) = p\)
  2. \(\var(M) = p (1 - p) / n\)
Proof:

These results follow immediately from the previous theorem and the theorem above on random samples.

In the binomial coin experiment, select the proportion of heads. Vary \(n\) and \(p\) and note the shape of the probability density function and the size and location of the mean \( \pm \) standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.

The Hypergeometric Distribution

Suppose that a population consists of \(m\) objects; \(r\) of the objects are type 1 and \(m - r\) are type 0. A sample of \(n\) objects is chosen at random, without replacement. Let \(X_i\) denote the type of the \(i\)th object selected. Recall that \((X_1, X_2, \ldots, X_n)\) is a sequence of identically distributed (but not independent) indicator random variables.

Let \(Y\) denote the number of type 1 objects in the sample, so that \(Y = \sum_{i=1}^n X_i\). Recall that this random variable has the hypergeometric distribution, which has probability density function

\[ f(y) = \frac{\binom{r}{y} \binom{m - r}{n - y}}{\binom{m}{n}}, \quad y \in \{0, 1, \ldots, n\} \]

For distinct \(i\) and \(j\),

  1. \( \E(X_i) = \frac{r}{m} \)
  2. \( \var(X_i) = \frac{r}{m} \left(1 - \frac{r}{m}\right) \)
  3. \(\cov(X_i, X_j) = -\frac{r}{m}\left(1 - \frac{r}{m}\right) \frac{1}{m - 1}\)
  4. \(\cor(X_i, X_j) = -\frac{1}{m - 1}\)
Proof:

Recall that \( \E(X_i) = \P(X_i = 1) = \frac{r}{m} \) for each \( i \) and \( \E(X_i X_j) = \P(X_i = 1, X_j = 1) = \frac{r}{m} \frac{r - 1}{m - 1} \) for each \( i \ne j \). Technically, the sequence of indicator variables is exchangeable. The results now follow from the definitions and simple algebra.

Note that the event of a type 1 object on draw \(i\) and the event of a type 1 object on draw \(j\) are negatively correlated, but the correlation depends only on the population size and not on the number of type 1 objects. Note also that the correlation is perfect if \(m = 2\). Think about these results intuitively.
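A simulation of sampling without replacement makes the negative correlation visible; the population parameters below are assumed example values.

```python
import numpy as np

# Sampling without replacement: the indicators of type 1 objects on two different
# draws have correlation -1/(m - 1). Parameters m = 10, r = 4, n = 5 are assumed.
rng = np.random.default_rng(9)
m, r, n = 10, 4, 5
population = np.array([1] * r + [0] * (m - r))
samples = np.array([rng.permutation(population)[:n] for _ in range(100_000)])

X1 = samples[:, 0].astype(float)
X2 = samples[:, 1].astype(float)
cov_12 = np.mean(X1 * X2) - X1.mean() * X2.mean()
cor_12 = cov_12 / np.sqrt(X1.var() * X2.var())
print(cov_12, -(r / m) * (1 - r / m) / (m - 1))   # simulated vs exact covariance
print(cor_12, -1 / (m - 1))                       # simulated vs exact correlation
```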

The mean and variance of \(Y\) are

  1. \(\E(Y) = n \frac{r}{m}\)
  2. \(\var(Y) = n \frac{r}{m}(1 - \frac{r}{m}) \frac{m - n}{m - 1}\)
Proof:

Again, a derivation from the representation of \( Y \) as a sum of indicator variables is far preferable to a derivation based on the PDF of \( Y \). These results follow immediately from the previous theorem, the additivity of expected value, and the theorem above on the variance of a sum.

In the ball and urn experiment, select sampling without replacement. Vary \(m\), \(r\), and \(n\) and note the shape of the probability density function and the size and location of the mean \( \pm \) standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.

Basic Properties

Suppose that \(X\) and \(Y\) are real-valued random variables with \(\cov(X, Y) = 3\). Find \(\cov(2 X - 5, 4 Y + 2)\).

Answer:

24

Suppose \(X\) and \(Y\) are real-valued random variables with \(\var(X) = 5\), \(\var(Y) = 9\), and \(\cov(X, Y) = - 3\). Find \(\var(2 X + 3 Y - 7)\).

Answer:

65

Suppose that \(X\) and \(Y\) are independent, real-valued random variables with \(\var(X) = 6\) and \(\var(Y) = 8\). Find \(\var(3 X - 4 Y + 5)\).

Answer:

182

Suppose that \(A\) and \(B\) are events in an experiment with \(\P(A) = \frac{1}{2}\), \(\P(B) = \frac{1}{3}\), and \(\P(A \cap B) = \frac{1}{8}\). Find the covariance and correlation between \(A\) and \(B\).

Answer:
  1. \(-\frac{1}{24}\)
  2. \(\approx -0.1768\)

Simple Continuous Distributions

Suppose that \((X, Y)\) has probability density function \(f(x, y) = x + y\) for \(0 \le x \le 1\), \(0 \le y \le 1\). Find each of the following:

  1. \(\cov(X, Y)\)
  2. \(\cor(X, Y)\)
  3. \(L(Y \mid X)\)
  4. \(L(X \mid Y)\)
Answer:
  1. \(-\frac{1}{144}\)
  2. \(-\frac{1}{11} = -0.0909\)
  3. \(\frac{7}{11} - \frac{1}{11} X\)
  4. \(\frac{7}{11} - \frac{1}{11} Y\)
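For continuous exercises like this one, the answers can be checked by rejection sampling, since the density is bounded (here by 2) on the unit square:

```python
import numpy as np

# Rejection sampling from f(x, y) = x + y on the unit square (the density is
# bounded by 2 there), then compare with the exact answers above.
rng = np.random.default_rng(10)
N = 2_000_000
x, y, u = rng.uniform(size=(3, N))
keep = 2 * u <= x + y                     # accept (x, y) with probability (x + y)/2
X, Y = x[keep], y[keep]

cov_XY = np.mean(X * Y) - X.mean() * Y.mean()
cor_XY = cov_XY / np.sqrt(X.var() * Y.var())
print(cov_XY, -1 / 144)                   # simulated vs exact covariance
print(cor_XY, -1 / 11)                    # simulated vs exact correlation
print(np.polyfit(X, Y, 1))                # [slope, intercept] close to [-1/11, 7/11]
```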

Suppose that \((X, Y)\) has probability density function \(f(x, y) = 2 (x + y)\) for \(0 \le x \le y \le 1\). Find each of the following:

  1. \(\cov(X, Y)\)
  2. \(\cor(X, Y)\)
  3. \(L(Y \mid X)\)
  4. \(L(X \mid Y)\)
Answer:
  1. \(\frac{1}{48}\)
  2. \(\frac{5}{\sqrt{129}} \approx 0.4402\)
  3. \(\frac{26}{43} + \frac{15}{43} X\)
  4. \(\frac{5}{9} Y\)

Suppose again that \((X, Y)\) has probability density function \(f(x, y) = 2 (x + y)\) for \(0 \le x \le y \le 1\).

  1. Find \(\cov\left(X^2, Y\right)\).
  2. Find \(\cor\left(X^2, Y\right)\).
  3. Find \(L\left(Y \mid X^2\right)\).
  4. Which predictor of \(Y\) is better, the one based on \(X\) or the one based on \(X^2\)?
Answer:
  1. \(\frac{7}{360}\)
  2. \(0.448\)
  3. \(\frac{1255}{1902} + \frac{245}{634} X^2\)
  4. The predictor based on \(X^2\) is slightly better.

Suppose that \((X, Y)\) has probability density function \(f(x, y) = 6 x^2 y\) for \(0 \le x \le 1\), \(0 \le y \le 1\). Find each of the following:

  1. \(\cov(X, Y)\)
  2. \(\cor(X, Y)\)
  3. \(L(Y \mid X)\)
  4. \(L(X \mid Y)\)
Answer:

Note that \(X\) and \(Y\) are independent.

  1. \(0\)
  2. \(0\)
  3. \(\frac{2}{3}\)
  4. \(\frac{3}{4}\)

Suppose that \((X, Y)\) has probability density function \(f(x, y) = 15 x^2 y\) for \(0 \le x \le y \le 1\). Find each of the following:

  1. \(\cov(X, Y)\)
  2. \(\cor(X, Y)\)
  3. \(L(Y \mid X)\)
  4. \(L(X \mid Y)\)
Answer:
  1. \(\frac{5}{336}\)
  2. \(\frac{5}{\sqrt{85}} \approx 0.5423\)
  3. \(\frac{30}{51} + \frac{20}{51} X\)
  4. \(\frac{3}{4} Y\)

Suppose again that \((X, Y)\) has probability density function \(f(x, y) = 15 x^2 y\) for \(0 \le x \le y \le 1\).

  1. Find \(\cov\left(\sqrt{X}, Y\right)\).
  2. Find \(\cor\left(\sqrt{X}, Y\right)\).
  3. Find \(L\left(Y \mid \sqrt{X}\right)\).
  4. Which of the predictors of \(Y\) is better, the one based on \(X\) or the one based on \(\sqrt{X}\)?
Answer:
  1. \(\frac{10}{1001}\)
  2. \(\frac{24}{169} \sqrt{14}\)
  3. \(\frac{5225}{13\;182} + \frac{1232}{2197} \sqrt{X}\)
  4. The predictor based on \(X\) is slightly better.