\( \renewcommand{\P}{\mathbb{P}} \) \( \newcommand{\R}{\mathbb{R}} \) \( \newcommand{\N}{\mathbb{N}} \) \( \newcommand{\bs}{\boldsymbol} \)

4. Joint Distributions

Basic Theory

Joint and Marginal Distributions

As usual, we start with a random experiment with probability measure \(\P\) on an underlying sample space. Suppose now that \(X\) and \(Y\) are random variables for the experiment, and that \(X\) takes values in \(S\) while \(Y\) takes values in \(T\). We can think of \((X, Y)\) as a random variable taking values in (a subset of) the product set \(S \times T\). The purpose of this section is to study how the distribution of \((X, Y)\) is related to the distributions of \(X\) and \(Y\) individually. In this context, the distribution of \((X, Y)\) is called the joint distribution, while the distributions of \(X\) and of \(Y\) are referred to as marginal distributions. As always, we assume that the sets and functions that we mention are measurable in the appropriate spaces. If you are a beginning student of probability, you can safely ignore this statement.

More specifically, recall that the distribution of \((X, Y)\) is the probability measure \(C \mapsto \P\left[(X, Y) \in C\right] \) for \(C \subseteq S \times T\). The distribution of \(X\) is the probability measure \( A \mapsto \P(X \in A) \) for \( A \subseteq S \) and the distribution of \( Y \) is the probability measure \( B \mapsto \P(Y \in B) \) for \( B \subseteq T \). The first simple, but very important point, is that the marginal distributions can be obtained from the joint distribution, but not conversely in general.

Note that

  1. \(\P(X \in A) = \P\left[(X, Y) \in A \times T\right]\) for \(A \subseteq S\)
  2. \(\P(Y \in B) = \P\left[(X, Y) \in S \times B\right]\) for \(B \subseteq T\)

If \(X\) and \(Y\) are independent, then by definition, \[\P\left[(X, Y) \in A \times B\right] = \P(X \in A, Y \in B) = \P(X \in A) \P(Y \in B) \quad A \subseteq S, \, B \subseteq T \] and as we have noted before, this completely determines the distribution of \((X, Y)\) on \(S \times T\). However, if \(X\) and \(Y\) are dependent, the joint distribution cannot be determined from the marginal distributions alone. Thus in general, the joint distribution contains much more information than the marginal distributions individually.

Joint and Marginal Densities

Recall that probability distributions are often described in terms of probability density functions. So we need to know how the marginal probability density functions can be obtained from the joint probability density function. The discrete case is easy.

Suppose that \((X, Y)\) has a discrete distribution with probability density function \(f\) on a countable set \(S \times T\). Then \(X\) and \(Y\) have discrete distributions, with probability density functions \(g\) and \(h\), respectively, given by

  1. \(g(x) = \sum_{y \in T} f(x, y), \quad x \in S\)
  2. \(h(y) = \sum_{x \in S} f(x, y), \quad y \in T\)
Proof:

Note that since \( S \times T \) is countable, \( S \) and \( T \) are countable. The two results are symmetric, so we will prove (a). Note that the countable collection of events \(\left\{ \{Y = y\}: y \in T\right\} \) partitions the sample space. For \( x \in S \), by the countable additivity of probability, \[ \P(X = x) = \sum_{y \in T} \P(X = x, Y = y) = \sum_{y \in T} \P\left[(X, Y) = (x, y)\right] = \sum_{y \in T} f(x, y) \]
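
The marginal sums above are easy to carry out on a computer. The following is a minimal Python sketch (purely illustrative; the function and variable names are not from the text) that computes both marginal density functions from a joint density function stored as a dictionary:

```python
from collections import defaultdict

def marginals(joint):
    """Given a joint PDF as a dict {(x, y): probability}, return the
    marginal PDFs of X and Y by summing over the other coordinate."""
    g = defaultdict(float)  # marginal PDF of X
    h = defaultdict(float)  # marginal PDF of Y
    for (x, y), p in joint.items():
        g[x] += p
        h[y] += p
    return dict(g), dict(h)

# Example: a simple joint PDF on {0, 1} x {0, 1}
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
g, h = marginals(joint)
print(g)  # approximately {0: 0.3, 1: 0.7}
print(h)  # approximately {0: 0.4, 1: 0.6}
```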

For the continuous case, suppose that \(S \subseteq \R^j\) and \(T \subseteq \R^k\) for some \(j, \; k \in \N_+\), so that \(S \times T \subseteq \R^{j + k}\).

Suppose that \((X, Y)\) has a continuous distribution on \(S \times T\) with probability density function \(f\). Then \(X\) and \(Y\) have continuous distributions with probability density functions \(g\) and \(h\), respectively, given by

  1. \(g(x) = \int_T f(x, y) \, dy, \quad x \in S\)
  2. \(h(y) = \int_S f(x, y) \, dx, \quad y \in T\)
Proof:

Again, the results are symmetric, so we show (a). If \( A \subseteq S \) then \[ \P(X \in A) = \P(X \in A, Y \in T) = \P\left[(X, Y) \in A \times T\right] = \int_{A \times T} f(x, y) \, d(x, y) = \int_A \int_T f(x, y) \, dy \, dx = \int_A g(x) \, dx \] Hence by the very meaning of the term, \( X \) has probability density function \( g \).

In the context of the previous two theorems, \(f\) is called the joint probability density function of \((X, Y)\), while \(g\) and \(h\) are called the marginal density functions of \(X\) and of \(Y\), respectively.

Independence

When the variables are independent, the joint density is the product of the marginal densities.

Suppose that \(X\) and \(Y\) are independent, either both with discrete distributions or both with continuous distributions. Suppose that \( X \) has probability density function \(g\) and \( Y \) has probability density function \(h\). Then \((X, Y)\) has probability density function \(f\) given by \[f(x, y) = g(x) h(y), \quad (x, y) \in S \times T\]

Proof:

In the discrete case, the events \(\{X = x\}\) and \(\{Y = y\}\) are independent for \(x \in S\) and \(y \in T\). Hence \[ \P\left[(X, Y) = (x, y)\right] = \P(X = x, Y = y) = \P(X = x) \P(Y = y) = g(x) h(y) \] For the continuous case, let \( A \subseteq S \) and \( B \subseteq T \). Then \[ \P\left[(X, Y) \in A \times B\right] = \P(X \in A, Y \in B) = \P(X \in A) \P(Y \in B) = \int_A g(x) \, dx \, \int_B h(y) \, dy = \int_{A \times B} g(x) h(y) \, d(x, y) \] A probability measure on \( S \times T \) is completely determined by its values on product sets (see the advanced section on existence and uniqueness of measures for details), so it follows that \(\P\left[(X, Y) \in C\right] = \int_C f(x, y) \, d(x, y)\) for general \(C \subseteq S \times T\). Hence \( (X, Y) \) has PDF \( f \).

The following result gives a converse to the previous theorem. If the joint probability density function factors into a function of \(x\) only and a function of \(y\) only, then \(X\) and \(Y\) are independent, and we can almost identify the individual probability density functions just from the factoring.

Suppose that \((X, Y)\) has either a discrete or continuous distribution, with probability density function \(f\). Suppose that \[f(x, y) = u(x) v(y), \quad (x, y) \in S \times T\] where \(u: S \to [0, \infty)\) and \(v: T \to [0, \infty)\). Then \(X\) and \(Y\) are independent, and there exists a positive constant \(c\) such that \(X\) and \(Y\) have probability density functions \(g\) and \(h\), respectively, given by \[ \begin{align} g(x) = & c \, u(x), \quad x \in S \\ h(y) = & \frac{1}{c} v(y), \quad y \in T \end{align} \]

Proof:

We will consider the continuous case and leave the discrete case as an exercise. For \( A \subseteq S \) and \( B \subseteq T \), \[ \P(X \in A, Y \in B) = \P\left[(X, Y) \in A \times B\right] = \int_{A \times B} f(x, y) \, d(x, y) = \int_A u(x) \, dx \, \int_B v(y) \, dy \] Letting \( B = T \) in the displayed equation gives \( \P(X \in A) = \int_A c \, u(x) \, dx \) for \( A \subseteq S \), where \( c = \int_T v(y) \, dy \). It follows that \( X \) has PDF \( g = c \, u \). Next, letting \( A = S \) in the displayed equation gives \( \P(Y \in B) = \int_B k \, v(y) \, dy \) for \( B \subseteq T \), where \( k = \int_S u(x) \, dx \). Thus, \( Y \) has PDF \( h = k \, v \). Next, letting \( A = S \) and \( B = T \) in the displayed equation gives \( 1 = c \, k \), so \( k = 1 / c \). Now note that the displayed equation holds with \( u \) replaced by \( g \) and \( v \) replaced by \( h \), and this in turn gives \( \P(X \in A, Y \in B) = \P(X \in A) \P(Y \in B) \), so \( X \) and \( Y \) are independent.

Again, the results in the last two theorems extend to more than two random variables, because \(X\) and \(Y\) themselves may be random vectors. To state the extension explicitly, suppose that \(X_i\) is a random variable taking values in a set \(R_i\) with probability density function \(g_i\) for \(i \in \{1, 2, \ldots, n\}\), and that this collection of random variables is independent. Then the random vector \(\bs{X} = (X_1, X_2, \ldots, X_n)\) taking values in \(S = R_1 \times R_2 \times \cdots \times R_n\) has probability density function \(f\) given by \[f(x_1, x_2, \ldots, x_n) = g_1(x_1) g_2(x_2) \cdots g_n(x_n), \quad (x_1, x_2, \ldots, x_n) \in S\] The special case where \(X_i\) has the same distribution for each \(i \in \{1, 2, \ldots, n\}\) is particularly important. In this case \(R_i = R\) and \(g_i = g\) for each \(i\), so that the probability density function of \(\bs{X}\) on \(S = R^n\) is \[f(x_1, x_2, \ldots, x_n) = g(x_1) g(x_2) \cdots g(x_n), \quad (x_1, x_2, \ldots, x_n) \in S\] In probability jargon, \(\bs{X}\) is a sequence of independent, identically distributed variables, a phrase that comes up so often that it is often abbreviated as IID. In statistical jargon, \(\bs{X}\) is a random sample of size \(n\) from the common distribution. As is evident from the special terminology, this situation is very important in both probability and statistics. In statistics, the joint probability density function \(f\) plays an important role in procedures such as maximum likelihood and the identification of uniformly best estimators.

Recall that (mutual) independence of random variables is a very strong property. If a collection of random variables is independent, then any subcollection is also independent. New random variables formed from disjoint subcollections are independent. For a simple example, suppose that \(X\), \(Y\), and \(Z\) are independent real-valued random variables. Then

  1. \(\sin(X)\), \(\cos(Y)\), and \(e^Z\) are independent.
  2. \((X, Y)\) and \(Z\) are independent.
  3. \(X^2 + Y^2\) and \(\arctan(Z)\) are independent.
  4. \(X\) and \(Z\) are independent.
  5. \(Y\) and \(Z\) are independent.

In particular, note that statement 2 in the list above is much stronger than the conjunction of statements 4 and 5. Contrapositively, if \(X\) and \(Z\) are dependent, then \((X, Y)\) and \(Z\) are also dependent.

Random Variables with Mixed Coordinates

The results of this section have natural analogies in the case that \(X\) and \( Y \) have different distribution types, as discussed in the section on mixed distributions. The results in the subsections above on joint and marginal density functions and independence hold, with sums for the coordinate with the discrete distribution, and integrals for the coordinate with the continuous distribution.

Examples and Applications

Dice

Suppose that two standard, fair dice are rolled and the sequence of scores \((X_1, X_2)\) recorded. Our standard assumption is that the variables \(X_1\) and \(X_2\) are independent. Let \(Y = X_1 + X_2\) and \(Z = X_1 - X_2\) denote the sum and difference of the scores, respectively.

  1. Find the probability density function of \((Y, Z)\).
  2. Find the probability density function of \(Y\).
  3. Find the probability density function of \(Z\).
  4. Are \(Y\) and \(Z\) independent?
Answer:

Let \(f\) denote the PDF of \((Y, Z)\), \(g\) the PDF of \(Y\), and \(h\) the PDF of \(Z\). Note from the table that \(Y\) and \(Z\) are dependent.

\(f(y, z)\) \(y = 2\) 3 4 5 6 7 8 9 10 11 12 \(h(z)\)
\(z = -5\) 0 0 0 0 0 \(\frac{1}{36}\) 0 0 0 0 0 \(\frac{1}{36}\)
\(-4\) 0 0 0 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 0 0 0 \(\frac{2}{36}\)
\(-3\) 0 0 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 0 0 \(\frac{3}{36}\)
\(-2\) 0 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 0 \(\frac{4}{36}\)
\(-1\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{5}{36}\)
0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) \(\frac{6}{36}\)
1 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{5}{36}\)
2 0 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 0 \(\frac{4}{36}\)
3 0 0 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 0 0 \(\frac{3}{36}\)
4 0 0 0 0 \(\frac{1}{36}\) 0 \(\frac{1}{36}\) 0 0 0 0 \(\frac{2}{36}\)
5 0 0 0 0 0 \(\frac{1}{36}\) 0 0 0 0 0 \(\frac{1}{36}\)
\(g(y)\) \(\frac{1}{36}\) \(\frac{2}{36}\) \(\frac{3}{36}\) \(\frac{4}{36}\) \(\frac{5}{36}\) \(\frac{6}{36}\) \(\frac{5}{36}\) \(\frac{4}{36}\) \(\frac{3}{36}\) \(\frac{2}{36}\) \(\frac{1}{36}\) 1
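
Computationally, the table above can be reproduced by brute-force enumeration of the 36 equally likely outcomes. Here is a short Python sketch (illustrative only) using exact fractions:

```python
from collections import defaultdict
from fractions import Fraction

joint = defaultdict(Fraction)   # PDF of (Y, Z)
g = defaultdict(Fraction)       # PDF of Y = X1 + X2
h = defaultdict(Fraction)       # PDF of Z = X1 - X2
p = Fraction(1, 36)             # each of the 36 outcomes is equally likely

for x1 in range(1, 7):
    for x2 in range(1, 7):
        y, z = x1 + x2, x1 - x2
        joint[(y, z)] += p
        g[y] += p
        h[z] += p

# Y and Z are dependent: for example f(2, 1) = 0 while g(2) h(1) > 0
print(joint[(2, 1)], g[2] * h[1])   # 0 versus 5/1296
```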

Suppose that two standard, fair dice are rolled and the sequence of scores \((X_1, X_2)\) recorded. Let \(U = \min\{X_1, X_2\}\) and \(V = \max\{X_1, X_2\}\) denote the minimum and maximum scores, respectively.

  1. Find the probability density function of \((U, V)\).
  2. Find the probability density function of \(U\).
  3. Find the probability density function of \(V\).
  4. Are \(U\) and \(V\) independent?
Answer:

Let \(f\) denote the PDF of \((U, V)\), \(g\) the PDF of \(U\), and \(h\) the PDF of \(V\). \(U\) and \(V\) are dependent.

\(f(u, v)\) \(u = 1\) 2 3 4 5 6 \(h(v)\)
\(v = 1\) \(\frac{1}{36}\) 0 0 0 0 0 \(\frac{1}{36}\)
2 \(\frac{2}{36}\) \(\frac{1}{36}\) 0 0 0 0 \(\frac{3}{36}\)
3 \(\frac{2}{36}\) \(\frac{2}{36}\) \(\frac{1}{36}\) 0 0 0 \(\frac{5}{36}\)
4 \(\frac{2}{36}\) \(\frac{2}{36}\) \(\frac{2}{36}\) \(\frac{1}{36}\) 0 0 \(\frac{7}{36}\)
5 \(\frac{2}{36}\) \(\frac{2}{36}\) \(\frac{2}{36}\) \(\frac{2}{36}\) \(\frac{1}{36}\) 0 \(\frac{9}{36}\)
6 \(\frac{2}{36}\) \(\frac{2}{36}\) \(\frac{2}{36}\) \(\frac{2}{36}\) \(\frac{2}{36}\) \(\frac{1}{36}\) \(\frac{11}{36}\)
\(g(u)\) \(\frac{11}{36}\) \(\frac{9}{36}\) \(\frac{7}{36}\) \(\frac{5}{36}\) \(\frac{3}{36}\) \(\frac{1}{36}\) 1

Simple Continuous Distributions

Suppose that \((X, Y)\) has probability density function \(f(x, y) = x + y\) for \(0 \le x \le 1\), \(0 \le y \le 1\).

  1. Find the probability density function of \(X\).
  2. Find the probability density function of \(Y\).
  3. Are \(X\) and \(Y\) independent?
Answer:
  1. \(g(x) = x + \frac{1}{2}\), \(0 \le x \le 1\)
  2. \(h(y) = y + \frac{1}{2}\), \(0 \le y \le 1\)
  3. \(X\) and \(Y\) are dependent.
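
The integrals in this exercise, and in the similar exercises below, can be checked symbolically. Here is a brief Python sketch using the SymPy library (assumed to be available) for the density \(f(x, y) = x + y\):

```python
import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)
f = x + y                       # joint PDF on the unit square

g = sp.integrate(f, (y, 0, 1))  # marginal PDF of X: x + 1/2
h = sp.integrate(f, (x, 0, 1))  # marginal PDF of Y: y + 1/2
print(g, h)

# The joint PDF is not the product of the marginals, so X and Y are dependent
print(sp.expand(f - g * h))     # nonzero: -x*y + x/2 + y/2 - 1/4
```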

Suppose that \((X, Y)\) has probability density function \(f(x, y) = 2 ( x + y)\) for \(0 \le x \le y \le 1\).

  1. Find the probability density function of \(X\).
  2. Find the probability density function of \(Y\).
  3. Are \(X\) and \(Y\) independent?
Answer:
  1. \(g(x) = (1 + 3 x)(1 - x)\), \(0 \le x \le 1\)
  2. \(h(y) = 3 y^2\), \(0 \le y \le 1\)
  3. \(X\) and \(Y\) are dependent.

Suppose that \((X, Y)\) has probability density function \(f(x, y) = 6 x^2 y\) for \(0 \le x \le 1\), \(0 \le y \le 1\).

  1. Find the probability density function of \(X\).
  2. Find the probability density function of \(Y\).
  3. Are \(X\) and \(Y\) independent?
Answer:
  1. \(g(x) = 3 x^2\), \(0 \le x \le 1\)
  2. \(h(y) = 2 y\), \(0 \le y \le 1\)
  3. \(X\) and \(Y\) are independent.

Suppose that \((X, Y)\) has probability density function \(f(x, y) = 15 x^2 y\) for \(0 \le x \le y \le 1\).

  1. Find the probability density function of \(X\).
  2. Find the probability density function of \(Y\).
  3. Are \(X\) and \(Y\) independent?
Answer:
  1. \(g(x) = \frac{15}{2} x^2 \left(1 - x^2\right)\), \(0 \le x \le 1\)
  2. \(h(y) = 5 y^4\), \(0 \le y \le 1\)
  3. \(X\) and \(Y\) are dependent.

Suppose that \((X, Y, Z)\) has probability density function \(f(x, y, z) = 2 (x + y) z\) for \(0 \le x \le 1\), \(0 \le y \le 1\), \(0 \le z \le 1\).

  1. Find the probability density function of each pair of variables.
  2. Find the probability density function of each variable.
  3. Determine the dependency relationships between the variables.
Answer:

We use subscripts for the PDFs: 1, 2, and 3 refer to \(X\), \(Y\), and \(Z\) respectively.

  1. The pairs:
    • \(f_{1,2}(x,y) = x + y\); \(0 \le x \le 1\), \(0 \le y \le 1\)
    • \(f_{1,3}(x,z) = 2 z \left(x + \frac{1}{2}\right)\); \(0 \le x \le 1\), \(0 \le z \le 1\)
    • \(f_{2,3}(y,z) = 2 z \left(y + \frac{1}{2}\right)\); \(0 \le y \le 1\), \(0 \le z \le 1\)
  2. The individual variables:
    • \(f_1(x) = x + \frac{1}{2}\), \(0 \le x \le 1\)
    • \(f_2(y) = y + \frac{1}{2}\), \(0 \le y \le 1\)
    • \(f_3(z) = 2 z\), \(0 \le z \le 1\)
  3. \(Z\) and \((X, Y)\) are independent; \(X\) and \(Y\) are dependent.

Suppose that \((X, Y)\) has probability density function \(f(x, y) = 2 e^{-x} e^{-y}\) for \(0 \le x \le y \lt \infty\).

  1. Find the probability density function of \(X\).
  2. Find the probability density function of \(Y\).
  3. Are \(X\) and \(Y\) independent?
Answer:
  1. \(g(x) = 2 e^{-2 x}\), \(0 \le x \lt \infty\).
  2. \(h(y) = 2 \left(e^{-y} - e^{-2y}\right)\), \(0 \le y \lt \infty\)
  3. \(X\) and \(Y\) are dependent.

In the previous exercise, \( X \) has an exponential distribution with rate parameter 2. Recall that exponential distributions are widely used to model random times, particularly in the context of the Poisson model.

Suppose that \(X\) and \(Y\) are independent, and that \(X\) has probability density function \(g(x) = 6 x (1 - x)\) for \(0 \le x \le 1\), and that \(Y\) has probability density function \(h(y) = 12 y^2 (1 - y)\) for \(0 \le y \le 1\).

  1. Find the probability density function of \((X, Y)\).
  2. Find \(\P(X + Y \le 1)\).
Answer:
  1. \(f(x, y) = 72 x (1 - x) y^2 (1 - y)\); \(0 \le x \le 1\), \(0 \le y \le 1\)
  2. \(\P(X + Y \le 1) = \frac{13}{35}\)
In the previous exercise, \( X \) and \( Y \) have beta distributions, which are widely used to model random probabilities and proportions. Beta distributions are studied in more detail in the chapter on Special Distributions. In addition, \( Y \) in Exercise 9, \( X \) and \( Y \) in Exercise 10, \( Y \) in Exercise 11, and \( Z \) in Exercise 12 all have beta distributions.

Suppose that \(\Theta\) and \(\Phi\) are independent random angles, with common probability density function \(g(t) = \sin(t)\) for \(0 \le t \le \frac{\pi}{2}\).

  1. Find the probability density function of \((\Theta, \Phi)\).
  2. Find \(\P(\Theta \le \Phi)\).
Answer:
  1. \(f(\theta, \phi) = \sin(\theta) \sin(\phi)\); \(0 \le \theta \le \frac{\pi}{2}\), \(0 \le \phi \le \frac{\pi}{2}\)
  2. \(\P(\Theta \le \Phi) = \frac{1}{2}\)

The common distribution of \( \Theta \) and \( \Phi \) in the previous exercise governs a random angle in Bertrand's problem.

Suppose that \(X\) and \(Y\) are independent, and that \(X\) has probability density function \(g(x) = \frac{2}{x^3}\) for \(1 \le x \lt \infty\), and that \(Y\) has probability density function \(h(y) = \frac{3}{y^4}\) for \(1 \le y \lt\infty\).

  1. Find the probability density function of \((X, Y)\).
  2. Find \(\P(X \le Y)\).
Answer:
  1. \(f(x, y) = \frac{6}{x^3 y^4}\); \(1 \le x \lt \infty\), \(1 \le y \lt \infty\)
  2. \(\P(X \le Y) = \frac{2}{5}\)

Both \(X\) and \(Y\) in the previous exercise have Pareto distributions, named for Vilfredo Pareto. Recall that Pareto distributions are used to model certain economic variables and are studied in more detail in the chapter on Special Distributions.

Suppose that \((X, Y)\) has probability density function \(g\) given by \(g(x, y) = 15 x^2 y\) for \(0 \le x \le y \le 1\), and that \(Z\) has probability density function \(h\) given by \(h(z) = 4 z^3\) for \(0 \le z \le 1\), and that \((X, Y)\) and \(Z\) are independent.

  1. Find the probability density function of \((X, Y, Z)\).
  2. Find the probability density function of \((X, Z)\).
  3. Find the probability density function of \((Y, Z)\).
  4. Find \(\P(Z \le X Y)\).
Answer:
  1. \(f(x, y, z) = 60 x^2 y z^3\); \(0 \le x \le y \le 1\), \(0 \le z \le 1\)
  2. \(f_{1,3}(x, z) = 30 x^2 \left(1 - x^2\right) z^3\); \(0 \le x \le 1\), \(0 \le z \le 1\)
  3. \(f_{2,3}(y, z) = 20 y^4 z^3\); \(0 \le y \le 1\), \(0 \le z \le 1\)
  4. \(\P(Z \le X Y) = \frac{15}{91}\)

Multivariate Uniform Distributions

Multivariate uniform distributions give a geometric interpretation of some of the concepts in this section. Recall first that the standard measure on \(\R^n\) is \[\lambda_n(A) = \int_A 1 dx, \quad A \subseteq \R^n\] In particular

  1. \(\lambda_1(A)\) is the length of \(A\) for \(A \subseteq \R\).
  2. \(\lambda_2(A)\) is the area of \(A\) for \(A \subseteq \R^2\).
  3. \(\lambda_3(A)\) is the volume of \(A\) for \(A \subseteq \R^3\).

More technically, \( \lambda_n \) is \( n \)-dimensional Lebesgue measure on the measurable subsets of \( \R^n \), and is named for Henri Lebesgue. This should not be of concern if you are a new student of probability. On the other hand, if you are interested in the advanced theory, see the sections on measure theory.

Suppose now that \(X\) takes values in \(\R^j\), \(Y\) takes values in \(\R^k\), and that \((X, Y)\) is uniformly distributed on a set \(R \subseteq \R^{j+k}\). Thus, by definition, we must have \(0 \lt \lambda_{j+k}(R) \lt \infty \), and then the joint probability density function \( f \) of \((X, Y)\) is given by \( f(x, y) = 1 \big/ \lambda_{j+k}(R) \) for \( (x, y) \in R \). Recall that uniform distributions always have constant density functions. Now let \(S\) and \(T\) be the projections of \(R\) onto \(\R^j\) and \(\R^k\) respectively, defined as follows: \[\begin{align} S & = \left\{x \in \R^j: (x, y) \in R \text{ for some } y \in \R^k\right\} \\ T & = \left\{y \in \R^k: (x, y) \in R \text{ for some } x \in \R^j\right\} \end{align}\] Note that \(R \subseteq S \times T\). Next we denote the cross-sections at \(x \in S\) and at \(y \in T\), respectively by \[\begin{align} T_x & = \{y \in T: (x, y) \in R\} \\ S_y & = \{x \in S: (x, y) \in R\} \end{align}\]

Cross sections at x and y
The projections \( S \) and \( T \), and the cross sections at \( x \) and \( y \)

\(X\) takes values in \(S\) and \( Y \) takes values in \( T \). The probability density functions \(g\) and \( h \) of \(X\) and \( Y \) are proportional to the cross-sectional measures:

  1. \( g(x) = \lambda_k\left(T_x\right) \big/ \lambda_{j+k}(R) \) for \( x \in S \)
  2. \( h(y) = \lambda_j\left(S_y\right) \big/ \lambda_{j+k}(R) \) for \( y \in T \)
Proof:

From our general theory, \( X \) has PDF \( g \) given by \[ g(x) = \int_{T_x} f(x, y) \, dy = \int_{T_x} \frac{1}{\lambda_{j+k}(R)} \, dy = \frac{\lambda_k\left(T_x\right)}{\lambda_{j+k}(R)}, \quad x \in S \] The result for \( Y \) is analogous.

In particular, note from the previous theorem that in general, \(X\) and \(Y\) are neither independent nor uniformly distributed. However, these properties do hold if \(R\) is a Cartesian product set.

Suppose that \(R = S \times T\).

  1. \(X\) is uniformly distributed on \(S\).
  2. \(Y\) is uniformly distributed on \(T\).
  3. \(X\) and \(Y\) are independent.
Proof:

In this case, \( T_x = T \) and \( S_y = S \) for every \( x \in S \) and \( y \in T \). Also, \( \lambda_{j+k}(R) = \lambda_j(S) \lambda_k(T) \), so for \( x \in S \) and \( y \in T \), \( f(x, y) = 1 \big/ \lambda_j(S) \lambda_k(T) \), \( g(x) = 1 \big/ \lambda_j(S) \), \( h(y) = 1 \big/ \lambda_k(T) \).

In each of the following cases, find the joint and marginal probability density functions, and determine if \(X\) and \(Y\) are independent.

  1. \((X, Y)\) is uniformly distributed on the square \(R = [-6, 6]^2\).
  2. \((X, Y)\) is uniformly distributed on the triangle \(R = \{(x, y): -6 \le y \le x \le 6\}\).
  3. \((X, Y)\) is uniformly distributed on the circle \(R = \left\{(x, y): x^2 + y^2 \le 36\right\}\).
Answer:
  1. For the square:
    • \(f(x, y) = \frac{1}{144}\); \(-6 \le x \le 6\), \(-6 \le y \le 6\)
    • \(g(x) = \frac{1}{12}\), \(-6 \le x \le 6\)
    • \(h(y) = \frac{1}{12}\), \(-6 \le y \le 6\)
    • \(X\) and \(Y\) are independent.
  2. For the triangle:
    • \(f(x, y) = \frac{1}{72}\), \(-6 \le y \le x \le 6\)
    • \(g(x) = \frac{1}{72}(x + 6)\), \(-6 \le x \le 6\)
    • \(h(y) = \frac{1}{72}(6 - y)\), \(-6 \le y \le 6\)
    • \(X\) and \(Y\) are dependent.
  3. For the circle:
    • \(f(x, y) = \frac{1}{36 \pi}\), \(x^2 + y^2 \le 36\)
    • \(g(x) = \frac{1}{18 \pi} \sqrt{36 - x^2}\), \(-6 \le x \le 6\)
    • \(h(y) = \frac{1}{18 \pi} \sqrt{36 - y^2}\), \(-6 \le y \le 6\)
    • \(X\) and \(Y\) are dependent.
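
A quick Monte Carlo check of the triangle case is given below, as a Python sketch (illustrative only, assuming NumPy). It samples \((X, Y)\) uniformly on the triangle by rejection from the enclosing square and compares the empirical distribution of \(X\) with the distribution that has density \(g(x) = \frac{1}{72}(x + 6)\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Sample (X, Y) uniformly on the triangle -6 <= y <= x <= 6 by rejecting
# points from the enclosing square [-6, 6]^2 that land above the diagonal.
pts = rng.uniform(-6, 6, size=(4 * n, 2))
pts = pts[pts[:, 1] <= pts[:, 0]][:n]
x = pts[:, 0]

# Compare the empirical CDF of X at a few points with the exact CDF
# G(t) = (t + 6)^2 / 144, which corresponds to the density g(x) = (x + 6) / 72.
for t in (-3.0, 0.0, 3.0):
    print(t, np.mean(x <= t), (t + 6) ** 2 / 144)
```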

In the bivariate uniform experiment, run the simulation 1000 times for each of the following cases. Watch the points in the scatter plot and the graphs of the marginal distributions. Interpret what you see in the context of the discussion above.

  1. square
  2. triangle
  3. circle

Suppose that \((X, Y, Z)\) is uniformly distributed on the cube \([0, 1]^3\).

  1. Give the joint probability density function of \((X, Y, Z)\).
  2. Find the probability density function of each pair of variables.
  3. Find the probability density function of each variable.
  4. Determine the dependency relationships between the variables.
Answer:
  1. \(f(x, y, z) = 1\); \(0 \le x \le 1\), \(0 \le y \le 1\), \(0 \le z \le 1\) (the uniform distribution on \([0, 1]^3\))
  2. \((X, Y)\), \((X, Z)\), and \((Y, Z)\) have common probability density function \(g(u, v) = 1\); \(0 \le u \le 1\), \(0 \le v \le 1\) (the uniform distribution on \([0, 1]^2\))
  3. \(X\), \(Y\), and \(Z\) have common probability density function \(h(u) = 1\), \(0 \le u \le 1\) (the uniform distribution on \([0, 1]\))
  4. \(X\), \(Y\), \(Z\) are independent.

Suppose that \((X, Y, Z)\) is uniformly distributed on \(\{(x, y, z): 0 \le x \le y \le z \le 1\}\).

  1. Give the joint density function of \((X, Y, Z)\).
  2. Find the probability density function of each pair of variables.
  3. Find the probability density function of each variable.
  4. Determine the dependency relationships between the variables.
Answer:

We use the subscripts 1, 2, and 3 to refer to variables \(X\), \(Y\), and \(Z\) respectively.

  1. \(f(x, y, z) = 6\), \(0 \le x \le y \le z \le 1\)
  2. For the pairs:
    • \(f_{1,2}(x, y) = 6 (1 - y)\), \(0 \le x \le y \le 1\)
    • \(f_{1,3}(x, z) = 6 (z - x)\), \(0 \le x \le z \le 1\)
    • \(f_{2,3}(y, z) = 6 y\), \(0 \le y \le z \le 1\)
  3. For the individual variables:
    • \(f_1(x) = 3 (1 - x)^2\), \(0 \le x \le 1\)
    • \(f_2(y) = 6 y (1 - y)\), \(0 \le y \le 1\)
    • \(f_3(z) = 3 z^2\), \(0 \le z \le 1\)
  4. Each pair of variables is dependent.

The Rejection Method

The following result shows how an arbitrary continuous distribution can be obtained from a uniform distribution. This result is useful for simulating certain continuous distributions, as we will see.

Suppose that \(g\) is a probability density function for a continuous distribution on \(S \subseteq \R^n\). Let \[R = \{(x, y): x \in S \text{ and } 0 \le y \le g(x)\} \subseteq \R^{n+1} \] If \((X, Y)\) is uniformly distributed on \(R\), then \(X\) has probability density function \(g\).

Proof:

Since \(g\) is a probability density function on \(S\), \[ \lambda_{n+1}(R) = \int_R 1 \, d(x, y) = \int_S \int_0^{g(x)} 1 \, dy \, dx = \int_S g(x) \, dx = 1 \] Hence the probability density function \( f \) of \((X, Y)\) is given by \(f(x, y) = 1\) for \((x, y) \in R\). Thus, the probability density function of \(X\) is \(x \mapsto \int_0^{g(x)} 1 \, dy = g(x)\) for \( x \in S \).

A picture in the case \(n = 1\) is given below:

If \( (X, Y) \) is uniformly distributed on \( R \), then \( X \) has density function \( g \).

Suppose now that \( R \subseteq T \) where \( T \subseteq \R^{n+1} \) with \( \lambda_{n+1}(T) \lt \infty \). Note that we also have \( \lambda_{n+1}(T) \ge \lambda_{n+1}(R) = 1 \). Further, suppose that \(\left((X_1, Y_1), (X_2, Y_2), \ldots\right)\) is a sequence of independent random variables with \( X_k \in \R^n \), \( Y_k \in \R \), and \( \left(X_k, Y_k\right) \) uniformly distributed on \( T \) for each \( k \in \N_+ \). Now let \[N = \min\left\{k \in \N_+: \left(X_k, Y_k\right) \in R\right\} = \min\left\{k \in \N_+: X_k \in S, \; 0 \le Y_k \le g\left(X_k\right)\right\}\] the index of the first point to fall in \(R\). Since the sequence is independent, \( N \) has the geometric distribution with success parameter \( p = \lambda_{n+1}(R) \big/ \lambda_{n+1}(T) = 1 \big/ \lambda_{n+1}(T) \), so \( N \) has probability density function \( \P(N = k) = (1 - p)^{k-1} p \) for \( k \in \N_+ \). More importantly, from our past work on independence and the uniform distribution, we know that the first point \(\left(X_N, Y_N\right)\) to fall in \( R \) has a uniform distribution on \(R\), and therefore by the previous result, \(X_N\) has probability density function \(g\).

Rejection method
With a sequence of independent points, uniformly distributed on \( T \), the \( x \) coordinate of the first point to land in \( R \) has probability density function \( g \).

What's the point of all this? Well, if we can simulate a sequence of independent variables that are uniformly distributed on \( T \), then we can simulate a random variable with the given probability density function \( g \). This method of simulation is known as the rejection method. Suppose in particular that \( R \) is bounded as a subset of \( \R^{n+1} \), which would mean that the domain \( S \) is bounded as a subset of \( \R^n \) and that the probability density function \( g \) is bounded on \( S \). In this case, we can find \( T \) that is the Cartesian product of \( n + 1 \) bounded intervals with \( R \subseteq T \). It turns out to be very easy to simulate a sequence of independent variables, each uniformly distributed on such a product set, so the rejection method always works in this case. As you might guess, the rejection method works best if the size of \( T \), namely \( \lambda_{n+1}(T) \), is small, so that the success parameter \( p \) is large.
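
Here is a minimal Python sketch of the rejection method for a bounded density on a bounded interval (the case \( n = 1 \) with \( T \) a rectangle); the function names and the choice of the semicircle density are only for illustration:

```python
import numpy as np

def rejection_sample(g, a, b, m, size, rng=None):
    """Sample from a density g on [a, b] that is bounded by m: accept the
    x-coordinate of each uniform point in the rectangle [a, b] x [0, m]
    that falls on or below the graph of g."""
    rng = np.random.default_rng() if rng is None else rng
    out = []
    while len(out) < size:
        x = rng.uniform(a, b, size)
        y = rng.uniform(0, m, size)
        out.extend(x[y <= g(x)])
    return np.array(out[:size])

# Example: the semicircle density g(x) = (2 / pi) sqrt(1 - x^2) on [-1, 1]
semicircle = lambda x: (2 / np.pi) * np.sqrt(1 - x ** 2)
sample = rejection_sample(semicircle, -1.0, 1.0, 2 / np.pi, 10_000)
print(sample.mean(), sample.var())  # close to 0 and 1/4
```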

The rejection method applet simulates a number of continuous distributions via the rejection method. For each of the following distributions, vary the parameters and note the shape and location of the probability density function. Then run the experiment 1000 times and observe the results.

  1. The beta distribution
  2. The semicircle distribution
  3. The triangle distribution
  4. The U-power distribution

The Multivariate Hypergeometric Distribution

Suppose that a population consists of \(m\) objects, and that each object is one of four types. There are \(a\) type 1 objects, \(b\) type 2 objects, \(c\) type 3 objects and \(m - a - b - c\) type 0 objects. The parameters \(a\), \(b\), and \(c\) are nonnegative integers with \(a + b + c \le m\). We sample \(n\) objects from the population at random, and without replacement. Denote the number of type 1, 2, and 3 objects in the sample by \(X\), \(Y\), and \(Z\), respectively. Hence, the number of type 0 objects in the sample is \(n - X - Y - Z\). In the problems below, the variables \(x\), \(y\), and \(z\) take values in \(\N\).

\((X, Y, Z)\) has a (multivariate) hypergeometric distribution with probability density function \(f\) given by \[f(x, y, z) = \frac{\binom{a}{x} \binom{b}{y} \binom{c}{z} \binom{m - a - b - c}{n - x - y - z}}{\binom{m}{n}}, \quad x + y + z \le n\]

Proof:

From the basic theory of combinatorics, the numerator is the number of ways to select an unordered sample of size \( n \) from the population with \( x \) objects of type 1, \( y \) objects of type 2, \( z \) objects of type 3, and \( n - x - y - z \) objects of type 0. The denominator is the total number of ways to select the unordered sample.

\((X, Y)\) also has a (multivariate) hypergeometric distribution, with the probability density function \(g\) given by \[g(x, y) = \frac{\binom{a}{x} \binom{b}{y} \binom{m - a - b}{n - x - y}}{\binom{m}{n}}, \quad x + y \le n\]

Proof:

This result could be obtained by summing the joint PDF in (26) over \( z \) for fixed \( (x, y) \). However, there is a much nicer combinatorial argument. Note that we are selecting a random sample of size \(n\) from a population of \(m\) objects, with \(a\) objects of type 1, \(b\) objects of type 2, and \(m - a - b\) objects of other types.

\(X\) has an ordinary hypergeometric distribution, with probability density function \(h\) given by \[h(x) = \frac{\binom{a}{x} \binom{m - a}{n - x}}{\binom{m}{n}}, \quad x \le n\]

Proof:

Again, the result could be obtained by summing the joint PDF in (26) over \( (y, z) \) for fixed \( x \), or by summing the joint PDF in (27) over \( y \) for fixed \( x \). But as before, there is a much more elegant combinatorial argument. Note that we are selecting a random sample of size \(n\) from a population of size \(m\) objects, with \(a\) objects of type 1 and \(m - a\) objects of other types.

These results generalize in a straightforward way to a population with any number of types. In brief, if a random vector has a hypergeometric distribution, then any sub-vector also has a hypergeometric distribution. In other words, all of the marginal distributions of a hypergeometric distribution are themselves hypergeometric. Note however, that it's not a good idea to memorize the formulas above explicitly. It's better to just note the patterns and recall the combinatorial meaning of the binomial coefficient. The hypergeometric distribution and the multivariate hypergeometric distribution are studied in more detail in the chapter on Finite Sampling Models.
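
As a computational check, the following Python sketch (illustrative only) evaluates the multivariate hypergeometric probability density function directly from binomial coefficients, and verifies numerically that summing over \(z\) gives the two-type marginal; the parameter values happen to match the Math Club exercise below:

```python
from math import comb

def mv_hypergeometric(x, y, z, a, b, c, m, n):
    """PDF of (X, Y, Z) when n objects are sampled without replacement from
    a population of m objects: a of type 1, b of type 2, c of type 3."""
    w = n - x - y - z                      # number of type 0 objects in the sample
    if w < 0:
        return 0.0
    return comb(a, x) * comb(b, y) * comb(c, z) * comb(m - a - b - c, w) / comb(m, n)

# Summing over z recovers the two-type marginal PDF of (X, Y)
a, b, c, m, n = 50, 40, 30, 140, 10
x, y = 2, 3
summed = sum(mv_hypergeometric(x, y, z, a, b, c, m, n) for z in range(n - x - y + 1))
direct = comb(a, x) * comb(b, y) * comb(m - a - b, n - x - y) / comb(m, n)
print(summed, direct)   # the two values agree
```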

Suppose that a population of voters consists of 50 democrats, 40 republicans, and 30 independents. A sample of 10 voters is chosen at random from the population (without replacement, of course). Let \(X\) denote the number of democrats in the sample and \(Y\) the number of republicans in the sample. Find the probability density function of each of the following:

  1. \((X, Y)\)
  2. \(X\)
  3. \(Y\)
Answer:

In the formulas below, the variables \(x\) and \(y\) are nonnegative integers.

  1. \(f(x, y) = \frac{1}{\binom{120}{10}} \binom{50}{x} \binom{40}{y} \binom{30}{10 - x - y}\), \(x + y \le 10\)
  2. \(g(x) = \frac{1}{\binom{120}{10}} \binom{50}{x} \binom{70}{10 - x}\), \(x \le 10\)
  3. \(h(y) = \frac{1}{\binom{120}{10}} \binom{40}{y} \binom{80}{10 - y}\), \(y \le 10\)

Suppose that the Math Club at Enormous State University (ESU) has 50 freshmen, 40 sophomores, 30 juniors and 20 seniors. A sample of 10 club members is chosen at random to serve on the \(\pi\)-day committee. Let \(X\) denote the number freshmen on the committee, \(Y\) the number of sophomores, and \(Z\) the number of juniors.

  1. Find the probability density function of \((X, Y, Z)\)
  2. Find the probability density function of each pair of variables.
  3. Find the probability density function of each individual variable.
Answer:

In the formulas below, the variables \(x\), \(y\), and \(z\) are nonnegative integers.

  1. \(f(x, y, z) = \frac{1}{\binom{140}{10}} \binom{50}{x} \binom{40}{y} \binom{30}{z} \binom{20}{10 - x - y - z}\), \(x + y + z \le 10\)
  2. For the pairs:
    • \(f_{1,2}(x, y) = \frac{1}{\binom{140}{10}} \binom{50}{x} \binom{40}{y} \binom{50}{10 - x - y}\), \(x + y \le 10\)
    • \(f_{1,3}(x, z) = \frac{1}{\binom{140}{10}} \binom{50}{x} \binom{30}{z} \binom{60}{10 - x - z}\), \(x + z \le 10\)
    • \(f_{2,3}(y, z) = \frac{1}{\binom{140}{10}} \binom{40}{y} \binom{30}{z} \binom{70}{10 - y - z}\), \(y + z \le 10\)
  3. For the individual variables:
    • \(f_1(x) = \frac{1}{\binom{140}{10}} \binom{50}{x} \binom{90}{10 - x}\), \(x \le 10\)
    • \(f_2(y) = \frac{1}{\binom{140}{10}} \binom{40}{y} \binom{100}{10 - y}\), \(y \le 10\)
    • \(f_3(z) = \frac{1}{\binom{140}{10}} \binom{30}{z} \binom{110}{10 - z}\), \(z \le 10\)

Multinomial Trials

Suppose that we have a sequence of \(n\) independent trials, each with 4 possible outcomes. On each trial, outcome 1 occurs with probability \(p\), outcome 2 with probability \(q\), outcome 3 with probability \(r\), and outcome 0 occurs with probability \(1 - p - q - r\). The parameters \(p\), \(q\), and \(r\) are nonnegative numbers with \(p + q + r \le 1\). Denote the number of times that outcome 1, outcome 2, and outcome 3 occurred in the \(n\) trials by \(X\), \(Y\), and \(Z\) respectively. Of course, the number of times that outcome 0 occurs is \(n - X - Y - Z\). In the problems below, the variables \(x\), \(y\), and \(z\) take values in \(\N\).

\((X, Y, Z)\) has a multinomial distribution with probability density function \(f\) given by \[f(x, y, z) = \binom{n}{x, \; y, \; z} p^x q^y r^z (1 - p - q - r)^{n - x - y - z}, \quad x + y + z \le n\]

Proof:

The multinomial coefficient is the number of sequences of length \( n \) with 1 occurring \( x \) times, 2 occurring \( y \) times, 3 occurring \( z \) times, and 0 occurring \( n - x - y - z \) times. The result then follows by independence.

\((X, Y)\) also has a multinomial distribution with the probability density function \(g\) given below. \[g(x, y) = \binom{n}{x, \; y} p^x q^y (1 - p - q)^{n - x - y}, \quad x + y \le n\]

Proof:

This result could be obtained from the PDF in (31), by summing over \( z \) for fixed \( (x, y) \). However there is a much better direct argument. Note that we have \(n\) independent trials, and on each trial, outcome 1 occurs with probability \(p\), outcome 2 with probability \(q\), and some other outcome with probability \(1 - p - q\).

\(X\) has a binomial distribution, with the probability density function \(h\) given below. \[h(x) = \binom{n}{x} p^x (1 - p)^{n - x}, \quad x \le n\]

Proof:

Again, the result could be obtained by summing the joint PDF in (31) over \( (y, z) \) for fixed \( x \) or by summing the PDF in (32) over \( y \) for fixed \( x \). But as before, there is a much better direct argument. Note that we have \(n\) independent trials, and on each trial, outcome 1 occurs with probability \(p\) and some other outcome with probability \(1 - p\).

These results generalize in a completely straightforward way to multinomial trials with any number of trial outcomes. In brief, if a random vector has a multinomial distribution, then any sub-vector also has a multinomial distribution. In other terms, all of the marginal distributions of a multinomial distribution are themselves multinomial. The binomial distribution and the multinomial distribution are studied in more detail in the chapter on Bernoulli Trials.
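
As with the hypergeometric case, the collapsing of the marginals is easy to check numerically. The following Python sketch (with purely illustrative parameter values) evaluates the multinomial probability density function and verifies that summing over \((y, z)\) for fixed \(x\) gives the binomial probability density function of \(X\):

```python
from math import comb, factorial

def multinomial_pdf(counts, probs):
    """Multinomial PDF: counts and probs have the same length, the counts
    sum to the number of trials n, and the probabilities sum to 1."""
    coef = factorial(sum(counts))
    for k in counts:
        coef //= factorial(k)
    value = float(coef)
    for k, q in zip(counts, probs):
        value *= q ** k
    return value

# Illustrative parameters: n trials with outcome probabilities p, q, r, 1 - p - q - r
n, p, q, r = 10, 0.5, 0.3, 0.1

# Marginal of X: summing the joint PDF over (y, z) for fixed x gives the
# binomial PDF h(x) = C(n, x) p^x (1 - p)^(n - x)
x = 4
summed = sum(multinomial_pdf([x, y, z, n - x - y - z], [p, q, r, 1 - p - q - r])
             for y in range(n - x + 1)
             for z in range(n - x - y + 1))
print(summed, comb(n, x) * p ** x * (1 - p) ** (n - x))   # the two values agree
```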

Suppose that a system consists of 10 components that operate independently. Each component is working with probability \(\frac{1}{2}\), idle with probability \(\frac{1}{3}\), or failed with probability \(\frac{1}{6}\). Let \(X\) denote the number of working components and \(Y\) the number of idle components. Give the probability density function of each of the following:

  1. \((X, Y)\)
  2. \(X\)
  3. \(Y\)
Answer:

In the formulas below, the variables \(x\) and \(y\) are nonnegative integers.

  1. \(f(x, y) = \binom{10}{x, \; y} \left(\frac{1}{2}\right)^x \left(\frac{1}{3}\right)^y \left(\frac{1}{6}\right)^{10 - x - y}\), \(x + y \le 10\)
  2. \(g(x) = \binom{10}{x} \left(\frac{1}{2}\right)^{10}\), \(x \le 10\)
  3. \(h(y) = \binom{10}{y} \left(\frac{1}{3}\right)^y \left(\frac{2}{3}\right)^{10 - y}\), \(y \le 10\)

Suppose that in a crooked, four-sided die, face \(i\) occurs with probability \(\frac{i}{10}\) for \(i \in \{1, 2, 3, 4\}\). The die is thrown 12 times; let \(X\) denote the number of times that score 1 occurs, \(Y\) the number of times that score 2 occurs, and \(Z\) the number of times that score 3 occurs.

  1. Find the probability density function of \((X, Y, Z)\)
  2. Find the probability density function of each pair of variables.
  3. Find the probability density function of each individual variable.
Answer:

In the formulas below, the variables \(x\), \(y\) and \(z\) are nonnegative integers. The subscripts 1, 2, and 3 refer to variables \( X \), \( Y \), and \( Z \) respectively.

  1. \(f(x, y, z) = \binom{12}{x, \; y, \; z} \left(\frac{1}{10}\right)^x \left(\frac{2}{10}\right)^y \left(\frac{3}{10}\right)^z \left(\frac{4}{10}\right)^{12 - x - y - z}\), \(x + y + z \le 12\)
    • \(f_{1,2}(x, y) = \binom{12}{x, \; y} \left(\frac{1}{10}\right)^{x} \left(\frac{2}{10}\right)^y \left(\frac{7}{10}\right)^{12 - x - y}\), \(x + y \le 12\)
    • \(f_{1,3}(x, z) = \binom{12}{x, \; z} \left(\frac{1}{10}\right)^{x} \left(\frac{3}{10}\right)^z \left(\frac{6}{10}\right)^{12 - x - z}\), \(x + z \le 12\)
    • \(f_{2,3}(y, z) = \binom{12}{y, \; z} \left(\frac{2}{10}\right)^{y} \left(\frac{3}{10}\right)^z \left(\frac{5}{10}\right)^{12 - y - z}\), \(y + z \le 12\)
    • \(f_1(x) = \binom{12}{x} \left(\frac{1}{10}\right)^x \left(\frac{9}{10}\right)^{12 - x}\), \(x \le 12\)
    • \(f_2(y) = \binom{12}{y} \left(\frac{2}{10}\right)^y \left(\frac{8}{10}\right)^{12 - y}\), \(y \le 12\)
    • \(f_3(z) = \binom{12}{z} \left(\frac{3}{10}\right)^z \left(\frac{7}{10}\right)^{12 - z}\), \(z \le 12\)

Bivariate Normal Distributions

Suppose that \((X, Y)\) has the probability density function given below:

\[f(x, y) = \frac{1}{12 \pi} \exp\left[-\left(\frac{x^2}{8} + \frac{y^2}{18}\right)\right], \quad (x, y) \in \R^2\]
  1. Find the probability density function of \(X\).
  2. Find the probability density function of \(Y\).
  3. Are \(X\) and \(Y\) independent?
Answer:
  1. \(g(x) = \frac{1}{2 \sqrt{2 \pi}} e^{-x^2 / 8}\), \(x \in \R\).
  2. \(h(y) = \frac{1}{3 \sqrt{2 \pi}} e^{-y^2 /18}\), \(y \in \R\).
  3. \(X\) and \(Y\) are independent.

Suppose that \((X, Y)\) has the probability density function given below:

\[f(x, y) = \frac{1}{\sqrt{3} \pi} \exp\left[-\frac{2}{3}\left(x^2 - x y + y^2\right)\right], \quad(x, y) \in \R^2\]
  1. Find the density function of \(X\).
  2. Find the density function of \(Y\).
  3. Are \(X\) and \(Y\) independent?
Answer:
  1. \(g(x) = \frac{1}{\sqrt{2 \pi}} e^{-x^2 / 2}\), \(x \in \R\).
  2. \(h(y) = \frac{1}{\sqrt{2 \pi}} e^{-y^2 / 2}\), \(y \in \R\).
  3. \(X\) and \(Y\) are dependent.

The joint distributions in the last two exercises are examples of bivariate normal distributions. Normal distributions are widely used to model physical measurements subject to small, random errors. In both exercises, the marginal distributions of \( X \) and \( Y \) also have normal distributions, and this turns out to be true in general. The multivariate normal distribution is studied in more detail in the chapter on Special Distributions.
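
The claim that the marginals are normal can be checked numerically for the second exercise. The following Python sketch (illustrative only, assuming NumPy and SciPy) integrates the joint density over \(y\) and compares the result with the standard normal density:

```python
import numpy as np
from scipy.integrate import quad

def f(x, y):
    """Joint density from the second exercise."""
    return np.exp(-(2 / 3) * (x ** 2 - x * y + y ** 2)) / (np.sqrt(3) * np.pi)

def phi(x):
    """Standard normal density."""
    return np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

for x in (-1.0, 0.0, 2.0):
    g_x, _ = quad(lambda y: f(x, y), -np.inf, np.inf)  # marginal density of X at x
    print(x, g_x, phi(x))   # the last two columns agree
```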

Exponential Distributions

Recall that the exponential distribution has probability density function \[f(x) = r e^{-r x}, \quad 0 \le x \lt \infty\] where \(r \gt 0\) is the rate parameter. The exponential distribution is widely used to model random times, and is studied in more detail in the chapter on the Poisson Process.

Suppose \(X\) and \(Y\) have exponential distributions with parameters \(a\) and \(b\), respectively, and are independent. Then \(\P(X \lt Y) = \frac{a}{a + b}\).

Suppose \(X\), \(Y\), and \(Z\) have exponential distributions with parameters \(a\), \(b\), and \(c\), respectively, and are independent. Then

  1. \(\P(X \lt Y \lt Z) = \frac{a}{a + b + c} \frac{b}{b + c}\)
  2. \(\P(X \lt Y, X \lt Z) = \frac{a}{a + b + c}\)

If \(X\), \(Y\), and \(Z\) are the lifetimes of devices that act independently, then the results in the previous two exercises give probabilities of various failure orders. Results of this type are also very important in the study of continuous-time Markov processes. We will continue this discussion in the section on transformations of random variables.
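
The formulas in the last two results are easy to check by simulation. Here is a short Python sketch (with illustrative rate parameters, assuming NumPy); note that NumPy parameterizes the exponential distribution by its mean \(1/r\) rather than by the rate \(r\):

```python
import numpy as np

rng = np.random.default_rng(42)
a, b, c = 1.0, 2.0, 3.0          # illustrative rate parameters
n = 1_000_000

# NumPy's exponential sampler takes the mean (scale = 1 / rate) as its parameter
X = rng.exponential(scale=1 / a, size=n)
Y = rng.exponential(scale=1 / b, size=n)
Z = rng.exponential(scale=1 / c, size=n)

print(np.mean(X < Y), a / (a + b))                               # both about 1/3
print(np.mean((X < Y) & (Y < Z)), a / (a + b + c) * b / (b + c)) # both about 1/15
print(np.mean((X < Y) & (X < Z)), a / (a + b + c))               # both about 1/6
```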

Mixed Distributions

Suppose \(X\) takes values in the finite set \(\{1, 2, 3\}\), \(Y\) takes values in the interval \([0, 3]\), and that the joint density function \(f\) is given by \[f(x, y) = \begin{cases} \frac{1}{3}, & \quad x = 1, \; 0 \le y \le 1 \\ \frac{1}{6}, & \quad x = 2, \; 0 \le y \le 2 \\ \frac{1}{9}, & \quad x = 3, \; 0 \le y \le 3 \end{cases}\]

  1. Find the probability density function of \(X\).
  2. Find the probability density function of \(Y\).
  3. Are \(X\) and \(Y\) independent?
Answer:
  1. \(g(x) = \frac{1}{3}\), \(x \in \{1, 2, 3\}\) (the uniform distribution on \(\{1, 2, 3\}\)).
  2. \(h(y) = \begin{cases} \frac{11}{18}, & 0 \lt y \lt 1 \\ \frac{5}{18}, & 1 \lt y \lt 2 \\ \frac{2}{18}, & 2 \lt y \lt 3 \end{cases}\)
  3. \(X\) and \(Y\) are dependent.

Suppose that \(P\) takes values in the interval \([0, 1]\), \(X\) takes values in the finite set \(\{0, 1, 2, 3\}\), and that \((P, X)\) has joint probability density function \(f\) given by \[f(p, x) = 6 \binom{3}{x} p^{x + 1} (1 - p)^{4 - x}, \quad (p, x) \in [0, 1] \times \{0, 1, 2, 3\}\]

  1. Find the probability density function of \(P\).
  2. Find the probability density function of \(X\).
  3. Are \(P\) and \(X\) independent?
Answer:
  1. \(g(p) = 6 p (1 - p)\), \(0 \le p \le 1\)
  2. \( h(0) = h(3) = \frac{1}{5} \), \( h(1) = h(2) = \frac{3}{10} \)
  3. \(P\) and \(X\) are dependent.
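
The marginal probability density function of \(X\) in this exercise can also be checked by numerical integration. Here is a brief Python sketch (illustrative only, assuming SciPy):

```python
from math import comb
from scipy.integrate import quad

def f(p, x):
    """Joint density of (P, X) from the exercise."""
    return 6 * comb(3, x) * p ** (x + 1) * (1 - p) ** (4 - x)

# Marginal PDF of X: integrate out p over [0, 1]
h = [quad(lambda p: f(p, x), 0, 1)[0] for x in range(4)]
print(h)   # approximately [0.2, 0.3, 0.3, 0.2]
```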

As we will see in the section on conditional distributions, the distribution in the last exercise models the following experiment: a random probability \(P\) is selected, and then a coin with this probability of heads is tossed 3 times; \(X\) is the number of heads. Note that \( P \) has a beta distribution.

Random Samples

Recall that the Bernoulli distribution with parameter \(p \in [0, 1]\) has probability density function \(g\) given by \(g(x) = p^x (1 - p)^{1 - x}\) for \(x \in \{0, 1\}\). Let \(\bs{X}\) be a random sample of size \(n\) from the distribution. Give the probability density function of \(\bs{X}\) in simplified form.

Answer:

\(f(x_1, x_2, \ldots, x_n) = p^k (1 - p)^{n-k}\) for \((x_1, x_2, \ldots, x_n) \in \{0, 1\}^n\), where \(k = x_1 + x_2 + \cdots + x_n\)
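
As a small computational illustration (not part of the formal development), the joint probability density function depends on the data only through the number of successes \(k\):

```python
import numpy as np

def bernoulli_joint_pdf(x, p):
    """Joint PDF of an IID Bernoulli(p) sample x, an array of 0s and 1s."""
    k = np.sum(x)                       # number of successes
    n = len(x)
    return p ** k * (1 - p) ** (n - k)

x = np.array([1, 0, 1, 1, 0])
print(bernoulli_joint_pdf(x, 0.4))      # 0.4^3 * 0.6^2, approximately 0.02304
```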

The Bernoulli distribution is named for Jacob Bernoulli, and governs an indicator random variable. Hence if \(\bs{X}\) is a random sample of size \(n\) from the distribution then \(\bs{X}\) is a sequence of \(n\) Bernoulli trials. A separate chapter studies Bernoulli trials in more detail.

Recall that the geometric distribution on \(\N_+\) with parameter \(p \in (0, 1)\) has probability density function \(g\) given by \(g(x) = p (1 - p)^{x - 1}\) for \(x \in \N_+\). Let \(\bs{X}\) be a random sample of size \(n\) from the distribution. Give the probability density function of \(\bs{X}\) in simplified form.

Answer:

\(f(x_1, x_2, \ldots, x_n) = p^n (1 - p)^{k-n}\) for \((x_1, x_2, \ldots, x_n) \in \N_+^n\), where \(k = x_1 + x_2 + \cdots + x_n\)

The geometric distribution governs the trial number of the first success in a sequence of Bernoulli trials. Hence the variables in the random sample can be interpreted as the number of trials between successive successes.

Recall that the Poisson distribution with parameter \(a \in (0, \infty)\) has probability density function \(g\) given by \(g(x) = e^{-a} \frac{a^x}{x!}\) for \(x \in \N\). Let \(\bs{X}\) be a random sample of size \(n\) from the distribution. Give the probability density function of \(\bs{X}\) in simplified form.

Answer:

\(f(x_1, x_2, \ldots, x_n) = \frac{1}{x_1! x_2! \cdots x_n!} e^{-n a} a^{x_1 + x_2 + \cdots + x_n}\) for \((x_1, x_2, \ldots, x_n) \in \N^n\)

The Poisson distribution is named for Simeon Poisson, and governs the number of random points in a region of time or space under appropriate circumstances. The parameter \( a \) is proportional to the size of the region. The Poisson distribution is studied in more detail in the chapter on the Poisson process.

Recall again that the exponential distribution with rate parameter \(r \in (0, \infty)\) has probability density function \(g\) given by \(g(x) = r e^{-r x}\) for \(x \in (0, \infty)\). Let \(\bs{X}\) be a random sample of size \(n\) from the distribution. Give the probability density function of \(\bs{X}\) in simplified form.

Answer:

\(f(x_1, x_2, \ldots, x_n) = r^n e^{-r (x_1 + x_2 + \cdots + x_n)}\) for \((x_1, x_2, \ldots, x_n) \in [0, \infty)^n\)

The exponential distribution governs failure times and other types of arrival times under appropriate circumstances. The exponential distribution is studied in more detail in the chapter on the Poisson process. The variables in the random sample can be interpreted as the times between successive arrivals in the Poisson process.

Recall that the standard normal distribution has probability density function \(\phi\) given by \(\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2}\) for \(z \in \R\). Let \(\bs{Z}\) be a random sample of size \(n\) from the distribution. Give the probability density function of \(\bs{Z}\) in simplified form.

Answer:

\(f(z_1, z_2, \ldots, z_n) = \frac{1}{(2 \pi)^{n/2}} e^{-\frac{1}{2}(z_1^2 + z_2^2 + \cdots + z_n^2)}\) for \((z_1, z_2, \ldots, z_n) \in \R^n\)

The standard normal distribution governs physical quantities, properly scaled and centered, subject to small, random errors. The normal distribution is studied in more generality in the chapter on Special Distributions.

Data Analysis Exercises

For the cicada data, \(G\) denotes gender and \(S\) denotes species type.

  1. Find the empirical density of \((G, S)\).
  2. Find the empirical density of \(G\).
  3. Find the empirical density of \(S\).
  4. Do you believe that \(S\) and \(G\) are independent?
Answer:

The empirical joint and marginal densities are given in the table below. Gender and species are probably dependent (compare the joint density with the product of the marginal densities).

\(f(i, j)\) \(i = 0\) 1 \(h(j)\)
\(j = 0\) \(\frac{16}{104}\) \(\frac{28}{104}\) \(\frac{44}{104}\)
1 \(\frac{3}{104}\) \(\frac{3}{104}\) \(\frac{6}{104}\)
2 \(\frac{40}{104}\) \(\frac{14}{104}\) \(\frac{56}{104}\)
\(g(i)\) \(\frac{59}{104}\) \(\frac{45}{104}\) 1

For the cicada data, let \(W\) denote body weight (in grams) and \(L\) body length (in millimeters).

  1. Construct an empirical density for \((W, L)\).
  2. Find the corresponding empirical density for \(W\).
  3. Find the corresponding empirical density for \(L\).
  4. Do you believe that \(W\) and \(L\) are independent?
Answer:

The empirical joint and marginal densities, based on simple partitions of the body weight and body length ranges, are given in the table below. Body weight and body length are almost certainly dependent.

Density \((W, L)\) \(w \in (0, 0.1]\) \((0.1, 0.2]\) \((0.2, 0.3]\) \((0.3, 0.4]\) Density \(L\)
\(l \in (15, 20]\) 0 0.0385 0.0192 0 0.0058
\((20, 25]\) 0.1731 0.9808 0.4231 0 0.1577
\((25, 30]\) 0 0.1538 0.1731 0.0192 0.0346
\((30, 35]\) 0 0 0 0.0192 0.0019
Density \(W\) 0.8654 5.8654 3.0769 0.1923

For the cicada data, let \(G\) denote gender and \(W\) body weight (in grams).

  1. Construct an empirical density for \((W, G)\).
  2. Find the empirical density for \(G\).
  3. Find the empirical density for \(W\).
  4. Do you believe that \(G\) and \(W\) are independent?
Answer:

The empirical joint and marginal densities, based on a simple partition of the body weight range, are given in the table below. Body weight and gender are almost certainly dependent.

Density \((W, G)\) \(w \in (0, 0.1]\) \((0.1, 0.2]\) \((0.2, 0.3]\) \((0.3, 0.4]\) Density \(G\)
\(g = 0\) 0.1923 2.5000 2.8846 0.0962 0.5673
1 0.6731 3.3654 0.1923 0.0962 0.4327
Density \(W\) 0.8654 5.8654 3.0769 0.1923