
## 1. Discrete Distributions

### Basic Theory

As usual, we start with a random experiment with probability measure $$\P$$ on an underlying sample space $$\Omega$$. A random variable $$X$$ for the experiment that takes values in a countable set $$S$$ is said to have a discrete distribution. Typically, $$S \subseteq \R^n$$ for some $$n$$, so in particular, if $$n \gt 1$$, $$X$$ is vector-valued. In the picture below, the blue dots are intended to represent points of positive probability.

#### Discrete Probability Density Functions

The (discrete) probability density function (sometimes called the probability mass function) of $$X$$ is the function $$f$$ on $$S$$ that assigns probabilities to the points in $$S$$:

$f(x) = \P(X = x), \quad x \in S$

The function $$f$$ satisfies the following properties:

1. $$f(x) \ge 0, \; x \in S$$
2. $$\sum_{x \in S} f(x) = 1$$
3. $$\sum_{x \in A} f(x) = \P(X \in A), \; A \subseteq S$$
Proof:

These properties follow from the axioms of a probability measure. First, $$f(x) = \P(X = x) \ge 0$$. Next, $$\sum_{x \in A} f(x) = \sum_{x \in A} \P(X = x) = \P(X \in A)$$ for $$A \subseteq S$$. Letting $$A = S$$ in the last result gives $$\sum_{x \in S} f(x) = 1$$.

Property (c) is particularly important since it shows that the probability distribution of a discrete random variable is completely determined by its probability density function. Conversely, any function that satisfies properties (a) and (b) is a (discrete) probability density function, and then property (c) can be used to construct a discrete probability distribution on $$S$$. Technically, $$f$$ is the density of $$X$$ relative to counting measure $$\#$$ on $$S$$.

As noted before, $$S$$ is typically a countable subset of some larger set, such as $$\R^n$$ for some $$n \in \N_+$$. We can always extend $$f$$, if we want, to the larger set by defining $$f(x) = 0$$ for $$x \notin S$$. Sometimes this extension simplifies formulas and notation.

An element $$x \in S$$ that maximizes the probability density function $$f$$ is called a mode of the distribution. When there is only one mode, it is sometimes used as a measure of the center of the distribution.
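For readers who like to experiment, the defining properties and the mode are easy to check numerically. The following Python sketch uses a hypothetical four-point density (the values are ours, chosen only for illustration):

```python
from fractions import Fraction

# A hypothetical density on S = {1, 2, 3, 4}, for illustration only
f = {1: Fraction(1, 10), 2: Fraction(2, 10), 3: Fraction(4, 10), 4: Fraction(3, 10)}

# Property (a): f is nonnegative; property (b): f sums to 1
assert all(p >= 0 for p in f.values())
assert sum(f.values()) == 1

def prob(A):
    """Property (c): P(X in A) is the sum of f(x) over x in A."""
    return sum(f[x] for x in A if x in f)

# The mode maximizes the density
mode = max(f, key=f.get)
```

Using exact rational arithmetic avoids floating-point round-off in the sum-to-one check.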

#### Interpretation

A discrete probability distribution is equivalent to a discrete mass distribution, with total mass 1. In this analogy, $$S$$ is the (countable) set of point masses, and $$f(x)$$ is the mass of the point at $$x \in S$$. Property (c) in Exercise 1 simply means that the mass of a set $$A$$ can be found by adding the masses of the points in $$A$$.

For a probabilistic interpretation, suppose that we create a new, compound experiment by repeating the original experiment indefinitely. In the compound experiment, we have a sequence of independent random variables $$(X_1, X_2, \ldots)$$ each with the same distribution as $$X$$; in statistical terms, we are sampling from the distribution of $$X$$. Define

$f_n(x) = \frac{1}{n} \#\{ i \in \{1, 2, \ldots, n\}: X_i = x\} = \frac{1}{n} \sum_{i=1}^n \bs{1}(X_i = x), \quad x \in S$

This is the relative frequency of $$x$$ in the first $$n$$ runs. Note that for each $$x$$, $$f_n(x)$$ is a random variable for the compound experiment. By the law of large numbers, $$f_n(x)$$ should converge to $$f(x)$$, in some sense, as $$n \to \infty$$. The function $$f_n(x)$$ is called the empirical probability density function; such functions are displayed in most of the simulation applets that deal with discrete variables.
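The convergence of the empirical density $$f_n$$ to $$f$$ can be simulated directly. A minimal Python sketch (the three-point density and the sample size are our own choices):

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the run is reproducible

# Hypothetical density on S = {0, 1, 2}, for illustration only
S = [0, 1, 2]
f = {0: 0.5, 1: 0.3, 2: 0.2}

def empirical_pdf(n):
    """Relative frequency f_n(x) of each x in n independent samples of X."""
    samples = random.choices(S, weights=[f[x] for x in S], k=n)
    counts = Counter(samples)
    return {x: counts[x] / n for x in S}

f_n = empirical_pdf(100_000)  # f_n(x) should be close to f(x) for large n
```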

#### Constructing Probability Density Functions

Suppose that $$g$$ is a nonnegative function defined on a countable set $$S$$. Let

$c = \sum_{x \in S} g(x)$

If $$0 \lt c \lt \infty$$, then the function $$f$$ defined by $$f(x) = \frac{1}{c} g(x)$$ for $$x \in S$$ is a discrete probability density function on $$S$$.

Note that since we are assuming that $$g$$ is nonnegative, $$c = 0$$ if and only if $$g(x) = 0$$ for every $$x \in S$$. At the other extreme, $$c = \infty$$ could only occur if $$S$$ is infinite. When $$0 \lt c \lt \infty$$ (so that we can construct the probability density function $$f$$), $$c$$ is sometimes called the normalizing constant. This result is useful for constructing probability density functions with desired functional properties (domain, shape, symmetry, and so on).
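This construction translates directly into code for a finite $$S$$. A Python sketch (the helper name `normalize` is ours); the example function $$g(n) = n(10 - n)$$ is the one used in the exercises below:

```python
from fractions import Fraction

def normalize(g, S):
    """Given a nonnegative function g on a finite set S, return the density
    f = g / c together with the normalizing constant c = sum of g over S."""
    c = sum(g(x) for x in S)
    if c == 0:
        raise ValueError("g is identically zero, so no density exists")
    return {x: Fraction(g(x), c) for x in S}, c

# Example: g(n) = n (10 - n) on {1, 2, ..., 9}
f, c = normalize(lambda n: n * (10 - n), range(1, 10))
```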

#### Conditional Densities

The probability density function of a random variable $$X$$ is based, of course, on the underlying probability measure $$\P$$ on the sample space $$\Omega$$. This measure could be a conditional probability measure, conditioned on a given event $$E \subseteq \Omega$$ (with $$\P(E) \gt 0$$). The usual notation is

$f(x \mid E) = \P(X = x \mid E)$

The following theorems show that, except for notation, no new concepts are involved. Therefore, all results that hold for discrete probability density functions in general have analogues for conditional discrete probability density functions.

For fixed $$E$$, the function $$x \mapsto f(x \mid E)$$ is a discrete probability density function. That is, properties (a) and (b) of Exercise 1, hold and property (c) becomes

$\P(X \in A \mid E) = \sum_{x \in A} f(x \mid E), \quad A \subseteq S$
Proof:

This is a consequence of the fact that $$A \mapsto \P(A \mid E)$$ is a probability measure. The function $$x \mapsto f(x \mid E)$$ plays the same role for the conditional probability measure that $$f$$ does for the original probability measure $$\P$$.

Suppose that $$B \subseteq S$$ and $$\P(X \in B) \gt 0$$. The conditional probability density function of $$X$$ given $$X \in B$$ is

$f(x \mid X \in B) = \begin{cases} \frac{f(x)}{\P(X \in B)}, & x \in B \\ 0, & x \in B^c \end{cases}$
Proof:

This follows from the previous theorem. $$f(x \mid X \in B) = \P(X = x, X \in B) / \P(X \in B)$$. The numerator is $$f(x)$$ if $$x \in B$$ and is 0 if $$x \notin B$$.
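The theorem amounts to restricting $$f$$ to $$B$$ and renormalizing. A Python sketch (the function name is ours), using the uniform density on $$\{1, \ldots, 6\}$$ as the illustration:

```python
from fractions import Fraction

# Fair-die density on {1, ..., 6}, for illustration
f = {x: Fraction(1, 6) for x in range(1, 7)}

def conditional_pdf(f, B):
    """Density of X given X in B: f(x) / P(X in B) on B, zero off B."""
    pB = sum(f[x] for x in B)
    if pB == 0:
        raise ValueError("P(X in B) must be positive")
    return {x: (f[x] / pB if x in B else Fraction(0)) for x in f}

g = conditional_pdf(f, {2, 4, 6})  # condition on an even score
```

Note that the result is uniform on $$B$$, illustrating the general fact about conditioned uniform distributions stated later in this section.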

#### Conditioning and Bayes' Theorem

Suppose that $$X$$ is a random variable with a discrete distribution on a countable set $$S$$, and that $$E \subseteq \Omega$$ is an event in the experiment. Let $$f$$ denote the probability density function of $$X$$. The versions of the law of total probability and Bayes' theorem given in the following theorems follow immediately from the corresponding results in the section on Conditional Probability. Only the notation is different. We start with the law of total probability.

If $$E$$ is an event then

$\P(E) = \sum_{x \in S} f(x) \P(E \mid X = x)$
Proof:

Note that $$\{\{X = x\}: x \in S\}$$ is a countable partition of the sample space $$\Omega$$. That is, these events are disjoint and their union is the entire sample space $$\Omega$$. Hence

$\P(E) = \sum_{x \in S} \P(X = x) \P(E \mid X = x) = \sum_{x \in S} f(x) \P(E \mid X = x)$

This result is useful, naturally, when the distribution of $$X$$ and the conditional probability of $$E$$ given the values of $$X$$ are known. When we compute $$\P(E)$$ in this way, we say that we are conditioning on $$X$$. The next exercise gives Bayes' Theorem, named after Thomas Bayes.

If $$E$$ is an event with $$\P(E) \gt 0$$ then

$f(x \mid E) = \frac{f(x) \P(E \mid X = x)}{\sum_{y \in S} f(y) \P(E \mid X = y)}, \quad x \in S$
Proof:

Note that the numerator of the fraction on the right is $$\P(X = x) \P(E \mid X = x) = \P(X = x, E)$$. The denominator is $$\P(E)$$ by the previous theorem. Hence the ratio is $$\P(X = x \mid E) = f(x \mid E)$$.

Bayes' theorem is a formula for the conditional probability density function of $$X$$ given $$E$$. Again, it is useful when the quantities on the right are known. In the context of Bayes' theorem, the (unconditional) distribution of $$X$$ is referred to as the prior distribution and the conditional distribution as the posterior distribution. Note that the denominator in Bayes' formula is $$\P(E)$$ and is simply the normalizing constant for the function $$x \mapsto f(x) \P(E \mid X = x)$$.
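Bayes' theorem is mechanical enough to code directly. The sketch below (function name ours) uses a hypothetical three-coin example: coins with heads probabilities $$\frac{1}{2}$$, $$\frac{1}{3}$$, and $$1$$ are equally likely a priori, and $$E$$ is the event of heads on a single toss.

```python
from fractions import Fraction

def posterior(prior, likelihood):
    """Bayes' theorem for a discrete prior: f(x | E) is proportional to
    f(x) P(E | X = x); the normalizing constant is P(E), by the law of
    total probability."""
    joint = {x: prior[x] * likelihood[x] for x in prior}
    pE = sum(joint.values())
    return {x: joint[x] / pE for x in prior}, pE

# Hypothetical prior on the heads probability of a randomly chosen coin
prior = {Fraction(1, 2): Fraction(1, 3),
         Fraction(1, 3): Fraction(1, 3),
         Fraction(1): Fraction(1, 3)}
likelihood = {x: x for x in prior}  # P(heads | coin with parameter x) = x
post, pE = posterior(prior, likelihood)
```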

### Examples and Special Cases

We start with some simple (albeit somewhat artificial) discrete distributions. After that, we study three special parametric models: the discrete uniform distribution, hypergeometric distributions, and Bernoulli trials. These models are very important, so when working the computational problems that follow, try to see if the problem fits one of these models.

#### Simple Discrete Distributions

Let $$f(n) = \frac{1}{n} - \frac{1}{n + 1}$$ for $$n \in \N_+$$.

1. Show that $$f$$ is a probability density function.
2. Find $$\P(3 \le N \le 7)$$, where $$N$$ is a random variable with density function $$f$$.
2. $$\P(3 \le N \le 7) = \frac{5}{24}$$

Let $$g(n) = n (10 - n)$$ for $$n \in \{1, 2, \ldots, 9\}$$.

1. Find the probability density function $$f$$ that is proportional to $$g$$.
2. Sketch the graph of $$f$$ and find the mode of the distribution.
3. Find $$\P(3 \le N \le 6)$$ where $$N$$ has probability density function $$f$$.
1. $$f(n) = \frac{1}{165} n (10 - n)$$ for $$n \in \{1, 2, \ldots, 9\}$$
2. mode $$n = 5$$
3. $$\P(3 \le N \le 6) = \frac{94}{165}$$

Let $$g(n) = n^2 (10 - n)$$ for $$n \in \{1, 2, \ldots, 10\}$$.

1. Find the probability density function $$f$$ that is proportional to $$g$$.
2. Sketch the graph of $$f$$ and find the mode of the distribution.
3. Find $$\P(3 \le N \le 6)$$ where $$N$$ has probability density function $$f$$.
1. $$f(n) = \frac{1}{825} n^2 (10 - n)$$ for $$n \in \{1, 2, \ldots, 9\}$$
2. Mode $$n = 7$$
3. $$\P(3 \le N \le 6) = \frac{428}{825}$$

Let $$g(x, y) = x + y$$ for $$(x, y) \in \{1, 2, 3\}^2$$.

1. Sketch the domain of $$g$$.
2. Find the probability density function $$f$$ that is proportional to $$g$$.
3. Find the mode of the distribution.
4. Find $$\P(X \gt Y)$$ where $$(X, Y)$$ is a random vector with the probability density function $$f$$.
2. $$f(x,y) = \frac{1}{36} (x + y)$$ for $$(x,y) \in \{1, 2, 3\}^2$$
3. mode $$(3, 3)$$
4. $$\P(X \gt Y) = \frac{1}{3}$$
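Answers of this kind are easy to verify by brute-force enumeration over the finite domain. A Python check for the density proportional to $$g(x, y) = x + y$$:

```python
from fractions import Fraction
from itertools import product

S = list(product([1, 2, 3], repeat=2))
c = sum(x + y for x, y in S)                     # normalizing constant
f = {(x, y): Fraction(x + y, c) for x, y in S}   # density proportional to x + y

p_x_gt_y = sum(f[(x, y)] for (x, y) in S if x > y)
mode = max(f, key=f.get)
```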

Let $$g(x, y) = x y$$ for $$(x, y) \in \{(1, 1), (1,2), (1, 3), (2, 2), (2, 3), (3, 3)\}$$.

1. Sketch the domain of $$g$$.
2. Find the probability density function $$f$$ that is proportional to $$g$$.
3. Find the mode of the distribution.
4. Find $$\P[(X, Y) \in \{(1, 2), (1, 3), (2, 2), (2, 3)\}]$$ where $$(X, Y)$$ is a random vector with the probability density function $$f$$.
2. $$f(x,y) = \frac{1}{25} x \, y$$ for $$(x,y) \in \{(1,1), (1,2), (1,3), (2,2), (2,3), (3,3)\}$$
3. mode $$(3,3)$$
4. $$\P[(X,Y) \in \{(1,2), (1,3), (2,2), (2,3)\}] = \frac{3}{5}$$

#### Discrete Uniform Distributions

An element $$X$$ is chosen at random from a finite set $$S$$. The phrase at random means that all outcomes are equally likely.

1. $$X$$ has probability density function $$f$$ given by $$f(x) = 1 / \#(S)$$ for $$x \in S$$.
2. $$\P(X \in A) = \#(A) / \#(S)$$ for $$A \subseteq S$$.

The distribution in the last exercise is called the discrete uniform distribution on $$S$$. Many random variables that arise in sampling or combinatorial experiments are transformations of uniformly distributed variables.

Suppose that $$n$$ elements are chosen at random, with replacement from a set $$D$$ with $$m$$ elements. Let $$\bs{X}$$ denote the ordered sequence of elements chosen. Then $$\bs{X}$$ is uniformly distributed on the set $$S = D^n$$, and hence has probability density function

$f(\bs{x}) = \frac{1}{m^n}, \quad \bs{x} \in S$

Suppose that $$n$$ elements are chosen at random, without replacement from a set $$D$$ with $$m$$ elements. Let $$\bs{X}$$ denote the ordered sequence of elements chosen. Then $$\bs{X}$$ is uniformly distributed on the set $$S$$ of permutations of size $$n$$ chosen from $$D$$, and hence has probability density function

$f(\bs{x}) = \frac{1}{m^{(n)}}, \quad \bs{x} \in S$

Suppose that $$n$$ elements are chosen at random, without replacement, from a set $$D$$ with $$m$$ elements. Let $$\bs{W}$$ denote the unordered set of elements chosen. Then $$\bs{W}$$ is uniformly distributed on the set $$T$$ of combinations of size $$n$$ chosen from $$D$$, and hence has probability density function

$f(\bs{w}) = \frac{1}{\binom{m}{n}}, \quad \bs{w} \in T$

Suppose that $$X$$ is uniformly distributed on a finite set $$S$$ and that $$B$$ is a nonempty subset of $$S$$. Then the conditional distribution of $$X$$ given $$X \in B$$ is uniform on $$B$$.

#### Hypergeometric Models

Suppose that a population consists of $$m$$ objects; $$r$$ of the objects are type 1 and $$m - r$$ are type 0. Thus, the population is dichotomous; here are some typical examples:

• The objects are persons, each either male or female.
• The objects are voters, each either a democrat or a republican.
• The objects are devices of some sort, each either good or defective.
• The objects are fish in a lake, each either tagged or untagged.
• The objects are balls in an urn, each either red or green.

A sample of $$n$$ objects is chosen at random (without replacement) from the population. Recall that this means that the samples, either ordered or unordered, are equally likely. Note that this probability model has three parameters: the population size $$m$$, the number of type 1 objects $$r$$, and the sample size $$n$$. Now, suppose that we keep track of order, and let $$X_i$$ denote the type of the $$i$$th object chosen, for $$i \in \{1, 2, \ldots, n\}$$. Thus, $$X_i$$ is an indicator variable (that is, a variable that just takes values 0 and 1).

Recall that

1. $$\P(X_i = 1) = \frac{r}{m}$$ for each $$i$$. Thus, the indicator variables are identically distributed.
2. $$\P(X_i = 1, X_j = 1) = \frac{r}{m} \frac{r - 1}{m - 1}$$ for $$i \ne j$$. Thus the indicator variables are dependent (in fact, any two are negatively correlated).

Now let $$Y$$ denote the number of type 1 objects in the sample. Note that $$Y = \sum_{i=1}^n X_i$$. Any counting variable can be written as a sum of indicator variables.

$$Y$$ has probability density function $$f$$ given by

$f(y) = \frac{\binom{r}{y} \binom{m - r}{n - y}}{\binom{m}{n}}, \quad y \in \{0, 1, \ldots, n\}$
1. $$f(y - 1) \lt f(y)$$ if and only if $$y \lt t$$ where $$t = (r + 1) (n + 1) / (m + 2)$$. Thus the distribution is unimodal.
2. If $$t$$ is not a positive integer, there is a single mode at $$\lfloor t \rfloor$$.
3. If $$t$$ is a positive integer, then there are two modes, at $$t - 1$$ and $$t$$.
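A quick numerical check of the density and the mode rule, using exact arithmetic (`math.comb` requires Python 3.8+); the parameters $$m = 50$$, $$r = 30$$, $$n = 5$$ match the sampling problem below:

```python
from math import comb
from fractions import Fraction

def hypergeom_pmf(m, r, n, y):
    """P(Y = y): y type-1 objects in a sample of size n drawn without
    replacement from m objects, r of which are type 1."""
    return Fraction(comb(r, y) * comb(m - r, n - y), comb(m, n))

m, r, n = 50, 30, 5
pmf = [hypergeom_pmf(m, r, n, y) for y in range(n + 1)]
mode = max(range(n + 1), key=lambda y: pmf[y])
t = (r + 1) * (n + 1) / (m + 2)  # about 3.58; not an integer, so mode = floor(t)
```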

The distribution defined by the probability density function in the last exercise is the hypergeometric distribution with parameters $$m$$, $$r$$, and $$n$$. We can extend the model to a population of three types. Thus, suppose that our population consists of $$m$$ objects; $$r$$ of the objects are type 1, $$s$$ are type 2, and $$m - r - s$$ are type 0. Here are some examples:

• The objects are voters, each a democrat, a republican, or an independent.
• The objects are cicadas, each one of three species: tredecula, tredecassini, or tredecim.
• The objects are peaches, each classified as small, medium, or large.
• The objects are faculty members at a university, each an assistant professor, an associate professor, or a full professor.

Once again, a sample of $$n$$ objects is chosen at random (without replacement). But now we need two random variables to keep track of the counts for the three types in the sample. Let $$Y$$ denote the number of type 1 objects in the sample and $$Z$$ the number of type 2 objects in the sample.

$$(Y, Z)$$ has probability density function $$g$$ given by

$g(y, z) = \frac{\binom{r}{y} \binom{s}{z} \binom{m - r - s}{n - y - z}}{\binom{m}{n}}, \quad (y, z) \in \{0, 1, \ldots, n\}^2$

The distribution defined by the density function in the last exercise is the bivariate hypergeometric distribution with parameters $$m$$, $$r$$, $$s$$, and $$n$$. Clearly, the same general pattern applies to populations with even more types. However, because of all of the parameters, the formulas are not worth remembering in detail; rather, just note the pattern, and remember the combinatorial meaning of the binomial coefficient. The hypergeometric distribution and the multivariate hypergeometric distribution are studied in detail in the chapter on Finite Sampling Models. This chapter contains a rich variety of distributions that are based on discrete uniform distributions.

#### Bernoulli Trials

A Bernoulli trials sequence is a sequence $$(X_1, X_2, \ldots)$$ of independent, identically distributed indicator variables. Random variable $$X_i$$ is the outcome of trial $$i$$, and in the usual terminology of reliability, 1 denotes success while 0 denotes failure. The process is named for Jacob Bernoulli. Let $$p = \P(X_i = 1)$$ denote the success parameter of the process. Note that the indicator variables in the hypergeometric model satisfy one of the assumptions of Bernoulli trials (identical distributions) but not the other (independence).

$$(X_1, X_2, \ldots, X_n)$$ has probability density function

$f(x_1, x_2, \ldots, x_n) = p^{x_1 + x_2 + \cdots + x_n} (1 - p)^{n - (x_1 + x_2 + \cdots + x_n)}, \quad (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$

Now let $$Y$$ denote the number of successes in the first $$n$$ trials. Note that $$Y = \sum_{i=1}^n X_i$$, so we see again that a complicated random variable can be written as a sum of simpler ones.

$$Y$$ has probability density function

$g(y) = \binom{n}{y} p^y (1 - p)^{n-y}, \quad y \in \{0, 1, \ldots, n\}$
1. $$g(y - 1) \lt g(y)$$ if and only if $$y \lt t$$, where $$t = (n + 1) p$$. Thus the distribution is unimodal.
2. If $$t$$ is not a positive integer, there is a single mode at $$\lfloor t \rfloor$$.
3. If $$t$$ is a positive integer, then there are two modes, at $$t - 1$$ and $$t$$.
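The same kind of check works for the binomial density and its mode rule; here with $$n = 5$$ and $$p = \frac{2}{5}$$, matching the coin problem below:

```python
from math import comb
from fractions import Fraction

def binomial_pmf(n, p, y):
    """P(Y = y): y successes in n Bernoulli trials with success parameter p."""
    return comb(n, y) * p**y * (1 - p)**(n - y)

n, p = 5, Fraction(2, 5)
pmf = [binomial_pmf(n, p, y) for y in range(n + 1)]
mode = max(range(n + 1), key=lambda y: pmf[y])
t = (n + 1) * p  # 12/5, not an integer, so the mode is floor(t) = 2
```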

The distribution defined by the probability density function in the last exercise is called the binomial distribution with parameters $$n$$ and $$p$$. The binomial distribution is studied in detail in the chapter on Bernoulli Trials.

Now let $$N$$ denote the trial number of the first success. Then $$N$$ has probability density function $$h$$ given by

$h(n) = (1 - p)^{n-1} p, \quad n \in \N_+$

The distribution defined by the probability density function in the last exercise is the geometric distribution on $$\N_+$$ with parameter $$p$$. The geometric distribution is studied in detail in the chapter on Bernoulli Trials.

#### Sampling Problems

An urn contains 30 red and 20 green balls. A sample of 5 balls is selected at random, without replacement. Let $$Y$$ denote the number of red balls in the sample.

1. Compute the probability density function of $$Y$$ explicitly and identify the distribution by name and parameter values.
2. Graph the probability density function and identify the mode(s).
3. Find $$\P(Y \gt 3)$$.
1. $$f(0) = 0.0073$$, $$f(1) = 0.0686$$, $$f(2) = 0.2341$$, $$f(3) = 0.3641$$, $$f(4) = 0.2587$$, $$f(5) = 0.0673$$. Hypergeometric with $$m = 50$$, $$r = 30$$, $$n = 5$$
2. mode: $$y = 3$$
3. $$\P(Y \gt 3) = 0.3260$$

In the ball and urn experiment, select sampling without replacement and set $$m = 50$$, $$r = 30$$, and $$n = 5$$. Run the experiment 1000 times and note the apparent convergence of the empirical probability density function of $$Y$$ to the theoretical probability density function.

An urn contains 30 red and 20 green balls. A sample of 5 balls is selected at random, with replacement. Let $$Y$$ denote the number of red balls in the sample.

1. Compute the probability density function of $$Y$$ explicitly and identify the distribution by name and parameter values.
2. Graph the probability density function and identify the mode(s).
3. Find $$\P(Y \gt 3)$$.
1. $$f(0) = 0.0102$$, $$f(1) = 0.0768$$, $$f(2) = 0.2304$$, $$f(3) = 0.3456$$, $$f(4) = 0.2592$$, $$f(5) = 0.0778$$. Binomial with $$n = 5$$, $$p = 3/5$$
2. mode: $$y = 3$$
3. $$\P(Y \gt 3) = 0.3370$$

In the ball and urn experiment, select sampling with replacement and set $$m = 50$$, $$r = 30$$, and $$n = 5$$. Run the experiment 1000 times and note the apparent convergence of the empirical probability density function of $$Y$$ to the theoretical probability density function.

A group of voters consists of 50 democrats, 40 republicans, and 30 independents. A sample of 10 voters is chosen at random, without replacement. Let $$X$$ denote the number of democrats in the sample and $$Y$$ the number of republicans in the sample.

1. Give the probability density function of $$X$$.
2. Give the probability density function of $$Y$$.
3. Give the probability density function of $$(X, Y)$$.
4. Find the probability that the sample has at least 4 democrats and at least 4 republicans.
1. $$g(x) = \frac{\binom{50}{x} \binom{70}{10-x}}{\binom{120}{10}}$$ for $$x \in \{0, 1, \ldots, 10\}$$
2. $$h(y) = \frac{\binom{40}{y} \binom{80}{10-y}}{\binom{120}{10}}$$ for $$y \in \{0, 1, \ldots, 10\}$$
3. $$f(x,y) = \frac{\binom{50}{x} \binom{40}{y} \binom{30}{10 - x - y}}{\binom{120}{10}}$$ for $$(x,y) \in \{0, 1, \ldots, 10\}^2$$ with $$x + y \le 10$$
4. $$\P(X \ge 4, Y \ge 4) = \frac{15\,137\,200}{75\,597\,113} \approx 0.200$$
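Part (d) is tedious by hand but immediate by enumeration. A Python sketch of the bivariate hypergeometric computation (variable names ours):

```python
from math import comb
from fractions import Fraction

m, r, s, n = 120, 50, 40, 10  # population, democrats, republicans, sample size

def f(x, y):
    """P(X = x, Y = y): x democrats and y republicans in the sample."""
    if x + y > n:
        return Fraction(0)
    return Fraction(comb(r, x) * comb(s, y) * comb(m - r - s, n - x - y),
                    comb(m, n))

total = sum(f(x, y) for x in range(n + 1) for y in range(n + 1))
p = sum(f(x, y) for x in range(4, n + 1) for y in range(4, n + 1))
```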

The Math Club at Enormous State University has 20 freshmen, 40 sophomores, 30 juniors, and 10 seniors. A committee of 8 club members is chosen at random, without replacement, to organize $$\pi$$-day activities. Let $$X$$ denote the number of freshmen in the sample, $$Y$$ the number of sophomores, and $$Z$$ the number of juniors.

1. Give the probability density function of $$X$$.
2. Give the probability density function of $$Y$$.
3. Give the probability density function of $$Z$$.
4. Give the probability density function of $$(X, Y)$$.
5. Give the probability density function of $$(X, Y, Z)$$.
6. Find the probability that the committee has no seniors.
1. $$f_X(x) = \frac{\binom{20}{x} \binom{80}{8-x}}{\binom{100}{8}}$$ for $$x \in \{0, 1, \ldots, 8\}$$
2. $$f_Y(y) = \frac{\binom{40}{y} \binom{60}{8-y}}{\binom{100}{8}}$$ for $$y \in \{0, 1, \ldots, 8\}$$
3. $$f_Z(z) = \frac{\binom{30}{z} \binom{70}{8-z}}{\binom{100}{8}}$$ for $$z \in \{0, 1, \ldots, 8\}$$
4. $$f_{X,Y}(x,y) = \frac{\binom{20}{x} \binom{40}{y} \binom{40}{8-x-y}}{\binom{100}{8}}$$ for $$(x,y) \in \{0, 1, \ldots, 8\}^2$$ with $$x + y \le 8$$
5. $$f_{X,Y,Z}(x,y,z) = \frac{\binom{20}{x} \binom{40}{y} \binom{30}{z} \binom{10}{8-x-y-z}}{\binom{100}{8}}$$ for $$(x,y,z) \in \{0, 1, \ldots, 8\}^3$$ with $$x + y + z \le 8$$
6. $$\P(X + Y + Z = 8) = \frac{156\,597\,013}{275\,935\,140} \approx 0.417$$

#### Coins and Dice

Suppose that a coin with probability of heads $$p$$ is tossed repeatedly, and the sequence of heads and tails is recorded.

1. Identify the underlying probability model by name and parameter.
2. Let $$Y$$ denote the number of heads in the first $$n$$ tosses. Give the probability density function of $$Y$$ and identify the distribution by name and parameters.
3. Let $$N$$ denote the number of tosses needed to get the first head. Give the probability density function of $$N$$ and identify the distribution by name and parameter.
1. Bernoulli trials with success parameter $$p$$.
2. $$f(k) = \binom{n}{k} p^k (1 - p)^{n-k}$$ for $$k \in \{0, 1, \ldots, n\}$$. This is the binomial distribution with trial parameter $$n$$ and success parameter $$p$$.
3. $$g(n) = p (1 - p)^{n-1}$$ for $$n \in \N_+$$. This is the geometric distribution with success parameter $$p$$.

Suppose that a coin with probability of heads $$p = 0.4$$ is tossed 5 times. Let $$Y$$ denote the number of heads.

1. Compute the probability density function of $$Y$$ explicitly.
2. Graph the probability density function and identify the mode.
3. Find $$\P(Y \gt 3)$$.
1. $$f(0) = 0.0778$$, $$f(1) = 0.2592$$, $$f(2) = 0.3456$$, $$f(3) = 0.2304$$, $$f(4) = 0.0768$$, $$f(5) = 0.0102$$
2. mode: $$k = 2$$
3. $$\P(Y \gt 3) = 0.0870$$

In the binomial coin experiment, set $$n = 5$$ and $$p = 0.4$$. Run the experiment 1000 times and note the apparent convergence of the empirical probability density function of $$Y$$ to the probability density function.

Suppose that a coin with probability of heads $$p = 0.2$$ is tossed until heads occurs. Let $$N$$ denote the number of tosses.

1. Find the probability density function of $$N$$ and identify the distribution of $$N$$.
2. Find $$\P(N \le 5)$$.
1. $$f(n) = (0.8)^{n-1} 0.2$$ for $$n \in \N_+$$
2. $$\P(N \le 5) = 0.67232$$

In the negative binomial experiment, set $$k = 1$$ and $$p = 0.2$$. Run the experiment 1000 times and note the apparent convergence of the empirical probability density function of the number of trials to the probability density function.

Suppose that two fair, standard dice are tossed and the sequence of scores $$(X_1, X_2)$$ recorded. Let $$Y = X_1 + X_2$$ denote the sum of the scores, $$U = \min\{X_1, X_2\}$$ the minimum score, and $$V = \max\{X_1, X_2\}$$ the maximum score.

1. Find the probability density function of $$(X_1, X_2)$$. Identify the distribution by name.
2. Find the probability density function of $$Y$$.
3. Find the probability density function of $$U$$.
4. Find the probability density function of $$V$$.
5. Find the probability density function of $$(U, V)$$.

We denote the PDFs by $$f$$, $$g$$, $$h_1$$, $$h_2$$, and $$h$$ respectively.

1. $$f(x_1, x_2) = \frac{1}{36}$$ for $$(x_1,x_2) \in \{1, 2, 3, 4, 5, 6\}^2$$. This is the uniform distribution on $$\{1, 2, 3, 4, 5, 6\}^2$$.
2. $$g(2) = g(12) = \frac{1}{36}$$, $$g(3) = g(11) = \frac{2}{36}$$, $$g(4) = g(10) = \frac{3}{36}$$, $$g(5) = g(9) = \frac{4}{36}$$, $$g(6) = g(8) = \frac{5}{36}$$, $$g(7) = \frac{6}{36}$$
3. $$h_1(1) = \frac{11}{36}$$, $$h_1(2) = \frac{9}{36}$$, $$h_1(3) = \frac{7}{36}$$, $$h_1(4) = \frac{5}{36}$$, $$h_1(5) = \frac{3}{36}$$, $$h_1(6) = \frac{1}{36}$$
4. $$h_2(1) = \frac{1}{36}$$, $$h_2(2) = \frac{3}{36}$$, $$h_2(3) = \frac{5}{36}$$, $$h_2(4) = \frac{7}{36}$$, $$h_2(5) = \frac{9}{36}$$, $$h_2(6) = \frac{11}{36}$$
5. $$h(u,v) = \frac{2}{36}$$ if $$u \lt v$$, $$h(u, v) = \frac{1}{36}$$ if $$u = v$$ where $$(u, v) \in \{1, 2, 3, 4, 5, 6\}^2$$ with $$u \le v$$
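All five densities come from enumerating the 36 equally likely ordered outcomes; a Python sketch:

```python
from fractions import Fraction
from itertools import product
from collections import defaultdict

g = defaultdict(Fraction)   # density of the sum Y
h1 = defaultdict(Fraction)  # density of the minimum U
h2 = defaultdict(Fraction)  # density of the maximum V
h = defaultdict(Fraction)   # density of the pair (U, V)

for x1, x2 in product(range(1, 7), repeat=2):
    p = Fraction(1, 36)     # each ordered outcome is equally likely
    g[x1 + x2] += p
    h1[min(x1, x2)] += p
    h2[max(x1, x2)] += p
    h[(min(x1, x2), max(x1, x2))] += p
```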

Note that $$(U, V)$$ could serve as the outcome of the experiment that consists of throwing two standard dice if we did not bother to record order. Note from the previous exercise that this random vector does not have a uniform distribution when the dice are fair. The mistaken idea that this vector should have the uniform distribution was the cause of difficulties in the early development of probability.

In the dice experiment, select $$n = 2$$ fair dice. Select the following random variables and note the shape and location of the probability density function. Run the experiment 1000 times. For each of the following variables, note the apparent convergence of the empirical probability density function to the probability density function.

1. $$Y$$, the sum of the scores.
2. $$U$$, the minimum score.
3. $$V$$, the maximum score.

In the die-coin experiment, a fair, standard die is rolled and then a fair coin is tossed the number of times showing on the die. Let $$N$$ denote the die score and $$Y$$ the number of heads.

1. Find the probability density function of $$N$$. Identify the distribution by name.
2. Find the probability density function of $$Y$$.
1. $$g(n) = \frac{1}{6}$$ for $$n \in \{1, 2, 3, 4, 5, 6\}$$. This is the uniform distribution on $$\{1, 2, 3, 4, 5, 6\}$$.
2. $$h(0) = \frac{63}{384}$$, $$h(1) = \frac{120}{384}$$, $$h(2) = \frac{90}{384}$$, $$h(3) = \frac{64}{384}$$, $$h(4) = \frac{29}{384}$$, $$h(5) = \frac{8}{384}$$, $$h(6) = \frac{1}{384}$$
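Part (b) is the law of total probability from earlier in this section: condition on the die score. A Python check:

```python
from math import comb
from fractions import Fraction

def h(y):
    """P(Y = y) heads: condition on the die score N (uniform on 1..6),
    then toss a fair coin N times."""
    return sum(Fraction(1, 6) * comb(n, y) * Fraction(1, 2)**n
               for n in range(1, 7))

pmf = [h(y) for y in range(7)]  # comb(n, y) vanishes when y > n
```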

Run the die-coin experiment 1000 times. For the number of heads, note the apparent convergence of the empirical probability density function to the probability density function.

Suppose that a bag contains 12 coins: 5 are fair, 4 are biased with probability of heads $$\frac{1}{3}$$, and 3 are two-headed. A coin is chosen at random from the bag and tossed 5 times. Let $$V$$ denote the probability of heads of the selected coin and let $$Y$$ denote the number of heads.

1. Find the probability density function of $$V$$.
2. Find the probability density function of $$Y$$.
1. $$g(1/2) = 5/12$$, $$g(1/3) = 4/12$$, $$g(1) = 3/12$$
2. $$h(0) = 5311/93312$$, $$h(1) = 16315/93312$$, $$h(2) = 22390/93312$$, $$h(3) = 17270/93312$$, $$h(4) = 7355/93312$$, $$h(5) = 24671/93312$$

Compare Exercise 37 and Exercise 39. In the first exercise, we toss a coin with a fixed probability of heads a random number of times. In the second exercise, we effectively toss a coin with a random probability of heads a fixed number of times. In both cases, we can think of starting with a binomial distribution and randomizing one of the parameters.

In the coin-die experiment, a fair coin is tossed. If the coin lands tails, a fair die is rolled. If the coin lands heads, an ace-six flat die is tossed (faces 1 and 6 have probability $$\frac{1}{4}$$ each, while faces 2, 3, 4, 5 have probability $$\frac{1}{8}$$ each). Find the probability density function of the die score $$Y$$.

$$f(y) = 5/24$$ for $$y \in \{1,6\}$$, $$f(y) = 7/48$$ for $$y \in \{2, 3, 4, 5\}$$

Run the coin-die experiment 1000 times. Compare the empirical probability density of the die score with the theoretical probability density in the last exercise.

Suppose that a standard die is thrown 10 times. Let $$Y$$ denote the number of times an ace or a six occurred. Give the probability density function of $$Y$$ and identify the distribution by name and parameter values in each of the following cases:

1. The die is fair.
2. The die is an ace-six flat.
1. $$f(k) = \binom{10}{k} \left(\frac{1}{3}\right)^k \left(\frac{2}{3}\right)^{10-k}$$ for $$k \in \{0, 1, \ldots, 10\}$$. This is the binomial distribution with trial parameter $$n = 10$$ and success parameter $$p = \frac{1}{3}$$
2. $$f(k) = \binom{10}{k} \left(\frac{1}{2}\right)^{10}$$ for $$k \in \{0, 1, \ldots, 10\}$$. This is the binomial distribution with trial parameter $$n = 10$$ and success parameter $$p = \frac{1}{2}$$

Suppose that a standard die is thrown until an ace or a six occurs. Let $$N$$ denote the number of throws. Give the probability density function of $$N$$ and identify the distribution by name and parameter values in each of the following cases:

1. The die is fair.
2. The die is an ace-six flat.
1. $$g(n) = \left(\frac{2}{3}\right)^{n-1} \frac{1}{3}$$ for $$n \in \N_+$$. This is the geometric distribution with success parameter $$p = \frac{1}{3}$$
2. $$g(n) = \left(\frac{1}{2}\right)^n$$ for $$n \in \N_+$$. This is the geometric distribution with success parameter $$p = \frac{1}{2}$$

Fred and Wilma take turns tossing a coin with probability of heads $$p \in (0, 1)$$: Fred first, then Wilma, then Fred again, and so forth. The first person to toss heads wins the game. Let $$N$$ denote the number of tosses, and $$W$$ the event that Wilma wins.

1. Give the probability density function of $$N$$ and identify the distribution by name.
2. Compute $$\P(W)$$ and sketch the graph of this probability as a function of $$p$$.
3. Find the conditional probability density function of $$N$$ given $$W$$.
1. $$f(n) = p(1 - p)^{n-1}$$ for $$n \in \N_+$$. This is the geometric distribution with success parameter $$p$$.
2. $$\P(W) = \frac{1-p}{2-p}$$
3. $$f(n \mid W) = p (2 - p) (1 - p)^{n-2}$$ for $$n \in \{2, 4, \ldots\}$$
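Part (b) can be sanity-checked by summing the geometric series numerically (truncating the infinite sum; the function name is ours):

```python
from fractions import Fraction

def p_wilma(p, terms=100):
    """P(Wilma wins): the first head occurs on an even-numbered toss, so
    P(W) = sum over even n of p (1 - p)^(n - 1). The series is truncated,
    which is harmless since the tail is geometrically small."""
    q = 1 - p
    return sum(p * q**(n - 1) for n in range(2, 2 * terms, 2))

p = Fraction(1, 3)         # an arbitrary illustrative value
exact = (1 - p) / (2 - p)  # closed form from part (b)
approx = p_wilma(p)
```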

The alternating coin tossing game is studied in more detail in the section on The Geometric Distribution in the chapter on Bernoulli trials.

#### Cards

Recall that a poker hand consists of 5 cards chosen at random and without replacement from a standard deck of 52 cards. Let $$X$$ denote the number of spades in the hand and $$Y$$ the number of hearts in the hand. Give the probability density function of each of the following random variables, and identify the distribution by name:

1. $$X$$
2. $$Y$$
3. $$(X, Y)$$
1. $$g(x) = \frac{\binom{13}{x} \binom{39}{5-x}}{\binom{52}{5}}$$ for $$x \in \{0, 1, 2, 3, 4, 5\}$$. This is the hypergeometric distribution with population size $$m = 52$$, type parameter $$r = 13$$, and sample size $$n = 5$$
2. $$h(y) = \frac{\binom{13}{y} \binom{39}{5-y}}{\binom{52}{5}}$$ for $$y \in \{0, 1, 2, 3, 4, 5\}$$. This is the same hypergeometric distribution as in part (a).
3. $$f(x, y) = \frac{\binom{13}{x} \binom{13}{y} \binom{26}{5-x-y}}{\binom{52}{5}}$$ for $$(x,y) \in \{0, 1, 2, 3, 4, 5\}^2$$ with $$x + y \le 5$$. This is a bivariate hypergeometric distribution.

Recall that a bridge hand consists of 13 cards chosen at random and without replacement from a standard deck of 52 cards. An honor card is a card of denomination ace, king, queen, jack or 10. Let $$N$$ denote the number of honor cards in the hand.

1. Find the probability density function of $$N$$ and identify the distribution by name.
2. Find the probability that the hand has no honor cards. A hand of this kind is known as a Yarborough, in honor of the Second Earl of Yarborough.
1. $$f(n) = \frac{\binom{20}{n} \binom{32}{13-n}}{\binom{52}{13}}$$ for $$n \in \{0, 1, \ldots, 13\}$$. This is the hypergeometric distribution with population size $$m = 52$$, type parameter $$r = 20$$ and sample size $$n = 13$$.
2. $$\P(N = 0) = \binom{32}{13} \big/ \binom{52}{13} \approx 0.000547$$, about 1 in 1828
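The Yarborough probability is just the $$n = 0$$ term of the hypergeometric PDF: all 13 cards must come from the 32 non-honor cards. A one-line check:

```python
from math import comb

# Probability that a random 13-card hand has no honor cards (a Yarborough):
# all 13 cards drawn from the 32 cards of denomination 2 through 9.
p_yarborough = comb(32, 13) / comb(52, 13)   # about 1 in 1828
```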

In the most common high card point system in bridge, an ace is worth 4 points, a king is worth 3 points, a queen is worth 2 points, and a jack is worth 1 point. Find the probability density function of $$V$$, the point value of a random bridge hand.

#### Reliability

Suppose that in a batch of 500 components, 20 are defective and the rest are good. A sample of 10 components is selected at random and tested. Let $$X$$ denote the number of defectives in the sample.

1. Find the probability density function of $$X$$ and identify the distribution by name and parameter values.
2. Find the probability that the sample contains at least one defective component.
1. $$f(x) = \frac{\binom{20}{x} \binom{480}{10-x}}{\binom{500}{10}}$$ for $$x \in \{0, 1, \ldots, 10\}$$. This is the hypergeometric distribution with population size $$m = 500$$, type parameter $$r = 20$$, and sample size $$n = 10$$.
2. $$\P(X \ge 1) = 1 - \frac{\binom{480}{10}}{\binom{500}{10}} \approx 0.3377$$

A plant has 3 assembly lines that produce a certain type of component. Line 1 produces 50% of the components and has a defective rate of 4%; line 2 produces 30% of the components and has a defective rate of 5%; line 3 produces 20% of the components and has a defective rate of 1%. A component is chosen at random from the plant and tested.

1. Find the probability that the component is defective.
2. Given that the component is defective, find the conditional probability density function of the line that produced the component.

Let $$D$$ denote the event that the component is defective, and $$f(\cdot \mid D)$$ the PDF of the line number given $$D$$.

1. $$\P(D) = 0.037$$
2. $$f(1 \mid D) = 0.541$$, $$f(2 \mid D) = 0.405$$, $$f(3 \mid D) = 0.054$$
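The answers above follow from the law of total probability and Bayes' theorem, which translate directly into code. A short sketch (variable names ours):

```python
# Production share and defective rate for each assembly line.
share = {1: 0.50, 2: 0.30, 3: 0.20}
rate = {1: 0.04, 2: 0.05, 3: 0.01}

# Law of total probability: P(D) = sum over lines of share * rate.
p_defective = sum(share[i] * rate[i] for i in share)

# Bayes' theorem: P(line i | D) = share[i] * rate[i] / P(D).
posterior = {i: share[i] * rate[i] / p_defective for i in share}
```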

Recall that in the standard model of structural reliability, a system consists of $$n$$ components, each of which, independently of the others, is either working or failed. Let $$X_i$$ denote the state of component $$i$$, where 1 means working and 0 means failed. Thus, the state vector is $$\bs{X} = (X_1, X_2, \ldots, X_n)$$. The system as a whole is also either working or failed, depending only on the states of the components. Thus, the state of the system is an indicator random variable $$V = V(\bs{X})$$ that depends on the states of the components according to a structure function. In a series system, the system works if and only if every component works. In a parallel system, the system works if and only if at least one component works. In a $$k$$ out of $$n$$ system, the system works if and only if at least $$k$$ of the $$n$$ components work.

The reliability of a device is the probability that it is working. Let $$p_i = \P(X_i = 1)$$ denote the reliability of component $$i$$, so that $$\bs{p} = (p_1, p_2, \ldots, p_n)$$ is the vector of component reliabilities. Because of the independence assumption, the system reliability depends only on the component reliabilities, according to a reliability function $$r(\bs{p}) = \P(V = 1)$$. Note that when all component reliabilities have the same value $$p$$, the states of the components form a sequence of $$n$$ Bernoulli trials. In this case, the system reliability is, of course, a function of the common component reliability $$p$$.

Suppose that the component reliabilities all have the same value $$p$$. Let $$\bs{X}$$ denote the state vector and $$Y$$ denote the number of working components.

1. Give the probability density function of $$\bs{X}$$.
2. Give the probability density function of $$Y$$ and identify the distribution by name and parameter.
3. Find the reliability of the $$k$$ out of $$n$$ system.
1. $$f(x_1, x_2, \ldots, x_n) = p^k (1 - p)^{n-k}$$ for $$(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$$ where $$k = x_1 + x_2 + \cdots + x_n$$
2. $$g(k) = \binom{n}{k} p^k (1 - p)^{n-k}$$ for $$k \in \{0, 1, \ldots, n\}$$. This is the binomial distribution with trial parameter $$n$$ and success parameter $$p$$.
3. $$r(p) = \sum_{i=k}^n \binom{n}{i} p^i (1 - p)^{n-i}$$
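The reliability function in part (c) is a tail sum of the binomial PDF from part (b), so it takes only a few lines to compute. A sketch (function name ours):

```python
from math import comb

def reliability_k_of_n(k, n, p):
    """r(p) = sum_{i=k}^{n} C(n, i) p^i (1 - p)^(n - i).

    Tail of the binomial distribution: at least k of n components work.
    """
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

Note that $$k = 1$$ gives the parallel system and $$k = n$$ gives the series system, so both are special cases of the same function.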

Suppose that we have 4 independent components, with common reliability $$p = 0.8$$. Let $$Y$$ denote the number of working components.

1. Find the probability density function of $$Y$$ explicitly.
2. Find the reliability of the parallel system.
3. Find the reliability of the 2 out of 4 system.
4. Find the reliability of the 3 out of 4 system.
5. Find the reliability of the series system.
1. $$g(0) = 0.0016$$, $$g(1) = 0.0256$$, $$g(2) = 0.1536$$, $$g(3) = g(4) = 0.4096$$
2. $$r_{4,1} = 0.9984$$
3. $$r_{4,2} = 0.9728$$
4. $$r_{4,3} = 0.8192$$
5. $$r_{4,4} = 0.4096$$

Suppose that we have 4 independent components, with reliabilities $$p_1 = 0.6$$, $$p_2 = 0.7$$, $$p_3 = 0.8$$, and $$p_4 = 0.9$$. Let $$Y$$ denote the number of working components.

1. Find the probability density function of $$Y$$.
2. Find the reliability of the parallel system.
3. Find the reliability of the 2 out of 4 system.
4. Find the reliability of the 3 out of 4 system.
5. Find the reliability of the series system.
1. $$g(0) = 0.0024$$, $$g(1) = 0.0404$$, $$g(2) = 0.2144$$, $$g(3) = 0.4404$$, $$g(4) = 0.3024$$
2. $$r_{4,1} = 0.9976$$
3. $$r_{4,2} = 0.9572$$
4. $$r_{4,3} = 0.7428$$
5. $$r_{4,4} = 0.3024$$
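With unequal component reliabilities, $$Y$$ no longer has a binomial distribution, but for small $$n$$ its PDF can be found by enumerating all $$2^n$$ state vectors and accumulating the probability of each. A sketch of this brute-force computation (variable names ours):

```python
from itertools import product

p = [0.6, 0.7, 0.8, 0.9]   # component reliabilities from the exercise

# PDF of Y, the number of working components, by enumerating all 2^4 states.
g = [0.0] * 5
for states in product((0, 1), repeat=4):
    prob = 1.0
    for s, p_i in zip(states, p):
        prob *= p_i if s else 1 - p_i   # independence of the components
    g[sum(states)] += prob
```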

#### The Poisson Distribution

Let $$f(n) = e^{-a} \frac{a^n}{n!}$$ for $$n \in \N$$, where $$a \gt 0$$ is a parameter.

1. $$f$$ is a probability density function.
2. $$f(n - 1) \lt f(n)$$ if and only if $$n \lt a$$, so the distribution is unimodal.
3. If $$a$$ is not a positive integer, there is a single mode at $$\lfloor a \rfloor$$.
4. If $$a$$ is a positive integer, there are two modes at $$a - 1$$ and $$a$$.

The distribution defined by the probability density function in the previous exercise is the Poisson distribution with parameter $$a$$, named after Simeon Poisson. The Poisson distribution is studied in detail in the Chapter on Poisson Processes, and is used to model the number of random points in a region of time or space, under certain ideal conditions. The parameter $$a$$ is proportional to the size of the region of time or space.

Suppose that customers arrive at a service station according to the Poisson model, at an average rate of 4 per hour. Thus, the number of customers $$N$$ who arrive in a 2-hour period has the Poisson distribution with parameter 8.

1. Find the modes.
2. Find $$\P(N \ge 6)$$.
1. modes: 7, 8
2. $$\P(N \ge 6) \approx 0.8088$$
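Both answers can be reproduced directly from the Poisson PDF $$f(n) = e^{-a} a^n / n!$$; a short sketch (function name ours):

```python
from math import exp, factorial

def poisson_pdf(n, a):
    """Poisson PDF: P(N = n) = e^(-a) a^n / n!."""
    return exp(-a) * a**n / factorial(n)

a = 8
# Complement of the lower tail: P(N >= 6) = 1 - P(N <= 5).
p_at_least_6 = 1 - sum(poisson_pdf(n, a) for n in range(6))

# Since a = 8 is a positive integer, f(7) = f(8): the modes are tied.
mode = max(range(25), key=lambda n: poisson_pdf(n, a))
```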

In the Poisson process, set $$r = 4$$ and $$t = 2$$. Run the simulation 1000 times and note the apparent convergence of the empirical density function to the probability density function.

Suppose that the number of flaws $$N$$ in a piece of fabric of a certain size has the Poisson distribution with parameter 2.5.

1. Find the mode.
2. Find $$\P(N \gt 4)$$.
1. mode: 2
2. $$\P(N \gt 4) = 0.1088$$

Suppose that the number of raisins $$N$$ in a piece of cake has the Poisson distribution with parameter 10.

1. Find the modes.
2. Find $$\P(8 \le N \le 12)$$.
1. modes: 9, 10
2. $$\P(8 \le N \le 12) = 0.5713$$

#### A Zeta Distribution

Let $$g(n) = \frac{1}{n^2}$$ for $$n \in \N_+$$.

1. Find the probability density function $$f$$ that is proportional to $$g$$.
2. Find the mode of the distribution.
3. Find $$\P(N \le 5)$$ where $$N$$ has probability density function $$f$$.
1. $$f(n) = \frac{6}{\pi^2 n^2}$$ for $$n \in \N_+$$. Recall that $$\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$$
2. Mode $$n = 1$$
3. $$\P(N \le 5) = \frac{5269}{600 \pi^2}$$

The distribution defined in the previous exercise is a member of the zeta family of distributions. Zeta distributions are used to model sizes or ranks of certain types of objects, and are studied in more detail in the chapter on Special Distributions.

#### Benford's Law

Let $$f(d) = \log(d + 1) - \log(d) = \log(1 + \frac{1}{d})$$ for $$d \in \{1, 2, \ldots, 9\}$$. (The logarithm function is the base 10 common logarithm, not the base $$e$$ natural logarithm.)

1. Show that $$f$$ is a probability density function.
2. Compute the values of $$f$$ explicitly, and sketch the graph.
3. Find $$\P(X \le 3)$$ where $$X$$ has probability density function $$f$$.
1. The sum telescopes: $$\sum_{d=1}^9 f(d) = \sum_{d=1}^9 [\log(d+1) - \log(d)] = \log(10) - \log(1) = 1$$, and $$f(d) \gt 0$$ for each $$d$$.
2. $$f(1) = 0.3010$$, $$f(2) = 0.1761$$, $$f(3) = 0.1249$$, $$f(4) = 0.0969$$, $$f(5) = 0.0792$$, $$f(6) = 0.0669$$, $$f(7) = 0.0580$$, $$f(8) = 0.0512$$, $$f(9) = 0.0458$$
3. $$\P(X \le 3) = \log(4) \approx 0.6021$$
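The Benford values and both checks above can be generated in a few lines:

```python
from math import log10

# Benford PDF: f(d) = log10(1 + 1/d) for leading digit d in {1, ..., 9}.
f = {d: log10(1 + 1/d) for d in range(1, 10)}

total = sum(f.values())       # 1, since the sum telescopes to log10(10)
p_le_3 = f[1] + f[2] + f[3]   # log10(4), since log10(2 * 3/2 * 4/3) = log10(4)
```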

The distribution defined in the previous exercise is known as Benford's law, and is named for the American physicist and engineer Frank Benford. This distribution governs the leading digit in many real sets of data. Benford's law is studied in more detail in the chapter on Special Distributions.

#### Data Analysis Exercises

In the M&M data, let $$R$$ denote the number of red candies and $$N$$ the total number of candies. Compute and graph the empirical probability density function of each of the following:

1. $$R$$
2. $$N$$
3. $$R$$ given $$N \gt 57$$

We denote the PDF of $$R$$ by $$f$$ and the PDF of $$N$$ by $$g$$.

1. $$f(3) = \frac{1}{30}$$, $$f(4) = \frac{3}{30}$$, $$f(5) = \frac{3}{30}$$, $$f(6) = \frac{2}{30}$$, $$f(8) = \frac{4}{30}$$, $$f(9) = \frac{5}{30}$$, $$f(10) = \frac{2}{30}$$, $$f(11) = \frac{1}{30}$$, $$f(12) = \frac{3}{30}$$, $$f(14) = \frac{3}{30}$$, $$f(15) = \frac{3}{30}$$, $$f(20) = \frac{1}{30}$$
2. $$g(50) = \frac{1}{30}$$, $$g(53) = \frac{1}{30}$$, $$g(54) = \frac{1}{30}$$, $$g(55) = \frac{4}{30}$$, $$g(56) = \frac{4}{30}$$, $$g(57) = \frac{3}{30}$$, $$g(58) = \frac{9}{30}$$, $$g(59) = \frac{3}{30}$$, $$g(60) = \frac{2}{30}$$, $$g(61) = \frac{2}{30}$$
3. $$f(3 \mid N \gt 57) = \frac{1}{16}$$, $$f(4 \mid N \gt 57) = \frac{1}{16}$$, $$f(6 \mid N \gt 57) = \frac{1}{16}$$, $$f(8 \mid N \gt 57) = \frac{3}{16}$$, $$f(9 \mid N \gt 57) = \frac{3}{16}$$, $$f(11 \mid N \gt 57) = \frac{1}{16}$$, $$f(12 \mid N \gt 57) = \frac{1}{16}$$, $$f(14 \mid N \gt 57) = \frac{3}{16}$$, $$f(15 \mid N \gt 57) = \frac{2}{16}$$

In the Cicada data, let $$G$$ denote gender, $$S$$ species type, and $$W$$ body weight (in grams). Compute the empirical probability density function of each of the following:

1. $$G$$
2. $$S$$
3. $$(G, S)$$
4. $$G$$ given $$W \gt 0.20$$ grams.
We denote the PDF of $$G$$ by $$g$$, the PDF of $$S$$ by $$h$$ and the PDF of $$(G, S)$$ by $$f$$.
1. $$g(0) = \frac{59}{104}$$, $$g(1) = \frac{45}{104}$$
2. $$h(0) = \frac{44}{104}$$, $$h(1) = \frac{6}{104}$$, $$h(2) = \frac{54}{104}$$
3. $$f(0, 0) = \frac{16}{104}$$, $$f(0, 1) = \frac{3}{104}$$, $$f(0, 2) = \frac{40}{104}$$, $$f(1, 0) = \frac{28}{104}$$, $$f(1, 1) = \frac{3}{104}$$, $$f(1, 2) = \frac{14}{104}$$
4. $$g(0 \mid W \gt 0.2) = \frac{31}{73}$$, $$g(1 \mid W \gt 0.2) = \frac{42}{73}$$
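Empirical PDFs like the ones in these data exercises are just relative frequencies: count the occurrences of each value and divide by the sample size. A sketch with a hypothetical sample (not the actual M&M or cicada data):

```python
from collections import Counter

# Hypothetical sample of counts, for illustration only.
sample = [3, 4, 4, 5, 8, 9, 9, 10, 12, 14]

n = len(sample)
# Empirical PDF: relative frequency of each observed value.
empirical_pdf = {x: c / n for x, c in sorted(Counter(sample).items())}
```

A conditional empirical PDF, as in part (c) of the M&M exercise, is computed the same way after restricting the sample to the conditioning event.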