\(\newcommand{\P}{\mathbb{P}}\)
\(\newcommand{\E}{\mathbb{E}}\)
\(\newcommand{\R}{\mathbb{R}}\)
\(\newcommand{\N}{\mathbb{N}}\)
\(\newcommand{\bs}{\boldsymbol}\)
\(\newcommand{\var}{\text{var}}\)
\(\newcommand{\cov}{\text{cov}}\)
\(\newcommand{\cor}{\text{cor}}\)

A multinomial trials process is a sequence of independent, identically distributed random variables \(\bs{X} =(X_1, X_2, \ldots)\) each taking \(k\) possible values. Thus, the multinomial trials process is a simple generalization of the Bernoulli trials process (which corresponds to \(k = 2\)). For simplicity, we will denote the set of outcomes by \(\{1, 2, \ldots, k\}\), and we will denote the common probability density function of the trial variables by \[ p_i = \P(X_j = i), \quad i \in \{1, 2, \ldots, k\} \] Of course \(p_i \gt 0\) for each \(i\) and \(\sum_{i=1}^k p_i = 1\). In statistical terms, the sequence \(\bs{X}\) is formed by sampling from the distribution.

As with our discussion of the binomial distribution, we are interested in the random variables that count the number of times each outcome occurred. Thus, let \[ Y_i = \#\left\{j \in \{1, 2, \ldots, n\}: X_j = i\right\} = \sum_{j=1}^n \bs{1}(X_j = i), \quad i \in \{1, 2, \ldots, k\} \] Of course, these random variables also depend on the parameter \(n\) (the number of trials), but this parameter is fixed in our discussion so we suppress it to keep the notation simple. Note that \(\sum_{i=1}^k Y_i = n\) so if we know the values of \(k - 1\) of the counting variables, we can find the value of the remaining variable.

Basic arguments using independence and combinatorics can be used to derive the joint, marginal, and conditional densities of the counting variables. In particular, recall the definition of the multinomial coefficient: for nonnegative integers \((j_1, j_2, \ldots, j_k)\) with \(\sum_{i=1}^k j_i = n\), \[ \binom{n}{j_1, j_2, \ldots, j_k} = \frac{n!}{j_1! j_2! \cdots j_k!} \]

For nonnegative integers \((j_1, j_2, \ldots, j_k)\) with \(\sum_{i=1}^k j_i = n\), \[ \P(Y_1 = j_1, Y_2 = j_2, \ldots, Y_k = j_k) = \binom{n}{j_1, j_2, \ldots, j_k} p_1^{j_1} p_2^{j_2} \cdots p_k^{j_k} \]

By independence, any sequence of trials in which outcome \(i\) occurs exactly \(j_i\) times for \(i \in \{1, 2, \ldots, k\}\) has probability \(p_1^{j_1} p_2^{j_2} \cdots p_k^{j_k}\). The number of such sequences is the multinomial coefficient \(\binom{n}{j_1, j_2, \ldots, j_k}\). Thus, the result follows from the additive property of probability.
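As a quick numerical check (an illustrative sketch, not part of the original exposition), the joint density can be evaluated directly from the formula; the choice of \(n = 5\) and \(\bs{p} = (0.5, 0.3, 0.2)\) below is arbitrary. Since the probabilities must sum to 1 over the support, summing the pmf over all count vectors provides a sanity test:

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """Joint pmf: the multinomial coefficient times the product of
    the outcome probabilities raised to their counts."""
    coef = factorial(sum(counts))
    for j in counts:
        coef //= factorial(j)
    return coef * prod(p ** j for p, j in zip(probs, counts))

# The pmf should sum to 1 over all count vectors with n = 5, k = 3
n, probs = 5, (0.5, 0.3, 0.2)
total = sum(multinomial_pmf((a, b, n - a - b), probs)
            for a in range(n + 1) for b in range(n - a + 1))
print(round(total, 12))  # 1.0
```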

The distribution of \(\bs{Y} = (Y_1, Y_2, \ldots, Y_k)\) is called the multinomial distribution with parameters \(n\) and \(\bs{p} = (p_1, p_2, \ldots, p_k)\). We also say that \( (Y_1, Y_2, \ldots, Y_{k-1}) \) has this distribution (recall that the values of \(k - 1\) of the counting variables determine the value of the remaining variable). Usually, it is clear from context which meaning of the term *multinomial distribution* is intended. Again, the ordinary binomial distribution corresponds to \(k = 2\).

For each \(i \in \{1, 2, \ldots, k\}\), \(Y_i\) has the binomial distribution with parameters \(n\) and \(p_i\): \[ \P(Y_i = j) = \binom{n}{j} p_i^j (1 - p_i)^{n-j}, \quad j \in \{0, 1, \ldots, n\} \]

There is a simple probabilistic proof. If we think of each trial as resulting in outcome \(i\) or not, then clearly we have a sequence of \(n\) Bernoulli trials with success parameter \(p_i\). Random variable \(Y_i\) is the number of successes in the \(n\) trials. The result could also be obtained by summing the joint probability density function in Exercise 1 over all of the other variables, but this would be much harder.
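For illustration, the harder analytic route can at least be verified numerically in a small case (the parameters \(n = 5\), \(\bs{p} = (0.5, 0.3, 0.2)\) are arbitrary choices): summing the joint pmf over the other counting variables should reproduce the binomial marginal.

```python
from math import comb, factorial, prod

n, probs = 5, (0.5, 0.3, 0.2)

def joint(counts):
    coef = factorial(n)
    for j in counts:
        coef //= factorial(j)
    return coef * prod(p ** j for p, j in zip(probs, counts))

# Marginal of Y_1: sum the joint pmf over the other counts, then
# compare with the binomial pmf with parameters n and p_1
for j in range(n + 1):
    marginal = sum(joint((j, b, n - j - b)) for b in range(n - j + 1))
    binomial = comb(n, j) * probs[0] ** j * (1 - probs[0]) ** (n - j)
    assert abs(marginal - binomial) < 1e-9
print("marginal of Y_1 matches Binomial(5, 0.5)")
```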

The multinomial distribution is preserved when the counting variables are combined. Specifically, suppose that \((A_1, A_2, \ldots, A_m)\) is a partition of the index set \(\{1, 2, \ldots, k\}\) into nonempty subsets. For \(j \in \{1, 2, \ldots, m\}\) let \[ Z_j = \sum_{i \in A_j} Y_i, \quad q_j = \sum_{i \in A_j} p_i \]

\(\bs{Z} = (Z_1, Z_2, \ldots, Z_m)\) has the multinomial distribution with parameters \(n\) and \(\bs{q} = (q_1, q_2, \ldots, q_m)\).

Again, there is a simple probabilistic proof. Each trial, independently of the others, results in an outcome in \(A_j\) with probability \(q_j\). For each \(j\), \(Z_j\) counts the number of trials which result in an outcome in \(A_j\). This result could also be derived from the joint probability density function in Exercise 1, but again, this would be a much harder proof.
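The grouping result can also be checked by brute-force enumeration in a small case; the sketch below (fair six-sided die, \(n = 4\), grouped into odd and even faces, all choices mine) confirms that the combined count is binomial with the combined probability.

```python
from math import comb, factorial, prod
from itertools import product as cartesian

# Six-sided fair die, n = 4 throws; group the faces into odd and even
n, k = 4, 6
p = [1 / 6] * 6
q_odd = p[0] + p[2] + p[4]   # combined probability, 1/2

def joint(counts):
    coef = factorial(n)
    for j in counts:
        coef //= factorial(j)
    return coef * prod(pi ** j for pi, j in zip(p, counts))

# Distribution of Z = Y_1 + Y_3 + Y_5 by enumerating the support
dist = {z: 0.0 for z in range(n + 1)}
for counts in cartesian(range(n + 1), repeat=k):
    if sum(counts) == n:
        dist[counts[0] + counts[2] + counts[4]] += joint(counts)

for z in range(n + 1):
    assert abs(dist[z] - comb(n, z) * q_odd ** z * (1 - q_odd) ** (n - z)) < 1e-9
print("odd-face count Z is Binomial(4, 1/2)")
```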

The multinomial distribution is also preserved when some of the counting variables are observed. Specifically, suppose that \((A, B)\) is a partition of the index set \(\{1, 2, \ldots, k\}\) into nonempty subsets. Suppose that \((j_i : i \in B)\) is a sequence of nonnegative integers, indexed by \(B\) such that \(j = \sum_{i \in B} j_i \le n\). Let \(p = \sum_{i \in A} p_i\).

The conditional distribution of \((Y_i: i \in A)\) given \((Y_i = j_i: i \in B)\) is multinomial with parameters \(n - j\) and \((p_i / p: i \in A)\).

Again, there is a simple probabilistic argument and a harder analytic argument. If we know \(Y_i = j_i\) for \(i \in B\), then there are \(n - j\) trials remaining, each of which, independently of the others, must result in an outcome in \(A\). The conditional probability of a trial resulting in \(i \in A\) is \(p_i / p\).
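Again the analytic route can be spot-checked numerically. In the hypothetical example below (\(k = 4\), \(n = 4\), \(A = \{1, 2\}\), \(B = \{3, 4\}\), probabilities chosen arbitrarily), conditioning on \(Y_3 = 1, Y_4 = 1\) should leave \((Y_1, Y_2)\) multinomial with \(n - j = 2\) trials and probabilities \(p_i / p\).

```python
from math import factorial, prod

# Hypothetical example: k = 4 outcomes, n = 4 trials, A = {1, 2}, B = {3, 4}
n = 4
p = (0.4, 0.2, 0.3, 0.1)
pA = p[0] + p[1]   # p = sum of p_i over A

def joint(counts, probs):
    coef = factorial(sum(counts))
    for j in counts:
        coef //= factorial(j)
    return coef * prod(q ** j for q, j in zip(probs, counts))

# Condition on Y_3 = 1, Y_4 = 1 (so j = 2, leaving n - j = 2 trials in A)
p_event = sum(joint((a, 2 - a, 1, 1), p) for a in range(3))
for a in range(3):
    conditional = joint((a, 2 - a, 1, 1), p) / p_event
    target = joint((a, 2 - a), (p[0] / pA, p[1] / pA))
    assert abs(conditional - target) < 1e-9
print("conditional law of (Y_1, Y_2) is multinomial(2, p_i / p)")
```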

Combinations of the basic results involving grouping and conditioning can be used to compute any marginal or conditional distributions.

We will compute the mean and variance of each counting variable, and the covariance and correlation of each pair of variables.

For \(i \in \{1, 2, \ldots, k\}\), the mean and variance of \(Y_i\) are

- \(\E(Y_i) = n p_i\)
- \(\var(Y_i) = n p_i (1 - p_i)\)

Recall that \(Y_i\) has the binomial distribution with parameters \(n\) and \(p_i\).
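The mean and variance can also be confirmed by direct enumeration in a small case (the parameters \(n = 4\), \(\bs{p} = (0.5, 0.3, 0.2)\) are arbitrary): compute the marginal distribution of \(Y_1\) from the joint pmf and take moments.

```python
from math import factorial, prod

n, p = 4, (0.5, 0.3, 0.2)

def joint(counts):
    coef = factorial(n)
    for j in counts:
        coef //= factorial(j)
    return coef * prod(q ** j for q, j in zip(p, counts))

# Marginal distribution of Y_1, then its mean and variance by enumeration
marg = [sum(joint((a, b, n - a - b)) for b in range(n - a + 1))
        for a in range(n + 1)]
mean = sum(a * q for a, q in enumerate(marg))
var = sum((a - mean) ** 2 * q for a, q in enumerate(marg))
print(mean, var)  # compare with n p_1 = 2.0 and n p_1 (1 - p_1) = 1.0
```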

For distinct \(i, \; j \in \{1, 2, \ldots, k\}\),

- \(\cov(Y_i, Y_j) = - n p_i p_j\)
- \(\cor(Y_i, Y_j) = -\sqrt{p_i p_j \big/ \left[(1 - p_i)(1 - p_j)\right]}\)

From the bilinearity of the covariance operator, we have \[ \cov(Y_i, Y_j) = \sum_{s=1}^n \sum_{t=1}^n \cov[\bs{1}(X_s = i), \bs{1}(X_t = j)] \] If \(s = t\), the covariance of the indicator variables is \(\E[\bs{1}(X_s = i) \bs{1}(X_s = j)] - p_i p_j = -p_i p_j\), since a single trial cannot result in both outcome \(i\) and outcome \(j\). If \(s \ne t\), the covariance is 0 by independence. Part (b) follows from part (a) using the definition of correlation and the variances of \(Y_i\) and \(Y_j\) given above.
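As a numerical check of the covariance formula (an illustrative sketch with arbitrary parameters \(n = 4\), \(\bs{p} = (0.5, 0.3, 0.2)\)), compute \(\E(Y_1 Y_2) - \E(Y_1) \E(Y_2)\) by enumerating the joint pmf:

```python
from math import factorial, prod

n, p = 4, (0.5, 0.3, 0.2)

def joint(counts):
    coef = factorial(n)
    for j in counts:
        coef //= factorial(j)
    return coef * prod(q ** j for q, j in zip(p, counts))

# E(Y_1 Y_2) - E(Y_1) E(Y_2) by direct enumeration of the support
e12 = sum(a * b * joint((a, b, n - a - b))
          for a in range(n + 1) for b in range(n - a + 1))
cov = e12 - (n * p[0]) * (n * p[1])
print(cov)  # compare with -n p_1 p_2 = -0.6
```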

From the last result, note that the number of times outcome \(i\) occurs and the number of times outcome \(j\) occurs are negatively correlated; the covariance grows in magnitude with \(n\), but the correlation does not depend on \(n\).

If \(k = 2\), then the number of times outcome 1 occurs and the number of times outcome 2 occurs are perfectly negatively correlated: the correlation is \(-1\).

This follows immediately from the result above on covariance since we must have \(i = 1\) and \(j = 2\), and \(p_2 = 1 - p_1\). Of course we can also argue this directly since \(Y_2 = n - Y_1\).

In the dice experiment, select the number of aces. For each die distribution, start with a single die and add dice one at a time, noting the shape of the probability density function and the size and location of the mean/standard deviation bar. When you get to 10 dice, run the simulation 1000 times and compare the relative frequency function to the probability density function, and the empirical moments to the distribution moments.

Suppose that we throw 10 standard, fair dice. Find the probability of each of the following events:

- Scores 1 and 6 occur once each and the other scores occur twice each.
- Scores 2 and 4 occur 3 times each.
- There are 4 even scores and 6 odd scores.
- Scores 1 and 3 occur twice each given that score 2 occurs once and score 5 three times.

- 0.00375
- 0.0178
- 0.205
- 0.0879
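The four probabilities above can be reproduced directly from the joint pmf and the grouping and conditioning results (a sketch in plain Python; the groupings in parts (b)–(d) follow the arguments in the text):

```python
from math import comb, factorial

def multi_coef(*counts):
    """Multinomial coefficient n! / (j_1! j_2! ... j_k!)."""
    c = factorial(sum(counts))
    for j in counts:
        c //= factorial(j)
    return c

# (a) scores 1 and 6 once each, the other four scores twice each
pa = multi_coef(1, 2, 2, 2, 2, 1) * (1 / 6) ** 10
# (b) scores 2 and 4 three times each; group the other four faces (prob 4/6)
pb = multi_coef(3, 3, 4) * (1 / 6) ** 6 * (4 / 6) ** 4
# (c) 4 even and 6 odd scores: group into even/odd, each with prob 1/2
pc = comb(10, 4) * (1 / 2) ** 10
# (d) given Y_2 = 1 and Y_5 = 3, the 6 remaining trials land in {1, 3, 4, 6};
#     (Y_1, Y_3, other) is multinomial(6; 1/4, 1/4, 1/2)
pd = multi_coef(2, 2, 2) * (1 / 4) ** 4 * (1 / 2) ** 2

print(round(pa, 5), round(pb, 4), round(pc, 3), round(pd, 4))
# 0.00375 0.0178 0.205 0.0879
```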

Suppose that we roll 4 ace-six flat dice (faces 1 and 6 have probability \(\frac{1}{4}\) each; faces 2, 3, 4, and 5 have probability \(\frac{1}{8}\) each). Find the joint probability density function of the number of times each score occurs.

\(f(u, v, w, x, y, z) = \binom{4}{u, v, w, x, y, z} \left(\frac{1}{4}\right)^{u+z} \left(\frac{1}{8}\right)^{v + w + x + y}\) for nonnegative integers \(u, \, v, \, w, \, x, \, y, \, z\) that sum to 4
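As a sanity check on this answer (an illustrative sketch, not part of the original text), the density can be coded directly and summed over its support, which should give 1:

```python
from math import factorial
from itertools import product

def f(u, v, w, x, y, z):
    """Joint pmf for 4 ace-six flat dice: faces 1 and 6 have
    probability 1/4, faces 2-5 have probability 1/8."""
    coef = factorial(4)
    for j in (u, v, w, x, y, z):
        coef //= factorial(j)
    return coef * (1 / 4) ** (u + z) * (1 / 8) ** (v + w + x + y)

# Sum over all nonnegative count vectors that sum to 4
total = sum(f(*c) for c in product(range(5), repeat=6) if sum(c) == 4)
print(round(total, 12))  # 1.0
```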

In the dice experiment, select 4 ace-six flats. Run the experiment 500 times and compute the joint relative frequency function of the number times each score occurs. Compare the relative frequency function to the true probability density function.

Suppose that we roll 20 ace-six flat dice. Find the covariance and correlation of the number of 1's and the number of 2's.

covariance: \(-0.625\); correlation: \(-0.218\)
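Evaluating the covariance and correlation formulas from the text with \(n = 20\), \(p_1 = 1/4\) (face 1) and \(p_2 = 1/8\) (face 2):

```python
from math import sqrt

n, p1, p2 = 20, 1 / 4, 1 / 8   # faces 1 and 2 of an ace-six flat die
cov = -n * p1 * p2                                  # -n p_1 p_2
cor = -sqrt(p1 * p2 / ((1 - p1) * (1 - p2)))        # -sqrt(1/21)
print(cov, round(cor, 4))  # -0.625 -0.2182
```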

In the dice experiment, select 20 ace-six flat dice. Run the experiment 500 times, updating after each run. Compute the empirical covariance and correlation of the number of 1's and the number of 2's. Compare the results with the theoretical results computed previously.