\(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\var}{\text{var}}\) \(\newcommand{\sd}{\text{sd}}\) \(\newcommand{\skew}{\text{skew}}\) \(\newcommand{\kurt}{\text{kurt}}\)

The Extreme Value Distribution

Extreme value distributions arise as limiting distributions for maximums or minimums (extreme values) of a sample of independent, identically distributed random variables, as the sample size increases. Thus, these distributions are important in probability and mathematical statistics.

The Standard Distribution for Maximums

Distribution Functions

The function \( G \) defined by \[ G(v) = \exp\left(-e^{-v}\right), \quad v \in \R \] is a distribution function for a continuous distribution on \(\R\). The probability distribution defined by \( G \) is the (type 1) standard extreme value distribution (for maximums).

Proof:

Note that \( G \) is continuous, increasing, and satisfies \( G(v) \to 0 \) as \( v \to -\infty \) and \( G(v) \to 1 \) as \( v \to \infty \).

The distribution is also known as the (type 1) standard Gumbel distribution (for maximums) in honor of Emil Gumbel. As we will show below, it arises as the limit of the maximum of \(n\) independent random variables, each with the standard exponential distribution (when this maximum is appropriately centered). This fact is the main reason that the distribution is special, and is the reason for the name. In the sequel, we will just refer to the distribution as the standard Gumbel distribution.

The probability density function \( g \) of the standard Gumbel distribution is given by \[ g(v) = e^{-v} \exp\left(-e^{-v}\right) = \exp\left[-\left(e^{-v} + v\right)\right], \quad v \in \R \]

  1. \(g\) increases and then decreases with mode \( v = 0 \)
  2. \(g\) is concave upward, then downward, then upward again, with inflection points at \( v = \ln\left[(3 \pm \sqrt{5}) \big/ 2\right] \approx \pm 0.9624\).
Proof:

These results follow from standard calculus. The PDF is \( g = G^\prime \).

  1. The first derivative of \( g \) satisfies \(g^\prime(v) = g(v)\left(e^{-v} - 1\right)\) for \( v \in \R \).
  2. The second derivative of \( g \) satisfies \( g^{\prime \prime}(v) = g(v) \left(e^{-2 v} - 3 e^{-v} + 1\right)\) for \( v \in \R \).
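A quick numeric confirmation of the mode and inflection points (a minimal Python sketch, not part of the proof; the function names are my own):

    from math import exp, log, sqrt

    def g(v):
        # standard Gumbel PDF: g(v) = exp(-(exp(-v) + v))
        return exp(-(exp(-v) + v))

    def g1(v):
        # first derivative: g'(v) = g(v) * (exp(-v) - 1)
        return g(v) * (exp(-v) - 1)

    def g2(v):
        # second derivative: g''(v) = g(v) * (exp(-2v) - 3 exp(-v) + 1)
        return g(v) * (exp(-2 * v) - 3 * exp(-v) + 1)

    print(g1(0.0))  # 0.0, so the mode is at v = 0
    for v in (log((3 - sqrt(5)) / 2), log((3 + sqrt(5)) / 2)):
        print(round(v, 4), g2(v))  # -0.9624 and 0.9624, second derivative essentially 0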

In the special distribution simulator, select the extreme value distribution. Keep the default parameter values and note the shape and location of the probability density function. In particular, note the lack of symmetry. Run the simulation 1000 times and compare the empirical density function to the probability density function.

The quantile function \( G^{-1} \) of the standard Gumbel distribution is given by \[ G^{-1}(p) = -\ln[-\ln(p)], \quad p \in (0, 1) \]

  1. The first quartile is \(-\ln[\ln(4)] \approx -0.3266\).
  2. The median is \(-\ln[\ln(2)] \approx 0.3665\).
  3. The third quartile is \(-\ln[\ln(4) - \ln(3)] \approx 1.2459\).
Proof:

The formula for \( G^{-1} \) follows from solving \( p = G(x) \) for \( x \) in terms of \( p \).
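A small Python sketch of the distribution and quantile functions (standard library only; the function names are my own), used here to reproduce the quartiles above:

    from math import exp, log

    def G(v):
        # standard Gumbel CDF: G(v) = exp(-exp(-v))
        return exp(-exp(-v))

    def G_inv(p):
        # standard Gumbel quantile function: G^{-1}(p) = -ln(-ln(p))
        return -log(-log(p))

    for p in (0.25, 0.5, 0.75):
        v = G_inv(p)
        print(p, round(v, 4), round(G(v), 2))   # -0.3266, 0.3665, 1.2459; G(v) recovers p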

In the special distribution calculator, select the extreme value distribution. Keep the default parameter values and note the shape and location of the probability density and distribution functions. Compute the quantiles of order 0.1, 0.3, 0.6, and 0.9.

Moments

Suppose that \( V \) has the standard Gumbel distribution. The moment generating function of \( V \) has a simple expression in terms of the gamma function.

The moment generating function \( m \) of \( V \) is given by \[ m(t) = \E\left(e^{t V}\right) = \Gamma(1 - t), \quad t \in (-\infty, 1) \]

Proof:

Note that \[ m(t) = \int_{-\infty}^\infty e^{t v} \exp\left(-e^{-v}\right) e^{-v} dv \] The substitution \( x = e^{-v} \), \( dx = -e^{-v} dv \) gives \(m(t) = \int_0^\infty x^{-t} e^{-x} dx = \Gamma(1 - t)\) for \(t \in (-\infty, 1)\).
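As a numerical sanity check of this formula (a minimal Python sketch; the integration range and step count are ad hoc truncation choices), we can approximate \( \int_{-\infty}^\infty e^{t v} g(v) \, dv \) and compare it with \( \Gamma(1 - t) \):

    from math import exp, gamma

    def g(v):
        # standard Gumbel PDF
        return exp(-(exp(-v) + v))

    def mgf(t, lo=-10.0, hi=40.0, steps=50_000):
        # midpoint-rule approximation of E(e^{tV}) = integral of e^{tv} g(v) dv
        h = (hi - lo) / steps
        return h * sum(exp(t * (lo + (i + 0.5) * h)) * g(lo + (i + 0.5) * h) for i in range(steps))

    for t in (-1.0, 0.25, 0.5):
        print(t, round(mgf(t), 4), round(gamma(1 - t), 4))   # Gamma(2) = 1, Gamma(0.75), Gamma(0.5) = sqrt(pi)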

Next we give the mean and variance. First, recall that the Euler constant, named for Leonhard Euler, is defined by \[ \gamma = -\Gamma^\prime(1) = -\int_0^\infty e^{-x} \ln(x) \, dx \approx 0.5772156649 \]

The mean and variance of \( V \) are

  1. \(\E(V) = \gamma\)
  2. \(\var(V) = \frac{\pi^2}{6}\)
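These values can be read off from the moment generating function above (a brief sketch, using the standard fact that \( \Gamma^{\prime\prime}(1) = \gamma^2 + \pi^2/6 \)): \[ \E(V) = m^\prime(0) = -\Gamma^\prime(1) = \gamma, \qquad \var(V) = m^{\prime\prime}(0) - \left[m^\prime(0)\right]^2 = \Gamma^{\prime\prime}(1) - \gamma^2 = \frac{\pi^2}{6} \]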

In the special distribution simulator, select the extreme value distribution and keep the default parameter values. Note the shape and location of the mean \( \pm \) standard deviation bar. Run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.

Next we give the skewness and kurtosis of \( V \). The skewness involves a value of the Riemann zeta function \( \zeta \), named of course for Bernhard Riemann. Recall that \( \zeta \) is defined by \[ \zeta(n) = \sum_{k=1}^\infty \frac{1}{k^n}, \quad n \gt 1 \]

The skewness and kurtosis of \( V \) are

  1. \( \skew(V) = 12 \sqrt{6} \zeta(3) \big/ \pi^3 \approx 1.13955 \)
  2. \( \kurt(V) = \frac{27}{5} \)

The particular value of the zeta function, \( \zeta(3) \), is known as Apéry's constant. From (b), it follows that the excess kurtosis is \( \kurt(V) - 3 = \frac{12}{5} \).

Related Distributions

The standard Gumbel distribution has the usual connections to the standard uniform distribution by means of the distribution and quantile functions. Recall that the standard uniform distribution is the continuous uniform distribution on the interval \( (0, 1) \).

The standard Gumbel and standard uniform distributions are related as follows:

  1. If \( U \) has the standard uniform distribution then \( V = G^{-1}(U) = -\ln[-\ln(U)] \) has the standard Gumbel distribution.
  2. If \( V \) has the standard Gumbel distribution then \( U = G(V) = \exp\left(-e^{-V}\right) \) has the standard uniform distribution.
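Part (a) is the random quantile method of simulation for the standard Gumbel distribution. A minimal Python sketch (standard library only; the seed and sample size are arbitrary), comparing the sample mean and standard deviation to \( \gamma \) and \( \pi/\sqrt{6} \):

    import random
    from math import log, pi, sqrt

    random.seed(17)
    n = 100_000
    # V = -ln(-ln(U)) with U standard uniform
    sample = [-log(-log(random.random())) for _ in range(n)]

    mean = sum(sample) / n
    sd = sqrt(sum((v - mean) ** 2 for v in sample) / n)
    print(round(mean, 3), 0.5772)                   # compare to gamma
    print(round(sd, 3), round(pi / sqrt(6), 3))     # compare to pi / sqrt(6), about 1.283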

Open the random quantile experiment and select the extreme value distribution. Keep the default parameter values and note again the shape and location of the probability density and distribution functions. Run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.

The standard Gumbel distribution also has simple connections with the standard exponential distribution (the exponential distribution with rate parameter 1).

The standard Gumbel and standard exponential distributions are related as follows:

  1. If \(X\) has the standard exponential distribution then \(V = -\ln(X)\) has the standard Gumbel distribution.
  2. If \(V\) has the standard Gumbel distribution then \(X = e^{-V}\) has the standard exponential distribution.
Proof:

These results follow from the usual change of variables theorem. The transformations are \( v = -\ln(x)\) and \( x = e^{-v} \) for \( x \in (0, \infty) \) and \( v \in \R \), and these are inverses of each other. Let \( f \) and \( g \) denote PDFs of \( X \) and \( V \) respectively.

  1. We start with \( f(x) = e^{-x} \) for \( x \in (0, \infty) \) and then \[ g(v) = f(x) \left|\frac{dx}{dv}\right| = \exp\left(-e^{-v}\right) e^{-v}, \quad v \in \R \] so \( V \) has the standard Gumbel distribution.
  2. We start with \( g(v) = \exp\left(-e^{-v}\right) e^{-v} \) for \( v \in \R \) and then \[ f(x) = g(v) \left|\frac{dv}{dx}\right| = \exp\left(-\exp[\ln(x)]\right) \exp[\ln(x)] \frac{1}{x} = e^{-x}, \quad x \in (0, \infty) \] so \( X \) has the standard exponential distribution.
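For part (a), there is also a quick argument with distribution functions: since \( \P(X \ge x) = e^{-x} \) for \( x \ge 0 \), we have, for \( v \in \R \), \[ \P(V \le v) = \P[-\ln(X) \le v] = \P\left(X \ge e^{-v}\right) = \exp\left(-e^{-v}\right) = G(v) \]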

As noted in the introduction, the following theorem provides the motivation for the name extreme value distribution.

Suppose that \( (X_1, X_2, \ldots) \) is a sequence of independent random variables, each with the standard exponential distribution. The distribution of \(Y_n = \max\{X_1, X_2, \ldots, X_n\} - \ln(n) \) converges to the standard Gumbel distribution as \( n \to \infty \).

Proof:

Let \( X_{(n)} = \max\{X_1, X_2, \ldots, X_n\} \), so that \( X_{(n)} \) is the \( n \)th order statistic of the random sample \( (X_1, X_2, \ldots, X_n) \). Let \( H \) denote the standard exponential CDF, so that \( H(x) = 1 - e^{-x} \) for \( x \in [0, \infty) \). Note that \( X_{(n)} \) has CDF \( H^n \). Let \( F_n \) denote the CDF of \( Y_n \). For \( x \in \R \), \[ F_n(x) = \P(Y_n \le x) = \P\left[X_{(n)} \le x + \ln(n)\right] = H^n[x + \ln(n)] = \left[1 - e^{-(x + \ln(n))}\right]^n = \left(1 - \frac{e^{-x}}{n} \right)^n \] By a famous limit from calculus, \( F_n(x) \to e^{-e^{-x}} \) as \( n \to \infty \).
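A simulation sketch of this convergence (Python, standard library only; the values of \( n \) and the number of replications are arbitrary choices): generate many copies of \( Y_n \) and compare the empirical CDF to \( G \) at a few points.

    import random
    from math import exp, log

    random.seed(23)
    n, reps = 500, 5_000

    def sample_Y():
        # Y_n = max of n standard exponentials, centered by ln(n)
        return max(random.expovariate(1) for _ in range(n)) - log(n)

    ys = [sample_Y() for _ in range(reps)]

    for x in (-1.0, 0.0, 1.0, 2.0):
        empirical = sum(1 for y in ys if y <= x) / reps
        print(x, round(empirical, 3), round(exp(-exp(-x)), 3))   # empirical CDF vs G(x)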

The General Extreme Value Distribution

As with many other distributions we have studied, the standard extreme value distribution can be generalized by applying a linear transformation to the standard variable.

Suppose that \(V\) has the (type 1) standard extreme value distribution for maximums (the standard Gumbel distribution) discussed above. First, \(U = -V\) has the (type 1) standard extreme value distribution for minimums. More generally, if \(a \in \R\) and \(b \in (0, \infty)\), then

  1. \(X = a + b V\) has the (type 1) extreme value distribution for maximums with location parameter \(a\) and scale parameter \(b\).
  2. \(X = a - b V\) has the (type 1) extreme value distribution for minimums with location parameter \(a\) and scale parameter \(b\).

So the family in part (a) is the location-scale family associated with the standard distribution for maximums, while the family in part (b) is the location-scale family associated with the standard distribution for minimums. The distributions are also referred to more simply as Gumbel distributions rather than type 1 extreme value distributions. The web apps in this project use only the Gumbel distributions for maximums. As you will see below, the differences between the distribution for maximums and the distribution for minimums are minor.

Distribution Functions

Suppose that \( V \) has the standard Gumbel distribution for maximums, and that \( a \in \R \) and \( b \in (0, \infty) \).

Distribution functions.

  1. \(X = a + b V\) has distribution function \[ F(x) = \exp\left[-\exp\left(-\frac{x - a}{b}\right)\right], \quad x \in \R \]
  2. \(X = a - b V\) has distribution function \[ F(x) = 1 - \exp\left[-\exp\left(\frac{x - a}{b}\right)\right], \quad x \in \R \]
Proof:

Let \( G \) denote the CDF of \( V \) given above. Then

  1. \( F(x) = G\left(\frac{x - a}{b}\right) \) for \( x \in \R \)
  2. \( F(x) = 1 - G\left(-\frac{x - a}{b}\right) \) for \( x \in \R \)

Probability density functions.

  1. \(X = a + b V\) has probability density function \[ f(x) = \frac{1}{b} \exp\left(-\frac{x - a}{b}\right) \exp\left[-\exp\left(-\frac{x - a}{b}\right)\right], \quad x \in \R \]
  2. \(X = a - b V\) has probability density function \[ f(x) = \frac{1}{b} \exp\left(\frac{x - a}{b}\right) \exp\left[-\exp\left(\frac{x - a}{b}\right)\right], \quad x \in \R \]
Proof:

Let \( g \) denote the PDF of \( V \) given above. Then

  1. \( f(x) = \frac{1}{b} g\left(\frac{x - a}{b}\right) \) for \( x \in \R \)
  2. \( f(x) = \frac{1}{b} g\left(-\frac{x - a}{b}\right) \) for \( x \in \R \)
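A Python sketch collecting the distribution and density functions of both families (standard library only; the function names are my own), in the form given above:

    from math import exp

    def cdf_max(x, a=0.0, b=1.0):
        # Gumbel CDF for maximums, location a, scale b
        return exp(-exp(-(x - a) / b))

    def pdf_max(x, a=0.0, b=1.0):
        # Gumbel PDF for maximums
        z = (x - a) / b
        return exp(-z) * exp(-exp(-z)) / b

    def cdf_min(x, a=0.0, b=1.0):
        # Gumbel CDF for minimums, location a, scale b
        return 1 - exp(-exp((x - a) / b))

    def pdf_min(x, a=0.0, b=1.0):
        # Gumbel PDF for minimums
        z = (x - a) / b
        return exp(z) * exp(-exp(z)) / b

    # with a = 0, b = 1 these reduce to the standard case: G(0) = g(0) = exp(-1)
    print(cdf_max(0.0), pdf_max(0.0))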

Open the special distribution simulator and select the extreme value distribution. Vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.

Quantile functions.

  1. \(X = a + b V\) has quantile function \(F^{-1}(p) = a - b \ln[-\ln(p)]\) for \(p \in (0, 1)\).
  2. \(X = a - b V\) has quantile function \(F^{-1}(p) = a + b \ln[-\ln(1 - p)]\) for \(p \in (0, 1)\).
Proof:

Let \( G^{-1} \) denote the quantile function of \( V \) given above. Then

  1. \( F^{-1}(p) = a + b G^{-1}(p) \) for \( p \in (0, 1) \).
  2. \( F^{-1}(p) = a - b G^{-1}(1 - p) \) for \( p \in (0, 1) \).

Open the special distribution calculator and select the extreme value distribution. Vary the parameters and note the shape and location of the probability density and distribution functions. For selected values of the parameters, compute a few values of the quantile function and the distribution function.

Moments

Suppose again that \( V \) has the standard Gumbel distribution for maximums, and that \( a \in \R \) and \( b \in (0, \infty) \).

Moment generating functions

  1. \(X = a + b V\) has moment generating function \(M(t) = e^{a t} \Gamma(1 - b t)\) for \(t \lt \frac{1}{b}\).
  2. \(X = a - b \, V\) has moment generating function \(M(t) = e^{a t} \Gamma(1 + b t)\) for \( t \gt -\frac{1}{b}\).
Proof:

Let \( m \) denote the MGF of \( V \) given above. Then

  1. \( M(t) = e^{a t} m(b t) \) for \( b t \lt 1 \)
  2. \( M(t) = e^{a t} m(-b t) \) for \( - b t \lt 1 \)

The mean and variance are

  1. \(\E(a + b V) = a + b \gamma\)
  2. \(\E(a - b V) = a - b \gamma\)
  3. \(\var(a + b V) = \var(a - b V) = b^2 \frac{\pi^2}{6}\)
Proof:

These results follow from the moments of \( V \) given above and basic properties of expected value and variance.

  1. \( \E(a + b V) = a + b \E(V) \)
  2. \( \E(a - b V) = a - b \E(V) \)
  3. \( \var(a + b V) = \var(a - b V) = b^2 \var(V) \)

Open the special distribution simulator and select the extreme value distribution. Vary the parameters and note the size and location of the mean \( \pm \) standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.

The skewness and kurtosis are

  1. \( \skew(a + b V) = 12 \sqrt{6} \zeta(3) \big/ \pi^3 \approx 1.13955 \)
  2. \( \skew(a - b V) = -12 \sqrt{6} \zeta(3) \big/ \pi^3 \approx -1.13955 \)
  3. \( \kurt(a + b V) = \kurt(a - b V) = \frac{27}{5} \)
Proof:

Recall that skewness and kurtosis are defined in terms of the standard score, and hence are invariant under linear transformations with positive slope. A linear transformation with negative slope changes the sign of the skewness and has no effect on kurtosis.

Once again, the excess kurtosis is \( \kurt(X) - 3 = \frac{12}{5} \).

Related Distributions

Since the general Gumbel distributions are location-scale families, they are trivially closed under linear transformations of the underlying variables (with nonzero slope).

Suppose that \( X \) has the Gumbel distribution for maximums (respectively minimums) with location parameter \( a \in \R \) and scale parameter \( b \in (0, \infty) \). Suppose also that \( c \in \R \) and \( d \in \R \setminus \{0\} \), and let \( Y = c + d X \).

  1. If \( d \gt 0 \) then \( Y \) has the Gumbel distribution for maximums (minimums) with location parameter \( c + a d \) and scale parameter \( b d \).
  2. If \( d \lt 0 \) then \( Y \) has the Gumbel distribution for minimums (maximums) with location parameter \( c + a d \) and scale parameter \( - b d \).

As with the standard Gumbel distribution, the general Gumbel distribution has the usual connections with the standard uniform distribution by means of the distribution and quantile functions. Since the quantile function has a simple closed form, the latter connection leads to the usual random quantile method of simulation. We state the result for maximums.

Suppose that \( a \in \R \) and \( b \in (0, \infty) \).

  1. If \( U \) has the standard uniform distribution then \( X = F^{-1}(U) = a - b \ln[-\ln(U)] \) has the Gumbel distribution for maximums with location parameter \( a \) and scale parameter \( b \).
  2. If \( X \) has the Gumbel distribution for maximums with location parameter \( a \) and scale parameter \( b \) then \[ U = F(X) = \exp\left[-\exp\left(-\frac{X - a}{b}\right)\right] \] has the standard uniform distribution.

Since the quantile function has a simple, closed form, the Gumbel distribution is easy to simulate by means of the random quantile method.
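A Python sketch of the method for maximums (standard library only; the seed and parameter values are arbitrary), comparing the empirical CDF of the simulated values to \( F \) at a few points:

    import random
    from math import exp, log

    def rgumbel_max(a, b):
        # random quantile method: X = a - b * ln(-ln(U)), U standard uniform
        return a - b * log(-log(random.random()))

    def cdf_max(x, a, b):
        return exp(-exp(-(x - a) / b))

    random.seed(42)
    a, b, n = 2.0, 3.0, 50_000
    sample = [rgumbel_max(a, b) for _ in range(n)]

    for x in (0.0, 2.0, 5.0, 10.0):
        empirical = sum(1 for s in sample if s <= x) / n
        print(x, round(empirical, 3), round(cdf_max(x, a, b), 3))   # the two columns should agree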

Open the random quantile experiment and select the extreme value distribution. Vary the parameters and note again the shape and location of the probability density and distribution functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.

The Gumbel distribution for maximums has a simple connection to the Weibull distribution, and this generalizes the connection above between the standard Gumbel distribution and the exponential distribution. There is a similar result for the Gumbel distribution for minimums.

The Gumbel and Weibull distributions are related as follows:

  1. If \(X\) has the Gumbel distribution for maximums, with location parameter \(a \in \R\) and scale parameter \(b \in (0, \infty)\), then \(Y = e^{-X}\) has the Weibull distribution with shape parameter \(\frac{1}{b}\) and scale parameter \(e^{-a}\).
  2. If \(Y\) has the Weibull distribution with shape parameter \(k \in (0, \infty)\) and scale parameter \(b \in (0, \infty)\) then \(X = -\ln(Y)\) has the Gumbel distribution for maximums, with location parameter \(-\ln(b)\) and scale parameter \(\frac{1}{k}\).
Proof:

As before, these results can be obtained using the change of variables theorem for probability density functions. We give an alternate proof using special forms of the random variables.

  1. We can write \( X = a + b V \) where \( V \) has the standard Gumbel distribution. Hence \[ Y = e^{-X} = e^{-a} \left(e^{-V}\right)^b \] As shown above, \( e^{-V} \) has the standard exponential distribution and therefore \( Y \) has the Weibull distribution with shape parameter \( 1/b \) and scale parameter \( e^{-a} \).
  2. We can write \( Y = b U^{1/k} \) where \( U \) has the standard exponential distribution. Hence \[ X = -\ln(Y) = -\ln(b) + \frac{1}{k}[-\ln(U)] \] As shown above, \( -\ln(U) \) has the standard Gumbel distribution and hence \( X \) has the Gumbel distribution with location parameter \( -\ln(b) \) and scale parameter \( \frac{1}{k} \).
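A quick numerical check of part (a) (a Python sketch; the seed and parameter values are arbitrary, and the Weibull distribution with shape \( k \) and scale \( c \) is taken to have CDF \( 1 - \exp[-(y/c)^k] \) for \( y \ge 0 \)):

    import random
    from math import exp, log

    random.seed(7)
    a, b, n = 1.0, 0.5, 50_000

    # X Gumbel for maximums (random quantile method), then Y = exp(-X)
    y_sample = [exp(-(a - b * log(-log(random.random())))) for _ in range(n)]

    k, c = 1 / b, exp(-a)   # claimed Weibull shape and scale

    for y in (0.1, 0.3, 0.5, 1.0):
        empirical = sum(1 for s in y_sample if s <= y) / n
        weibull_cdf = 1 - exp(-((y / c) ** k))
        print(y, round(empirical, 3), round(weibull_cdf, 3))   # the two columns should agree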