Order Statistics

Abstract

In this chapter, recollecting the notion of order statistics introduced in Sect. 1.4, we discuss the distributions and properties of order statistics in detail. As mentioned in Sect. 1.4, in dealing with order statistics, we assume that the dimension, or size, of the random vectors in this chapter is \(n \in \mathbb {J}_{2, \infty }\) unless specified otherwise. Where appropriate, we denote the size n of the random vector explicitly.

Notes

  1.

    Note that the cdf of the binomial distribution \(b(n,p)\) is \(F(x) = \sum \limits _{k=0}^{\lfloor x \rfloor } \, { }_{n}\mbox{C}_{k} p^k(1-p)^{n-k}\) for \(\lfloor x \rfloor \le x < \lfloor x \rfloor +1\).

  2.

    In describing a pmf or cdf on a discrete space, we often use the simpler notation of specifying only the values at the non-zero points of the pmf, rather than over all real numbers, unless the specification at all real points is required.

  3.

    Exercise 2.15 discusses this observation more specifically for the gamma distribution, a generalization of the exponential distribution.

  4.

    Here, \(\sum \limits _{j=r}^n \, { }_n\mbox{C}_j x^j (1-x)^{n-j}= \sum \limits _{j=r}^{n-1} \, { }_n\mbox{C}_j x^j (1-x)^{n-j} + x^n= 1\) when \(x=1\).

  5.

    The pdf (2.3.14) can also be obtained by differentiating the cdf (2.3.13).

  6.

    Note that \(f_{X_{[r]},X_{[s]}} (x,x) =0\) for \(s \ge r+2\). For \(s=r+1\), the values of \(f_{X_{[r]},X_{[s]}} (x,y)\) expressed as (2.3.31) along the line \(x=y\) depend on \(u(0)\): nonetheless, the values of a two-dimensional function along a line of discontinuities in the plane are practically negligible, just as the value of a one-dimensional function at a discontinuity is practically negligible.

  7.

    In this expression, \(\int _{-1}^y \int _w^y dv dw\), \(\int _{-1}^x \int _w^y dv dw\), and \(\int _{-1}^x \int _w^1 dv dw\) can be replaced with \(\int _{-1}^y \int _{-1}^v dw dv\), \(\left ( \int _{-1}^x \int _{-1}^v + \int _x^y \int _{-1}^x \right ) dw dv\), and \(\left ( \int _{-1}^x \int _{-1}^v + \int _x^1 \int _{-1}^x \right ) dw dv\), respectively, when the order of integration over v and w is interchanged.

  8.

    Note that the value of \(\sum \limits _{s=2}^{n} \sum \limits _{r=1}^{s-1} f_{r,s} (x,x)\) depends on \(u(0)\): nonetheless, the difference is practically negligible.

  9.

    Here, recollecting that \(n_0 = 0\) and \(n_{k+1} = n+1\), the number of integrations in the j-th group is \(n_1 -1 = n_1-n_0-1 \) (from 1 to \(n_1 -1\)) when \(j=0\), \(n_2-n_1-1 \) (from \(n_1 +1\) to \(n_2 -1\)) when \(j=1\), \(\cdots \), and \(n-n_k = n_{k+1}-n_k-1\) (from \(n_k +1\) to n) when \(j=k\). Thus, the total number of integrations is \(\left (n_1-1 \right ) + \left ( n_2 - n_1 -1 \right ) + \cdots + \left ( n - n_k \right ) = n-k\).

  10.

    When obtaining (2.3.55) from (2.3.52) and (2.3.54), note that \(\frac {u\left (x_{i+1}-x_i \right )}{u\left (x_{i+1}-x_i \right )} =u\left (x_{i+1}-x_i \right )\). In addition, \(\frac {u\left ( x_{r+1}-x_r \right )}{u\left ( x_s - x_r \right )}\) is defined only when \(x_s > x_r\) and is equal to \(u\left ( x_{r+1}-x_r \right )\) because \(x_s \ge x_{r+1}\).

  11.

    The conditional joint pdf (2.3.55) is meaningful only when \(x_1 < x_2 < \cdots < x_n\). Meanwhile, the joint pdf of the order statistics of an \((s-r-1)\)-dimensional i.i.d. random vector with the marginal pdf \(\bar {f} (x) = \frac { f(x) u\left ( x - x_r \right )u\left ( x_s - x \right )} { F\left ( x_s \right ) - F\left ( x_r \right ) } \) is meaningful only when \(x_{r} < x_{r+1} < \cdots < x_{s}\).

  12.

    Refer to (1.E.7).

  13.

    Refer to (1.E.6).

  14.

    Specifically, \(f_{\left . X_{[s]} \right | \boldsymbol {X}_{[r]}} \left (\left . x_s \right | \boldsymbol {x}^r \right )\) has the additional factor \(\prod \limits _{i=1}^{r-1} u\left ( x_{i+1} - x_i \right )\). In other words, \(f_{\left . X_{[s]} \right | \boldsymbol {X}_{[r]}} \left (\left . x_s \right | \boldsymbol {x}^r \right )\) is zero unless \(x_1 < x_2 < \cdots < x_r < x_s\) while \(f_{\left . X_{[s]} \right | X_{[r]}} \left (\left . x_s \right | x_r \right )\) is zero unless \(x_r < x_s\).

  15.

    Refer to Exercise 2.28 as well.

  16.

    Note here that the distributions of \(X_k\) and \(1-X_k\) are the same because \(X_k\) has the distribution \(U(0, 1)\). Consequently, the distributions of \(-\ln X_k\) and \(-\ln \left (1-X_k \right )\) are also the same.

  17.

    Detailed steps in the evaluation of the joint cdf’s are provided in section “Proofs and Calculations” in Appendix 1, Chap. 2.

  18.

    Here, \(f_{|X|{ }_{[q]}} (0) = 0\) for \(q \in \mathbb {J}_{2,\infty }\) because \(G_F(0)=0\). In addition, letting \(u(0)=0\), we have \(f_{|X|{ }_{[1]}} (0) = ng_F(0)u(0)=0\) also.

  19.

    If \(x > y\) when \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\), we have \(p_{r, s}(x,y)=0\). In section “Order Statistics from Uniform Distributions” in Appendix 2, Chap. 2, we discuss another method of obtaining (2.4.23): refer to (2.A2.29).

  20.

    The result obtained by changing all the negative signs in the expansion of the determinant of a square matrix into positive signs is called the permanent of the matrix. The permanent of a matrix \(\boldsymbol {A}\) is usually denoted by \(\,{ }^{+}\left |\boldsymbol {A}\right |{ }^{+}\) or \(\mbox{per}\left [ \boldsymbol {A}\right ]\).
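
As a small computational illustration of this definition, the following sketch (ours, not from the text) computes the permanent directly from the sign-free Leibniz expansion; the cost is \(O(n \cdot n!)\), so it is meant only for small matrices.

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Permanent of a square matrix: the Leibniz expansion of the
    determinant with every sign factor replaced by +1."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# For a 2x2 matrix, per[A] = ad + bc, whereas det(A) = ad - bc.
assert permanent([[1, 2], [3, 4]]) == 1 * 4 + 2 * 3
```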

References

  1. M. Abramowitz, I.A. Stegun (ed.), Handbook of Mathematical Functions (Dover, New York, 1972)

  2. N. Balakrishnan, A.C. Cohen, Order Statistics and Inference: Estimation Methods (Academic, San Diego, 1991)

  3. K.E. Barner, G.R. Arce, Order-statistic filtering and smoothing of time-series: Part II, in Handbook of Statistics, vol. 17 (Elsevier, Amsterdam, 1998), pp. 555–602

  4. P.J. Bickel, On some robust estimates of location. Ann. Math. Stat. 36, 847–858 (1965)

  5. R.J. Crinon, The Wilcoxon filter: A robust filtering scheme, in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (1985), pp. 18.5.1–18.5.4

  6. H.A. David, Order Statistics, 2nd edn. (John Wiley and Sons, New York, 1980)

  7. H.A. David, H.N. Nagaraja, Order Statistics, 3rd edn. (John Wiley and Sons, New York, 2003)

  8. I.S. Gradshteyn, I.M. Ryzhik, Table of Integrals, Series, and Products (Academic, New York, 1980)

  9. J. Hajek, Z. Sidak, P.K. Sen, Theory of Rank Tests, 2nd edn. (Academic, New York, 1999)

  10. Y. Han, I. Song, Y. Park, Some root properties of recursive weighted median filters. Signal Process. 25(3), 337–344 (1991)

  11. S. Lee, Y. Zhang, S. Yoon, I. Song, Order statistics and recursive updating with aging factor for cooperative cognitive radio networks under SSDF attacks. ICT Express 6(1), 3–6 (2020)

  12. Y.H. Lee, S.A. Kassam, Generalized median filtering and related nonlinear filtering techniques. IEEE Trans. Acoust. Speech Signal Process. 33(3), 672–683 (1985)

  13. E.H. Lloyd, Least-squares estimation of location and scale parameters using order statistics. Biometrika 39, 88–95 (1952)

  14. C.L. Mallows, Some theory of nonlinear smoothers. Ann. Stat. 8(4), 695–715 (1980)

  15. S.R. Park, H. Kwon, S.Y. Kim, I. Song, Relative frequency of order statistics in independent and identically distributed random vectors. Commun. Stat. Appl. Methods 13(2), 243–254 (2006)

  16. S.R. Park, I. Song, S. Yoon, T. An, H.-K. Min, On the sums of the probability functions of order statistics. J. Korean Stat. Soc. 42(2), 257–265 (2013)

  17. S. Peltonen, P. Kuosmanen, K. Egiazarian, M. Gabbouj, J. Astola, Analysis and optimization of weighted order statistic and stack filters, in Nonlinear Image Processing, ed. by S.K. Mitra, G.L. Sicuranza. (Academic, San Diego, 2001), pp. 1–26

  18. I. Pitas, A.N. Venetsanopoulos, Digital filters based on order statistics, in Nonlinear Digital Filters, vol. 84 (Springer, Boston, 1990)

  19. H.V. Poor, J.B. Thomas, Advances in Statistical Signal Processing: Signal Detection, vol. 2 (JAI Press, Greenwich, 1993)

  20. I. Song, J. Bae, S.Y. Kim, Advanced Theory of Signal Detection (Springer-Verlag, Berlin, 2002)

  21. I. Song, S. Lee, S.R. Park, S. Yoon, Asymptotic value of the probability that the first order statistic is from null hypothesis. Appl. Math. 4(12), 1702–1705 (2013)

  22. I. Song, C.H. Park, K.S. Kim, S.R. Park, Random Variables and Stochastic Processes (in Korean) (Freedom Academy, Paju, 2014)

  23. I. Song, S.R. Park, S. Yoon, Probability and Random Variables: Theory and Applications (Springer-Verlag, Berlin, 2022)

  24. A. Stuart, J.K. Ord, Advanced Theory of Statistics: Distribution Theory, vol. 1, 5th edn. (Oxford University, New York, 1987)


Appendices

Appendix 1: Proofs and Calculations

2.1.1 Proof of Theorem 2.2.4

Proof

First, when \(x \ge y\), we can express \(\sum \limits _{s=2}^{n} \sum \limits _{r=1}^{s-1} F_{r,s}(x,y) = \sum \limits _{s=2}^{n} (s-1) F_{s}(y) = \sum \limits _{s=2}^{n} (s-1) \sum \limits _{j=s}^{n} \, { }_{n}\mbox{C}_{j} F^{j}(y) \{ 1- F(y)\}^{n-j}\) as

(2.A1.1)

based on the general formula (2.2.28) of the joint cdf of two order statistics. In the summation on the right-hand side of (2.A1.1), the term \(\, { }_{n}\mbox{C}_{k} F^{k}(y) \{ 1- F(y)\}^{n-k}\) is added \(1+2+ \cdots + (k-1)= \frac {k(k-1)}{2}\) times for \(k \in \mathbb {J}_{2,n}\): keeping this fact in mind and using (1.A3.7), we get \(\sum \limits _{s=2}^{n} \sum \limits _{r=1}^{s-1} F_{r,s}(x,y) = \frac {1}{2} \sum \limits _{k=2}^{n} k(k-1) \, { }_{n}\mbox{C}_{k} F^{k}(y) \{ 1- F(y)\}^{n-k} = \frac {1}{2} \{ 1- F(y)\}^{n-2} F^{2}(y) \sum \limits _{k=2}^{n} k(k-1) \, { }_{n}\mbox{C}_{k} \left \{ \frac {F(y)}{1- F(y)} \right \}^{k-2} = \frac {1}{2} F^{2}(y) \{ 1- F(y)\}^{n-2} n(n-1) \bigg \{ 1+ \frac { F(y) }{ 1- F(y) } \bigg \}^{n-2}\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{s=2}^{n} \sum_{r=1}^{s-1} F_{r,s}(x,y) \ = \ \frac{1}{2} n(n-1) F^2(y) {} \end{array} \end{aligned} $$
(2.A1.2)

for \(x \ge y\). Next, when \(x \le y\), \(\sum \limits _{s=2}^{n} \sum \limits _{r=1}^{s-1} F_{r,s}(x,y) = \sum \limits _{s=2}^{n} \sum \limits _{r=1}^{s-1} \sum \limits _{j=s}^{n} \sum \limits _{i=r}^{j} \, { }_{n}\mbox{C}_{j}\, { }_{j}\mbox{C}_{i} F^{i}(x) \{F(y)-F(x)\}^{j-i} \{1- F(y)\}^{n-j}\) can be written as

(2.A1.3)

using (1.A3.22). Now, we have \(\sum \limits _{i=1}^{j} (j-1)i \, { }_{j}\mbox{C}_{i} F^{i}(x) \{F(y)-F(x)\}^{j-i} = (j-1) F(x) \{F(y)-F(x)\}^{j-1} j \left \{ 1+ \frac {F(x)}{F(y)-F(x)} \right \}^{j-1}\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{i=1}^{j} (j-1)i \, {}_{j}\mbox{C}_{i} F^{i}(x) \{F(y)-F(x)\}^{j-i} & =&\displaystyle j(j-1) F(x) F^{j-1}(y) \qquad \quad {} \end{array} \end{aligned} $$
(2.A1.4)

from (1.A3.6), and \(\sum \limits _{i=1}^{j} i(i-1) \, { }_{j}\mbox{C}_{i} F^{i}(x) \{F(y)-F(x)\}^{j-i} = F^2(x) \{F(y)-F(x)\}^{j-2} \sum \limits _{i=2}^{j} i(i-1) \, { }_{j}\mbox{C}_{i} \left \{ \frac {F(x)}{F(y)-F(x) } \right \}^{i-2} = F^2(x) \{F(y)-F(x)\}^{j-2} j(j-1) \left \{ 1+\frac {F(x)}{F(y)-F(x)} \right \}^{j-2}\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{i=1}^{j} i(i-1) \, {}_{j}\mbox{C}_{i} F^{i}(x) \{F(y)-F(x)\}^{j-i} & =&\displaystyle j(j-1) F^2(x) F^{j-2}(y) \qquad \quad {} \end{array} \end{aligned} $$
(2.A1.5)

from (1.A3.7). Thus, (2.A1.3) can be written as

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{s=2}^{n} \sum_{r=1}^{s-1} F_{r,s}(x,y) & =&\displaystyle \sum_{j=2}^{n} \left\{ j(j-1) F(x) F^{j-1}(y) - \frac{1}{2} j(j-1) F^2(x) F^{j-2}(y) \right\} \\ & &\displaystyle \ \times \, {}_{n}\mbox{C}_{j}\, \{1- F(y)\}^{n-j} {} \end{array} \end{aligned} $$
(2.A1.6)

using (2.A1.4) and (2.A1.5).

Now, \(\sum \limits _{j=2}^{n} j(j-1) \, { }_{n}\mbox{C}_{j} \, F^{j-1}(y) \{1- F(y)\}^{n-j} = F(y) \{1- F(y)\}^{n-2} \sum \limits _{j=2}^{n} j(j-1) \, { }_{n}\mbox{C}_{j} \, \left \{ \frac { F(y) }{ 1- F(y)} \right \}^{j-2}= F(y) \{1- F(y)\}^{n-2} n(n-1) \left \{ 1+ \frac { F(y) }{ 1- F(y)} \right \}^{n-2}\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{j=2}^{n} j(j-1) \, {}_{n}\mbox{C}_{j} \, F^{j-1}(y) \{1- F(y)\}^{n-j} & =&\displaystyle n(n-1) F(y) , \qquad \quad {} \end{array} \end{aligned} $$
(2.A1.7)

which also implies

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{j=2}^{n} j(j-1) \, {}_{n}\mbox{C}_{j} \, F^{j-2}(y) \{1- F(y)\}^{n-j} \ = \ n(n-1) . {} \end{array} \end{aligned} $$
(2.A1.8)

Using (2.A1.7) and (2.A1.8) in (2.A1.6) results in

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{s=2}^{n} \sum_{r=1}^{s-1} F_{r,s}(x,y) & =&\displaystyle n(n-1) \left \{ F(x)F(y) - \frac{1}{2} F^2(x) \right \} {} \end{array} \end{aligned} $$
(2.A1.9)

for \(x \le y\). Combining (2.A1.2) and (2.A1.9), we have the equality (2.2.35). \(\spadesuit \)

2.1.2 Proof of Theorem 2.3.5

Proof

Recollecting the discussions on the steps leading to (2.1.10), let us obtain \(f_{X_{[i]},X_{[k]}} \left ( x_i, x_k \right ) = \underset { x_1 \le x_2 \le \cdots \le x_n } {\int \int \cdots \int } \left \{ n! \prod \limits _{j=1}^n f\left ( x_j \right ) \right \} d\boldsymbol {x}^{\overline {i,k}}\) by integrating the joint pdf \(f_{\boldsymbol {X_{[\cdot ]}}}( \boldsymbol {x})\) shown in (2.3.1) as

(2.A1.10)

where \(d\boldsymbol {x}^{\overline {i,k}} = dx_1 dx_2 \cdots dx_{i-1} dx_{i+1} dx_{i+2} \cdots dx_{k-1} dx_{k+1} dx_{k+2} \cdots dx_n\). Now, among the three groups of integrations on the right-hand side of (2.A1.10), the first group of integrations can be evaluated as

(2.A1.11)

Similarly, the second and third groups of integrations will produce

(2.A1.12)

and

(2.A1.13)

respectively. From (2.A1.10)–(2.A1.13), we have the joint pdf (2.3.31). In Exercise 2.22, we discuss another proof of Theorem 2.3.5. \(\spadesuit \)

2.1.3 Proof of Theorem 2.3.10

Proof

Let \(x_0=0\), \(x_{l+1}=1\), \(y_0=0\), \(y_{l+1}=1\), and \(i_0=0\). Then, writing \(\boldsymbol {x} =\left ( x_1, x_2, \, \cdots , x_l \right )\), the joint pdf of \(\boldsymbol {X} _{[l]}=\left ( X_{\left [i_1 \right ]}, X_{\left [i_2 \right ]}, \, \cdots , X_{\left [i_l \right ]} \right )\) in the case of the marginal uniform distribution \(U(0,1)\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X} _{[l]}}\left ( \boldsymbol{x} \right ) \ = \ n! \prod_{j=0}^{l} \frac { \left ( x_{j+1}-x_j \right )^{i_{j+1}-i_j-1} } { \left ( i_{j+1}-i_j-1 \right ) ! } u \left ( x_{j+1} - x_j \right ) {} \end{array} \end{aligned} $$
(2.A1.14)

from the general formula (2.3.42) of the joint pdf. Now, let \(\boldsymbol {Y}=\left ( Y_1, Y_2, \, \cdots , Y_l \right )\) with \(Y_k = \frac { X_{\left [i_k \right ]} } { X_{\left [i_{k+1} \right ]} }\) for \(k \in \mathbb {J}_{1,l}\), \(i_{l+1} = n+1\), and \(X_{\left [i_{l+1} \right ]} = 1\). Then, from \(X_{\left [i_1\right ] } = Y_1 Y_2 \cdots Y_l\), \(X_{\left [i_2 \right ]}=Y_2Y_3\cdots Y_l\), \(\cdots \), \(X_{\left [i_l \right ]}=Y_l\), the Jacobian of the inverse transformation \(\boldsymbol {X} _{[l]} = \boldsymbol {g}^{-1}(\boldsymbol {Y} )\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left | \frac {\partial \left ( y_1y_2\cdots y_l \quad y_2y_3\cdots y_l \quad \cdots \quad y_l \right) } {\partial (y_1 \quad y_2 \quad \cdots \quad y_l )} \right | \ = \ y_2 y_3^{2} y_4^3 \cdots y_l^{l-1} . \end{array} \end{aligned} $$
(2.A1.15)

Thus, the joint pdf of \(\boldsymbol {Y}\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{Y}} ( \boldsymbol{y} ) & =&\displaystyle \left . f_{\boldsymbol{X} _{[l]}}( \boldsymbol{x} ) \right |{}_{x_j=y_j y_{j+1}\cdots y_l,\ j \in \mathbb{J}_{1,l}; \ x_{l+1}=y_{l+1} } \left | y_2 y_3^{2} y_4^3 \cdots y_l^{l-1} \right | \\ & =&\displaystyle n! y_2 y_3^{2} y_4^3 \cdots y_l^{l-1} \frac { \left ( y_{l+1} - y_l \right )^{i_{l+1}-i_l-1} } { \left ( i_{l+1}-i_l-1 \right ) ! } u \left ( y_{l+1} - y_l \right ) \\ & &\displaystyle \times \prod_{j=0}^{l-1} \frac { \left ( y_{j+1} y_{j+2} \cdots y_l - y_j y_{j+1} \cdots y_l \right )^{i_{j+1}-i_j-1} }{ \left ( i_{j+1}-i_j-1 \right ) ! } \\ & &\displaystyle \times \prod_{j=0}^{l-1} u \left ( y_{j+1} y_{j+2} \cdots y_l - y_j y_{j+1} \cdots y_l \right ) \\ & =&\displaystyle n! \left ( \prod_{j=1}^{l-1} y_{j+1}^j \right ) \left \{ \prod_{j=0}^{l-1} \left ( y_{j+1} y_{j+2} \cdots y_l \right )^{i_{j+1}-i_j-1} \right \} \\ & &\displaystyle \times \left \{ \prod_{j=0}^{l} \frac { \left ( 1 - y_j \right )^{i_{j+1}-i_j-1} } { \left ( i_{j+1}-i_j-1 \right ) ! } \right\} \\ & &\displaystyle \times u \left ( 1- y_l \right ) \prod_{j=0}^{l-1} u \left ( y_{j+1} y_{j+2} \cdots y_l - y_j y_{j+1} \cdots y_l \right ) {} \end{array} \end{aligned} $$
(2.A1.16)

from (2.A1.14), where \(\boldsymbol {y} = \left ( y_1, y_2, \, \cdots , y_l \right )\).

On the right-hand side of (2.A1.16), the term \(u \left ( 1- y_l \right ) \prod \limits _{j=0}^{l-1} u \left ( y_{j+1} y_{j+2} \cdots y_l \right . \left . - y_j y_{j+1} \cdots y_l \right )\) is non-zero only when

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle y_1 y_2 \cdots y_l \ > \ 0, {} \end{array} \end{aligned} $$
(2.A1.17)
$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle \left \{ y_{j+1} y_{j+2} \cdots y_l - y_{j} y_{j+1} \cdots y_l > 0 \right \}_{j=1}^{l-1}, {} \end{array} \end{aligned} $$
(2.A1.18)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} 1 - y_l & > &\displaystyle 0 {} \end{array} \end{aligned} $$
(2.A1.19)

are all satisfied. Now, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left \{ y_{j} < 1 \right \}_{j=1}^{l-1} {} \end{array} \end{aligned} $$
(2.A1.20)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left \{ y_{j} y_{j+1} \cdots y_l > 0 \right \}_{j=1}^{l} {} \end{array} \end{aligned} $$
(2.A1.21)

from (2.A1.17) and (2.A1.18). We then get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left \{ y_j > 0 \right \}_{j=1}^{l} {} \end{array} \end{aligned} $$
(2.A1.22)

from (2.A1.21) and, subsequently,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left \{ 0 \ < \ y_j \ < \ 1 \right \}_{j=1}^{l} {} \end{array} \end{aligned} $$
(2.A1.23)

from (2.A1.19), (2.A1.20), and (2.A1.22). In other words, (2.A1.17)–(2.A1.19) are equivalent to \(\big \{ 0 < y_j < 1 \big \}_{j=1}^{l} \), and thus the term \(u \left ( 1- y_l \right ) \prod \limits _{j=0}^{l-1} u \left ( y_{j+1} y_{j+2} \cdots y_l \right . \left . - y_j y_{j+1} \cdots y_l \right )\) can be written as

$$\displaystyle \begin{aligned} \begin{array}{rcl} u \left ( 1- y_l \right ) \prod_{j=0}^{l-1} u \left ( y_{j+1} y_{j+2} \cdots y_l - y_j y_{j+1} \cdots y_l \right ) \ = \ \prod_{j=1}^{l} u \left ( y_j \right ) u \left ( 1 - y_j \right ) . {} \end{array} \end{aligned} $$
(2.A1.24)

We also have \(\left ( \prod \limits _{j=1}^{l-1} y_{j+1}^j \right ) \left \{ \prod \limits _{j=0}^{l-1} \left ( y_{j+1} y_{j+2} \cdots y_l \right )^{i_{j+1}-i_j-1} \right \} = y_1^{0+i_1-i_0-1} y_2^{1+i_1-i_0-1+i_2-i_1-1} \cdots y_l^{l-1+i_1-i_0-1+i_2-i_1-1+ \cdots + i_l-i_{l-1}-1}\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left ( \prod_{j=1}^{l-1} y_{j+1}^j \right ) \left \{ \prod_{j=0}^{l-1} \left ( y_{j+1} y_{j+2} \cdots y_l \right )^{i_{j+1}-i_j-1} \right \} \ = \ \prod_{j=1}^l y_j^{i_j - 1} {} \end{array} \end{aligned} $$
(2.A1.25)

recollecting \(i_0=0\), and

$$\displaystyle \begin{aligned} \begin{array}{rcl} \prod_{j=0}^l \left ( 1- y_j \right )^{i_{j+1} - i_j -1 } \ = \ \prod_{j=1}^l \left ( 1- y_j \right )^{i_{j+1} - i_j -1 } {} \end{array} \end{aligned} $$
(2.A1.26)

recollecting \(1-y_0=1\). In addition, writing \(n! \prod \limits _{j=0}^l \frac { 1 } { \left ( i_{j+1}-i_j-1 \right ) ! } \) as

(2.A1.27)

and then combining two factorials in the denominator and one factorial in the numerator as one fraction from the beginning, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} n! \prod_{j=0}^l \frac { 1 } { \left ( i_{j+1}-i_j-1 \right ) ! } \ = \ \prod_{j=1}^l \frac { \left ( i_{j+1} - 1 \right ) ! } { \left ( i_j - 1 \right ) ! \left ( i_{j+1}-i_j-1 \right ) ! } {} \end{array} \end{aligned} $$
(2.A1.28)

by noting \(i_1-i_0-1=i_1-1\) and \(i_{l+1}-1=n\).

Using (2.A1.24)–(2.A1.26) and (2.A1.28), the joint pdf (2.A1.16) can be written as

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{Y}}( \boldsymbol{y} ) \ = \ \prod_{j=1}^{l} \frac{ \left ( i_{j+1} -1 \right )! y_j^{i_j-1} \left (1-y_j \right )^{i_{j+1}-i_j-1} } { \left (i_j-1 \right )! \left (i_{j+1}-i_j-1 \right )! } u \left ( y_j \right ) u \left ( 1 - y_j \right ), \\ \end{array} \end{aligned} $$
(2.A1.29)

completing the proof. \(\spadesuit \)
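
Since (2.A1.29) is a product of beta pdf's, the theorem says the ratios \(Y_j\) are independent with \(Y_j\) distributed as Beta\(\left( i_j, i_{j+1}-i_j \right)\). The following Monte Carlo sketch (ours; the choice \(n=6\), \(i_1=2\), \(i_2=4\) is arbitrary) checks the means and the lack of correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 6, 200_000
i1, i2 = 2, 4                       # l = 2; i_3 = n + 1 = 7 and X_[i_3] = 1 by convention
u = np.sort(rng.random((trials, n)), axis=1)
y1 = u[:, i1 - 1] / u[:, i2 - 1]    # Y_1 = X_[2]/X_[4], claimed Beta(2, 2)
y2 = u[:, i2 - 1]                   # Y_2 = X_[4]/1,     claimed Beta(4, 3)

print(y1.mean(), 2 / (2 + 2))       # Beta(a, b) has mean a/(a + b)
print(y2.mean(), 4 / (4 + 3))
print(np.corrcoef(y1, y2)[0, 1])    # near 0, consistent with independence
```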

2.1.4 Proof of Theorem 2.4.4

Proof

Let us first recall that, unlike in the continuous case, the probability of several random variables taking the same value is non-zero for discrete random vectors.

First, when \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\) and \(x<y\), consider the arrangement

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left \{ X_{[1]} , X_{[2]} , \, \cdots , X_{[r]} , \, \cdots , X_{[s]} , \, \cdots , X_{[n]}\right \}. \end{array} \end{aligned} $$
(2.A1.30)

Assume i of the \((r-1)\) order statistics \(\left \{X_{[k]} \right \}_{k=1}^{r-1}\) on the left-hand side of \(X_{[r]}\) have the same value as \(X_{[r]}\), t of the \((n-r)\) order statistics \(\left \{X_{[k]} \right \}_{k=r+1}^{n}\) on the right-hand side of \(X_{[r]}\) have the same value as \(X_{[r]}\), u of the \((s-1)\) order statistics \(\left \{X_{[k]} \right \}_{k=1}^{s-1}\) on the left-hand side of \(X_{[s]}\) have the same value as \(X_{[s]}\), and j of the \((n-s)\) order statistics \(\left \{X_{[k]} \right \}_{k=s+1}^{n}\) on the right-hand side of \(X_{[s]}\) have the same value as \(X_{[s]}\). Then, we have the joint pmf \(p_{r, s} (x,y) = \mathsf {P} \left ( X_{[r]} =x,X_{[s]} =y \right )\) as

(2.A1.31)

The ranges of i and j in the summation of (2.A1.31) are \(i \in \mathbb {J}_{0,r-1}\) and \(j \in \mathbb {J}_{0,n-s}\), respectively. In addition, t and u are both non-negative, and the left-most order statistic with the value y and the right-most order statistic with the value x should be different or, equivalently, \((s-u)-(r+t) \ge 1\) because \(x < y\). Thus we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} 0 \ \le \ t+u \ \le \ s-r-1 . {} \end{array} \end{aligned} $$
(2.A1.32)

Noting that there exist \((r-i-1)\), \((i+t+1)\), \((s-r-t-u-1)\), \((j+u+1)\), and \((n-s-j)\) random variables from \(X_{[1]}\) to \(X_{[r-i-1]}\), from \(X_{[r-i]}\) to \(X_{[r+t]}\), from \(X_{[r+t+1]}\) to \(X_{[s-u-1]}\), from \(X_{[s-u]}\) to \(X_{[s+j]}\), and from \(X_{[s+j+1]}\) to \(X_{[n]}\), respectively, we can rewrite (2.A1.31) in terms of \(\left \{ X_l \right \}_{l=1}^{n}\) as

(2.A1.33)

Here, \(\sum \limits _{u,t}\) signifies the summation over all non-negative integers u and t such that \(t+u \in \mathbb {J}_{0,s-r-1}\) as shown in (2.A1.32), and thus can be written specifically as \(\sum \limits _{u=0}^{s-r-1} \sum \limits _{t=0}^{s-r-u-1}\) or \(\sum \limits _{t=0}^{s-r-1} \sum \limits _{u=0}^{s-r-t-1}\).

In terms of the number \(\tilde {C}_{n,s,r}\) defined in (2.3.29), we can rewrite the constant term in (2.A1.33) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle \frac{n!}{(r-1-i)!(i+t+1)!(s-r-t-u-1)!(j+u+1)!(n-s-j)!} \\ & &\displaystyle \quad = \ \tilde{C}_{n,s,r} \, \frac{(s-r-1)! \, {}_{r-1} \mbox{C}_i \, \, {}_{n-s}\mbox{C}_j }{(s-r-1-t-u)!u!t!} \\ & &\displaystyle \quad \quad \times \int_{0}^{1} \int_{0}^{1} \alpha^i \beta^j (1-\alpha)^t (1-\beta)^u d \alpha d \beta {} \end{array} \end{aligned} $$
(2.A1.34)

by noting that \(\frac {i! \ t!}{(i+t+1)!} = \int _{0}^{1} \alpha ^i (1-\alpha )^t d \alpha \) and \(\frac {j!\ u!}{(j+u+1)!} = \int _{0}^{1} \beta ^j (1-\beta )^u d \beta \). If we rewrite (2.A1.33) using (2.A1.34), we get

(2.A1.35)

Evaluating the inner summation on the right-hand side of (2.A1.35) first, we get

(2.A1.36)

Letting \(w = F(y)- \beta p(y)\) and \(v=F(x-1)+\alpha p(x)\), we have \(p(x) d \alpha = dv\) and \(-p(y) d\beta = dw\). In addition, noting that \(F(x-1)+ p(x) = F(x)\), \(F(y)-p(y) = F(y-1)\), and \(F(y-1)- F(x) + (1- \alpha ) p(x) + (1-\beta ) p(y) = F(y)-F(x-1) -\alpha p(x) -\beta p(y) = F(y)- \beta p(y) - \{ F(x-1)+\alpha p(x)\} = w-v\), we get \(p_{r, s}(x,y) = \tilde {C}_{n,s,r}\int _{F(y)}^{F(y-1)}\int _{F(x-1)}^{F(x)} v^{r-1}(w-v)^{s-r-1} (1-w)^{n-s} dv ( - dw) \), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{r, s}(x,y) & =&\displaystyle \tilde{C}_{n,s,r}\int_{F(y-1)}^{F(y)}\int_{F(x-1)}^{F(x)} v^{r-1}(w-v)^{s-r-1}(1-w)^{n-s} \\ & &\displaystyle \quad \times dv \, dw . {} \end{array} \end{aligned} $$
(2.A1.37)

Similarly, when \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\) and \(x=y\), consider the arrangement

$$\displaystyle \begin{aligned} \begin{array}{rcl} & \left ( X_{[1]} , X_{[2]}, \, \cdots , X_{[r-1]}, X_{[r]}=X_{[r+1]}= \cdots =X_{[s]}, \right. \\ & \ \left. X_{[s+1]}, \, \cdots , X_{[n]} \right ). \end{array} \end{aligned} $$
(2.A1.38)

Assume i of the \((r-1)\) order statistics \(\left \{X_{[k]} \right \}_{k=1}^{r-1}\) on the left-hand side of \(X_{[r]}\) have the same value as \(X_{[r]}\), and j of the \((n-s)\) order statistics \(\left \{X_{[k]} \right \}_{k=s+1}^{n}\) on the right-hand side of \(X_{[s]}\) have the same value as \(X_{[s]}\). Then, we have

(2.A1.39)

Here, noting that \(\frac {(s-r-1)! \ i! \ j!}{(s-r+i+j+1)!} = \frac {\varGamma (s-r)\varGamma (i+1)}{\varGamma (s-r+i+1)} \frac {\varGamma (s-r+i+1) \varGamma (j+1)}{\varGamma (s-r+i+j+2)} = \tilde {B}(i+1, s -r) \tilde {B}(j+1, s-r+i+1)\), i.e.,

(2.A1.40)

we can rewrite (2.A1.39) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{r, s}(x,y) & =&\displaystyle \tilde{C}_{n,s,r} \int_{0}^{1} \int_{0}^{1} \sum_{i=0}^{r-1}\sum_{j=0}^{n-s} \, {}_{r-1}\mbox{C}_{i} \, {}_{n-s}\mbox{C}_{j} F^{r-1-i}(x-1) \\ & &\displaystyle \times p^{s-r+i+j+1}(x)\{ 1-F(x) \}^{n-s-j} \alpha^i (1-\alpha)^{s-r-1} \\ & &\displaystyle \times \beta^{s-r+i} (1-\beta)^j d \alpha d \beta . {} \end{array} \end{aligned} $$
(2.A1.41)

Evaluating the two summations in (2.A1.41), we get

(2.A1.42)

In the inner integral on the right-hand side of (2.A1.42), letting \(v=F(x-1)+\alpha \beta p(x)\), we have \(\beta p(x) d \alpha = dv\), \((1-\alpha )\beta p(x) = \beta p(x) - v + F(x-1)\), \(v=F(x-1)\) when \(\alpha =0\), and \(v=F(x-1)+ \beta p(x)\) when \(\alpha =1\). Thus, we have

(2.A1.43)

Subsequently, letting \(w = F(x-1)+ \beta p(x)\), we have \(p(x) d\beta = dw\), \( 1-F(x) + (1-\beta )p(x) = 1-w\), \(w = F(x-1)\) when \(\beta =0\), and \(w = F(x-1)+ p(x) = F(x)\) when \(\beta =1\). Thus, (2.A1.43) can be written as

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{r, s}(x,y) & =&\displaystyle \tilde{C}_{n,s,r} \int_{F(x-1)}^{F(x)}\int_{F(x-1)}^{w} v^{r-1}(w-v)^{s-r-1}(1-w)^{n-s} \\ & &\displaystyle \qquad \qquad \times dv dw {} \end{array} \end{aligned} $$
(2.A1.44)

when \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\) and \(x =y\).

Taking into account that \(\{x < y\}\) and \(\{x \le y-1\}\) are the same in the discrete space and recollecting the region \(D_0\) of integration defined in (2.4.24), the joint pmf’s (2.A1.37) and (2.A1.44) can be combined into (2.4.23); see Note 19. \(\spadesuit \)

2.1.5 Proof of Theorem 2.6.3

Proof

  (1)

    From the definition (2.6.2) of \(\kappa _{r,s}\), we have \(\kappa _{r,s}^{s-r} = \frac {(n-s)!(s-1)!}{(n-r)!(r-1)!} = \frac {(s-1)}{(n-r)} \frac {(s-2) \cdots r }{(n-r-1) \cdots (n-s+1)}\), the numerator and denominator of which are both products of \((s-r)\) consecutive natural numbers. By comparing the numbers \((s-k)\) and \((n-r-k+1)\) at the same location in the numerator and denominator, respectively, for \(k \in \mathbb {J}_{1,s-r}\), we easily get \(\kappa _{r,s} \lesseqqgtr 1\) when \(s-k \lesseqqgtr n-r-k+1\) or, equivalently, when \(r+s \lesseqqgtr n+1\). Next, noting that \(\kappa _{r,s} \gtreqqless 1\) for \(r+s \gtreqqless n+1\) and that \(\frac {\kappa _{r,s}}{1+\kappa _{r,s}}\) is an increasing function of \(\kappa _{r,s} \in [0, \infty )\), we have \(\frac {\kappa _{r,s}}{1+\kappa _{r,s}} \gtreqqless \frac {1}{2} \) when \(r+s \gtreqqless n+1\).

  (2)

    We have \(\kappa _{r,s} \kappa _{n-s+1,n-r+1}= \left \{ \frac {(n-s)!(s-1)!}{(n-r)!(r-1)!}\right \}^{\frac {1}{s-r}} \left \{ \frac {(r-1)!(n-r)!}{(s-1)!(n-s)!}\right \}^{\frac {1}{s-r}} =1\) from the definition (2.6.2) of \(\kappa _{r,s}\).

  (3)

    We have \(\left ( \frac {\kappa _{r,s+1}}{\kappa _{r,s}} \right )^{(s-r+1)(s-r)} = \left ( \frac { \kappa _{r,s+1}^{s-r+1} }{ \kappa _{r,s}^{s-r} } \frac {1}{ \kappa _{r,s}} \right )^{s-r} = \left ( \frac { s }{ n-s } \right )^{s-r} \frac {1}{ \kappa _{r,s}^{s-r}} = \left ( \frac { s }{ n-s } \right )^{s-r} \frac {(n-r)(n-r-1) \cdots (n-s+1)}{(s-1)(s-2) \cdots r } =\frac { s(n-r)}{ (n-s)(s-1) } \frac { s(n-r-1)}{ (n-s)(s-2) } \cdots \frac { s(n-s+1)}{ (n-s)r }\), the right-hand side of which is the product of ratios all in the form of \(\frac {ab}{cd}\) with \(a>d\) and \(b>c\), and is thus larger than 1.

  (4)

    Similarly to (3) above, we have \(\left ( \frac {\kappa _{r+1,s}}{\kappa _{r,s}} \right )^{(s-r-1)(s-r)} = \left ( \frac { \kappa _{r+1,s}^{s-r-1} }{ \kappa _{r,s}^{s-r} }\right )^{s-r} \kappa _{r,s}^{s-r} = \frac { (n-r)(s-1)}{ r(n-r) } \frac { (n-r)(s-2)}{ r(n-r-1) } \cdots \frac { (n-r)r}{ r(n-s+1) } \): all the terms in the product on the right-hand side are of the form \(\frac {ab}{cd}\), where \(\{a \ge d, b> c\}\) or \(\{a>d, b \ge c\}\) when \(s > r+1\). Thus, the product is larger than 1.

  (5)

    From \(\kappa _{r+1,s-1}^{s-r-2} = \frac { (s-2)!(n-s+1)! }{ (n-r-1)!r !} \) and \(\kappa _{r,s}^{s-r} = \frac { (s-1)!(n-s)! }{ (n-r)!(r-1)!} \), we have \(\frac {\kappa _{r+1,s-1}^{s-r-2}}{\kappa _{r,s}^{s-r}} = \frac { (n-s+1)(n-r) }{ r(s-1) }\). Thus, \(\left ( \frac {\kappa _{r+1,s-1}}{\kappa _{r,s}} \right )^{(s-r-2)(s-r)} = \left ( \frac {\kappa _{r+1,s-1}^{s-r-2}}{\kappa _{r,s}^{s-r}} \right )^{s-r} \left ( \kappa _{r,s}^{s-r} \right )^2 = \frac { (n-s+1)^{s-r} (n-r)^{s-r}}{ r^{s-r} (s-1)^{s-r} } \left \{\frac { (n-s)! (s-1)! }{ (n-r)!(r-1) !} \right \}^2 =\frac { (n-s+1)^{s-r} (n-r)^{s-r}}{ r^{s-r} (s-1)^{s-r} } \left \{\frac { (s-1) r }{ (n-r) (n-s+1)} \right \}^2 \left \{ \frac { (s-2) (s-3) \cdots (r+1) }{ (n-r-1)(n-r-2) \cdots (n-s+2)} \right \}^2 = \left \{ \frac { (n-s+1)^{s-r-2}}{ r^{s-r-2} } \frac { (s-2)(s-3) \cdots (r+1) }{ (n-r-1) (n-r-2) \cdots (n-s+2)} \right \} \left \{ \frac { (n-r)^{s-r-2}}{ (s-1)^{s-r-2} } \frac { (r+1)(r+2) \cdots (s-2) }{ (n-s+2) (n-s+3) \cdots (n-r-1)} \right \} =\left \{ \prod \limits _{k=0}^{s-r-3} \frac { (n-s+1)(s-2-k)}{r(n-r-1-k)} \right \} \bigg \{ \prod \limits _{k=0}^{s-r-3} \frac { (n-r)(r+1+k)}{(s-1)(n-s+2+k)}\bigg \} \), i.e.,

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \left ( \frac {\kappa _{r+1,s-1}}{\kappa _{r,s}} \right )^{(s-r-2)(s-r)} \ = \ \prod_{k=0}^{s-r-3} \frac { (n-s+1)(s-2-k)(n-r)(r+1+k)}{r(n-r-1-k)(s-1)(n-s+2+k)} . {} \end{array} \end{aligned} $$
    (2.A1.45)

    Now consider the difference \( \varDelta = (n-s+1)(s-2-k)(n-r)(r+1+k) - r(n-r-1-k)(s-1)(n-s+2+k) = cd(b-m)(a+m) -ab(c-m)(d+m)\) between the numerator and denominator of a term on the right-hand side of (2.A1.45), where we let \(a=n-r-1-k\), \(b=n-s+2+k\), \(c= r+1+k\), \(d=s-2-k\), and \(m=k+1\) for simplicity and convenience. Then, we have \(\varDelta = cd\left (ab-am+bm-m^2 \right ) - ab\left (cd-dm+cm-m^2 \right )= -acdm+bcdm-cdm^2 +abdm-abcm+abm^2\), i.e.,

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \varDelta & =&\displaystyle m \{ -ac (b+d) + bd (a+c) +abm-cdm \}. \end{array} \end{aligned} $$
    (2.A1.46)

    Noting that \(a+c=n\), \(b+d=n\), and \(ab=n^2-cn-dn+cd\), we get \(\varDelta = m \left ( bdn -acn+ mn^2 -cmn-dmn \right ) = mn ( bd -ac+ mn -cm-dm ) = mn \left ( dn - d^2 - cn + c^2 + mn -cm-dm \right ) = mn \big \{ (d-c+m) n - d^2 + c^2 - (c+d)m \big \} = mn \{ (d-c+m) n + (c+d)(c-d-m) \} \), i.e.,

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \varDelta & =&\displaystyle mn (d-c+m)( n - c-d). \end{array} \end{aligned} $$
    (2.A1.47)

    Here, \(d-c+m = s-2-k-r-1-k+k+1=s-r-2-k > 0\) for \(k \in \mathbb {J}_{0,s-r-3}\) and \(3 \le r+2 < s \le n\). In addition, we have \(n-c-d = n- r-1-k - s+2+k=n-r-s +1 \lesseqqgtr 0\) when \( n+1\lesseqqgtr s+r\). Therefore, \(\varDelta \lesseqqgtr 0\) for all values of \(k \in \mathbb {J}_{0,s-r-3}\) when \(n+1 \lesseqqgtr s+r\), which completes the proof.

  (6)

    We have \(\kappa _{r+1,s-a-1} > \kappa _{r,s}\) because \(\kappa _{r,s}>1\), \(n-r > s-1\), \(n-s+a+1 > s-2\), \(\cdots \), \(n-s+a+1 > n-s+2\), and \(n-s+1 > r\) in \(\left ( \frac {\kappa _{r+1,s-a-1}} {\kappa _{r,s}} \right )^{s-r-a-2} =\frac { (n-r) \times (n-s+a+1)(n-s+a) \cdots (n-s+1) } { (s-1) (s-2) \cdots (s-a-1) \times r} \kappa _{r,s}^{a+2}\).

  (7)

    We have \(\kappa _{m,s-m}^{s-2m} = \frac {(s-m-1)(s-m-2) \cdots m }{(n-m)(n-m-1) \cdots (n-s+m+1)}\), i.e.,

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \kappa_{m,s-m}^{s-2m} &=& \left\{ \begin{array}{ll} \frac{s-1}{2n-s+1}, & s\mbox{ is an odd number},\\ \frac{s(s-2)}{(2n-s+2)(2n-s)} , & s\mbox{ is an even number} \end{array} \right. \end{array} \end{aligned} $$
    (2.A1.48)

    because \(s-2m=1\) from \(m=\frac {s-1}{2}\) when s is odd and \(s-2m=2\) from \(m=\frac {s}{2}-1\) when s is even. \(\spadesuit \)
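
Properties (1)–(4) are easy to confirm by brute force from the definition (2.6.2). The sketch below (ours, for one moderate value of n) does exactly that.

```python
from math import factorial

def kappa(r, s, n):
    """kappa_{r,s} of definition (2.6.2)."""
    return (factorial(n - s) * factorial(s - 1)
            / (factorial(n - r) * factorial(r - 1))) ** (1.0 / (s - r))

n = 10
for r in range(1, n):
    for s in range(r + 1, n + 1):
        k = kappa(r, s, n)
        # (1): kappa <, =, > 1 exactly when r + s <, =, > n + 1
        assert (k < 1) == (r + s < n + 1) and (abs(k - 1) < 1e-12) == (r + s == n + 1)
        # (2): kappa_{r,s} * kappa_{n-s+1,n-r+1} = 1
        assert abs(k * kappa(n - s + 1, n - r + 1, n) - 1) < 1e-9
        # (3), (4): kappa increases in s, and in r when s > r + 1
        if s < n:
            assert kappa(r, s + 1, n) > k
        if r + 1 < s:
            assert kappa(r + 1, s, n) > k
```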

2.1.6 Proof of Theorem 2.6.4

Proof

First, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \kappa_{r_i,s_i} \ = \ \left\{ \frac {\left (s_i-1 \right )!} {\left ( r_i-1 \right )!} \frac { 1 } {\left (n-r_i \right )\left(n-r_i-1 \right ) \cdots \left(n-s_i+1 \right )} \right\}^{\frac{1}{s_i-r_i}} \qquad \end{array} \end{aligned} $$
(2.A1.49)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} \kappa_{r_j,s_j} \ = \ \left\{ \frac {\left(s_j-1 \right )!} {\left ( r_j-1 \right )!} \frac { 1 } { \left(n-r_j \right )\left(n-r_j-1 \right ) \cdots \left(n-s_j+1 \right ) } \right\}^{\frac{1}{s_j-r_j}} \qquad \qquad \quad \end{array} \end{aligned} $$
(2.A1.50)

from the definition (2.6.2) of \(\kappa _{r,s}\). Then, recollecting that \(s_j < s_i -r_j+r_i < s_i\), \(r_i < r_j < s_j < s_i\), \(s_i -r_i > s_j-r_j\), \(n-r_i >n-r_j >n-s_j >n- s_i\), and \(\left (s_i-r_i \right ) -\left ( s_j-r_j \right ) = s_i-s_j+r_j-r_i >0\) when \(r_j > r_i\) and \(r_j + s_j < r_i+s_i\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left\{ \frac { \kappa_{r_i,s_i} } { \kappa_{r_j,s_j} } \right\}^{\left(s_i-r_i \right ) \left(s_j-r_j \right )} \ = \ M_{r_i, s_i, r_j, s_j} \frac {N_{r_i, s_i, r_j, s_j}(n)} {D_{r_i, s_i, r_j, s_j}(n)}, \end{array} \end{aligned} $$
(2.A1.51)

where \(M_{a, b, c, d} = \left \{ \frac { (b-1)! } { (a-1)! } \right \}^{d-c} \left \{ \frac { (c-1)! } { (d-1)! } \right \}^{b-a}\), \(N_{a, b, c, d}(n) = \{ (n-c)(n-c-1) \cdots (n-d+1) \}^{b-d+c-a}\), and \(D_{a, b, c, d}(n) = \{ (n-a)(n-a-1) \cdots (n-c+1) \}^{d-c}\{ (n-d)(n-d-1) \cdots (n-b+1) \}^{d-c}\).

Both \(N_{r_i, s_i, r_j, s_j}(n)\) and \(D_{r_i, s_i, r_j, s_j}(n)\) are \(\left (s_i-s_j+r_j-r_i \right )\left (s_j-r_j \right )\)-th order polynomials of n and the largest zero \(\left ( s_j-1 \right )\) of \(N_{r_i, s_i, r_j, s_j}(n)\) is smaller than the largest zero \(\left (s_i-1 \right )\) of \(D_{r_i, s_i, r_j, s_j}(n)\). Thus, \(\frac {N_{r_i, s_i, r_j, s_j}(n)}{D_{r_i, s_i, r_j, s_j}(n)}\) is a decreasing function no smaller than 1 for \(n > s_i-1\) and \(\lim \limits _{n \to \infty } \frac {N_{r_i, s_i, r_j, s_j}(n)}{D_{r_i, s_i, r_j, s_j}(n)} = 1\). Therefore, \(\kappa _{r_i,s_i} > \kappa _{r_j,s_j}\) if \(M_{r_i, s_i, r_j, s_j}>1\). On the other hand, if \(M_{r_i, s_i, r_j, s_j}<1\), then \(\kappa _{r_i,s_i} > \kappa _{r_j,s_j}\) when n is small and \(\kappa _{r_i,s_i} < \kappa _{r_j,s_j}\) when n is large. Taking the natural logarithm, this result can be expressed as (2.6.23). \(\spadesuit \)

Note

Assume we arrange \(\left \{ \kappa _{r,n+1} \right \}_{r=1}^{n}\) at the n-th row for \(n \in \mathbb {J}_{1,\infty }\) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle \kappa_{1,2} \\ & &\displaystyle \kappa_{1,3} \quad \kappa_{2,3} \\ & &\displaystyle \kappa_{1,4} \quad \kappa_{2,4} \quad \kappa_{3,4} \\ & &\displaystyle \kappa_{1,5} \quad \kappa_{2,5} \quad \kappa_{3,5} \quad \kappa_{4,5} \\ & &\displaystyle \kappa_{1,6} \quad \kappa_{2,6} \quad \kappa_{3,6} \quad \kappa_{4,6} \quad \kappa_{5,6} \\ & &\displaystyle \kappa_{1,7} \quad \kappa_{2,7} \quad \kappa_{3,7} \quad \kappa_{4,7} \quad \kappa_{5,7} \quad \kappa_{6,7} \\ & &\displaystyle \ \ \vdots \qquad \ \ \vdots \qquad \ \ \vdots \qquad \ \ \vdots \qquad \ \ \vdots \qquad \ \ \vdots \quad \ \ddots \end{array} \end{aligned} $$

to compare \(\kappa _{r_i,s_i}\) and \(\kappa _{r_j,s_j}\) in a visual manner. Let us call the straight line connecting the \(\kappa _{r,s}\)’s for which \(r+s=t\) the ‘diagonal line t’: for instance, in the arrangement shown above, the diagonal line 6 is the straight line connecting \(\kappa _{1,5}\) and \(\kappa _{2,4}\), and the diagonal line 7 is the straight line connecting \(\kappa _{1,6}\), \(\kappa _{2,5}\), and \(\kappa _{3,4}\). Then, considering the entries on or above the diagonal line \((n+1)\), we have the following observations from Theorem 2.6.3.

  (A)

    The parameter \(\kappa _{r_i,s_i}\) is no smaller than any \(\kappa _{r_j,s_j}\) located in column \(r_i\) or to its left and, at the same time, on or above the diagonal line \(\left ( r_i+s_i \right )\), that is, when \(\left \{ r_j \le r_i, r_j + s_j \le r_i + s_i \right \}\).

  (B)

    The parameter \(\kappa _{r_i,s_i}\) is no larger than any \(\kappa _{r_j,s_j}\) located in column \(r_i\) or to its right and, at the same time, on or below the diagonal line \(\left (r_i+s_i \right )\), that is, when \(\left \{ r_j \ge r_i, r_j + s_j \ge r_i + s_i \right \}\). This result is basically the same as that in (A).

  (C)

    The relative magnitude depends on n in the remaining region. Here, the remaining region is the union of \(\left \{ r_j < r_i, r_j + s_j > r_i + s_i \right \}\), the region to the left of column \(r_i\) and below the diagonal line \(\left (r_i+s_i \right )\), and \(\left \{ r_j > r_i, r_j + s_j < r_i + s_i \right \}\), the region to the right of column \(r_i\) and above the diagonal line \(\left (r_i+s_i \right )\). Theorem 2.6.4 deals with these cases.

2.1.7 Evaluations in Example 2.4.4

As shown in (2.2.15), the marginal cdf of \(\boldsymbol {X}\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} F(x) \ = \ \left\{ \begin{array}{llll} 0, & x < 1; & \quad \frac{1}{2} , & 1 \le x < 2; \\ \frac{5}{6}, & 2 \le x < 3; & \quad 1, & x \ge 3 . \end{array} \right. {} \end{array} \end{aligned} $$
(2.A1.52)

Then, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{1,2}(x,y) &=& \left\{ \begin{array}{ll} 0, & x < 1 \mbox{ or } y < 1, \\ \left ( \frac{1}{2} \right )^3 -3 \left ( \frac{1}{2} \right )^2 - 3 \left ( \frac{1}{2} \right )^3 + 6\left ( \frac{1}{2} \right )^2, & 1 \le x \le y <2, \\ 3 \left ( \frac{1}{2} \right )^2 -2 \left ( \frac{1}{2} \right )^3 , & 1 \le y \le x <2, \\ 3 \left ( \frac{1}{2} \right )^2 -2 \left ( \frac{1}{2} \right )^3 ,\\ \qquad 2 \le x < 3, \, 1 \le y <2, \\ 3 \left ( \frac{1}{2} \right )^2 -2 \left ( \frac{1}{2} \right )^3 , & x \ge 3, \, 1 \le y <2, \\ \left ( \frac{1}{2} \right )^3 -3 \left ( \frac{1}{2} \right )^2 - 3 \times \frac{1}{2} \left ( \frac{5}{6} \right )^2 \\ \quad + 6\times \frac{1}{2} \times \frac{5}{6},\\ \qquad 1 \le x < 2, \, 2 \le y <3, \\ \left ( \frac{5}{6} \right )^3 -3 \left ( \frac{5}{6} \right )^2 - 3 \left ( \frac{5}{6} \right )^3 + 6\left ( \frac{5}{6} \right )^2, & 2 \le x \le y <3, \\ 3 \left ( \frac{5}{6}\right )^2 -2 \left ( \frac{5}{6} \right )^3 , & 2 \le y \le x <3, \\ 3 \left ( \frac{5}{6}\right )^2 -2 \left ( \frac{5}{6} \right )^3 , & x \ge 3, \, 2 \le y <3, \\ \left ( \frac{1}{2} \right )^3 -3 \left ( \frac{1}{2} \right )^2 - 3 \times \frac{1}{2} + 6 \times \frac{1}{2} , & 1 \le x < 2, \, y \ge 3, \\ \left ( \frac{5}{6} \right )^3 -3 \left ( \frac{5}{6} \right )^2 - 3 \times \frac{5}{6} + 6 \times \frac{5}{6}, & 2 \le x < 3, \, y \ge 3, \\ 1, & x \ge 3, \, y \ge 3, \\ \end{array} \right. \end{array} \end{aligned} $$
(2.A1.53)

that is,

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{1,2}(x,y) &=& \left\{ \begin{array}{llll} 0, & x < 1 \mbox{ or } y < 1; & \quad \frac{1}{2} , & x \ge 1, \, 1 \le y <2; \\ \\ \frac{5}{6} , & 1 \le x < 2, \, 2 \le y <3; & \quad \frac{25}{27} , & x \ge 2, \, 2 \le y <3; \\ \\ \frac{7}{8} , & 1 \le x < 2, \, y \ge 3; & \quad \frac{215}{216} , & 2 \le x < 3, \, y \ge 3; \\ \\ 1, & x \ge 3, \, y \ge 3 \end{array} \right. {}\qquad \qquad \qquad \end{array} \end{aligned} $$
(2.A1.54)

using

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{1,2}(x,y) &=& \left\{ \begin{array}{ll} F^3 (x) -3 F^2(x)-3 F(x) F^2 (y) \\ \quad + 6 F(x) F(y) , & x \le y, \\ 3 F^2(y) - 2 F^3(y), & x \ge y, \\ \end{array} \right. {}\qquad \qquad \end{array} \end{aligned} $$
(2.A1.55)

shown in (2.E.13). We similarly get

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{1,3}(x,y) \ = \ \left\{ \begin{array}{llll} 0, & x < 1 \mbox{ or } y < 1; & \quad \frac{1}{8} , & x \ge 1, \, 1 \le y <2; \\ \\ \frac{13}{24} , & 1 \le x < 2, \, 2 \le y <3; & \quad \frac{125}{216} , & x \ge 2, \, 2 \le y <3; \\ \\ \frac{7}{8} , & 1 \le x < 2, \, y \ge 3; & \quad \frac{215}{216} , & 2 \le x < 3, \, y \ge 3; \\ \\ 1, & x \ge 3, \, y \ge 3 \end{array} \right. {}\qquad \qquad \end{array} \end{aligned} $$
(2.A1.56)

using

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{1,3}(x,y) &=& \left\{ \begin{array}{ll} F^3(x) - 3 F^2(x) F(y)+ 3 F(x) F^2 (y) , & x \le y, \\ F^3(y), & x \ge y \end{array} \right. \qquad {} \end{array} \end{aligned} $$
(2.A1.57)

shown in (2.E.14) and

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{2,3}(x,y) \ = \ \left\{ \begin{array}{llll} 0, & x < 1 \mbox{ or } y < 1; & \quad \frac{1}{8}, & x \ge 1, \, 1 \le y <2; \\ \\ \frac{3}{8} , & 1 \le x < 2, \, 2 \le y < 3; & \quad \frac{125}{216} , & x \ge 2, \, 2 \le y <3; \\ \\ \frac{1}{2} , & 1 \le x < 2, \, y \ge 3; & \quad \frac{25}{27} , & 2 \le x < 3, \, y \ge 3; \\ \\ 1, & x \ge 3, \, y \ge 3 \end{array} \right. {}\qquad \qquad \qquad \end{array} \end{aligned} $$
(2.A1.58)

using

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{2,3}(x,y) &=& \left\{ \begin{array}{ll} 3 F^2(x) F(y)-2F^3(x) , & x \le y, \\ F^3(y), & x \ge y \end{array} \right. {} \end{array} \end{aligned} $$
(2.A1.59)

shown in (2.E.15). Based on these results, we can obtain the marginal cdf’s of the order statistics. Specifically, we get the cdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[1]}}(x) \ = \ \left\{ \begin{array}{llll} 0, & x < 1; & \quad \frac{7}{8}, & 1 \le x < 2; \\ \frac{215}{216}, & 2 \le x < 3; & \quad 1, & x \ge 3 \end{array} \right. {} \end{array} \end{aligned} $$
(2.A1.60)

of the first order statistic based on (2.A1.54) or (2.A1.56), the cdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[2]}}(x) \ = \ \left\{ \begin{array}{llll} 0, & x < 1; & \quad \frac{1}{2} , & 1 \le x < 2; \\ \frac{25}{27}, & 2 \le x < 3; & \quad 1, & x \ge 3 \end{array} \right. {} \end{array} \end{aligned} $$
(2.A1.61)

of the second order statistic based on (2.A1.54) or (2.A1.58), and the cdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[3]}}(x) \ = \ \left\{ \begin{array}{llll} 0, & x < 1; & \quad \frac{1}{8}, & 1 \le x < 2; \\ \frac{125}{216}, & 2 \le x < 3; & \quad 1, & x \ge 3 \end{array} \right. {} \end{array} \end{aligned} $$
(2.A1.62)

of the third order statistic based on (2.A1.56) or (2.A1.58). Clearly, (2.A1.60), (2.A1.61), and (2.A1.62) are the same as (2.2.16), (2.2.17), and (2.2.18), respectively.

Next, based on (2.4.23), the joint pmf’s can be obtained as follows. First, from

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{1,2} (x,y) &=& \left\{ \begin{array}{ll} 6 \int_{F(y-1)}^{F(y)} \int_{F(x-1)}^{F(x)} (1-w) dv \, dw, & x < y, \\ 6 \int_{F(y-1)}^{F(y)} \int_{F(x-1)}^{w} (1-w) dv \, dw, & x = y, \\ 0, & x > y \end{array} \right. \\ &=& \left\{ \begin{array}{ll} 6 p(x) \int_{F(y-1)}^{F(y)} (1-w) dw, & x < y, \\ 6 \int_{F(y-1)}^{F(y)} \big [ -w^2 + \{ 1 + F(y-1) \} w \\ \qquad - F(y-1) \big] dw, & x = y,\\ 0, & x > y \end{array} \right. \\ &=& \left\{ \begin{array}{ll} 3 p(x)p(y)\left\{2-F(y)- F(y-1) \right \}, & x < y, \\ p^2(y)\left\{3-2F(y)- F(y-1) \right \}, & x = y, \\ 0, & x > y , \end{array} \right. \end{array} \end{aligned} $$
(2.A1.63)

we have \(p_{1,2}(1,1) = \frac {1}{4} ( 3-1 ) = \frac {1}{2} \), \(p_{1,2}(1,2) = 3 \times \frac {1}{2} \times \frac {1}{3} \left ( 2- \frac {5}{6}- \frac {1}{2} \right ) = \frac {1}{3}\), \(p_{1,2}(1,3) = 3 \times \frac {1}{2} \times \frac {1}{6} \left ( 2- 1 - \frac {5}{6} \right ) = \frac {1}{24}\), \(p_{1,2}(2,2) = \frac {1}{9} \left ( 3- 2 \times \frac {5}{6}- \frac {1}{2} \right ) = \frac {5}{54}\), \(p_{1,2}(2,3) = 3 \times \frac {1}{3} \times \frac {1}{6} \times \frac {1}{6} = \frac {1}{36}\), and \(p_{1,2}(3,3) = \frac {1}{36} \left ( 3- 2 - \frac {5}{6} \right ) = \frac {1}{216}\). Similarly, we get \(p_{1,3}(1,1) = \frac {1}{8}\), \(p_{1,3}(1,2) = \frac {5}{12}\), \(p_{1,3}(1,3) = \frac {1}{3}\), \(p_{1,3}(2,2) = \frac {1}{27}\), \(p_{1,3}(2,3) = \frac {1}{12}\), and \(p_{1,3}(3,3) = \frac {1}{216}\) from

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{1,3} (x,y) \ = \ \left\{ \begin{array}{ll} 6 \int_{F(y-1)}^{F(y)} \int_{F(x-1)}^{F(x)} (w-v) dv \, dw, & x < y, \\ 6 \int_{F(y-1)}^{F(y)} \int_{F(x-1)}^{w} (w-v) dv \, dw, & x = y,\\ 0, & x > y . \end{array} \right. \end{array} \end{aligned} $$
(2.A1.64)

From

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{2,3} (x,y) \ = \ \left\{ \begin{array}{ll} 6 \int_{F(y-1)}^{F(y)} \int_{F(x-1)}^{F(x)} v dv dw, & x < y, \\ 6 \int_{F(y-1)}^{F(y)} \int_{F(x-1)}^{w} v dv dw, & x = y, \\ 0, & x > y , \end{array} \right. {} \end{array} \end{aligned} $$
(2.A1.65)

we also get \(p_{2,3}(1,1) = \frac {1}{8}\), \(p_{2,3}(1,2) = \frac {1}{4}\), \(p_{2,3}(1,3) = \frac {1}{8}\), \(p_{2,3}(2,2) = \frac {11}{54}\), \(p_{2,3}(2,3) = \frac {2}{9}\), and \(p_{2,3}(3,3) = \frac {2}{27}\).
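
All of these values can be confirmed by exhaustive enumeration: for \(n=3\) there are only \(3^3 = 27\) outcomes. The sketch below (ours, using exact rational arithmetic) tallies the joint pmf of \(\left( X_{[r]}, X_{[s]} \right)\) from the marginal pmf implied by (2.A1.52) and reproduces the values above.

```python
from fractions import Fraction
from itertools import product

p = {1: Fraction(1, 2), 2: Fraction(1, 3), 3: Fraction(1, 6)}  # pmf behind (2.A1.52)

def joint_pmf(r, s, n=3):
    """Exact joint pmf of (X_[r], X_[s]) by enumerating all outcomes."""
    out = {}
    for xs in product(p, repeat=n):
        srt = sorted(xs)
        key = (srt[r - 1], srt[s - 1])
        prob = Fraction(1)
        for v in xs:
            prob *= p[v]
        out[key] = out.get(key, Fraction(0)) + prob
    return out

assert joint_pmf(1, 2)[(1, 2)] == Fraction(1, 3)
assert joint_pmf(1, 2)[(2, 2)] == Fraction(5, 54)
assert joint_pmf(1, 3)[(1, 2)] == Fraction(5, 12)
assert joint_pmf(2, 3)[(2, 2)] == Fraction(11, 54)
assert joint_pmf(2, 3)[(3, 3)] == Fraction(2, 27)
```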

Appendix 2: Further Topics in Order Statistics

2.1.1 Finding Distributions from Order Statistics

We have discussed distributions of order statistics of i.i.d. random vectors with known marginal distributions in Sects. 2.2–2.4. Let us now consider briefly the converse problem of finding the marginal distribution of an i.i.d. random vector when the distributions of the order statistics of the random vector are given.

When all the marginal cdf’s \(\left \{ F_{X_{[r]}}\right \} _{r=1}^{n}\) of the order statistics of an i.i.d. random vector \(\boldsymbol {X}= \left ( X_1 , X_2 , \, \cdots , X_n \right )\) are available, we can obtain the marginal cdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} F(x) \ = \ \frac{1}{n} \sum_{r=1}^{n} F_{X_{[r]}}(x) {} \end{array} \end{aligned} $$
(2.A2.1)

of \(\boldsymbol {X}\) based on the equality (2.2.12). Subsequently, the marginal pdf or pmf of \(\boldsymbol {X}\) can be obtained. The marginal pdf or pmf can alternatively be obtained directly. Specifically, we can obtain the pdf directly as

$$\displaystyle \begin{aligned} \begin{array}{rcl} f(x) \ = \ \frac{1}{n} \sum_{r=1}^{n} f_{X_{[r]}}(x) {} \end{array} \end{aligned} $$
(2.A2.2)

from the equality (2.3.16) when all the marginal pdf’s \(\left \{ f_{X_{[r]}}\right \} _{r=1}^{n}\) of the order statistics are given, and the marginal pmf directly as

$$\displaystyle \begin{aligned} \begin{array}{rcl} p(x) \ = \ \frac{1}{n} \sum_{r=1}^{n} p_{X_{[r]}}(x) {} \end{array} \end{aligned} $$
(2.A2.3)

from the equality (2.4.10) when all the marginal pmf’s \(\left \{ p_{X_{[r]}}\right \} _{r=1}^{n}\) of the order statistics are known.

When the joint distributions for all the pairs of two order statistics of \(\boldsymbol {X}\) are available, we may first obtain the marginal distributions of all of the order statistics. Then, we can obtain the marginal cdf F, marginal pdf f, or marginal pmf p of \(\boldsymbol {X}\).

Example 2.A2.1

Assume that the marginal pdf of the r-th order statistic of an i.i.d. random vector \(\boldsymbol {X}= \left ( X_1, X_2, X_3, X_4 \right )\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{X_{[r]}}(x) \ = \ \frac{24\alpha }{(r-1)!(4-r)!}\, e^{-\alpha (5-r)x} \left ( 1-e^{-\alpha x} \right )^{r-1} u(x) {} \end{array} \end{aligned} $$
(2.A2.4)

for \(r \in \mathbb {J}_{1,4}\). Obtain the marginal pdf of \(\boldsymbol {X}\).

Solution

From (2.A2.2), the marginal pdf of \(\boldsymbol {X}\) is \(f(x) = \alpha e^{-\alpha x} \Big \{ e^{-3\alpha x} + 3 e^{-2\alpha x}\left ( 1-e^{-\alpha x} \right ) + 3 e^{-\alpha x} \left ( 1-e^{-\alpha x} \right )^{2} + \left ( 1-e^{-\alpha x} \right )^{3} \Big \} u(x) = \alpha e^{-\alpha x}u(x)\). \(\diamondsuit \)
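
The cancellation in this solution is a one-line symbolic computation. The sketch below (ours; the common factor \(u(x)\) is dropped since all terms share it) confirms that the average of the four pdf's in (2.A2.4) collapses to the exponential pdf.

```python
import sympy as sp
from math import factorial

x, a = sp.symbols('x alpha', positive=True)
f_r = [sp.Rational(24, factorial(r - 1) * factorial(4 - r)) * a
       * sp.exp(-a * (5 - r) * x) * (1 - sp.exp(-a * x)) ** (r - 1)
       for r in range(1, 5)]                 # the four pdf's (2.A2.4), without u(x)

f = sp.simplify(sp.expand(sum(f_r) / 4))     # equality (2.A2.2) with n = 4
print(f)                                     # alpha*exp(-alpha*x)
```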

When the cdf \(F_{X_{[r]}} (x)\) of the r-th order statistic is known, we can in theory obtain the marginal cdf \(F(x)\) of the i.i.d. random vector \(\boldsymbol {X}\) by solving the n-th degree equation

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{j=r}^n \, {}_n\mbox{C}_j F^j(x) \{ 1-F(x) \}^{n-j} \ = \ F_{X_{[r]}} (x) {} \end{array} \end{aligned} $$
(2.A2.5)

for \(F(x)\). For example, if \(F_{X_{[n]}} (x)\) or \(F_{X_{[1]}} (x)\) is known, the solution is

$$\displaystyle \begin{aligned} \begin{array}{rcl} F (x) \ = \ \left \{F_{X_{[n]}} (x) \right \}^{\frac{1}{n}} {} \end{array} \end{aligned} $$
(2.A2.6)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} F (x) \ = \ 1- \left \{1-F_{X_{[1]}} (x) \right \}^{\frac{1}{n}} {} \end{array} \end{aligned} $$
(2.A2.7)

from \(F_{X_{[n]}} (x) = F^n(x)\) and \(F_{X_{[1]}} (x) = 1- \{ 1- F(x)\}^n\), respectively. However, solving (2.A2.5) for \(F(x)\) is generally not feasible in closed form.

Example 2.A2.2

Assume the cdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[4]}} (x) \ = \ \left\{ \begin{array}{llll} 0, & x < 1; & \quad \frac{1}{81}, & 1 \le x < 2; \\ \frac{1}{16}, & 2 \le x < 3;& \quad 1, & x \ge 3 \end{array} \right. {} \end{array} \end{aligned} $$
(2.A2.8)

of the largest order statistic of a four-dimensional discrete i.i.d. random vector. Obtain the cdf \(F_{X_{[1]}}\) of the smallest order statistic.

Solution

From (2.A2.6) and (2.A2.8), we get \(F(1) = \left ( \frac {1}{81} \right )^{\frac {1}{4}}= \frac {1}{3}\), \(F(2) = \left ( \frac {1}{16} \right )^{\frac {1}{4}}= \frac {1}{2} \), and \(F(3) = 1^{\frac {1}{4}} = 1\). Based on these values, we can obtain the cdf’s \(F_{X_{[1]}}\), \(F_{X_{[2]}}\), and \(F_{X_{[3]}}\); and the pmf’s \(p_{X_{[1]}}\), \(p_{X_{[2]}}\), \(p_{X_{[3]}}\), and \(p_{X_{[4]}}\) of the order statistics. Specifically, we have the cdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[1]}} (x) \ = \ \left\{ \begin{array}{llll} 0, & x < 1; & \quad \frac{65}{81}, & 1 \le x < 2; \\ \frac{15}{16}, & 2 \le x < 3; & \quad 1, & x \ge 3 \end{array} \right. \end{array} \end{aligned} $$
(2.A2.9)

of the smallest order statistic from \(F_{X_{[1]}}(1) = 1- \left ( 1- \frac {1}{3} \right )^4= \frac {65}{81}\), \(F_{X_{[1]}}(2) = 1- \left ( 1- \frac {1}{2} \right )^4=\frac {15}{16}\), and \(F_{X_{[1]}}(3) = 1-( 1- 1 )^4=1\). Figure 2.20 shows the cdf’s \(F_{X_{[4]}}(x)\) and \(F_{X_{[1]}} (x)\). \(\diamondsuit \)

Fig. 2.20 The cdf \(F_{X_{[1]}} (x)\) of the smallest order statistic when the cdf \(F_{X_{[4]}} (x)\) of the largest order statistic is given for a four-dimensional discrete i.i.d. random vector
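
The inversion in this example is mechanical enough to script. The following sketch (ours; `nth_root` is an ad hoc helper for exact rational roots) recovers \(F\) from (2.A2.8) via (2.A2.6) and then produces (2.A2.9).

```python
from fractions import Fraction

F4 = {1: Fraction(1, 81), 2: Fraction(1, 16), 3: Fraction(1)}  # cdf (2.A2.8) at its jumps
n = 4

def nth_root(q, n):
    """Exact n-th root of a Fraction whose numerator and denominator are n-th powers."""
    num, den = round(q.numerator ** (1 / n)), round(q.denominator ** (1 / n))
    assert num ** n == q.numerator and den ** n == q.denominator
    return Fraction(num, den)

F = {x: nth_root(v, n) for x, v in F4.items()}        # (2.A2.6): F = F_{X_[n]}^{1/n}
F1 = {x: 1 - (1 - Fx) ** n for x, Fx in F.items()}    # F_{X_[1]} = 1 - (1 - F)^n
print(F)    # F(1) = 1/3, F(2) = 1/2, F(3) = 1
print(F1)   # F_{X_[1]}: 65/81, 15/16, 1, as in (2.A2.9)
```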

More generally, assume an i.i.d. random vector \(\boldsymbol {X}\) of dimension n. Then, there exist \(\,{ }_{n}\mbox{C}_m \) joint distributions of m-dimensional order statistic vectors. Now, recollecting the discussion in section “Selecting Elements with No Element Excluded” in Appendix 3, Chap. 1, if we use \(\left \lceil \frac {n}{m}\right \rceil \) appropriately selected, or \(\left ( 1 + \,{ }_{n-1}\mbox{C}_m \right ) \) arbitrarily selected, joint distributions of m-dimensional order statistic vectors, all the marginal distributions of the order statistics \(\left \{X_{[r]}\right \}_{r=1}^{n}\) can be obtained.

Example 2.A2.3

Assume a five-dimensional i.i.d. random vector \(\boldsymbol {X}\). Then, from \(\left \lceil \frac {5}{3}\right \rceil = 2\) appropriately selected joint distributions of three-dimensional order statistic vectors, for example from the joint distribution of \(\left ( X_{[1]}, X_{[2]}, X_{[4]}\right )\) and that of \(\left ( X_{[2]}, X_{[3]}, X_{[5]}\right )\), we can obtain all the marginal distributions of \(\left \{ X_{[r]} \right \}_{r=1}^{5}\). The marginal distribution of \(\boldsymbol {X}\) can then subsequently be obtained by (2.A2.1), (2.A2.2), or (2.A2.3).

Alternatively, among all the \({ }_{5}\mbox{C}_3=10\) possible joint distributions of three-dimensional order statistic vectors of \(\boldsymbol {X}\), if \(1 + \,{ }_{4}\mbox{C}_3 = 5\) arbitrarily selected joint distributions are available, we can obtain all the marginal distributions of \(\left \{ X_{[r]} \right \}_{r=1}^{5}\). The marginal distribution of \(\boldsymbol {X}\) can then subsequently be obtained by (2.A2.1), (2.A2.2), or (2.A2.3). \(\diamondsuit \)

2.1.2 Random Vectors of Distinct Distributions with the Same Joint Distributions of Order Statistic Vectors

As observed in Example 2.1.3 and Exercise 2.47, even when the joint pdf of a random vector \(\boldsymbol {X}\) is different from that of another random vector \(\boldsymbol {Y}\), the joint pdf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}\) of \(\boldsymbol {X}\) may sometimes be the same as that of the order statistic vector \(\boldsymbol {Y_{[\cdot ]}}\) of \(\boldsymbol {Y}\).

Let us now discuss this issue briefly when the support of the joint pdf \(f_{\boldsymbol {X}} (x,y)\) is \(\left \{(x,y): \, 0 \le x \le 1, 0 \le y \le 1 \right \}\) for two-dimensional random vectors.

Theorem 2.A2.1

Assume non-negative constants \(\left \{ a_j \right \}_{j=0}^{m}\), at least one of which is positive. Let

$$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{a} \ = \ \sum_{j=0}^{m} \frac {a_j}{(j+1)(m-j+1)}. \end{array} \end{aligned} $$
(2.A2.10)

When the joint pdf of a random vector \(\boldsymbol {X}=\left (X_1, X_2\right )\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X}}( x, y ) \ = \ u(x)u(1-x)u(y)u(1-y) \, \frac{1}{\tilde{a}} \sum_{j=0}^{m} a_{j} x^j y^{m-j} , \end{array} \end{aligned} $$
(2.A2.11)

we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X_{[\cdot]}}}( x, y ) \ = \ u(x)u(y-x)u(1-y) \, \frac{1}{\tilde{a}}\sum_{j=0}^{m} \left(a_{j} + a_{m-j} \right)x^j y^{m-j} \qquad {} \end{array} \end{aligned} $$
(2.A2.12)

as the joint pdf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}= \left ( X_{[1]} , X_{[2]} \right )\) of \(\boldsymbol {X}\).

Proof

We get \(f_{\boldsymbol {X_{[\cdot ]}}}( x, y ) = u(x)u(y-x)u(1-y) \frac {1}{\tilde {a}} \sum \limits _{j=0}^{m} \left ( a_j x^j y^{m-j} + a_j y^j x^{m-j} \right ) = u(x)u(y-x)u(1-y) \frac {1}{\tilde {a}} \sum \limits _{j=0}^{m} \left ( a_j + a_{m-j} \right ) x^j y^{m-j}\) from (2.1.9). \(\spadesuit \)

Note that \(\sum \limits _{j=0}^{m} \frac { a_{j} + a_{m-j} } { j+1 } = \sum \limits _{j=0}^{m} \frac { a_{j} } { j+1 } + \sum \limits _{t=0}^{m} \frac { a_{t} } { m-t+1 }= \sum \limits _{j=0}^{m} \frac { ( m+2 ) a_{j} } { (j+1)(m- j+1) } = ( m+2 ) \tilde {a}\).

Example 2.A2.4

Let us consider the case \(m=1\) with \(\tilde {a} = \frac {a_0}{2} + \frac {a_1}{2}\), where \(a_0 \ge 0\), \(a_1 \ge 0\), and \(\max \left \{ a_0, a_1 \right \} >0\). Assume the joint pdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X}}( x, y ) \ = \ \frac{2\left ( a_1 x + a_0 y \right)}{a_1 + a_0} u(x)u(1-x)u(y)u(1-y) \end{array} \end{aligned} $$
(2.A2.13)

of a random vector \(\boldsymbol {X} = \left ( X_1, X_2 \right )\). Then, we have the joint pdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X_{[\cdot]}}}( x, y ) \ = \ 2(x+y) u(x)u(y-x)u(1-y) \end{array} \end{aligned} $$
(2.A2.14)

of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}=\left ( X_{[1]}, X_{[2]} \right )\) of \(\boldsymbol {X}\) from (2.A2.12). \(\diamondsuit \)
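The invariance in Theorem 2.A2.1 is easy to check numerically. Below is a minimal Monte Carlo sketch, assuming Python with numpy; the helper name sample_pairs is hypothetical, not from the text. Two different coefficient pairs \(\left ( a_0, a_1 \right )\) give different joint pdfs (2.A2.13) of \(\boldsymbol {X}\), yet the simulated order statistics behave according to the single pdf (2.A2.14).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pairs(a0, a1, size):
    # Rejection sampling from f(x, y) = 2(a1*x + a0*y)/(a1 + a0) on [0, 1]^2;
    # the density is bounded by 2, so accept (x, y) with probability f(x, y)/2.
    samples = np.empty((0, 2))
    while samples.shape[0] < size:
        xy = rng.random((size, 2))
        f = 2.0 * (a1 * xy[:, 0] + a0 * xy[:, 1]) / (a1 + a0)
        samples = np.vstack([samples, xy[2.0 * rng.random(size) <= f]])
    return samples[:size]

for a0, a1 in [(1.0, 3.0), (5.0, 1.0)]:
    ordered = np.sort(sample_pairs(a0, a1, 200_000), axis=1)
    # Under the pdf 2(x+y) on 0 < x < y < 1, E{X_[1]} = 5/12 ~ 0.4167
    # for every admissible (a0, a1), which both runs should reproduce.
    print(a0, a1, ordered[:, 0].mean())
```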

2.1.3 Order Statistics of Independent and Non-identically Distributed Random Vectors

In Sect. 2.2, for example, we have discussed the order statistics of i.i.d. random vectors. Let us now briefly consider the order statistics of independent, non-identically distributed random vectors.

For an independent random vector \(\boldsymbol {X}=\left ( X_1 , X_2 , \, \cdots , X_n \right )\), denote the cdf of \(X_i\) by \(F_i\) and the pdf by \(f_i\). Then, the cdf of the r-th order statistic \(X_{[r]}\) of \(\boldsymbol {X}\) can be written as [7]

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[r]}} \left ( x; \boldsymbol {F} \right ) \ = \ \sum_{i=r}^{n} \sum_{S_i} \left \{ \prod_{l=1}^{i}F_{j_l} (x) \right\} \left [ \prod_{l=i+1}^{n} \left\{ 1- F_{j_l} (x) \right\} \right ] , {} \end{array} \end{aligned} $$
(2.A2.15)

where \(\boldsymbol {F} = \left ( F_1, F_2, \, \cdots , F_n \right )\) denotes the vector of the cdf’s, \(\prod \limits _{l=n+1}^{n}\left \{ 1- F_{j_l} (x) \right \}=1\), and \(\sum \limits _{S_i}\) denotes the sum over the permutations \(\left (j_1, j_2 , \, \cdots , j_n \right )\) of \(\mathbb {J}_{1,n}\) satisfying both \(j_1 < j_2 < \cdots < j_i\) and \(j_{i+1} < j_{i+2} < \cdots < j_n\).
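For small n, (2.A2.15) can be evaluated directly by noting that each term of \(\sum \limits _{S_i}\) is indexed by an i-element subset \(\left \{j_1, \, \cdots , j_i \right \}\) of \(\mathbb {J}_{1,n}\). The following sketch, assuming Python (the helper name cdf_order_statistic is hypothetical), implements this; it can be checked against the closed form (2.A2.17) of Example 2.A2.5 below.

```python
from itertools import combinations
from math import prod

def cdf_order_statistic(r, F, x):
    # Evaluate (2.A2.15): for i = r, ..., n, sum over the i-element subsets of
    # indices; a subset contributes the product of F_j(x) over the subset
    # times the product of {1 - F_j(x)} over its complement.
    n = len(F)
    total = 0.0
    for i in range(r, n + 1):
        for chosen in combinations(range(n), i):
            rest = set(range(n)) - set(chosen)
            total += (prod(F[j](x) for j in chosen)
                      * prod(1.0 - F[j](x) for j in rest))
    return total

# Check against (2.A2.17) with n = 3, r = 2 and three cdf's on [0, 1]:
F = [lambda t: t, lambda t: t**2, lambda t: t**0.5]
x = 0.6
F1, F2, F3 = (G(x) for G in F)
closed = F1*F2 + F2*F3 + F3*F1 - 2.0*F1*F2*F3
print(abs(cdf_order_statistic(2, F, x) - closed) < 1e-12)   # True
```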

Example 2.A2.5

For an independent random vector \(\boldsymbol {X}=\left (X_1, X_2, X_3\right )\) with \(\boldsymbol {F} = \left ( F_1, F_2, F_3 \right )\), express the cdf of the second order statistic \(X_{[2]}\) in terms of \(\boldsymbol {F}\).

Solution

From (2.A2.15), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[2]}} \left ( x; F_1, F_2, F_3 \right ) & =&\displaystyle \sum_{S_2} \left \{ \prod_{l=1}^{2}F_{j_l} (x) \right\} \left [ \prod_{l=3}^{3}\left\{ 1- F_{j_l} (x) \right\} \right ] \\ & &\displaystyle \quad + \sum_{S_3} \left \{ \prod_{l=1}^{3}F_{j_l} (x) \right\} . \qquad {} \end{array} \end{aligned} $$
(2.A2.16)

Now, it is easy to see that we have \(\left ( j_1, j_2, j_3 \right ) = (1, 2, 3)\), \((1, 3, 2)\), and \((2, 3, 1)\) for \(\sum \limits _{S_2}\); and \(\left ( j_1, j_2, j_3 \right ) = (1, 2, 3)\) for \(\sum \limits _{S_3}\). Therefore, (2.A2.16) can be rewritten as \(F_{X_{[2]}} ( x; \boldsymbol {F} ) = F_1 (x) F_2 (x) \left \{ 1- F_3 (x) \right \} + F_1 (x) F_3 (x) \left \{ 1- F_2 (x) \right \} + F_2 (x) F_3 (x) \left \{ 1- F_1 (x) \right \} + F_1 (x) F_2 (x) F_3 (x)\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[2]}} ( x; \boldsymbol {F} ) & =&\displaystyle F_1 (x) F_2 (x)+F_2 (x) F_3 (x)+F_3 (x) F_1 (x) \\ & &\displaystyle - 2F_1 (x) F_2 (x) F_3 (x) . {} \end{array} \end{aligned} $$
(2.A2.17)

Then, when \(F_1 = F_2 = F_3 = F\), we have \(F_{X_{[2]}} \left ( x; F_1, F_2, F_3 \right ) = 3F^2 (x) - 2 F^3 (x)\), which is the same as the cdf (2.2.1) of the i.i.d. random vector with \(n=3\) and \(r=2\). \(\diamondsuit \)

Remark 2.A2.1

The cdf of the largest order statistic \(X_{[n]}\) among n independent random variables can be written as \(F_{X_{[n]}} \left ( x; \boldsymbol {F} \right ) = \sum \limits _{S_n} \prod \limits _{l=1}^{n}F_{j_l} (x)\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[n]}} \left ( x; \boldsymbol {F} \right ) & =&\displaystyle \prod_{l=1}^{n}F_{l} (x) {} \end{array} \end{aligned} $$
(2.A2.18)

by noting that \(\sum \limits _{S_n}\) in (2.A2.15) corresponds to the case \(\left ( j_1, j_2 , \, \cdots , j_n \right ) = (1, 2, \, \cdots , n)\). The pdf of the largest order statistic \(X_{[n]}\) among n independent random variables can subsequently be obtained as

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{X_{[n]}} (x) \ = \ \left \{ \prod_{i=1}^{n}F_i (x) \right \} \sum_{i=1}^{n} \frac { f_i (x) }{ F_i (x) } {} \end{array} \end{aligned} $$
(2.A2.19)

based on (2.A2.18). \(\clubsuit \)
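A small numerical check of (2.A2.19), assuming numpy and illustrative exponential marginals (the rates below are arbitrary), is to compare the formula against a central-difference derivative of the product cdf (2.A2.18).

```python
import numpy as np

rates = np.array([1.0, 2.0, 3.0])             # three independent exponentials
F = lambda x: 1.0 - np.exp(-rates * x)        # marginal cdf's F_i(x)
f = lambda x: rates * np.exp(-rates * x)      # marginal pdf's f_i(x)

x, h = 0.8, 1e-6
pdf_formula = np.prod(F(x)) * np.sum(f(x) / F(x))                  # (2.A2.19)
pdf_numeric = (np.prod(F(x + h)) - np.prod(F(x - h))) / (2.0 * h)  # d/dx of (2.A2.18)
print(pdf_formula, pdf_numeric)   # the two values agree to within ~1e-9
```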

Example 2.A2.6

For an i.i.d. random vector of dimension n with the marginal cdf F, we first have \(\prod \limits _{l=i+1}^{n}\left \{ 1- F_{j_l} (x) \right \}= \left \{ 1- F(x) \right \}^{n-i}\) and \(\prod \limits _{l=1}^{i}F_{j_l} (x)= F^i (x)\). Next, consider the partitions \(\big \{ \left \{j_1, j_2, \, \cdots , j_i \right \}, \left \{j_{i+1}, j_{i+2}, \, \cdots , j_n \right \} \big \}\) of \(\mathbb {J}_{1,n}\) into two sets. The number of such partitions is \(\, { }_{n}\mbox{C}_{i} = \frac {n!}{i!(n-i)!}\) and, for any of such partitions, we can construct exactly one permutation of \(\mathbb {J}_{1,n}\) satisfying both \(j_1 < j_2 < \, \cdots < j_i\) and \(j_{i+1} < j_{i+2} < \, \cdots < j_n\): in other words, \(\sum \limits _{S_i} 1 = \frac {n!}{i!(n-i)!}\) in (2.A2.15). Consequently, (2.A2.15) can be written as

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[r]}} \left ( x; \boldsymbol {F} \right ) \ = \ \sum_{i=r}^{n} \frac{n!}{i!(n-i)!} F^i (x) \left\{ 1- F (x) \right\}^{n-i} , {} \end{array} \end{aligned} $$
(2.A2.20)

which is clearly the same as (2.2.1). \(\diamondsuit \)

When \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\) and \( x \le y\), the joint cdf of the r-th order statistic \(X_{[r]}\) and the s-th order statistic \(X_{[s]}\) of an independent random vector \(\boldsymbol {X} = \left ( X_1 , X_2 , \, \cdots , X_n \right )\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[r]}, X_{[s]}} \left ( x, y ; \boldsymbol {F} \right ) \ = \ \sum_{i_1 \ge r, \, i_1 + i_2 \ge s, \, i_1 + i_2 + i_3 = n} \frac{1}{i_1 ! \, i_2 ! \, i_3 !} \, \mbox{per} \left [ \begin{array}{c} { }_{ \left \{ i_1 \right \} } \left \{ F_{j} (x) \right \} \\ { }_{ \left \{ i_2 \right \} } \left \{ F_{j} (y) - F_{j} (x) \right \} \\ { }_{ \left \{ i_3 \right \} } \left \{ 1 - F_{j} (y) \right \} \end{array} \right ] , {} \end{array} \end{aligned} $$
(2.A2.21)

where \( _{ \{ k \} } \{ C \}\) denotes that k rows are C, with the j-th column built from \(F_j\) for \(j \in \mathbb {J}_{1,n}\), and \(\mbox{per}\left [\boldsymbol {A}\right ]\) denotes the permanent of matrix \(\boldsymbol {A}\). When \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\) and \( x \ge y\), the joint cdf \(F_{X_{[r]}, X_{[s]}} \left ( x, y ; \boldsymbol {F} \right )\) is the same as \(F_{X_{[s]}} \left ( y ; \boldsymbol {F} \right )\), i.e., the same as (2.A2.15) with x replaced by y and r replaced by s: specifically,

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[r]}, X_{[s]}} \left ( x, y ; \boldsymbol {F} \right ) \ = \ \sum_{i=s}^{n} \sum_{S_i} \left \{ \prod_{l=1}^{i} F_{j_l} (y) \right\} \left [ \prod_{l=i+1}^{n} \left\{ 1- F_{j_l} (y) \right\} \right ] {} \end{array} \end{aligned} $$
(2.A2.22)

for \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\) and \(x \ge y\). In parallel, the joint pdf of the r-th and s-th order statistics of \(\boldsymbol {X}\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{X_{[r]}, X_{[s]}} \left ( x, y ; \boldsymbol {F} \right ) \ = \ \frac{ u(y-x) }{ (r-1)! \, (s-r-1)! \, (n-s)! } \, \mbox{per} \left [ \begin{array}{c} { }_{ \left \{ r-1 \right \} } \left \{ F_{j} (x) \right \} \\ { }_{ \left \{ 1 \right \} } \left \{ f_{j} (x) \right \} \\ { }_{ \left \{ s-r-1 \right \} } \left \{ F_{j} (y) - F_{j} (x) \right \} \\ { }_{ \left \{ 1 \right \} } \left \{ f_{j} (y) \right \} \\ { }_{ \left \{ n-s \right \} } \left \{ 1 - F_{j} (y) \right \} \end{array} \right ] {} \end{array} \end{aligned} $$
(2.A2.23)

for \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\).
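For small n, the permanents above can be evaluated by brute force over permutations, which gives a direct numerical route to (2.A2.23); the sketch below assumes Python, and the helper names permanent and joint_pdf_rs are hypothetical. In the i.i.d. case the output can be compared with the closed form (2.A2.26) derived in Example 2.A2.7 below.

```python
from itertools import permutations
from math import factorial

def permanent(A):
    # Brute-force permanent: the determinant-style expansion over permutations,
    # but without the alternating signs. Feasible only for small matrices.
    n = len(A)
    total = 0.0
    for sigma in permutations(range(n)):
        p = 1.0
        for i in range(n):
            p *= A[i][sigma[i]]
        total += p
    return total

def joint_pdf_rs(r, s, Fs, fs, x, y):
    # Joint pdf (2.A2.23) of (X_[r], X_[s]) for x < y: the matrix rows are
    # r-1 copies of F_j(x), one row of f_j(x), s-r-1 copies of F_j(y)-F_j(x),
    # one row of f_j(y), and n-s copies of 1-F_j(y).
    n = len(Fs)
    rows = ([[F(x) for F in Fs]] * (r - 1)
            + [[g(x) for g in fs]]
            + [[F(y) - F(x) for F in Fs]] * (s - r - 1)
            + [[g(y) for g in fs]]
            + [[1.0 - F(y) for F in Fs]] * (n - s))
    return permanent(rows) / (factorial(r - 1) * factorial(s - r - 1)
                              * factorial(n - s))

# i.i.d. check against (2.A2.26), with F(t) = t and f(t) = 1 on [0, 1]:
n, r, s, x, y = 4, 1, 3, 0.2, 0.7
Fs, fs = [lambda t: t] * n, [lambda t: 1.0] * n
closed = (factorial(n) * x**(r - 1) * (y - x)**(s - r - 1) * (1.0 - y)**(n - s)
          / (factorial(r - 1) * factorial(s - r - 1) * factorial(n - s)))
print(joint_pdf_rs(r, s, Fs, fs, x, y), closed)   # both equal 3.6
```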

Example 2.A2.7

Assume an i.i.d. random vector \(\boldsymbol {X}= \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal cdf F and pdf f. Let the matrix on the right-hand side of (2.A2.21) be \(\boldsymbol {B}\), and the matrix obtained from \(\boldsymbol {B}\) by deleting the first k rows and m columns be \(\boldsymbol {B}_{k,m}\). Expanding the permanent along the first row, whose entries all equal \(F(x)\), we have \(\mbox{per} \left (\boldsymbol {B} \right ) = F(x) \, \mbox{per} \left ( \boldsymbol {B}_{1,1} \right ) + F(x) \, \mbox{per} \left ( \boldsymbol {B}_{1,1} \right ) + \, \cdots + F(x) \, \mbox{per} \left ( \boldsymbol {B}_{1,1} \right ) = nF(x) \, \mbox{per} \left ( \boldsymbol {B}_{1,1} \right ) = n(n-1)F^2(x) \, \mbox{per} \left ( \boldsymbol {B}_{2,2} \right ) = \cdots = \, { }_n \mbox{P}_{i_1} F^{i_1}(x) \, \mbox{per} \left ( \boldsymbol {B}_{i_1 , i_1} \right ) = \, { }_n \mbox{P}_{i_1+1} F^{i_1}(x) \{ F(y)- F(x) \} \, \mbox{per} \left ( \boldsymbol {B}_{i_1+1, i_1+1} \right ) = \, { }_n \mbox{P}_{i_1+2} F^{i_1}(x) \{ F(y)- F(x) \}^2 \, \mbox{per} \left ( \boldsymbol {B}_{i_1+2, i_1+2} \right ) = \cdots \), where \(\, { }_n \mbox{P}_{r} = \frac {n!}{(n-r)!}\) is the number of \((n,r)\) permutations. Subsequently,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mbox{per} \left (\boldsymbol{B} \right ) & =&\displaystyle \, {}_n \mbox{P}_{i_1 + i_2} F^{i_1}(x) \{ F(y)- F(x) \}^{i_2} \, \mbox{per} \left ( \boldsymbol{B}_{i_1+i_2, i_1+i_2} \right ) \\ & \vdots&\displaystyle \\ & =&\displaystyle n! F^{i_1}(x) \{ F(y)- F(x) \}^{i_2} \{ 1-F(y) \}^{i_3} . \end{array} \end{aligned} $$
(2.A2.24)

Now, for \(\left \{ i_1 \ge r,\, i_1 + i_2 \ge s,\, i_1 + i_2 + i_3 =n \right \}\), let \(i_1 + i_2 = k\). Then, \(i_2 = k-i_1\), \(i_3 = n-k\), and \(i_2 \ge 0\), and thus \(i_1 \le k\): noting this result, the sum \(\sum \limits _{i_1 \ge r, i_1 + i_2 \ge s, i_1 + i_2 + i_3 =n}\) in (2.A2.21) can be written as \(\sum \limits _{k=s}^n \sum \limits _{i_1 = r}^{k}\). Then, the joint cdf of the r-th order statistic \(X_{[r]}\) and the s-th order statistic \(X_{[s]}\) of an n-dimensional i.i.d. random vector \(\boldsymbol {X}\) with the marginal cdf F is expressed as

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle F_{X_{[r]}, X_{[s]}} \left ( x, y ; \boldsymbol {F} \right ) \ = \ \sum_{i_1 \ge r, i_1 + i_2 \ge s, i_1 + i_2 + i_3 =n} \frac{n!F^{i_1} (x)}{i_1 ! i_2 ! i_3 !} \{ F(y)- F(x) \}^{i_2} \\ & &\displaystyle \quad \quad \quad \times \{ 1-F(y) \}^{i_3} \\ & &\displaystyle \quad = \ \sum_{k=s}^n \sum_{i_1 = r}^{k} \frac{n! F^{i_1} (x) \{ F(y)- F(x) \}^{k-i_1} }{i_1 ! \left ( k-i_1 \right )! (n-k) !} \{ 1-F(y) \}^{n-k} {} \end{array} \end{aligned} $$
(2.A2.25)

for \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\) and \( x \le y\). Following similar steps, we have the joint pdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{X_{[r]}, X_{[s]}} \left ( x, y ; \boldsymbol {F} \right ) & =&\displaystyle \frac{n!F^{r-1} (x) \{ F(y)- F(x) \}^{s-r-1} }{(r-1)!(s-r-1)!(n-s)!} \{ 1-F(y) \}^{n-s} \\ & &\displaystyle \quad \times f(x) f(y) u(y-x) {} \end{array} \end{aligned} $$
(2.A2.26)

for \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\), where \(X_{[r]}\) and \(X_{[s]}\) are the r-th and s-th order statistics of the n-dimensional i.i.d. random vector \(\boldsymbol {X}= \left ( X_1 , X_2 , \, \cdots , X_n \right )\). Clearly, (2.A2.25) and (2.A2.26) are the same as the cdf (2.2.28) and pdf (2.3.31), respectively, for i.i.d. random vectors. \(\diamondsuit \)

2.1.4 Order Statistics from Uniform Distributions

Consider an i.i.d. random vector \(\boldsymbol {X}= \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal cdf \(F_X\) and an i.i.d. random vector \(\boldsymbol {U} = \left ( U_1, U_2, \, \cdots , U_n \right )\) with marginal distribution \(U(0,1)\). Let the cdf of \(U(0,1)\) be \(F_U\). Then, from \(F_U(x)=x\) for \(0 \le x \le 1\) and \(\mathsf {P} \left (U_j \le F_X(x) \right ) = F_U\left (F_X(x) \right ) = F_X (x)\), we have \(\mathsf {P} \left (X_i \le x \right ) = F_X(x) = \mathsf {P} \left (U_j \le F_X(x) \right ) = \mathsf {P} \left (F_X^{-1} \left (U_j \right ) \le x \right )\) recollecting (1.A1.13): in other words,

$$\displaystyle \begin{aligned} \begin{array}{rcl} X_i\ \stackrel{d}{=} \ F_X^{-1}\left ( U_j \right ) \end{array} \end{aligned} $$
(2.A2.27)

for \(i, j \in \mathbb {J}_{1,n}\), where \(\stackrel {d}{=}\) means ‘equals in distribution’. Following similar steps, for the order statistic vector \(\boldsymbol {X_{[\cdot ]}} = \left (X_{[1]}, X_{[2]}, \, \cdots , X_{[n]} \right )\) of \(\boldsymbol {X}\) and the order statistic vector \(\left (U_{[1]}, U_{[2]}, \, \cdots , U_{[n]} \right )\) of \(\boldsymbol {U}\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \boldsymbol{X_{[\cdot]}} \ \stackrel{d}{=} \ \left( F_X^{-1}\left(U_{[1]}\right), F_X^{-1}\left(U_{[2]} \right), \, \cdots, F_X^{-1}\left(U_{[n]}\right ) \right ) . {} \end{array} \end{aligned} $$
(2.A2.28)

Specifically, the cdf of the random variable \(Y= F_X \left (X_{[r]} \right )\) can be obtained as \(F_Y(x)=\mathsf {P} \left ( F_X \left (X_{[r]} \right ) \le x \right ) = \mathsf {P} \Big ( F_X^{-1} \big ( F_X \left ( X_{[r]} \right ) \big ) \le F_X^{-1}(x) \Big ) = \mathsf {P} \big ( X_{[r]} \le F_X^{-1}(x) \big ) = F_{X_{[r]}} \left ( F_X^{-1}(x) \right )\), where (1.A1.11) and (1.A1.12) are used. Now, using \(F_X \left ( F_X^{-1} (u) \right ) = u\) shown in (1.A1.4) and noting \(F_{X_{[r]}} (x) = \sum \limits _{j=r}^n \, { }_n\mbox{C}_j F_X^j(x) \big \{ 1- F_X(x)\big \}^{n-j}\) shown in (2.2.1), we get \(F_Y(x)= F_{X_{[r]}} \left ( F_X^{-1}(x) \right ) = \sum \limits _{j=r}^n \, { }_n\mbox{C}_j \big \{ F_X\big ( F_X^{-1}(x) \big ) \big \}^j \left \{ 1-F_X\left ( F_X^{-1}(x) \right ) \right \}^{n-j} = \sum \limits _{j=r}^n \, { }_n\mbox{C}_j x^j ( 1- x )^{n-j}\), which is the cdf of \(U_{[r]}\) from an n-dimensional i.i.d. random vector with the marginal distribution \(U(0,1)\) as shown in (2.3.13): in other words, we have \(F_X \left (X_{[r]} \right ) \stackrel {d}{=} U_{[r]}\), or equivalently, \(X_{[r]} \stackrel {d}{=} F_X^{-1} \left (U_{[r]} \right ) \).

The relation (2.A2.28) implies the following: To obtain the order statistics \(\left \{X_{[i]}\right \}_{i=1}^{n}\) of an i.i.d. random vector with the marginal cdf \(F_X\), we could draw n observations with the cdf \(F_X\) and then order them. Alternatively, we could first obtain \(\left \{U_{[i]} \right \}_{i=1}^{n}\) and then use the transformation \(X_{[i]}=F_X^{-1}\left (U_{[i]}\right )\).
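The two routes can be contrasted in a few lines; the sketch below assumes numpy and uses an exponential marginal purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, lam = 5, 100_000, 2.0
F_inv = lambda u: -np.log(1.0 - u) / lam   # inverse cdf of the Exp(lam) marginal

# Route 1: draw n observations with cdf F_X, then order them.
route1 = np.sort(rng.exponential(1.0 / lam, (reps, n)), axis=1)
# Route 2: order n uniforms first, then transform, as in (2.A2.28).
route2 = F_inv(np.sort(rng.random((reps, n)), axis=1))

print(route1.mean(axis=0))   # estimates of E{X_[1]}, ..., E{X_[n]}
print(route2.mean(axis=0))   # the same values, up to Monte Carlo error
```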

Based on (2.A2.28), the pmf \(p_{X_{[r]}}(x)= \mathsf {P} \left ( X_{[r]} =x \right ) = \mathsf {P} \big ( F_X^{-1} \left (U_{[r]} \right ) =x \big )\) shown in (2.4.4) of \(X_{[r]}\) can be obtained rather directly as discussed in Exercise 2.37. In addition, when x is an integer and \(F_X\) is the cdf of a discrete random variable, the set \(\left \{ F_X(x-1) < U_{[r]} \le F_X(x) \right \}\) is equivalent to the set \(\left \{ F_X^{-1}\left (U_{[r]}\right )=x \right \}\): this equivalence can also be obtained by replacing F, u, and \(\varepsilon \) with \(F_X\), \(U_{[r]}\), and 1, respectively, in \(F \left ( F^{-1} (u) - \varepsilon \right ) < u \le F \left ( F^{-1} (u) \right )\) shown in (1.A1.8) recollecting that the support of a discrete random variable is a set of consecutive integers. Then, the pmf \(p_{r,s} (x,y) = \mathsf {P} \big ( X_{[r]}=x, X_{[s]} =y \big ) = \mathsf {P} \big ( F_X^{-1}\left (U_{[r]}\right ) =x, F_X^{-1}\left (U_{[s]}\right ) =y \big )\) of \(X_{[r]}\) and \(X_{[s]}\) can be obtained in a direct way as

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{r,s} (x,y) \ = \ \int \int_{D_0} f_{ U_{[r]}, U_{[s]}} (v, w) \, dv \, dw {} \end{array} \end{aligned} $$
(2.A2.29)

using the joint pdf \(f_{ U_{[r]}, U_{[s]}}\) shown in (2.3.33), where \(D_0\) is defined in (2.4.24). The result (2.A2.29) is the same as (2.4.23).
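This route is easy to check by simulation; the following is a minimal sketch assuming numpy, with the pmf taking the values \(\frac {1}{2}, \frac {1}{3}, \frac {1}{6}\) on \(\{1, 2, 3\}\) as in (2.4.6) and with \(n=3\), \(r=1\), \(s=2\).

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([1/2, 1/3, 1/6])   # marginal pmf of X on {1, 2, 3}
Fx = np.cumsum(p)               # F_X(1), F_X(2), F_X(3)
n, r, s, reps = 3, 1, 2, 300_000

# Map the uniform order statistics through F_X^{-1}:
U = np.sort(rng.random((reps, n)), axis=1)
# F_X^{-1}(u) is the smallest x with F_X(x) >= u, i.e. searchsorted plus 1:
Xr = np.searchsorted(Fx, U[:, r - 1]) + 1
Xs = np.searchsorted(Fx, U[:, s - 1]) + 1

for x in (1, 2, 3):
    for y in (1, 2, 3):
        print(x, y, np.mean((Xr == x) & (Xs == y)))   # empirical p_{r,s}(x, y)
```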

Exercises

Exercise 2.1

Obtain the joint cdf of \((W,V)\) for \(W=\min \{X,Y\}\) and

$$\displaystyle \begin{aligned} \begin{array}{rcl} V \ = \ \left\{ {\begin{array}{ll} 1, & \mbox{if }X \leq Y,\\ 0, & \mbox{if }X > Y . \\ \end{array} } \right. \end{array} \end{aligned} $$
(2.E.1)

Here, the two random variables X with pdf \(f_X (x) = \lambda e^{-\lambda x}u(x)\) and Y  with pdf \(f_Y (y) = \mu e^{-\mu y}u(y)\) are independent of each other, \(\lambda >0\), and \(\mu > 0\).

Exercise 2.2

Assume the joint pmf

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{\boldsymbol{X}} (x,y) \ = \ \left\{ \begin{array}{llll} \frac{1}{4}, & (x,y)= (0,2), (1,0); & \quad \frac{1}{6}, & (x,y)= (2,0), (1,2); \\ \\ \frac{1}{12}, & (x,y)= (0,1), (2,1); & \quad 0, & \mbox{otherwise} \end{array} \right. \end{array} \end{aligned} $$
(2.E.2)

for a discrete random vector \(\boldsymbol {X}=\left ( X_1, X_2 \right )\). Obtain the joint pmf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}} = \left ( X_{[1]}, X_{[2]}\right ) \), the pmf of \(X_{[1]}\), and the pmf of \(X_{[2]}\).

Exercise 2.3

A discrete random vector \(\boldsymbol {X}=\left ( X_1, X_2, X_3 \right )\) has the joint pmf

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{\boldsymbol{X}} (x,y,z) \ = \ \left\{ \begin{array}{ll} \frac{3}{14}, & (x,y,z)= (0,0,0), (2,2,2),\\ \\ \frac{1}{14}, & (x,y,z)= (0,1,1), (0,1,2), (0,2,0), (1,0,0), \\ & \ (1,0,1), (1,1,2), (1, 2, 0), (2,1,2), \\ \\ 0, & \mbox{otherwise.} \end{array} \right.\qquad \qquad \end{array} \end{aligned} $$
(2.E.3)

Obtain the joint pmf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}} = \left ( X_{[1]}, X_{[2]}, X_{[3]}\right ) \) and the marginal pmf’s of the order statistics.

Exercise 2.4

A discrete random vector \(\boldsymbol {X}= \left ( X_1, X_2, X_3, X_4 \right )\) has the joint pmf

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{\boldsymbol{X}} (x,y,z,w) \ = \ \left\{ \begin{array}{llll} \frac{1}{36}, & (x,y,z,w)= (3,4,5,4), (4,5,4,5),\\ \\ \frac{2}{36}, & (x,y,z,w)= (4,3,4,5), (4,5,5,4), \\ \\ \frac{3}{36}, & (x,y,z,w)= (3,3,3,3), (5,3,4,4), \\ & \quad (5,5,5,5), \\ \frac{5}{36}, & (x,y,z,w)= (5,4,4,3), \\ \\ \frac{7}{36}, & (x,y,z,w)= (5,4,4,5), \\ \\ \frac{9}{36}, & (x,y,z,w)= (3,4,4,5), \\ \\ 0, & \mbox{otherwise.} \end{array} \right. \end{array} \end{aligned} $$
(2.E.4)

Obtain the joint pmf \(p_{\boldsymbol {X_{[\cdot ]}}}\) of the order statistic vector \(\boldsymbol {X_{[\cdot ]}} = \big ( X_{[1]}, X_{[2]}, X_{[3]}, X_{[4]} \big ) \). Obtain the marginal pmf’s \(\left \{ p_{X_{[i]}} \right \}_{i=1}^{4}\) of the order statistics \(\left \{ X_{[i]} \right \}_{i=1}^{4}\).

Exercise 2.5

A discrete random vector \(\boldsymbol {X}=\left ( X_1, X_2, X_3 \right )\) has the joint pmf

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{\boldsymbol{X}} (x,y,z) \ = \ \left\{ \begin{array}{llll} \frac{1}{12}, & (x,y,z)= (3,3,3), (5,4,4), (5,5,5);\\ \\ \frac{3}{12}, & (x,y,z)= (3,4,4); \quad \frac{4}{12}, \ (x,y,z)= (4,3,4); \\ \\ \frac{2}{12}, & (x,y,z)= (3,4,5); \quad 0, \ \mbox{otherwise.} \end{array} \right.\qquad \qquad \end{array} \end{aligned} $$
(2.E.5)

Obtain the joint pmf \(p_{\boldsymbol {X_{[\cdot ]}}}\) of the order statistic vector \(\boldsymbol {X_{[\cdot ]}} = \left ( X_{[1]}, X_{[2]}, X_{[3]}\right ) \).

Exercise 2.6

For a discrete random vector \(\boldsymbol {X}=\left ( X_1, X_2 \right )\), assume the joint pmf

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{\boldsymbol{X}} (x,y) \ = \ \left\{ \begin{array}{llll} \frac{1}{12}, & (x,y)= (1,1), (3,1); & \quad \frac{2}{12}, & (x,y)= (2,3), (3,3); \\ \\ \frac{3}{12}, & (x,y)= (2,1), (1,3); & \quad 0, & \mbox{otherwise.} \end{array} \right. \end{array} \end{aligned} $$
(2.E.6)

Obtain the joint pmf \(p_{\boldsymbol {X_{[\cdot ]}}}\) of the order statistic vector \(\boldsymbol {X_{[\cdot ]}} = \left ( X_{[1]}, X_{[2]}\right ) \) and the marginal pmf’s \(p_{X_{[1]}}\) and \(p_{X_{[2]}}\).

Exercise 2.7

Assume that the continuous random variables \(X_1\), \(X_2\), \(X_3\), and \(X_4\) are i.i.d. with the marginal cdf F and pdf f. Let \(p=\mathsf {P} \left (X_1 <X_2 < X_3 <X_4 \right )\).

  1. (1)

    Show that, for any cdf F, the value of p is the same.

  2. (2)

    Obtain p by integrating the joint pdf of \(\left ( X_1, X_2, X_3, X_4 \right )\).

  3. (3)

    Obtain p by noting that the \(4! =24\) ways of arranging the four random variables \(X_1\), \(X_2\), \(X_3\), and \(X_4\) are equiprobable.

Exercise 2.8

Show that the cdf \(F_{X_{[r]}} (x) = \sum \limits _{j=r}^n \, { }_n\mbox{C}_j F^j(x) \{ 1-F(x) \}^{n-j}\) shown in (2.2.1) can be written also as

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[r]}} (x) \ = \ \frac{n!}{(n-r)!(r-1)!} \int_0^{F(x)} t^{r-1} (1-t)^{n-r} \, dt . {} \end{array} \end{aligned} $$
(2.E.7)

Exercise 2.9

Assume i.i.d. random variables \(\left \{X_k\right \}_{k=1}^{n}\) with the marginal cdf F. Let the cdf of the r-th order statistic \(X_{[r:n]}\) be \(F_{r:n}\).

  1. (1)

    Show \(F_{r:n}(x)=F_{r: n-1}(x)+ \, { }_{n-1}\mbox{C}_{r-1}F^r(x)\{1-F(x)\}^{n-r}\).

  2. (2)

    Show \(F_{r:n}(x) = F^r(x) \sum \limits _{j=0}^{n-r} {{ }_{j+r-1}}\mbox{C}_{r-1} \{1-F(x)\}^j\) by using (1) repeatedly.

  3. (3)

    Show

    $$\displaystyle \begin{aligned} \begin{array}{rcl} F_{r:n}(x) \ =\ F(x)F_{r-1:n-1}(x)+\{1-F(x)\}F_{r:n-1}(x) . {} \end{array} \end{aligned} $$
    (2.E.8)

    The equality (2.E.8) dictates that the cdf \(F_{r:n}(x)\) of the r-th order statistic from n random variables \(\left \{ X_k \right \}_{k=1}^{n}\) can be obtained as the sum of two terms. The first term is the cdf \(F_{r-1:n-1}(x)\) of the \((r-1)\)-st order statistic from \((n-1)\) i.i.d. random variables \(\left \{ X_k \right \}_{k=1}^{n-1}\) multiplied by the probability \(F(x) =\mathsf {P} \left ( X_j \le x \right )\) of the event \(\left \{ X_j \le x \right \}\). The second term is the cdf \(F_{r:n-1}(x)\) of the r-th order statistic from \((n-1)\) i.i.d. random variables \(\left \{ X_k \right \}_{k=1}^{n-1}\) multiplied by the probability \(1-F(x) =\mathsf {P} \left ( X_j > x \right ) \) of the event \(\left \{ X_j > x \right \}\).

  4. (4)

    Show \(F_{r:n-1}(x) = \frac {r}{n}F_{r+1: n}(x)+\frac {n-r}{n}F_{r:n}(x)\).

Exercise 2.10

Show

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathsf{E} \left\{ F^k \left ( X_{[1]} \right ) \right \} & =&\displaystyle \frac { n! k! }{ (n+k) ! } \end{array} \end{aligned} $$
(2.E.9)

for an i.i.d. random vector \(\boldsymbol {X}= \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal cdf F. Note that \( \frac { n! k! }{ (n+k) ! } = \frac { 1 }{ { }_{n+k}\mbox{C}_{k} }= n \tilde {B}(k+1, n)= \frac { k! }{ (n+1)(n+2)\cdots (n+k) }\).

Exercise 2.11

Consider an i.i.d. random vector of size n with the marginal distribution \(U(0,1)\).

  1. (1)

    Obtain the function \(G_F(x)\) defined in (1.3.18).

  2. (2)

    When \(n=3\), obtain the marginal pdf’s of the order statistics. Obtain the joint pdf’s \(f_{X_{[1]}, X_{[2]}} (x,y)\), \(f_{X_{[2]}, X_{[3]}} (x,y)\), and \(f_{X_{[1]}, X_{[3]}} (x,y)\).

Exercise 2.12

Consider an i.i.d. random vector of size n with the marginal distribution \(U(-1,1)\).

  1. (1)

    Obtain the function \(G_F(x)\) defined in (1.3.18).

  2. (2)

    Show that the pdf of \(Y = X_{[1]}\) is

    $$\displaystyle \begin{aligned} \begin{array}{rcl} f_Y(x) \ = \ \frac {n}{2^n} (1-x)^{n-1} u(1-|x|) \end{array} \end{aligned} $$
    (2.E.10)

    and that the pdf of \(W = X_{[n]}\) is

    $$\displaystyle \begin{aligned} \begin{array}{rcl} f_W(x) \ = \ \frac {n}{2^n} (1+x)^{n-1}u(1-|x|) . \end{array} \end{aligned} $$
    (2.E.11)
  3. (3)

    When \(n=2\), obtain the cdf’s of the order statistics. Compare the results with the cdf’s (2.3.35) and (2.3.36).

  4. (4)

    When \(n=3\), obtain the pdf’s of the order statistics. Obtain the pdf’s of the magnitude order statistics.

Exercise 2.13

Based on the results in Exercise 2.12, confirm the equalities (2.3.16) and (2.3.17).

Exercise 2.14

Show that, for i.i.d. random variables \(\left \{ X_i \right \}_{i=1}^{k}\), a necessary and sufficient condition for \(X_i \sim B(\alpha , 1)\) is \(X_{[k]}\sim B(\alpha k, 1)\).

Exercise 2.15

Show that, for i.i.d. random variables \(\left \{ X_i \right \}_{i=1}^{n}\), \(X_i \sim G(1, n\beta )\) if and only if \(X_{[1]} \sim G(1, \beta )\), where \(G(\alpha , \beta )\) denotes the gamma distribution with pdf \(f(x) = \frac {x^{\alpha -1}}{ \beta ^{\alpha } \varGamma (\alpha )} \exp \left (-\frac {x}{\beta } \right ) u(x)\).

Exercise 2.16

Assume that k i.i.d. random vectors \(\boldsymbol {W}_1 = \left (W_{11}, W_{12}, \, \cdots , W_{1n} \right )\), \(\boldsymbol {W}_2 = \left (W_{21}, W_{22}, \, \cdots , W_{2n} \right )\), \(\cdots \), and \(\boldsymbol {W}_k = \left (W_{k1}, W_{k2}, \, \cdots , W_{kn} \right )\), each with the marginal distribution \(U(0,1)\), are independent of each other. Show that the pdf of \(V= \prod \limits _{i=1}^k X_i\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{V} (v) \ = \ \frac{n^k }{ (k-1)! } v^{n-1} (- \ln v )^{k-1}u(v)u(1-v) , \end{array} \end{aligned} $$
(2.E.12)

where \(X_i\) denotes the largest order statistic of \(\boldsymbol {W}_i\) for \(i \in \mathbb {J}_{1,k}\).
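As a quick plausibility check of (2.E.12), note that the mean computed from the stated pdf is \(\int _0^1 v f_V(v) \, dv = \left ( \frac {n}{n+1} \right )^k\), which a short simulation (a sketch assuming numpy) should reproduce.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, reps = 4, 3, 200_000
# V = product over i of the largest entry of each n-dimensional U(0,1) vector:
V = rng.random((reps, k, n)).max(axis=2).prod(axis=1)
print(V.mean(), (n / (n + 1))**k)   # both ~ 0.512 for n = 4, k = 3
```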

Exercise 2.17

For an i.i.d. random vector \(\boldsymbol {X} = \left ( X_1, X_2, X_3\right ) \) with the marginal cdf F, confirm the joint cdf’s

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{1,2}(x,y) &=& \left\{ \begin{array}{ll} F^3 (x) -3 F^2(x) -3 F(x) F^2 (y)\\ \qquad + 6F(x) F(y) , & x \le y, \\ 3F^2(y)- 2 F^3(y), & x \ge y, \end{array} \right. {} \end{array} \end{aligned} $$
(2.E.13)
$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{1,3}(x,y) &=& \left\{ \begin{array}{ll} F^3(x) - 3 F^2(x) F(y) + 3 F(x) F^2 (y) , & x \le y, \\ F^3(y), & x \ge y, \end{array} \right. {} \end{array} \end{aligned} $$
(2.E.14)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{2,3}(x,y) &=& \left\{ \begin{array}{ll} 3 F^2(x) F(y)-2F^3(x) , & x \le y, \\ F^3(y), & x \ge y \end{array} \right. {} \end{array} \end{aligned} $$
(2.E.15)

of the order statistics. When the marginal distribution of \(\boldsymbol {X}\) is \(U(0,1)\), obtain the joint cdf’s \(F_{1,2}\), \(F_{1,3}\), and \(F_{2,3}\); and the cdf’s of \(X_{[1]}\), \(X_{[2]}\), and \(X_{[3]}\).

Exercise 2.18

With the joint cdf’s of \(\left ( X_{[1]}, X_{[2]}\right )\), \(\left ( X_{[1]}, X_{[3]}\right )\), and \(\left ( X_{[2]}, X_{[3]}\right )\) obtained in Exercise 2.17, confirm the equality (2.2.35) for \(n=3\).

Exercise 2.19

Let \(\left \{ X_{[k]} \right \}_{k=1}^{n}\) be the order statistics of an i.i.d. random vector \(\boldsymbol {X}\) with the marginal cdf F and pdf f.

  1. (1)

    Obtain the pdf \(f_{W_{s,r}}\) of \(W_{s,r} = X_{[s]} - X_{[r]}\) for \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\), and the pdf \(f_{W_{n,1}}\) and cdf \(F_{W_{n,1}}\) of \(W_{n,1} = X_{[n]} - X_{[1]}\). Here, \(W_{n,1} = X_{[n]} - X_{[1]}\) is called the range of \(\boldsymbol {X}\).

  2. (2)

    When the marginal distribution of \(\boldsymbol {X}\) is \(U(0,1)\), obtain the pdf \(f_{W_{s,r}}\) of \(W_{s,r}\) for \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\) and the variance of \(W_{n,1}\).

Exercise 2.20

Let us consider two additional methods of proving the formula (2.3.5) of the pdf.

  1. (1)

    Compute the integration

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \underset{y_1 < y_2 < \cdots < y_{r-1} < y_{r+1} < y_{r+2} < \cdots < y_n } {\int \int \cdots \int} f_{\boldsymbol{X_{[\cdot]}}} ( \boldsymbol{y} ) \, d\boldsymbol{y}^{\bar r} \end{array} \end{aligned} $$
    (2.E.16)

    shown in (2.1.11), and confirm that the result is the same as (2.3.5), where \(f_{\boldsymbol {X_{[\cdot ]}}}( \boldsymbol {y})= n! \left \{\prod \limits _{i=1}^n f\left ( y_i \right ) \right \} \left \{\prod \limits _{i=2}^n u\left ( y_i - y_{i-1} \right ) \right \} u\left ( y_1 \right )\) with \(\boldsymbol {y} = \left ( y_1, y_2, \, \cdots , y_n \right )\) and \(d\boldsymbol {y}^{\bar r} = dy_1 dy_2 \cdots dy_{r-1} dy_{r+1} dy_{r+2} \cdots dy_n\).

  2. (2)

    Obtain the pdf (2.3.5) by differentiating the cdf (2.2.1).

Exercise 2.21

Recollecting the symmetry (2.3.71), show that \(\mathsf {E} \left \{ X_{[r]}\right \} = 1- \mathsf {E} \left \{ X_{[n-r+1]}\right \} \) and \(\mathsf {Var} \left \{ X_{[r]}\right \} = \mathsf {Var} \left \{ X_{[n-r+1]}\right \} \) for the order statistics \(\left \{ X_{[r]}\right \}_{r=1}^{n}\) of an i.i.d. random vector with the marginal uniform distribution \(U(0,1)\).

Exercise 2.22

With the steps similar to those in the proof of (2.3.5), prove

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{X_{[r]},X_{[s]}} (x,y) & =&\displaystyle \tilde{C}_{n,s,r} F^{r-1} (x) \{ F(y)-F(x) \}^{s-r-1} \\ & &\displaystyle \ \times \{ 1-F(y) \}^{n-s} f(x)f(y)u(y-x) {} \end{array} \end{aligned} $$
(2.E.17)

for \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\) shown in (2.3.31).

Exercise 2.23

First recollect that \(\sum \limits _{r=1}^{s-1} \, { }_{s-2}\mbox{C}_{r-1} w^{r-1} ( v-w )^{s-r-1} = ( v-w )^{s-2} \sum \limits _{t=0}^{s-2} \, { }_{s-2}\mbox{C}_{t} \left ( \frac { w } { v-w } \right )^{t} = ( v-w )^{s-2} \left ( 1+ \frac { w } { v-w } \right )^{s-2} \), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{r=1}^{s-1} \, {}_{s-2}\mbox{C}_{r-1} w^{r-1} (v-w)^{s-r-1} \ = \ v^{s-2} , {} \end{array} \end{aligned} $$
(2.E.18)

and that \(\sum \limits _{s=2}^{n} \, { }_{n-2}\mbox{C}_{s-2} v^{s-2} ( 1-v )^{n-s} = (1- v )^{n-2} \sum \limits _{t=0}^{n-2} \, { }_{n-2}\mbox{C}_{t} \left ( \frac { v } { 1- v } \right )^{t}\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{s=2}^{n} \, {}_{n-2}\mbox{C}_{s-2} v^{s-2} ( 1-v )^{n-s} & =&\displaystyle 1 {} \end{array} \end{aligned} $$
(2.E.19)

from the binomial theorem. By directly evaluating \(\sum \limits _{s=2}^{n} \sum \limits _{r=1}^{s-1} f_{r,s} (x,y)\) based on the joint pdf (2.3.31), prove the equality (2.3.37).

Exercise 2.24

Let the marginal pmf of an i.i.d. random vector \(\boldsymbol {X}= \left ( X_1, X_2 \right )\) be

$$\displaystyle \begin{aligned} \begin{array}{rcl} p (x)\ = \ (1-\alpha) \alpha^{x-1} \tilde{u} (x-1), \end{array} \end{aligned} $$
(2.E.20)

where \(0 < \alpha <1\).

  1. (1)

    Obtain the joint pmf \(p_{X_{[2]}, X_1}\) of the random vector \(\left (X_{[2]}, X_1 \right )\). Based on the joint pmf \(p_{X_{[2]}, X_1}\), obtain the pmf \( p_{X_{[2]}}\) of the order statistic \(X_{[2]}\) and then obtain the conditional pmf \(p_{ X_1 | X_{[2]}}\) of \(X_1\) when \(X_{[2]}\) is given.

  2. (2)

    Obtain the joint pmf \(p_{V, W}\) of \(V= X_{[2]} - X_{[1]}\) and \(W= X_{[1]}\). For \(v \in \mathbb {J}_{1,\infty }\) and \(w \in \mathbb {J}_{1,\infty }\), check if \(p_{V, W} (v,w)= 2 p(v+w)p(w)\).

  3. (3)

    Based on the joint pmf \(p_{V, W}\) obtained in (2), obtain the pmf \(p_{V}\) of V  and the pmf \(p_{W}\) of W. Discuss if V  and W are independent.

  4. (4)

    Obtain the cdf and pmf of the first order statistic, the cdf and pmf of the second order statistic, and the joint cdf of the order statistic vector \(\left ( X_{[1]}, X_{[2]} \right )\).

Exercise 2.25

For i.i.d. random variables \(\left \{ X_i \right \}_{i=1}^{k}\) with the marginal geometric distribution of parameter \(\alpha \), that is, with the pmf \(p(j)= (1-\alpha )^{j-1}\alpha \), \(j \in \mathbb {J}_{1,\infty }\), show that the distribution of \(X_{[1]}\) is geometric with parameter \(1- ( 1- \alpha )^k\). More generally, when independent geometric random variables \(\left \{X_i \right \}_{i=1}^{k}\) have parameters \(\left \{ \alpha _i \right \}_{i=1}^{k}\), respectively, show that the distribution of \( X_{[1]}\) is geometric with parameter \(1- \prod \limits _{i=1}^{k}\left ( 1- \alpha _i \right )\).

Exercise 2.26

Show that the pmf of \(X_{[1]}\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{X_{[1]}}(k) \ = \ e^{-\lambda}\frac{\lambda^k}{k!} \left(2 - 2 \sum_{x=0}^k e^{-\lambda}\frac{\lambda^x}{x!} + e^{-\lambda}\frac{\lambda^k}{k!}\right) \tilde{u} (k) \end{array} \end{aligned} $$
(2.E.21)

and the pmf of \(X_{[2]}\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{X_{[2]}}(k) \ = \ e^{-\lambda}\frac{\lambda^k}{k!} \left(2 \sum_{x=0}^{k} e^{-\lambda}\frac{\lambda^x}{x!} - e^{-\lambda}\frac{\lambda^k}{k!}\right) \tilde{u} (k) \end{array} \end{aligned} $$
(2.E.22)

when \(X_1\) and \(X_2\) are i.i.d. with the marginal pmf \(p(x) = e^{-\lambda }\frac {\lambda ^x}{x!} \tilde {u} (x)\) with \(\lambda >0\).

Exercise 2.27

Assume independent random variables \(\left \{ X_j \right \}_{j=1}^{n}\). Each of the n random variables has the cdf \(F_0\) with probability p and the cdf \(F_1\) with probability \(1-p\): in other words, on average, np of the random variables among \(\left \{ X_j \right \}_{j=1}^{n}\) have the cdf \(F_0\) and the remaining \(n(1-p)\) have the cdf \(F_1\). Denote by \(\textsf {S}_i = \mathsf {P} \left ( X_{[1]}\leftarrow F_i \right )\) the probability that the first order statistic \(X_{[1]}\) of \(\left \{ X_j \right \}_{j=1}^{n}\) is one of the random variables with the cdf \(F_i\) for \(i=0,1\).

  1. (1)

    Obtain the probability \(\mathsf {P} \left ( B_k \right )\), where \(B_k\) denotes the event that k and the remaining \((n-k)\) random variables among \(\left \{ X_j \right \}_{j=1}^{n}\) have the cdf \(F_0\) and \(F_1\), respectively.

  2. (2)

    Obtain the probability \(\mathsf {P} \left ( \left . X_{[1]}\leftarrow F_0 \right | B_k \right )\) that the first order statistic \(X_{[1]}\) of \(\left \{ X_j \right \}_{j=1}^{n}\) is one of the random variables with the cdf \(F_0\) when \(B_k\) occurs.

  3. (3)

    Let \(f_i (x) = \frac {d}{dx} F_i (x)\) for \(i=0,1\). Based on (1) and (2) above, and recollecting that \(\textsf {S}_i = \sum \limits _{k=0}^{n} \mathsf {P} \left ( \left . X_{[1]}\leftarrow F_i \right | B_k \right ) \mathsf {P} \left ( B_k \right )\) for \(i=0,1\), show

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \textsf{S}_0 & =&\displaystyle np \int_{-\infty}^{\infty} \left \{ 1- p F_0 (x) - (1-p) F_1 (x) \right \}^{n-1} f_0 (x) dx {} \end{array} \end{aligned} $$
    (2.E.23)

    and

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \textsf{S}_1 & =&\displaystyle n(1-p) \int_{-\infty}^{\infty} \left \{ 1- p F_0 (x) - (1-p) F_1 (x) \right \}^{n-1} f_1 (x) dx. \quad \qquad \quad {} \end{array} \end{aligned} $$
    (2.E.24)

    Note that \(\textsf {S}_0 = p\) and \(\textsf {S}_1 = 1-p\) when \(F_0 = F_1\). Show \(\textsf {S}_0+\textsf {S}_1 =1\) in general.

  4. (4)

    Show \(\textsf {S}_0 \le 1 - (1-p)^n\). Prove that \(\textsf {S}_0 \ge p\) if \(F_0 (x) \ge F_1 (x)\) at every point x.

  5. (5)

    Obtain \(\textsf {S}_0\) and \(\lim \limits _{n \to \infty }\textsf {S}_0\) when \(f_0 (x) = u \left ( x+ \frac {1}{2} \right ) u \left ( \frac {1}{2} - x \right )\) and \(f_1 (x) = f_0 ( x- m)\), where \(m > 0\).

  6. (6)

    Obtain \(\textsf {S}_0\) and \(\lim \limits _{n \to \infty }\textsf {S}_0\) when \(f_0 (x)= e^{-x}u (x)\) and \(f_1 (x) = f_0 ( x- m)\), where \(m > 0\).

  7. (7)

    Let the support of the cdf \(F_i (x)\) be \(\left [ x_{L_i}, x_{U_i}\right ]\) for \(i= 0, 1\) and let

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \bar{x} \ = \ \max \left \{ x_{L_0}, x_{L_1} \right \} . \end{array} \end{aligned} $$
    (2.E.25)

    Show

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim\limits_{n \to \infty} \textsf{S}_0 & =&\displaystyle \frac{p}{p+(1-p) \xi}, {} \end{array} \end{aligned} $$
    (2.E.26)

    where

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \xi \ = \ \lim_{x \to \bar {x} } \frac {F_1 (x)} { F_0 (x) }. \end{array} \end{aligned} $$
    (2.E.27)

    Next, when \(f_0 (x)= \frac {1}{2} e^{-|x|}\) and \(f_1 (x) = f_0 ( x- m)\) with \(m > 0\), obtain \(\textsf {S}_0\) from (2.E.23). Based on the result, obtain \(\lim \limits _{n \to \infty }\textsf {S}_0\) directly. Compare the result with \(\lim \limits _{n \to \infty }\textsf {S}_0\) obtained from (2.E.26).

Exercise 2.28

Assume an i.i.d. random vector \(\boldsymbol {X}= \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal pdf \(f(x) = e^{-x}u(x)\).

  1. (1)

    Obtain the joint pdf \(f_{X_{[r]}, X_{[s]}} (x,y)\) for \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\).

  2. (2)

    Obtain the correlation coefficient \(\rho _{sr,r}\) between the difference \(W_{s,r} = X_{[s]}-X_{[r]}\) and the r-th order statistic \(X_{[r]}\) for \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\).

  3. (3)

    Show that the first order statistic \(X_{[1]}\) has the pdf

    $$\displaystyle \begin{aligned} \begin{array}{rcl} f_{X_{[1]}}(x) \ = \ n e^{-nx}u(x). \end{array} \end{aligned} $$
    (2.E.28)

    Show that, for \(r \in \mathbb {J}_{1,n-1}\), the difference \(X_{[r+1]}-X_{[r]}\) between the \((r+1)\)-st order statistic \(X_{[r+1]}\) and r-th order statistic \(X_{[r]}\) has the pdf

    $$\displaystyle \begin{aligned} \begin{array}{rcl} f_{X_{[r+1]}-X_{[r]}} (x) \ = \ (n-r) e^{-(n-r)x}u(x) . \end{array} \end{aligned} $$
    (2.E.29)
  4. (4)

    Obtain the mean \(\mathsf {E} \left \{ X_{[r]}\right \}\) of the r-th order statistic \(X_{[r]}\) for \(r \in \mathbb {J}_{1,n}\).

  5. (5)

    Is \(\left ( X_{[1]}, X_{[2]} -X_{[1]}, \, \cdots , X_{[n]}-X_{[n-1]} \right )\) an independent random vector?
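Parts (3) and (5) invite a quick numerical confirmation; a sketch assuming numpy is given below. The spacings of the exponential order statistics should have means \(\frac {1}{n}, \frac {1}{n-1}, \, \cdots , 1\) and should be essentially uncorrelated, consistent with independence.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 5, 200_000
X = np.sort(rng.exponential(1.0, (reps, n)), axis=1)
# Columns: X_[1], X_[2] - X_[1], ..., X_[n] - X_[n-1]:
spacings = np.diff(X, axis=1, prepend=0.0)

print(spacings.mean(axis=0))             # ~ 1/5, 1/4, 1/3, 1/2, 1 for n = 5
print(np.corrcoef(spacings.T).round(2))  # ~ the identity matrix
```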

Exercise 2.29

For an i.i.d. random vector \(\left ( U_1, U_2, \, \cdots , U_n \right )\) with the marginal distribution \(U(0,1)\), consider the order statistics \(\left \{ U_{[k]} \right \}_{k=1}^{n}\).

  1. (1)

    Obtain \(\mathsf {E} \left \{U_{[1]} U_{[3]}^{2} U_{[5]}^{4} \right \}\) when \(n=5\) and \(\mathsf {E} \left \{ U_{[1]}^{2} U_{[5]}^{3}\right \}\) when \(n=6\).

  2. (2)

    Obtain the variance \(\mathsf {Var}\left \{U_{[2]} \right \}\) of the second order statistic \(U_{[2]}\) when \(n=19\).

  3. (3)

    Obtain \(\mathsf {E} \left \{U_{[5]}^5 U_{[6]}^{6} U_{[7]}^{7}\right \}\) when \(n=7\).

  4. (4)

    Show that \(\left \{Y_r \right \}_{r=1}^{n}\) are independent, where

    $$\displaystyle \begin{aligned} \begin{array}{rcl} Y_r \ = \ \frac{U_{[r]}}{U_{[r+1]}} \end{array} \end{aligned} $$
    (2.E.30)

    for \(r \in \mathbb {J}_{1,n}\) with \(U_{[n+1]}=1\).

  5. (5)

    Obtain the expected values \(\mathsf {E} \left \{ - \ln U_{[r]} \right \}\) and \(\mathsf {E} \left \{ - \ln \left ( 1-U_{[r]} \right ) \right \}\).

Exercise 2.30

Consider an i.i.d. random vector \(\boldsymbol {X}= \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with a standard exponential distribution, i.e., the exponential distribution with \(\lambda =1\) in the pdf (1.1.61). Let the k-th moment of the r-th order statistic \(X_{[r:n]}\) be \(\alpha _{r:n}^{(k)} = \mathsf {E} \left \{X_{[r:n]}^k \right \}\) with \(\alpha _{r:n}^{(0)} =1\) and \(\alpha _{0:n}^{(k)} =0\). Denote by \(\alpha _{r,s:n} = \mathsf {E} \left \{X_{[r:n]}X_{[s:n]} \right \}\) the joint moment of the r-th order statistic \(X_{[r:n]}\) and the s-th order statistic \(X_{[s:n]}\). Show the following.

  1. (1)

    \(\alpha _{1:n}^{(k)} = \frac {k}{n} \alpha _{1:n}^{(k-1)}\) for \(n \in \mathbb {J}_{1,\infty }\) and \(k \in \mathbb {J}_{1,\infty }\)

  2. (2)

    \(\alpha _{r:n}^{(k)} = \alpha _{r-1:n-1}^{(k)} + \frac {k}{n} \alpha _{r:n}^{(k-1)}\) for \(r \in \mathbb {J}_{1,n}\) and \(k \in \mathbb {J}_{1,\infty }\)

  3. (3)

    \(\alpha _{r,r+1:n} = \alpha _{r:n}^{(2)} + \frac {1}{n-r} \alpha _{r:n}^{(1)}\) for \(r \in \mathbb {J}_{1,n-1}\)

  4. (4)

    \(\alpha _{r,s:n} = \alpha _{r,s-1:n} + \frac {1}{n-s+1} \alpha _{r:n}^{(1)}\) for \(r < s\) with \(r, s \in \mathbb {J}_{1,n}\) and \(s-r \in \mathbb {J}_{2, n-1}\)

Exercise 2.31

Denote by \(X_{[r:n]}\) the r-th order statistic from an i.i.d. random vector \(\boldsymbol {X}= \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with a marginal standard normal distribution.

  1. (1)

    Based on (2.3.78), show that \(\mathsf {E} \left \{ X_{[3:4]} \right \} = - \mathsf {E} \left \{ X_{[2:4]} \right \} = \frac {6} {\pi \sqrt {\pi }} \big ( \pi - 3 \tan ^{-1} \sqrt {2} \big ) \approx 0.2970\), \(\mathsf {E} \left \{ X_{[4:5]} \right \} = - \mathsf {E} \left \{ X_{[2:5]} \right \} = \frac {5} {2\sqrt {\pi }} - \frac {15} {\pi \sqrt {\pi }} \sin ^{-1} \frac {1}{3} \approx 0.4950\), and \(\mathsf {E} \left \{ X_{[5:5]} \right \} = - \mathsf {E} \left \{ X_{[1:5]} \right \} = \frac {15} {\pi \sqrt {\pi }} \tan ^{-1} \sqrt {2} - \frac {5} {2\sqrt {\pi }} \approx 1.1630\).

  2. (2)

    Show that \(2 \mathsf {E} \left \{ X_{[2:3]} X_{[3:3]} \right \} = - \mathsf {E} \left \{ X_{[1:3]} X_{[3:3]} \right \} = \frac {\sqrt {3}}{\pi } \approx 0.5513\), \(\mathsf {E} \left \{ X_{[2:4]} \right . \left . X_{[3:4]} \right \} = - \ \mathsf {E} \left \{ X_{[2:4]} X_{[4:4]} \right \} = \frac {\sqrt {3}\left (2-\sqrt {3}\right )}{\pi } \approx 0.1477\), and \(- \sqrt {3} \mathsf {E} \left \{ X_{[3:4]} X_{[4:4]} \right \} = \mathsf {E} \left \{ X_{[1:4]} X_{[4:4]} \right \} = - \frac {3}{\pi } \approx -0.9549 \) based on (2.3.78).

  3. (3)

    Show that \(\mathsf {E} \left \{ X_{[3:5]} X_{[5:5]} \right \} = - \frac {10\sqrt {3}}{\pi ^2} \tan ^{-1} \sqrt { \frac {5}{3} } + \frac {15}{2\pi } - \frac {15}{\pi ^2} \tan ^{-1} \sqrt { \frac {1}{5} } \approx 0.1481\) using (1.A2.2)–(1.A2.5).
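The closed-form means in part (1) are easy to confirm by simulation; a minimal sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.sort(rng.standard_normal((500_000, 5)), axis=1)
print(X.mean(axis=0))   # ~ (-1.1630, -0.4950, 0.0000, 0.4950, 1.1630)
```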

Exercise 2.32

For the joint cdf \(F_{r,s}\) of the r-th and s-th order statistics of an i.i.d. random vector \(\boldsymbol {X}\), suppose it is known that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{s=2}^{n} \sum_{r=1}^{s-1} F_{r,s}(x,y) \ = \ 3 \left \{ 1- e^{-2x} + 2e^{-(x+y)} - 2e^{-y} \right \} {} \end{array} \end{aligned} $$
(2.E.31)

for \(0 < x \le y\). Obtain the dimension n of the random vector \(\boldsymbol {X}\), the joint cdf \(F_{2,3}\) of the order statistics \(X_{[2]}\) and \(X_{[3]}\), and \(\sum \limits _{s=2}^{n} \sum \limits _{r=1}^{s-1} F_{r,s}(x,y)\) for \(x \ge y\).

Exercise 2.33

Obtain the joint pmf’s (2.4.21) and (2.4.22).

Exercise 2.34

An i.i.d. random vector \(\left ( X_1, X_2, X_3 \right )\) has the marginal pmf \(p_X(i) = \frac {1}{2} , \frac {1}{3}, \frac {1}{6}\), and 0 for \(i=1, 2, 3\), and \(i \notin \mathbb {J}_{1,3}\), respectively, as in (2.4.6).

  1. (1)

    Obtain the joint pmf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}\).

  2. (2)

    Confirm \(\sum \limits _{y=-\infty }^{\infty } p_{1,2}(x,y)= \sum \limits _{y=-\infty }^{\infty } p_{1,3}(x,y)= p_1 (x)\), \(\sum \limits _{y=-\infty }^{\infty } p_{2,3}(x,y)= p_2 (x)\), \(\sum \limits _{x=-\infty }^{\infty } p_{1,2}(x,y)= p_2 (y)\), and \(\sum \limits _{x=-\infty }^{\infty } p_{1,3}(x,y)= \sum \limits _{x=-\infty }^{\infty } p_{2,3}(x,y)= p_3 (y)\) based on the joint pmf’s (2.4.20)–(2.4.22).

Exercise 2.35

In Example 2.4.2, obtain the marginal pmf of the order statistic via other methods.

Exercise 2.36

Prove the equality (2.4.29) based on the equality (2.2.35) and the joint pmf (2.4.16).

Exercise 2.37

Obtain the formula (2.4.4) of the pmf of the r-th order statistic based on each of the following.

  1. (1)

    The relation (2.A2.28).

  2. (2)

    The pmf \(p_{X_{[r]}} (x) = \mathsf {P} \left ( X_{[r]} =x\right )\) can be expressed, similarly to (2.A1.31), as

    $$\displaystyle \begin{aligned} \begin{array}{rcl} p_{X_{[r]}} (x) & =&\displaystyle \sum_{i, t} \mathsf{P} \left ( X_{[r-i-1]} < X_{[r-i]} = X_{[r-i+1]} = \cdots = X_{[r-1]} \right. \\ & &\displaystyle \quad \left. = X_{[r]} = X_{[r+1]} = \cdots = X_{[r+t]} = x < X_{[r+t+1]} \right ). \qquad \qquad {} \end{array} \end{aligned} $$
    (2.E.32)

Exercise 2.38

For the marginal pmf \(p_{r}\) of the r-th order statistic from a discrete i.i.d. random vector \(\boldsymbol {X} = \left ( X_1, X_2, X_3 \right )\), it is known that

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_1(1) = \frac{189}{216}, \quad p_1(2) = \frac{26}{216}, \quad p_1(3) = \frac{1}{216}; \\ p_2(1) = \frac{108}{216}, \quad p_2(2) = \frac{92}{216}, \quad p_2(3) = \frac{16}{216}; \\ p_3(1) = \frac{27}{216}, \quad p_3(2) = \frac{98}{216}, \quad p_3(3) = \frac{91}{216}. \end{array} \end{aligned} $$

Obtain the marginal pmf \(p_X\) of \(\boldsymbol {X}\).

Exercise 2.39

For a discrete i.i.d. random vector \(\boldsymbol {X} = \left ( X_1 , X_2 \right )\), the cdf of the smaller order statistic is

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{X_{[1]}} (x) \ = \ \left\{ \begin{array}{llll} 0, & x < 1; & \quad \frac{5}{9}, & 1 \le x < 2; \\ \frac{3}{4}, & 2 \le x < 3; & \quad 1, & x \ge 3. \end{array} \right. \end{array} \end{aligned} $$
(2.E.33)

Obtain the pmf \(p_{X_{[2]}}\) of the larger order statistic.

Exercise 2.40

For the pmf \(p_{X_{[1]}}\) of the first order statistic from a discrete i.i.d. random vector \(\boldsymbol {X} = \left ( X_1 , X_2 \right )\), we have \(p_{X_{[1]}} (1) = \frac {5}{9}\), \(p_{X_{[1]}} (2) = \frac {3}{9}\), and \(p_{X_{[1]}} (3) = \frac {1}{9}\). Obtain the pmf \(p_{X_{[2]}}\) of the second order statistic.

Exercise 2.41

The joint pdf of the first and second order statistics from an i.i.d. random vector \(\boldsymbol {X} = \left ( X_1 , X_2 \right )\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{1,2} (x,y) \ = \ 8 \left(1-x \right) \left(1-y \right) u(x) u(y-x) u(1-y) . \end{array} \end{aligned} $$
(2.E.34)

Obtain the pdf \(f_{1} (x)\) of the first order statistic, the pdf \(f_{2} (x)\) of the second order statistic, and the joint pdf \(f_{\boldsymbol {X}}\) of \(\boldsymbol {X}\).

Exercise 2.42

Denoting by \(F_{r}\) the cdf of the r-th order statistic from a continuous i.i.d. random vector \(\boldsymbol {X}=\left ( X_1, X_2 \right )\), obtain the marginal cdf F of \(\boldsymbol {X}\) when

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{1} (x) \ = \ \left\{ \begin{array}{llll} 1, & x \ge 1; & \quad \frac{1}{4} (1+x)(3-x), & -1 \le x \le 1; \\ 0, & x \le -1 \end{array} \right. {} \end{array} \end{aligned} $$
(2.E.35)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_{2} (x) \ = \ \left\{ \begin{array}{llll} 1, & x \ge 1; & \quad \frac{1}{4} (1+x)^2, & -1 \le x \le 1; \\ 0, & x \le -1. \end{array} \right. {} \end{array} \end{aligned} $$
(2.E.36)

Exercise 2.43

For the parameter

$$\displaystyle \begin{aligned} \begin{array}{rcl} \kappa_{r,s} \ = \ \left\{ \frac{(n-s)!(s-1)!}{(n-r)!(r-1)!}\right\}^{\frac{1}{s-r}} \end{array} \end{aligned} $$
(2.E.37)

defined in (2.6.2), show that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \kappa_{1,s} & >&\displaystyle \kappa_{2,s-2} \quad \mbox{when }5 \le s \le 10, \end{array} \end{aligned} $$
(2.E.38)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \kappa_{1,2r+1} & >&\displaystyle \kappa_{r,r+1} \quad \mbox{when }r \ge 3\mbox{ and }n\mbox{ is small,} \end{array} \end{aligned} $$
(2.E.39)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \kappa_{1,2r+1} & <&\displaystyle \kappa_{r,r+1} \quad \mbox{when }r \ge 3\mbox{ and }n\mbox{ is large,} \end{array} \end{aligned} $$
(2.E.40)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \kappa_{1,2r+2} & >&\displaystyle \kappa_{r,r+2} \quad \mbox{when }r \ge 3\mbox{ and }n\mbox{ is small,} \end{array} \end{aligned} $$
(2.E.41)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \kappa_{1,2r+2} & <&\displaystyle \kappa_{r,r+2} \quad \mbox{when }r \ge 3\mbox{ and }n\mbox{ is large.} \end{array} \end{aligned} $$
(2.E.42)

Exercise 2.44

Assume \(F(1) = 0.1\), \(F(2) =F(3) =F(4) =0.3\), \(F(5) =F(6) =F(7) =F(8)=0.5\), \(F(9) = 0.6\), \(F(10)=0.8\), and \(F(11)=1\) for a discrete i.i.d. random vector of size n with the marginal cdf F. When \(n=10\), is \(p_{X_{[3]}}(x)\) larger or smaller than \(p_{X_{[4]}}(x)\)?

Exercise 2.45

Consider a discrete i.i.d. random vector \(\boldsymbol {X} = \left ( X_1 , X_2 , X_3\right )\) with the marginal pmf p.

  1. (1)

    When \(p(1) = \frac {3}{10}\), \(p(2) = \frac {1}{10}\), and \(p(3) = \frac {3}{5}\), compare the magnitude of the pmf \(p_1\) of the first order statistic and that of the pmf \(p_2\) of the second order statistic.

  2. (2)

    When \(p(1) = \frac {1}{6} \approx 0.1667\), \(p(2) = \frac {1}{12}\left (9-\sqrt {21}\right ) \approx 0.3681\), and \(p(3) = \frac {1}{12}\left ( 1+\sqrt {21}\right ) \approx 0.4652\), compare the magnitude of the pmf \(p_1\) of the first order statistic and that of the pmf \(p_2\) of the second order statistic.

Exercise 2.46

Let us consider the case in which \(\overline {x}_L=\overline {x}_U-2\) and \(p_{X_{[r]}}\left ( \overline {x}_L+1 \right )=p_{X_{[s]}}\left ( \overline {x}_L+1 \right )\) as in Exercise 2.45 (2).

  1. (1)

    Let \((n, s, r)=(3, 2, 1)\). Show that \(p_1(x)=p_2(x)\) when \(F(x) = \frac {1}{3} +b \) and \(F(x-1) = \frac {1}{3} -a\), where a and b satisfy \(a^2 +b^2 -ab +a -b = 0\), \(0 < a \le \frac {1}{3}\), and \( 0 < b \le \frac {2}{3}\). For example, in Exercise 2.45 (2), \(a= \frac {1}{6}\) and \(b= \frac {1}{12}\left ( 7-\sqrt {21} \right )\approx 0.2015\).

  2. (2)

    Let \((n, s, r)=(5, 4, 2)\). Show that \(p_2(x)=p_4(x)\) when \(F(x) = \frac {1}{2} +a \) and \(F(x-1) = \frac {1}{2} -a\) for a number \(a \in \left (0 , \frac {1}{2} \right ]\).

Exercise 2.47

Obtain the joint pdf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}} = \left ( X_{[1]}, X_{[2]}\right ) \), the pdf of the first order statistic \(X_{[1]}\), and the pdf of the second order statistic \(X_{[2]}\) for each of the following joint pdfs of \(\boldsymbol {X}=\left ( X_1, X_2 \right )\).

  1. (1)

    \(f_{\boldsymbol {X}} (x,y) = \frac {2}{3}(2x+y)u(x)u(1-x)u(y)u(1-y)\).

  2. (2)

    \(f_{\boldsymbol {X}} (x,y) = 2x u(x)u(1-x)u(y)u(1-y)\).

  3. (3)

    \(f_{\boldsymbol {X}} (x,y) = \frac {2}{a+b}(ax+by) u(x)u(1-x)u(y)u(1-y)\), where \(a \ge 0\), \(b \ge 0\), and \(a+b>0\).

Exercise 2.48

When \(\left \{ a_j \ge 0 \right \}_{j=0}^{2}\) and \(\max \left \{ a_0 , a_1 , a_2 \right \} >0\), a random vector \(\boldsymbol {X} = \left ( X_1, X_2 \right )\) has the joint pdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X}}( x, y ) \ = \ \frac{12 \left ( a_2 x^2 + a_1 xy + a_0 y^2 \right)}{4\left(a_0 + a_2 \right) +3a_1} u(x)u(1-x)u(y)u(1-y) .\\ \end{array} \end{aligned} $$
(2.E.43)

Obtain the joint pdf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}=\left ( X_{[1]}, X_{[2]} \right )\) of \(\boldsymbol {X}\).

Exercise 2.49

For a random vector \(\boldsymbol {X} = \left ( X_1, X_2 \right )\), assume the joint pdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X}}( x, y ) & =&\displaystyle \frac{12\left ( a_3 x^3 + a_2 x^2 y + a_1 xy^2 + a_0 y^3 \right)} {3\left(a_0 + a_3 \right) + 2 \left ( a_1 + a_2 \right )} \\ & &\displaystyle \quad \times u(x)u(1-x)u(y)u(1-y), \end{array} \end{aligned} $$
(2.E.44)

where \(\left \{ a_j \right \}_{j=0}^{3}\) are non-negative numbers with at least one positive. Obtain the joint pdf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}=\left ( X_{[1]}, X_{[2]} \right )\) of \(\boldsymbol {X}\).

Exercise 2.50

Obtain the joint pdf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}=\left ( X_{[1]}, X_{[2]} \right )\) of \(\boldsymbol {X}\) with the joint pdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X}}( x, y ) & =&\displaystyle \frac{360\left ( a_4 x^4 + a_3 x^3 y + a_2 x^2y^2 + a_1 x y^3 + a_0 y^4 \right)} {72\left(a_0 + a_4 \right) + 45 \left ( a_1 + a_3 \right )+ 40 a_2 } \\ & &\displaystyle \quad \ \times u(x)u(1-x)u(y)u(1-y), \end{array} \end{aligned} $$
(2.E.45)

where \(\left \{ a_j \right \}_{j=0}^{4}\) are non-negative numbers with at least one positive.

Exercise 2.51

Obtain the joint pdf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}\) for each of the following joint pdfs of \(\boldsymbol {X}=\left ( X_1, X_2 \right )\).

  1. (1)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {2}{9} \left ( 4x+ 5y \right ) u(x)u(1-x)u(y)u(1-y)\).

  2. (2)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {2}{7} \left ( 3x+ 4y \right ) u(x)u(1-x)u(y)u(1-y)\).

  3. (3)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {1}{7} \left ( 9x+ 5y \right ) u(x)u(1-x)u(y)u(1-y)\).

  4. (4)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {1}{4} \left ( 4 x^2 + 4 x y + 5 y^2 \right ) u(x)u(1-x)u(y)u(1-y)\).

  5. (5)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {1}{4} \left ( x^2 + 4 x y + 8 y^2 \right ) u(x)u(1-x)u(y)u(1-y)\).

  6. (6)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {1}{4} \left ( 9 x^2 + 4 x y \right ) u(x)u(1-x)u(y)u(1-y)\).

  7. (7)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {3}{10} \left ( 3 x^3 + 4 x^2 y + 4 x y^2 + 5y^3 \right ) u(x)u(1-x)u(y)u(1-y)\).

  8. (8)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {3}{5} \left ( 4 x^3 + x^2 y + 3 x y^2 \right ) u(x)u(1-x)u(y)u(1-y)\).

  9. (9)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {3}{5} \left ( 3 x^3 + 2x^2 y + 2 x y^2 + y^3 \right ) u(x)u(1-x)u(y)u(1-y)\).

  10. (10)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {24}{17} \left ( 3x^3 y + 3 x^2 y^2 \right ) u(x)u(1-x)u(y)u(1-y)\).

  11. (11)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {24}{17} \left ( x^3 y + 3 x^2 y^2 + 2 x y^3 \right ) u(x)u(1-x)u(y)u(1-y)\).

  12. (12)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {5}{16} \left ( 6x^4 + 8x^3 y + 9 x^2 y^2 \right ) u(x)u(1-x)u(y)u(1-y)\).

  13. (13)

    \(f_{\boldsymbol {X}}( x, y ) = \frac {5}{16} \left ( 5x^4 + y^4 + 7x^3 y + xy^3 + 9 x^2 y^2 \right ) u(x) u(1-x)u(y)u(1-y)\).

Exercise 2.52

Assume that the joint pdf \(f_{\boldsymbol {X}}(x,y,z)\) of a random vector \(\boldsymbol {X} = \left ( X_1, X_2, X_3 \right )\) is a polynomial of order 3 or lower and has the support \(\{(x,y,z): 0 \le x \le 1, \, 0 \le y \le 1, \, 0 \le z \le 1 \}\). Then, the joint pdf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}\) of \(\boldsymbol {X}\) will take one of the forms

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X_{[\cdot]}}}( x, y, z ) & =&\displaystyle 4 (x+y+z)u(x)u(y-x)u(z-y)u(1-z) , {} \end{array} \end{aligned} $$
(2.E.46)
$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X_{[\cdot]}}}( x, y, z ) & =&\displaystyle \left\{ k_1 \left ( x^2 + y^2 + z^2 \right ) + k_2 (xy+yz+zx) \right\} \\ & &\displaystyle \ \ \times u(x)u(y-x)u(z-y)u(1-z) , {} \end{array} \end{aligned} $$
(2.E.47)

or

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{\boldsymbol{X_{[\cdot]}}}( x, y, z ) & =&\displaystyle \big\{ k_1 \left ( x^3 + y^3 + z^3 \right ) + k_2 \big ( x^2y + y^2z + z^2 x + xy^2 + yz^2 \\ & &\displaystyle \ \ + zx^2 \big )+ k_3 xyz \big\} u(x)u(y-x)u(z-y)u(1-z). {} \end{array} \end{aligned} $$
(2.E.48)

When the joint pdf of \(\boldsymbol {X}=\left ( X_1, X_2, X_3 \right )\) is \(f_{\boldsymbol {X}}( x, y, z ) = 2x u(x) u(1-x) u(y) u(1-y) u(z)u(1-z)\), obtain the joint pdf \(f_{\boldsymbol {X_{[\cdot ]}}}\) of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}\).

Exercise 2.53

When the joint pdf of a random vector \(\boldsymbol {X}=\left ( X_1, X_2, X_3 \right )\) is \(f_{\boldsymbol {X}}( x, y, z ) = \frac {1}{5} (x+2y+7z) u(x)u(1-x)u(y)u(1-y)u(z)u(1-z)\), obtain the joint pdf of the order statistic vector \(\boldsymbol {X_{[\cdot ]}}\).
