Abstract

Based on the description of probability in Chap. 2, we now introduce and discuss several topics on random variables: namely, the notions of the cumulative distribution function, expected values, and moments. We then discuss conditional distributions and describe some of the widely used distributions.


Notes

  1.

    The distribution of X is a hypergeometric distribution.

  2.

    Here, ‘\(\max (0, n-B) \le k \le \min (n,G)\)’ can be replaced with ‘all integers k’ by noting that \(\, _{p}\text{ C}_{q} =0\) for \(q < 0\) or \(q > p\) when p is a non-negative integer and q is an integer from Table 1.4.

  3.

    We can equivalently obtain \( \int _{-\infty }^{\infty } f_{\sqrt{X}}(y) dy = \int _{0}^{\infty } 2y f_X \left( y^2 \right) dy + F_X ( 0 ) = \int _{0}^{\infty } f_X ( t ) dt + \int _{-\infty }^{0} f_X ( t ) dt = \int _{-\infty }^{\infty } f_X ( t ) dt =1\) using \(F_X ( 0 )= \int _{-\infty }^{0} f_X (t) dt\).

  4.

    This pdf is called the central chi-square pdf with one degree of freedom. The central chi-square pdf, together with the non-central chi-square pdf, is discussed in Sect. 5.4.2.

  5.

    If \(a <0\), a will be replaced with |a|.

  6.

    Here, because \(F_X\) is a continuous function, \(F_X \left( F_X^{-1} (z) \right) =z\) as discussed in (3.A.26).

  7.

    Note that if the median \(x_{med}\) is defined by \( \mathsf {P}( X \le x_{med}) = \mathsf {P}\left( X \ge x_{med}\right) \), this pmf has no median.

  8.

    Note that if the median \(x_{med}\) is defined by \( \mathsf {P}( X \le x_{med}) \, = \ \mathsf {P}\left( X \ge x_{med} \right) \), any real number in the interval (2, 3) is the median.

  9.

    Unless stated otherwise, an appropriate region of convergence is assumed when we consider the mgf.

  10.

    Denoting the location of the second cutting by \(y \in (t, 1)\), the lengths of the three pieces are t, \(y-t\), and \(1-y\), resulting in the condition \( \frac{1}{2}< y < t + \frac{1}{2} \) on y for the three pieces to form a triangle.

  11.

    Denoting the location of the second cutting by \(y \in (0, t)\), the lengths of the three pieces are y, \(t-y\), and \(1-t\), resulting in the condition \(t- \frac{1}{2}< y < \frac{1}{2} \) on y for the three pieces to form a triangle.

  12.

    Among discrete distributions, only the geometric distribution satisfies (3.5.32).

  13.

    A similar relationship \(g(s+t) = g(s)+g(t)\) is called the Cauchy equation.

  14.

    Note that \( \mathsf {P}\left( X = k^- \right) =0\) even for a discrete random variable X because, for an integer-valued X with pmf \(p_{X}(k)\), the value \(p_{X}(k)= \mathsf {P}( X = k )\) is 0 when k is not an integer.

  15.

    Here, because a cdf is continuous from the right-hand side, \(S_u\) is either in the form \(S_u = \left[ x_L, \infty \right) \) or in the form \(S_u = \left\{ x_L, x_L +1, \ldots \right\} \).

  16.

    In (3.A.75), the function \(\varPhi ( \cdot )\) is the standard normal cdf defined in (3.5.3).

  17.

    More generally, \(\int _{0}^{\infty } x^m f_X(x) dx\) for \(m=1, 2, \ldots \) are called the half moments, incomplete moments, or partial moments.

  18.

    Here, when \(n \rightarrow \infty \), \(p \rightarrow 0\), and \(np \rightarrow \lambda \), the skewness is \(\frac{1}{\sqrt{ \lambda }}\) and the kurtosis is \(3 + \frac{1}{ \lambda }\). In addition, when \(\sigma ^2 = np(1-p) \rightarrow \infty \), the skewness is \(\frac{1}{\sigma } \rightarrow 0\) and the kurtosis is \(3 + \frac{1}{ \sigma ^2 } \rightarrow 3\).

References

  • N. Balakrishnan, Handbook of the Logistic Distribution (Marcel Dekker, New York, 1992)
  • E.F. Beckenbach, R. Bellman, Inequalities (Springer, Berlin, 1965)
  • P.J. Bickel, K.A. Doksum, Mathematical Statistics (Holden-Day, San Francisco, 1977)
  • P.O. Börjesson, C.-E.W. Sundberg, Simple approximations of the error function \(Q(x)\) for communications applications. IEEE Trans. Commun. 27(3), 639–643 (Mar. 1979)
  • W. Feller, An Introduction to Probability Theory and Its Applications, 3rd edn., revised printing (Wiley, New York, 1970)
  • W.A. Gardner, Introduction to Random Processes with Applications to Signals and Systems, 2nd edn. (McGraw-Hill, New York, 1990)
  • B.R. Gelbaum, J.M.H. Olmsted, Counterexamples in Analysis (Holden-Day, San Francisco, 1964)
  • G.J. Hahn, S.S. Shapiro, Statistical Models in Engineering (Wiley, New York, 1967)
  • J. Hajek, Nonparametric Statistics (Holden-Day, San Francisco, 1969)
  • J. Hajek, Z. Sidak, P.K. Sen, Theory of Rank Tests, 2nd edn. (Academic, New York, 1999)
  • N.L. Johnson, S. Kotz, Distributions in Statistics: Continuous Univariate Distributions, vol. I, II (Wiley, New York, 1970)
  • S.A. Kassam, Signal Detection in Non-Gaussian Noise (Springer, New York, 1988)
  • A.I. Khuri, Advanced Calculus with Applications in Statistics (Wiley, New York, 2003)
  • P. Komjath, V. Totik, Problems and Theorems in Classical Set Theory (Springer, New York, 2006)
  • A. Leon-Garcia, Probability, Statistics, and Random Processes for Electrical Engineering, 3rd edn. (Prentice Hall, New York, 2008)
  • M. Loeve, Probability Theory, 4th edn. (Springer, New York, 1977)
  • E. Lukacs, Characteristic Functions, 2nd edn. (Griffin, London, 1970)
  • R.N. McDonough, A.D. Whalen, Detection of Signals in Noise, 2nd edn. (Academic, New York, 1995)
  • D. Middleton, An Introduction to Statistical Communication Theory (McGraw-Hill, New York, 1960)
  • A. Papoulis, The Fourier Integral and Its Applications (McGraw-Hill, New York, 1962)
  • A. Papoulis, S.U. Pillai, Probability, Random Variables, and Stochastic Processes, 4th edn. (McGraw-Hill, New York, 2002)
  • S.R. Park, Y.H. Kim, S.C. Kim, I. Song, Fundamentals of Random Variables and Statistics (in Korean) (Freedom Academy, Paju, 2017)
  • V.K. Rohatgi, A.K.Md.E. Saleh, An Introduction to Probability and Statistics, 2nd edn. (Wiley, New York, 2001)
  • J.P. Romano, A.F. Siegel, Counterexamples in Probability and Statistics (Chapman and Hall, New York, 1986)
  • I. Song, J. Bae, S.Y. Kim, Advanced Theory of Signal Detection (Springer, Berlin, 2002)
  • J.M. Stoyanov, Counterexamples in Probability, 3rd edn. (Dover, New York, 2013)
  • A.A. Sveshnikov (ed.), Problems in Probability Theory, Mathematical Statistics and Theory of Random Functions (Dover, New York, 1968)
  • J.B. Thomas, Introduction to Probability (Springer, New York, 1986)
  • R.D. Yates, D.J. Goodman, Probability and Stochastic Processes (Wiley, New York, 1999)
  • D. Zwillinger, S. Kokoska, CRC Standard Probability and Statistics Tables and Formulae (CRC, New York, 1999)


Appendices

3.1.1 Appendix 3.1 Cumulative Distribution Functions and Their Inverse Functions

3.1.1.1 (A) Cumulative Distribution Functions

We now discuss the cdf in the context of function theory (Gelbaum and Olmsted 1964).

Definition 3.A.1

(cdf) A real function F(x) possessing all three of the following properties is a cdf:

  1. (1)

    The function F(x) is non-decreasing: \(F(x+h)\ge F(x)\) for \(h>0\).

  2. (2)

    The function F(x) is continuous from the right-hand side: \(F\left( x^{+} \right) =F(x)\).

  3. (3)

    The function F(x) has the limits \(\lim \limits _{x \rightarrow -\infty }F(x) =0\) and \(\lim \limits _{x \rightarrow \infty } F(x) =1\).

A cdf is a finite and monotonic function. A point x with \(F(x+\varepsilon )-F(x-\varepsilon )>0\) for every positive number \(\varepsilon \) is called an increasing point of F(x); a point with \(F(x)=F\left( x^{-} \right) \) is called a continuous point; and a point with \(F\left( x^{+} \right) =F(x)\ne F\left( x^{-} \right) \) is called a discontinuity. Here, as we have already seen in Definition 1.3.3, \(p_x = F\left( x^{+} \right) -F\left( x^{-} \right) = F(x)-F\left( x^{-} \right) \) is the jump of F(x) at x. A cdf can have only type 1 discontinuities, i.e., jump discontinuities, with every jump between 0 and 1.

Fig. 3.20 The function \(Y=g(X)\)

Example 3.A.1

Consider the function g shown in Fig. 3.20. Here, \(y_1\) is the local minimum of \(y=g(x)\), and \(x_1\) and \(x_{11}\) are the two solutions to \(y_1=g(x)\). In addition, \(y_2\) is the local maximum of \(y=g(x)\), and \(x_2\) and \(x_{22}\) are the solutions to \(y_2=g(x)\). Let \(x_3< x_4 < x_5\) be the X coordinates of the crossing points of the straight line \(Y=y\) and the function \(Y=g(X)\) for \(y_1< y < y_2\). Then, \(x_{11}< x_3< x_2< x_4< x_1< x_5 < x_{22}\) and \(y=g\left( x_3 \right) =g\left( x_4 \right) =g\left( x_5 \right) \) for \(y_1< y < y_2\). Obtain the cdf \(F_Y\) of \(Y= g(X)\) in terms of the cdf \(F_X\) of X, and discuss whether the cdf \(F_Y\) is a continuous function.

Solution

When \(y > y_2\) or \(y < y_1\), we get

$$\begin{aligned} F_Y (y)= & {} F_X \left( g^{-1} (y) \right) \end{aligned}$$
(3.A.1)

from Fig. 3.20 because \( \mathsf {P}(Y \le y ) = \mathsf {P}( g(X) \le y ) = \mathsf {P}\left( X \le g^{-1} (y) \right) \). When \(y = y_1\), because \(\left\{ Y \le y_1 \right\} = \left\{ g(X) \le y_1 \right\} = \left\{ X \le x_{11} \right\} + \left\{ X = x_1 \right\} \), we get

$$\begin{aligned} F_Y (y)= & {} F_X \left( x_{11} \right) + \mathsf {P}\left( X = x_1\right) . \end{aligned}$$
(3.A.2)

In addition, when \(y_1< y < y_2\), the cdf is

$$\begin{aligned} F_Y (y)= & {} F_X \left( x_3 \right) + F_X \left( x_5 \right) - F_X \left( x_4 \right) + \mathsf {P}\left( X = x_4 \right) \end{aligned}$$
(3.A.3)

because \(\{ g(X) \le y \} = \left\{ X \le x_3 \right\} + \left\{ x_4 \le X \le x_5 \right\} = \left\{ X \le x_3 \right\} + \left\{ x_4 < X \le x_5 \right\} + \left\{ X = x_4 \right\} \). Finally, from \(\left\{ g(X) \le y_2 \right\} = \left\{ X \le x_{22} \right\} \) we get

$$\begin{aligned} F_Y (y) \, = \ F_X \left( x_{22} \right) \end{aligned}$$
(3.A.4)

when \(y = y_2\). Combining the results (3.A.1)–(3.A.4), we have the cdf of Y as

$$\begin{aligned} F_Y (y) \, = \ \left\{ \begin{array}{ll} F_X \left( g^{-1} (y) \right) ,&{} y< y_1 \text{ or } y > y_2, \\ F_X \left( x_{11} \right) + \mathsf {P}\left( X = x_1 \right) ,&{} y = y_1, \\ F_X \left( x_3 \right) + F_X \left( x_5 \right) - F_X \left( x_4 \right) \\ \quad + \mathsf {P}\left( X = x_4 \right) ,&{} y_1< y < y_2, \\ F_X \left( x_{22} \right) ,&{} y = y_2 . \end{array} \right. \end{aligned}$$
(3.A.5)

Let us next discuss the continuity of the cdf (3.A.5). When \(y \uparrow y_1\), because \(g^{-1} (y) \rightarrow x_{11}^-\), we get \(\lim \limits _{y \uparrow y_1} F_Y ( y ) = \lim \limits _{y \uparrow y_1} F_X \left( g^{-1} (y) \right) \), i.e.,

$$\begin{aligned} \lim _{y \uparrow y_1} F_Y ( y )= & {} F_X \left( x_{11}^- \right) . \end{aligned}$$
(3.A.6)

When \(y \downarrow y_1\), we have \(\lim \limits _{y \downarrow y_1} F_Y ( y ) = \lim \limits _{y \downarrow y_1} \big \{ F_X \left( x_3 \right) + F_X \left( x_5 \right) - F_X \left( x_4 \right) + \mathsf {P}\big ( X = x_4 \big ) \big \} = F_X \left( x_{11}^+ \right) + F_X \left( x_1^+ \right) - F_X \left( x_1^- \right) + \mathsf {P}\left( X = x_1^- \right) \), i.e.,

$$\begin{aligned} \lim _{y \downarrow y_1} F_Y ( y )= & {} F_X \left( x_{11} \right) + \mathsf {P}\left( X = x_1 \right) \end{aligned}$$
(3.A.7)

because \(x_3 \rightarrow x_{11}^+\), \(x_4 \rightarrow x_1^-\), \(x_5 \rightarrow x_1^+\), \(F_X \left( x_{11}^+ \right) =F_X \left( x_{11} \right) \), \(F_X \left( x_{1}^+ \right) - F_X \left( x_{1}^- \right) = \mathsf {P}\left( X = x_1 \right) \), and, for any type (see Footnote 14) of random variable X, \( \mathsf {P}\left( X = x_1^- \right) = 0\). Thus, from the second line of (3.A.5), (3.A.6), and (3.A.7), the continuity of the cdf \(F_Y(y)\) at \(y = y_1\) can be summarized as follows: The cdf \(F_Y(y)\) is (A) continuous from the right-hand side at \(y = y_1\) and (B) continuous from the left-hand side at \(y = y_1\) only if \(F_X \left( x_{11} \right) + \mathsf {P}\left( X = x_1 \right) - F_X \left( x_{11}^- \right) = \mathsf {P}\left( X = x_1 \right) + \mathsf {P}\left( X = x_{11} \right) \) is 0 or, equivalently, only if \( \mathsf {P}\left( X = x_1 \right) = \mathsf {P}\left( X = x_{11} \right) =0\).

Next, when \(y \uparrow y_2\), recollecting that \(x_3 \rightarrow x_2^-\), \(x_4 \rightarrow x_2^+\), \(x_5 \rightarrow x_{22}^-\), \(F_X \left( x_{2}^+ \right) - F_X \left( x_{2}^- \right) = \mathsf {P}\left( X = x_2 \right) \), and \( \mathsf {P}\left( X = x_2^+ \right) = 0\), we get \(\lim \limits _{y \uparrow y_2} F_Y ( y ) = \lim \limits _{y \uparrow y_2} \left\{ F_X \left( x_3 \right) + F_X \left( x_5 \right) - F_X \left( x_4 \right) + \mathsf {P}\left( X = x_4 \right) \right\} = F_X \left( x_2^- \right) + F_X \left( x_{22}^- \right) - F_X \left( x_2^+ \right) + \mathsf {P}\left( X = x_2^+ \right) \), i.e.,

$$\begin{aligned} \lim _{y \uparrow y_2} F_Y ( y )= & {} F_X \left( x_{22}^- \right) - \mathsf {P}\left( X = x_2 \right) . \end{aligned}$$
(3.A.8)

In addition,

$$\begin{aligned} \lim _{y \downarrow y_2} F_Y ( y )= & {} \lim _{y \downarrow y_2} F_X \left( g^{-1} (y) \right) \nonumber \\= & {} F_X \left( x_{22} \right) \end{aligned}$$
(3.A.9)

because \(F_X \left( x_{22}^+ \right) =F_X \left( x_{22} \right) \) and \(g^{-1} (y) \rightarrow x_{22}^+\) for \(y \downarrow y_2\). Thus, from the fourth case of (3.A.5), (3.A.8), and (3.A.9), the continuity of the cdf \(F_Y(y)\) at \(y = y_2\) can be summarized as follows: The cdf \(F_Y(y)\) is (A) continuous from the right-hand side at \(y = y_2\) and (B) continuous from the left-hand side at \(y = y_2\) only if \(F_X \left( x_{22} \right) - F_X \left( x_{22}^- \right) + \mathsf {P}\left( X = x_2 \right) = \mathsf {P}\left( X = x_2 \right) + \mathsf {P}\left( X = x_{22}\right) \) is 0 or, equivalently, only if \( \mathsf {P}\left( X = x_2 \right) = \mathsf {P}\left( X = x_{22} \right) =0\). Exercises 3.56 and 3.57 also deal with the continuity of the cdf. \(\diamondsuit \)

Let \(\left\{ x_{\nu } \right\} \) be the set of discontinuities of the cdf F(x), and \(p_{x_{\nu }} = F\left( x_{\nu }^+ \right) -F\left( x_{\nu }^- \right) \) be the jump of F(x) at \(x=x_{\nu }\). Denote by \(\varPsi (x)\) the sum of jumps of F(x) at discontinuities not larger than x, i.e.,

$$\begin{aligned} \varPsi (x) \, = \ \sum \limits _{x_{\nu } \le x } p_{x_{\nu }}. \end{aligned}$$
(3.A.10)

The function \(\varPsi (x)\) is increasing only at \(\left\{ x_{\nu } \right\} \) and is constant in any closed interval not containing \(x_{\nu }\): thus, it is a step-like function of the kind described following Definition 3.1.7.

If we now let

$$\begin{aligned} \psi (x) \, = \ F(x)-\varPsi (x), \end{aligned}$$
(3.A.11)

then \(\psi (x)\) is a continuous function while \(\varPsi (x)\) is continuous only from the right-hand side. In addition, \(\varPsi (x)\) and \(\psi (x)\) are both non-decreasing and satisfy \(\varPsi (-\infty ) = \psi (-\infty ) = 0\), \(\varPsi (+\infty ) = a_1 \le 1\), and \(\psi (+\infty ) = b \le 1\). Here, the functions \(F_d(x)=\frac{1}{a_1} \varPsi (x)\) and \(F_c(x)=\frac{1}{b} \psi (x)\) are both cdf's, where \(F_d(x)\) is a step-like function and \(F_c(x)\) is a continuous function. Rewriting (3.A.11), we get

$$\begin{aligned} F(x) \, = \ \varPsi (x)+\psi (x) , \end{aligned}$$
(3.A.12)

i.e.,

$$\begin{aligned} F(x) \, = \ a_1 F_d(x)+b F_c(x) . \end{aligned}$$
(3.A.13)

Figure 3.21 shows an example of the cdf \(F(x) = \varPsi (x)+\psi (x)\) with a step-like function \(\varPsi (x)\) and a continuous function \(\psi (x)\).

The decomposition (3.A.13) is unique because the decomposition shown in (3.A.12) is unique. Let us prove this result. Assume that two decompositions \(F(x) = \varPsi (x)+\psi (x) = \varPsi _1(x)+\psi _1(x)\) are possible. Rewriting this equation, we get \(\varPsi (x)-\varPsi _1(x) = \psi _1(x)-\psi (x)\). The left-hand side is a step-like function because it is the difference of two step-like functions and, similarly, the right-hand side is a continuous function. A function that is simultaneously step-like and continuous has no jumps and is therefore constant; since both sides vanish as \(x \rightarrow -\infty \), the constant is 0. In other words, \(\varPsi _1(x)=\varPsi (x)\) and \(\psi _1(x)=\psi (x)\). By the discussion so far, we have shown the following theorem:

Fig. 3.21 The decomposition of cdf \(F(x) = \varPsi (x)+\psi (x)\) with a step-like function \(\varPsi (x)\) and a continuous function \(\psi (x)\)

Theorem 3.A.1

Any cdf F(x) can be decomposed as

$$\begin{aligned} F(x) \, = \ a_1 F_d (x)+bF_c(x) \end{aligned}$$
(3.A.14)

into a step-like cdf \(F_d(x)\) and a continuous cdf \(F_c(x)\), where \(0 \le a_1 \le 1\), \(0 \le b \le 1\), and \(a_1 + b=1\).

In Theorem 3.A.1, \(F_d(x)\) and \(F_c(x)\) are called the discontinuous or discrete part and continuous part, respectively, of F(x). Now, from Theorem 3.1.3, we see that there exists a countable set D such that \(\int _D dF_d(x)=1\), where the integral is the Lebesgue-Stieltjes integral. The function \(F_c (x)\) is continuous but need not be differentiable at every point. Yet, any cdf is differentiable at almost every point and thus, based on the Lebesgue decomposition theorem, the continuous part \(F_c (x)\) in (3.A.14) can be decomposed into two continuous functions \(F_{ac} (x)\) and \(F_s (x)\) as

$$\begin{aligned} F_c(x) \, = \ b_1 F_{ac} (x) + b_2 F_s (x) , \end{aligned}$$
(3.A.15)

where \(b_1 \ge 0\), \(b_2 \ge 0\), and \(b_1 +b_2 =1\). The function \(F_{ac} (x)\) is the integral of the derivative of \(F_{ac} (x)\): in other words, \(F_{ac} (x) = \int _{-\infty }^{x} F'_{ac} (y) dy\), indicating that \(F_{ac} (x)\) is an absolutely continuous function and that \(\int _{N} dF_{ac} (x) = 0\) for a set N of Lebesgue measure 0. The function \(F_s (x)\) is a continuous function but the derivative is 0 at almost every point. In addition, for a suitably chosen set N of Lebesgue measure 0, we have \(\int _{N} dF_{s} (x) = 1\), indicating that \(F_{s} (x)\) is a singular function. Combining (3.A.14) and (3.A.15), we have the following theorem (Lukacs 1970):

Theorem 3.A.2

Any cdf F(x) can be decomposed into a step-like function \(F_d (x)\), an absolutely continuous function \(F_{ac} (x)\), and a singular function \(F_s (x)\) as

$$\begin{aligned} F (x) \, = \ a_1 F_d (x) + a_2 F_{ac} (x) + a_3 F_s (x), \end{aligned}$$
(3.A.16)

where \(a_1\ge 0\), \(a_2\ge 0\), \(a_3\ge 0\), and \(a_1 +a_2 +a_3 =1\).

In (3.A.16), when one of the three coefficients \(a_1\), \(a_2\), and \(a_3\) is 1, the cdf is called pure: when \(a_1=1\), \(a_2=1\), or \(a_3=1\), the cdf is a discrete cdf, an absolutely continuous cdf, or a singular cdf, respectively. In practice, almost all cdf's are discrete cdf's or absolutely continuous cdf's. Because a singular cdf is only of theoretical interest, with very rare practical applications, an absolutely continuous cdf is simply referred to as a continuous cdf and singular cdf's are not considered in statistics. In this book, a continuous cdf indicates an absolutely continuous cdf unless specified otherwise.

When the intervals between discontinuity points are all the same for a discrete distribution, the distribution is called a lattice distribution, the discontinuities are called lattice points, and the interval between two adjacent lattice points is called the span.

Example 3.A.2

Let us consider an example of a singular cdf. Assume the closed interval [0, 1] and the ternary expansion

$$\begin{aligned} x= & {} \sum \limits _{i=1}^{\infty } \frac{a_i (x)}{3^i} \nonumber \\= & {} 0.a_1(x)a_2(x)a_3(x)\cdots \end{aligned}$$
(3.A.17)

of a point \(x \in [0, 1]\), where \(a_i(x) \in \{0, 1, 2\}\); the Cantor set \(\mathbb {C}\) discussed in Example 1.1.46 consists of those x for which no digit \(a_i(x)\) equals 1. Now, let n(x) be the location of the first 1 in the ternary expansion (3.A.17) of x, with \(n(x)=\infty \) if no 1 appears. Define a cdf as (Romano and Siegel 1986)

$$\begin{aligned} F(x)= & {} \left\{ \begin{array}{ll} 0, &{} ~~ x<0,\\ g(x), &{} ~~ x\in \left( [0,1]-\mathbb {C} \right) ,\\ \sum \limits _{j} \frac{c_j}{2^j}, &{} ~~ x=\sum \limits _j \frac{2c_j}{3^j} \in \mathbb {C} ,\\ 1, &{}~~ x\ge 1 , \end{array} \right. \end{aligned}$$
(3.A.18)

i.e., as

$$\begin{aligned} F(x)= & {} \left\{ \begin{array}{ll} 0, &{} ~ x<0 ,\\ \phi _C(x), &{} ~ 0 \le x \le 1 ,\\ 1, &{}~ x\ge 1 \end{array} \right. \nonumber \\= & {} \left\{ \begin{array}{ll} 0, &{} x \le 0 ,\\ \frac{1}{2^{1+n(x)}} + \sum \limits _{i=1}^{n(x)} \frac{a_i (x)}{2^{i+1}}, &{} 0 \le x \le 1 ,\\ 1, &{} x\ge 1 \end{array} \right. \end{aligned}$$
(3.A.19)

based on the Cantor function \(\phi _C(x)\) discussed in Example 1.3.11, where

$$\begin{aligned} g(x) \, =\ \left\{ \begin{array}{ll} \frac{1}{2} , &{} ~ \frac{1}{3}< x < \frac{2}{3} , \\ \frac{1}{2^k} + \sum \limits _{j=1}^{k-1} \frac{ c_j}{2^j}, &{} ~ x \in A_{2c_1,2c_2,\ldots ,2c_{k-1}} \end{array}\right. \end{aligned}$$
(3.A.20)

with the open interval \(A_{2c_1,2c_2,\ldots ,2c_{k-1}}\) defined by (1.1.41) and (1.1.42). Then, it is easy to see that F(x) is a continuous cdf. In addition, as we have observed in Example 1.3.11, the derivative of F(x) is 0 at almost every point and thus cannot be a pdf. In other words, F(x) is not an absolutely continuous cdf but a singular cdf.

Some specific values of the cdf (3.A.19) are as follows: We have \(F\left( \frac{1}{9}\right) = \frac{1}{2^{1+2}} + \sum \limits _{i=1}^{2} \frac{a_i \left( \frac{1}{9}\right) }{2^{i+1}} = \frac{1}{8} + \frac{1}{8} = \frac{1}{4}\) from \(\frac{1}{9}=0.01_3\) and \(n \left( \frac{1}{9}\right) =2\), we have \(F \left( \frac{2}{9}\right) = \frac{1}{2^{\infty }} + \sum \limits _{i=1}^{\infty } \frac{a_i \left( \frac{2}{9}\right) }{ 2^{i+1}} = \frac{1}{4}\) from \(\frac{2}{9}=0.02_3\) and \(n \left( \frac{2}{9}\right) =\infty \), and we have \(F \left( \frac{1}{3}\right) = \frac{1}{2^{1+1}} + \sum \limits _{i=1}^{1} \frac{a_i \left( \frac{1}{3} \right) }{ 2^{i+1}} = \frac{1}{2} \) from \(\frac{1}{3}=0.1_3\) and \(n \left( \frac{1}{3}\right) =1\). Similarly, from \(\frac{2}{3}=0.2_3\) and \(n \left( \frac{2}{3}\right) =\infty \), we have \(F \left( \frac{2}{3}\right) = 0 + \frac{2}{4}= \frac{1}{2} \); from \(0=0.0_3\) and \(n(0)=\infty \), we have \(F(0)=0\); and from \(1=0.22\cdots _3\) and \(n(1)=\infty \), we have \(F(1)= 0+ \sum \limits _{i=1}^{\infty } \frac{2}{2^{i+1}} =1\). \(\diamondsuit \)
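The specific values above can be checked numerically. The following minimal sketch (our addition, not from the text: Python with exact rational arithmetic from the standard fractions module, since floating-point arithmetic garbles ternary digits; the function name cantor_cdf and the digit depth are our choices) evaluates (3.A.19) directly from the ternary digits of x.

```python
from fractions import Fraction

def cantor_cdf(x, depth=200):
    """Evaluate the singular cdf (3.A.19) from the ternary digits of x."""
    if x < 0:
        return Fraction(0)
    if x >= 1:
        return Fraction(1)
    total, y = Fraction(0), x
    for i in range(1, depth + 1):
        y *= 3
        a = int(y)   # ternary digit a_i(x) in {0, 1, 2}
        y -= a
        if a == 1:   # n(x) = i, the first digit equal to 1
            # contributes a_n/2^{n+1} plus 1/2^{1+n}, i.e., 2/2^{n+1}
            return total + Fraction(2, 2 ** (i + 1))
        total += Fraction(a, 2 ** (i + 1))
    return total     # no 1 found up to 'depth': n(x) = infinity

for num, den in [(1, 9), (2, 9), (1, 3), (2, 3), (0, 1), (1, 1)]:
    print(num, den, cantor_cdf(Fraction(num, den)))
# Prints 1/4, 1/4, 1/2, 1/2, 0, 1, matching the values computed above.
```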

3.1.1.2 (B) Inverse Cumulative Distribution Functions

The inverse cdf \(F^{-1}\), the inverse function of the cdf F, can be defined specifically as

$$\begin{aligned} F^{-1} (u) \, = \ \inf \{ x: \, F(x) \ge u \} \end{aligned}$$
(3.A.21)

for \(0< u < 1\). Because F is a non-decreasing function, we have \(\{ x: \, F(x) \ge y_2 \} \subseteq \{ x: \, F(x) \ge y_1 \}\) when \(y_1 < y_2\) and, consequently, \( \inf \{ x: \, F(x) \ge y_1 \} \le \inf \{ x: \, F(x) \ge y_2 \}\). Thus,

$$\begin{aligned} F^{-1} \left( y_1 \right) \ \le \ F^{-1} \left( y_2 \right) \end{aligned}$$
(3.A.22)

for \(y_1 < y_2\). In other words, like a cdf, an inverse cdf is a non-decreasing function.

Example 3.A.3

Consider the cdf

$$\begin{aligned} F_1(x) \, = \ \left\{ \begin{array}{llll} 0, &{} x \le 0; &{} \quad \frac{x}{3}, &{} 0 \le x < 1; \\ \frac{1}{2} , &{} 1 \le x \le 2; &{} \quad \frac{1}{2} (x-1), &{} 2 \le x \le 3; \\ 1, &{} x \ge 3. \end{array} \right. \end{aligned}$$
(3.A.23)

Then, the inverse cdf is

$$\begin{aligned} F_1^{-1}(u) \, = \ \left\{ \begin{array}{llll} 3u, &{} 0< u \le \frac{1}{3}; &{} \quad 1, &{} \frac{1}{3} \le u \le \frac{1}{2} ; \\ 2u+1, &{} \frac{1}{2}< u < 1 . \end{array} \right. \end{aligned}$$
(3.A.24)

For example, we have \(F_1^{-1} \left( \frac{1}{3} \right) = \inf \left\{ x: \, F_1(x) \ge \frac{1}{3} \right\} = \inf \left\{ x: \, x \ge 1 \right\} = 1\) and \( F_1^{-1} \left( \frac{1}{2} \right) = \inf \left\{ x: \, F_1(x) \ge \frac{1}{2} \right\} = \inf \left\{ x: \, x \ge 1 \right\} = 1\). Note that, unlike a cdf, an inverse cdf is continuous from the left-hand side. Figure 3.22 shows the cdf \(F_1\) and the inverse cdf \(F_1^{-1}\). \(\diamondsuit \)

Fig. 3.22 The cdf \(F_1\) and the inverse cdf \(F_1^{-1}\)
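The infimum in (3.A.21) is also easy to compute numerically for any non-decreasing F. Below is a minimal bisection sketch (our addition; the helper name inv_cdf and the bracketing interval [lo, hi], which must contain the infimum, are our assumptions), applied to the cdf \(F_1\) of (3.A.23); it reproduces \(F_1^{-1} \left( \frac{1}{3} \right) = F_1^{-1} \left( \frac{1}{2} \right) = 1\) obtained above.

```python
def inv_cdf(F, u, lo=-10.0, hi=10.0, tol=1e-12):
    """F^{-1}(u) = inf{x : F(x) >= u}, by bisection on a non-decreasing F.

    Invariant: F(hi) >= u and F(lo) < u, so hi converges to the infimum.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) >= u:
            hi = mid   # mid lies in S_u = {x : F(x) >= u}
        else:
            lo = mid   # mid lies in S_u^c
    return hi

def F1(x):             # the cdf (3.A.23)
    if x <= 0:
        return 0.0
    if x < 1:
        return x / 3
    if x <= 2:
        return 0.5
    if x <= 3:
        return 0.5 * (x - 1)
    return 1.0

for u in (0.25, 1/3, 0.5, 0.75):
    print(u, round(inv_cdf(F1, u), 6))   # 0.75, 1.0, 1.0, 2.5, as in (3.A.24)
```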

Theorem 3.A.3

(Hajek et al. 1999) Let F and \(F^{-1}\) be a cdf and its inverse, respectively. Then,

$$\begin{aligned} F \left( F^{-1} (u) \right) \ \ge \ u \end{aligned}$$
(3.A.25)

for \(0< u < 1\): in addition,

$$\begin{aligned} F \left( F^{-1} (u) \right) \, = \ u \end{aligned}$$
(3.A.26)

if F is continuous.

Proof

Let \(S_u\) be the set (see Footnote 15) of all x such that \(F(x) \ge u\), and let \(x_L\) be the smallest number in \(S_u\). Then,

$$\begin{aligned} F^{-1} (u) \, = \ x_L \end{aligned}$$
(3.A.27)

and

$$\begin{aligned} F \left( x_L \right) \ \ge \ u \end{aligned}$$
(3.A.28)

because \(F(x) \ge u\) for any point \(x \in S_u\). In addition, because \(S_u\) is the set of all x such that \(F(x) \ge u\), we have \(F(x) < u\) for any point \(x \in S_u^c\). Now, \(x_L\) is the smallest number in \(S_u\) and thus \(x_L - \epsilon \in S_u^c\) because \(x_L - \epsilon < x_L\) when \(\epsilon > 0\). Consequently, we have

$$\begin{aligned} F \left( x_L - \epsilon \right) < u \end{aligned}$$
(3.A.29)

when \(\epsilon > 0\). Using (3.A.27) in (3.A.28), we get \(F \left( F^{-1} (u) \right) \ge u\). Next, recollecting (3.A.27), and combining (3.A.28) and (3.A.29), we get \(F \left( F^{-1} (u) - \epsilon \right) < u \le F \left( F^{-1} (u) \right) \). In other words, u is a number between \(F \left( F^{-1} (u) - \epsilon \right) \) and \(F \left( F^{-1} (u) \right) \). Now, \(u=F \left( F^{-1} (u) \right) \) because \(\lim \limits _{\epsilon \rightarrow 0} F \big ( F^{-1} (u) - \epsilon \big ) = F \left( F^{-1} (u) \right) \) if F is a continuous function. \(\spadesuit \)

Example 3.A.4

For the cdf

$$\begin{aligned} F_2(x) \, = \ \left\{ \begin{array}{llll} 0, &{} x< 0; &{} \quad \frac{x}{4}, &{} 0 \le x< 1; \\ \frac{1}{2} , &{} 1 \le x< 2; &{} \quad \frac{1}{4}(x+1), &{} 2 \le x < 3; \\ 1, &{} x \ge 3 \end{array} \right. \end{aligned}$$
(3.A.30)

and the inverse cdf

$$\begin{aligned} F_2^{-1}(u) \, = \ \left\{ \begin{array}{llll} 4u, &{} 0< u \le \frac{1}{4}; &{} \quad 1, &{} \frac{1}{4} \le u \le \frac{1}{2} ; \\ 2, &{} \frac{1}{2}< u \le \frac{3}{4}; &{} \quad 4u-1, &{} \frac{3}{4} \le u < 1 , \end{array} \right. \end{aligned}$$
(3.A.31)

we have \(F_2\left( F_2^{-1}\left( \frac{1}{4} \right) \right) =F_2(1)= \frac{1}{2} \ge \frac{1}{4}\), \(F_2\left( F_2^{-1}\left( \frac{1}{2} \right) \right) =F_2(1)= \frac{1}{2} \ge \frac{1}{2} \), \(F_2^{-1}(F_2(1)) = F_2^{-1}\left( \frac{1}{2} \right) =1 \le 1\), and \(F_2^{-1}\left( F_2\left( \frac{3}{2} \right) \right) = F_2^{-1}\left( \frac{1}{2} \right) =1 \le \frac{3}{2}\). Figure 3.23 shows the cdf \(F_2\) and the inverse cdf \(F_2^{-1}\). \(\diamondsuit \)

Fig. 3.23 The cdf \(F_2\) and the inverse cdf \(F_2^{-1}\)

Note that even if F is a continuous function, \(F^{-1} (F (x)) = x\) does not hold in general; instead, we have

$$\begin{aligned} F^{-1} (F (x)) \ \le \ x . \end{aligned}$$
(3.A.32)

It is also noteworthy that

$$\begin{aligned} \mathsf {P}\left( F^{-1} (F (X)) \ne X \right) \, = \ 0 \end{aligned}$$
(3.A.33)

when \(X \sim F\).
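Property (3.A.33) can be illustrated by a small simulation (our addition; Python standard library only, names ours). Sampling \(X = F_1^{-1}(U)\) with \(U \sim U[0, 1)\) for the cdf \(F_1\) of Example 3.A.3 gives \(X \sim F_1\): although \(F_1^{-1} \left( F_1(x) \right) \ne x\) for \(x \in (1, 2]\), such values of x occur with probability 0 under \(F_1\), so no mismatch is ever observed.

```python
import random

def F1(x):              # the cdf (3.A.23)
    if x <= 0:
        return 0.0
    if x < 1:
        return x / 3
    if x <= 2:
        return 0.5
    if x <= 3:
        return 0.5 * (x - 1)
    return 1.0

def F1_inv(u):          # the inverse cdf (3.A.24)
    if u <= 1/3:
        return 3 * u
    if u <= 1/2:
        return 1.0
    return 2 * u + 1

random.seed(1)
trials, mismatches, at_atom = 100_000, 0, 0
for _ in range(trials):
    x = F1_inv(random.random())              # inverse transform: X ~ F_1
    mismatches += abs(F1_inv(F1(x)) - x) > 1e-9
    at_atom += (x == 1.0)

print(mismatches / trials)   # 0.0, illustrating (3.A.33)
print(at_atom / trials)      # about 1/6, the jump of F_1 at x = 1
```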

Example 3.A.5

Consider the cdf \(F_1(x)\) and the inverse cdf \(F_1^{-1}(u)\) discussed in Example 3.A.3. Then, we have

$$\begin{aligned} F_1\left( F_1^{-1}(u) \right)= & {} \left\{ \begin{array}{ll} 0, &{} F_1^{-1}(u) \le 0,\\ \frac{1}{3}F_1^{-1}(u), &{} 0 \le F_1^{-1}(u)< 1,\\ \frac{1}{2} , &{} 1 \le F_1^{-1}(u) \le 2,\\ \frac{1}{2} \left\{ F_1^{-1}(u) -1 \right\} , &{} 2 \le F_1^{-1}(u) \le 3,\\ 1, &{} F_1^{-1}(u) \ge 3 \end{array} \right. \nonumber \\= & {} \left\{ \begin{array}{llll} u, &{} 0< u< \frac{1}{3}; &{} \quad \frac{1}{2} , &{} \frac{1}{3} \le u \le \frac{1}{2} ; \\ u, &{} \frac{1}{2}< u < 1 \end{array} \right. \end{aligned}$$
(3.A.34)

and

$$\begin{aligned} F_1^{-1}\left( F_1(x) \right)= & {} \left\{ \begin{array}{llll} 3F_1(x), &{} 0< F_1(x) \le \frac{1}{3}; &{} \quad 1, &{} \frac{1}{3} \le F_1(x) \le \frac{1}{2} ;\\ 2F_1(x)+1, &{} \frac{1}{2}< F_1(x)< 1 \end{array} \right. \nonumber \\= & {} \left\{ \begin{array}{llll} x, &{} 0< x< 1; &{} \quad 1, &{} 1 \le x \le 2 ; \\ x, &{} 2< x < 3 , \end{array} \right. \end{aligned}$$
(3.A.35)

which are shown in Fig. 3.24. The results (3.A.34) and (3.A.35) clearly confirm \(F_1\left( F_1^{-1}(u) \right) \ge u\) shown in (3.A.25) and \(F_1^{-1}\left( F_1(x) \right) \le x \) shown in (3.A.32). \(\diamondsuit \)

Fig. 3.24 The results \(F_1\left( F_1^{-1}(u) \right) \) and \(F_1^{-1}\left( F_1(x) \right) \) from the cdf \(F_1\)

The results of Example 3.A.5 and Exercises 3.79 and 3.80 imply the following: In general, even when a cdf is a continuous function, if it is constant over an interval, its inverse is discontinuous. Specifically, if \(F(a) = F \left( b^- \right) = \alpha \) for \(a < b\) or, equivalently, if \(F(x)=\alpha \) over \(a \le x < b\), the inverse cdf \(F^{-1}(u)\) is discontinuous at \(u=\alpha \) and \(F^{-1}(\alpha ) =a\). On the other hand, even if a cdf is not a continuous function, its inverse is a continuous function when the cdf is not constant over any interval. Figure 3.25 shows an example of a discontinuous cdf with a continuous inverse.

Fig. 3.25 A discontinuous cdf F with a continuous inverse \(F^{-1}\)

Theorem 3.A.4

(Hajek et al. 1999) If a cdf F is continuous, its inverse \(F^{-1}\) is strictly increasing.

Proof

When F is continuous, assume \(F^{-1} \left( y_1 \right) = F^{-1} \left( y_2 \right) \) for \(y_1 < y_2\). Then, from (3.A.26), we have \(y_1 = F \left( F^{-1} \left( y_1 \right) \right) = F \left( F^{-1} \left( y_2 \right) \right) = y_2\), which is a contradiction to \(y_1 < y_2\). In other words,

$$\begin{aligned} F^{-1} \left( y_1 \right) \ \ne \ F^{-1} \left( y_2 \right) \end{aligned}$$
(3.A.36)

when \(y_1 < y_2\). From (3.A.22) and (3.A.36), we have \(F^{-1} \left( y_1 \right) < F^{-1} \left( y_2 \right) \) for \(y_1 < y_2\) when the cdf F is continuous. \(\spadesuit \)

Theorem 3.A.5

(Hajek et al. 1999) When the pdf f is continuous, we have

$$\begin{aligned} \frac{d}{du} F^{-1} (u) \, = \ \frac{1}{f \left( F^{-1} (u) \right) } \end{aligned}$$
(3.A.37)

if \(f \left( F^{-1} (u) \right) \ne 0\), where F is the cdf.

Proof

When f is continuous, F is also continuous. Letting \(F^{-1} (u)=v\), we get \(\frac{d}{du} F^{-1} (u) = \frac{dv}{du} = \frac{1}{f(v)}\), i.e.,

$$\begin{aligned} \frac{d}{du} F^{-1} (u)= & {} \frac{1}{f \left( F^{-1} (u) \right) } \end{aligned}$$
(3.A.38)

because \(F(v)=F\left( F^{-1} (u) \right) = u\) from (3.A.26) and thus \(du = f(v)\, dv\). \(\spadesuit \)

3.1.2 Appendix 3.2 Proofs of Theorems

3.1.2.1 (A) Proof of Theorem 3.5.1

Letting \(k-np = l\) and \(q = 1-p\), rewrite \(P_n(k)= \, _{n}\text{ C}_{k} p^{k}(1-p)^{n-k}\) as

$$\begin{aligned} \, _{n}\text{ C}_{k} p^{k}(1-p)^{n-k}= & {} \frac{n!}{(np+l)!(nq-l)!} p^{np+l} q^{nq-l}\nonumber \\= & {} \frac{n! p^{np} q^{nq} }{(np)!(nq)!} \times \frac{(np)!(nq)!}{(np+l)!(nq-l)!} \frac{p^l}{q^l} . \end{aligned}$$
(3.A.39)

First, using the Stirling approximation \(n! \approx \sqrt{2 \pi n} \left( \frac{n}{e} \right) ^{n}\), which can be obtained from \(\sqrt{2 \pi n} \left( \frac{n}{e} \right) ^{n}< n! < \sqrt{2 \pi n} \left( 1+ \frac{1}{4n} \right) \left( \frac{n}{e} \right) ^{n}\), the first part of the right-hand side of (3.A.39) becomes

$$\begin{aligned} \frac{n! p^{np} q^{nq} }{(np)!(nq)!}\approx & {} \frac{\sqrt{2\pi n} n^ne^{-n}}{\sqrt{2\pi np} (np)^{np}e^{-np} \sqrt{2\pi nq} (nq)^{nq}e^{-nq}} p^{np} q^{nq} \nonumber \\= & {} \frac{\sqrt{2\pi n}}{\sqrt{2\pi np}\sqrt{2\pi nq} }\times \frac{n^n p^{np} q^{nq}}{(np)^{np}(nq)^{nq}} \times \frac{e^{-n}}{e^{-np} e^{-nq}}\nonumber \\= & {} \frac{1}{\sqrt{2\pi npq}} . \end{aligned}$$
(3.A.40)

The second part of the right-hand side of (3.A.39) can be rewritten as

$$\begin{aligned} \frac{(np)!(nq)!}{(np+l)!(nq-l)!} \frac{p^{l}}{q^{l}} \, = \ \frac{(np)!(npq)^l}{(np+l)!q^l} \times \frac{(nq)!p^l}{(nq-l)!(npq)^l}. \end{aligned}$$
(3.A.41)

Letting \(t_j = \frac{j}{npq}\), and using \(e^x \approx 1+x\) for \(x \approx 0\), the first part of the right-hand side of (3.A.41) becomes \(\frac{(np)!(npq)^l}{(np+l)!q^l} = \frac{(np)^l}{(np+l)(np+l-1)\cdots (np+1)} = \left\{ \prod \limits _{j=1}^l \left( 1 + \frac{j}{np} \right) \right\} ^{-1} = \left\{ \prod \limits _{j=1}^l \left( 1 + q t_j \right) \right\} ^{-1} \approx \bigg ( \prod \limits _{j=1}^l e^{qt_j} \bigg )^{-1}\) and the second part of the right-hand side of (3.A.41) can be rewritten as \(\frac{(nq)!p^l}{(nq-l)!(npq)^l} = \frac{\prod \limits _{j=1}^l \left( nq+1-j\right) }{(nq)^l}= \prod \limits _{j=1}^l \left( \frac{nq+1}{nq} - \frac{j}{nq} \right) \approx \prod \limits _{j=1}^l \left( 1 - \frac{j}{nq} \right) = \prod \limits _{j=1}^l \left( 1 -p t_j \right) \approx \prod \limits _{j=1}^l e^{-p t_j}\). Employing these two results in (3.A.41), we get \(\frac{(np)!(nq)!}{(np+l)!(nq-l)!} \frac{p^{l}}{q^{l}} \approx \prod \limits _{j=1}^l \frac{e^{-p t_j}}{e^{q t_j}} = \prod \limits _{j=1}^l e^{- t_j} = \exp \left\{ -\frac{l(l+1)}{2npq} \right\} \), i.e.,

$$\begin{aligned} \frac{(np)!(nq)!}{(np+l)!(nq-l)!} \frac{p^{l}}{q^{l}}\approx & {} \exp \left( -\frac{l^2}{2npq} \right) . \end{aligned}$$
(3.A.42)

Now, recollecting \(l=k-np\), we get the desired result (3.5.16) from (3.A.39), (3.A.40), and (3.A.42).
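As a numerical illustration of the approximation just derived (a sketch we add here, with arbitrary parameter values; Python standard library only), the exact binomial probabilities and the approximation \(\frac{1}{\sqrt{2 \pi npq}} \exp \left( - \frac{(k-np)^2}{2npq} \right) \) from (3.A.39), (3.A.40), and (3.A.42) can be compared directly:

```python
import math

n, p = 100, 0.3
q = 1 - p
for k in (20, 25, 30, 35, 40):
    exact = math.comb(n, k) * p**k * q**(n - k)       # P_n(k)
    # the approximation: exp(-(k - np)^2 / (2npq)) / sqrt(2 pi npq)
    approx = math.exp(-(k - n * p) ** 2 / (2 * n * p * q)) \
             / math.sqrt(2 * math.pi * n * p * q)
    print(k, round(exact, 6), round(approx, 6))
# The two columns agree closely, best near k = np = 30.
```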

3.1.2.2 (B) Proof of Theorem 3.5.2

For \(k=0\), we have

$$\begin{aligned} \lim _{n \rightarrow \infty } P_{n}(0) \, = \ e^{-\lambda } \end{aligned}$$
(3.A.43)

because \(\lim \limits _{n \rightarrow \infty }(1-p)^n = \lim \limits _{n \rightarrow \infty } \left( 1- \frac{\lambda }{n} \right) ^n =e^{-\lambda }\) when \(np \rightarrow \lambda \). Next, for \(k=1, 2, \ldots , n\), we have

$$\begin{aligned} P_{n}(k)= & {} \frac{n(n-1) \cdots (n-k+1)}{k!}p^{k}(1-p)^{n-k} \nonumber \\= & {} \frac{(np)^{k}}{k!}(1-p)^{n} \frac{\left( 1-\frac{1}{n} \right) \left( 1-\frac{2}{n} \right) \cdots \left( 1- \frac{k-1}{n} \right) }{(1-p)^{k}}. \end{aligned}$$
(3.A.44)

Now, \(1-p \approx e^{-p}\) when p is small, and

$$\begin{aligned} \frac{\left( 1-\frac{1}{n} \right) \left( 1-\frac{2}{n} \right) \cdots \left( 1- \frac{k-1}{n} \right) }{(1-p)^{k}}\rightarrow & {} \frac{1-\frac{1}{n}}{1- \frac{\lambda }{n}} \frac{1-\frac{2}{n}}{1- \frac{\lambda }{n}} \ldots \frac{1-\frac{k-1}{n}}{1- \frac{\lambda }{n}} \frac{1}{1- \frac{\lambda }{n}} \nonumber \\\rightarrow & {} 1 \end{aligned}$$
(3.A.45)

when \(k= 1, 2, \ldots \) is fixed, \(p \rightarrow \frac{\lambda }{n}\), and \(n \rightarrow \infty \). Thus, letting \(np \rightarrow \lambda \), we get \(P_{n}(k)\rightarrow \left( e^{-p} \right) ^n \frac{\lambda ^{k}}{k!} = e^{-\lambda } \frac{\lambda ^{k}}{k!}\): from this result and (3.A.43), we have (3.5.19).
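A numerical sketch of this limit (our addition; the values of \(\lambda\) and k are arbitrary, Python standard library only) shows \(P_n(k)\) approaching the Poisson pmf as \(n \rightarrow \infty\) with \(np = \lambda\) held fixed:

```python
import math

lam, k = 3.0, 2
poisson = math.exp(-lam) * lam**k / math.factorial(k)   # the limit in (3.5.19)
print("Poisson:", round(poisson, 6))                    # 0.224042
for n in (10, 100, 1000, 10_000):
    p = lam / n                                         # keep np = lambda
    pmf = math.comb(n, k) * p**k * (1 - p) ** (n - k)   # P_n(k)
    print(n, round(pmf, 6))
# P_n(2) tends to e^{-3} 3^2 / 2! = 0.224042 as n grows.
```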

3.1.3 Appendix 3.3 Distributions and Moment Generating Functions

3.1.3.1 (A) Discrete distributions: pmf p(k) and mgf M(t)

Bernoulli distribution: \(\alpha \in (0, 1)\)

$$\begin{aligned} p(k)= & \left\{ \begin{array}{ll} 1-\alpha , &\!\!\! k=0, \\ \alpha , &\!\!\! k=1 \end{array}\right. \end{aligned}$$
(3.A.46)
$$\begin{aligned} M (t)= & {} 1-\alpha + \alpha e^t \end{aligned}$$
(3.A.47)

Binomial distribution: \(n=1, 2, \ldots ,\) \(\alpha \in (0, 1)\)

$$\begin{aligned} p(k)= & \, _{n}\text{C}_{k} \alpha ^k (1-\alpha )^{n-k}, ~ k=0,1,\ldots ,n \end{aligned}$$
(3.A.48)
$$\begin{aligned} M (t)= & {} \left( 1-\alpha + \alpha e^t \right) ^n \end{aligned}$$
(3.A.49)

Geometric distribution 1: \(\alpha \in (0, 1)\)

$$\begin{aligned} p(k)= & {} \alpha (1-\alpha )^{k}, ~ k=0, 1,\ldots \end{aligned}$$
(3.A.50)
$$\begin{aligned} M (t)= & {} \frac{\alpha }{1-(1-\alpha )e^t} \end{aligned}$$
(3.A.51)

Geometric distribution 2: \(\alpha \in (0, 1)\)

$$\begin{aligned} p(k)= & {} \alpha (1-\alpha )^{k-1}, ~ k=1,2,\ldots \end{aligned}$$
(3.A.52)
$$\begin{aligned} M (t)= & {} \frac{\alpha e^t}{1-(1-\alpha )e^t} \end{aligned}$$
(3.A.53)

Negative binomial distribution: \(r > 0,\) \(\alpha \in (0, 1)\)

$$\begin{aligned} p(k)= & {} \, _{-r}\text{C}_k \alpha ^r (\alpha -1)^k , ~ k=0, 1, \ldots \end{aligned}$$
(3.A.54)
$$\begin{aligned} M (t)= & {} \left\{ \frac{\alpha }{1-(1-\alpha )e^t} \right\} ^r \end{aligned}$$
(3.A.55)

Pascal distribution: \(r > 0,\) \(\alpha \in (0, 1)\)

$$\begin{aligned} p(k)= & {} \, _{k-1}\text{C}_{r-1} \alpha ^r (1-\alpha )^{k-r}, ~ k=r,r+1,\ldots \end{aligned}$$
(3.A.56)
$$\begin{aligned} M (t)= & {} \left\{ \frac{\alpha e^t}{1-(1-\alpha )e^t} \right\} ^r \end{aligned}$$
(3.A.57)

Poisson distribution: \(\lambda >0\)

$$\begin{aligned} p(k)= & {} \frac{\lambda ^k}{k!} e^{-\lambda }, ~ k=0,1,\ldots \end{aligned}$$
(3.A.58)
$$\begin{aligned} M (t)= & {} \exp \left\{ - \lambda (1- e^t) \right\} \end{aligned}$$
(3.A.59)

Uniform distribution

$$\begin{aligned} p(k)= & {} \frac{1}{n}, ~ k=0,1,\ldots ,n-1 \end{aligned}$$
(3.A.60)
$$\begin{aligned} M (t)= & {} \frac{1-e^{nt}}{n(1-e^t)} \end{aligned}$$
(3.A.61)

3.1.3.2 (B) Continuous distributions: pdf f(x) and mgf M(t)

Cauchy distribution: \(\alpha >0\) (the cf \(\varphi ( \omega )\) is given instead of the mgf, which does not exist)

$$\begin{aligned} f(x)= & {} \frac{\alpha }{\pi } \frac{1}{(x-\beta )^2 + \alpha ^2} \end{aligned}$$
(3.A.62)
$$\begin{aligned} \varphi ( \omega )= & {} \exp \left( j \beta \omega - \alpha | \omega | \right) \end{aligned}$$
(3.A.63)

Central chi-square distribution: \(n=1, 2, \ldots \)

$$\begin{aligned} f(x)= & {} \frac{x^{\frac{n}{2}-1}}{\varGamma \left( \frac{n}{2} \right) 2^{\frac{n}{2}}} \exp \left( - \frac{x}{2} \right) u(x) \end{aligned}$$
(3.A.64)
$$\begin{aligned} M (t)= & {} (1-2t)^{-\frac{n}{2}} \end{aligned}$$
(3.A.65)

Exponential distribution: \(\lambda >0\)

$$\begin{aligned} f(x)= & {} \lambda e^{-\lambda x} u(x) \end{aligned}$$
(3.A.66)
$$\begin{aligned} M (t)= & {} \frac{\lambda }{\lambda -t} \end{aligned}$$
(3.A.67)

Gamma distribution: \(\alpha >0\) , \(\beta >0\)

$$\begin{aligned} f(x)= & {} \frac{x^{\alpha -1}}{\varGamma (\alpha ) \beta ^{\alpha }} \exp \left( -\frac{x}{\beta } \right) u(x) \end{aligned}$$
(3.A.68)
$$\begin{aligned} M (t)= & {} \frac{1}{(1-\beta t)^{\alpha }} \end{aligned}$$
(3.A.69)

Laplace (double exponential) distribution: \(\lambda >0\)

$$\begin{aligned} f(x)= & {} \frac{\lambda }{2} e^{- \lambda | x| } \end{aligned}$$
(3.A.70)
$$\begin{aligned} M (t)= & {} \frac{\lambda ^2}{\lambda ^2 - t^2} \end{aligned}$$
(3.A.71)

Normal distribution

$$\begin{aligned} f(x)= & {} \frac{1}{\sqrt{2 \pi \sigma ^2}} \exp \left\{ - \frac{(x-m)^2}{2 \sigma ^2} \right\} \end{aligned}$$
(3.A.72)
$$\begin{aligned} M (t)= & {} \exp \left( m t + \frac{\sigma ^2 t^2}{2} \right) \end{aligned}$$
(3.A.73)

Rayleigh distribution (see Footnote 16): \(\alpha >0\)

$$\begin{aligned} f(x)= & {} \frac{x}{\alpha ^2} \exp \left( -\frac{x^2}{2\alpha ^2}\right) u(x) \end{aligned}$$
(3.A.74)
$$\begin{aligned} M (t)= & {} 1 + \sqrt{2 \pi } \, \alpha t \, \exp \left( \frac{\alpha ^2 t^2}{2} \right) \varPhi \left( \alpha t \right) \end{aligned}$$
(3.A.75)

Uniform distribution: \(b > a\)

$$\begin{aligned} f(x)= & {} \frac{1}{b-a}u(x-a)u(b-x) \end{aligned}$$
(3.A.76)
$$\begin{aligned} M (t)= & {} \frac{e^{bt}-e^{at}}{(b-a)t} \end{aligned}$$
(3.A.77)

Exercises

Exercise 3.1

Show that

$$\begin{aligned} \lim \limits _{x \rightarrow \infty } xF(-x) \, = \ 0 \end{aligned}$$
(3.E.1)

for an absolutely continuous cdf F. Using this result, show that

$$\begin{aligned} \mathsf {E}\{X\} \, = \ \int _{0}^{\infty } \left\{ 1 - F_X (x) \right\} dx - \int _{-\infty }^0 F_X (x) dx \end{aligned}$$
(3.E.2)

for a random variable X with a continuous and absolutely integrable pdf.

Exercise 3.2

Express the cdf of

$$\begin{aligned} g(X) \, = \ \left\{ \begin{array}{ll} X-c, &{} X > c ,\\ 0, &{} -c \le X \le c, \\ X+c, &{} X < -c \end{array} \right. \end{aligned}$$
(3.E.3)

in terms of the cdf \(F_X\) of a continuous random variable X, where \(c>0\).

Exercise 3.3

Express the cdf of

$$\begin{aligned} g(X) \, = \ \left\{ \begin{array}{ll} X+c, &{} X \ge 0 ,\\ X-c, &{} X < 0 \end{array} \right. \end{aligned}$$
(3.E.4)

in terms of the cdf \(F_X\) of X, where \(c>0\).

Exercise 3.4

 Express the pdf \(f_Y\) of \(Y=a\sin (X+\theta )\) in terms of the pdf of X, where \(a>0\) and \(\theta \) are constants.

Exercise 3.5

Obtain the pmf of X in Example 3.1.17 assuming that each ball taken is replaced into the box before the following trial.

Exercise 3.6

Obtain the pdf and cdf of \(Y= X^2 +1\) when \(X \sim U[-1, 2)\).

Exercise 3.7

Obtain the cdf of \(Y=X^3-3X\) when the pmf of X is \(p_{X}(k) = \frac{1}{7}\) for \(k \in \{0, \pm 1, \pm 2, \pm 3\}\).

Exercise 3.8

Obtain the expected value \(\mathsf {E}\left\{ X^{-1} \right\} \) when the pdf of X is \(f_X(r) = \frac{1}{2^{\frac{n}{2}} \varGamma \left( \frac{n}{2}\right) } r^{\frac{n}{2}-1} \exp \left( - \frac{r}{2} \right) u(r)\).

Exercise 3.9

Express the pdf of the output \(Y=Xu(X)\) of a half-wave rectifier in terms of the pdf \(f_X\) of X. 

Exercise 3.10

Obtain the pdf of \(Y=\left( X- \frac{1}{\theta } \right) ^2\) when the pdf of X is \(f_X (x) = \theta e^{-\theta x} u(x)\).

Exercise 3.11

Let the pdf and cdf of a continuous random variable X be \(f_X\) and \(F_X\), respectively. Obtain the conditional cdf \( F_{X| b< X \le a} (x)\) and the conditional pdf \( f_{X| b< X \le a} (x)\) in terms of \(f_X\) and \(F_X\), where \(a > b\).

Exercise 3.12

For \(X \sim U[0,1)\), obtain the conditional mean \(\mathsf {E}\{X|X>a\}\) and conditional variance \(\mathsf {Var}\{X|X>a\} = \mathsf {E}\left. \left\{ \left[ X-\mathsf {E}\{X|X>a\} \right] ^2 \right| X>a \right\} \) when \(0<a<1\). Obtain the limits of the conditional mean and conditional variance when \(a\rightarrow 1\).

Exercise 3.13

Obtain the probability \(\mathsf {P}(950 \le R < 1050 )\) when the resistance R of a resistor has the uniform distribution U(900, 1100).

Exercise 3.14

The cost of being early and late by s minutes for an appointment is cs and ks, respectively. Denoting by \(f_X\) the pdf of the time X taken to arrive at the location of the appointment, find the time of departure for the minimum cost.

Exercise 3.15

Let \(\omega \) be the outcome of a random experiment of taking one ball from a box containing one each of red, green, and blue balls. Obtain \( \mathsf {P}( X \le \alpha )\), \( \mathsf {P}( X \le 0 )\), and \( \mathsf {P}(2 \le X < 4 )\), where

$$\begin{aligned} X ( \omega ) \, = \ \left\{ \begin{array}{ll} \pi , &{} \quad \omega = \text{green } \text{ball } \text{or } \text{blue } \text{ball },\\ 0, &{} \quad \omega = \text{red } \text{ball } \end{array} \right. \end{aligned}$$
(3.E.5)

with \(\alpha \) a real number.

Exercise 3.16

For \(V \sim U[-1, 1]\), obtain \( \mathsf {P}(V >0)\), \( \mathsf {P}\left( |V|<\frac{1}{3}\right) \), \( \mathsf {P}\left( |V| \ge \frac{3}{4} \right) \), and \( \mathsf {P}\left( \frac{1}{3}< V < \frac{1}{2} \right) \).

Exercise 3.17

In successive tosses of a fair coin, let the number of tosses until the first head be K. For the two events \(A= \{ K >5\}\) and \(B= \{ K > 10\}\), obtain the probabilities of \(A\), \(B\), \(B^c\), \(A\cap B\), and \(A\cup B\).

Exercise 3.18

Data is transmitted via a sequence of N bits through two independent channels \(C_A\) and \(C_B\). Due to channel noise during the transmission, \(w_A\) and \(w_B\) bits are in error among the sequences A and B of N bits received through channels \(C_A\) and \(C_B\), respectively. Assume that the noise on a bit does not influence that on others.

  1. (1)

    Obtain the probability \( \mathsf {P}(D=d)\) that the number D of error bits common to A and B is d.

  2. (2)

    Assume the sequence of N bits is reconstructed by selecting each bit from A with probability p or from B with probability \(1-p\). Obtain the probability \( \mathsf {P}(K=k)\) that the number K of error bits is k in the reconstructed sequence of N bits.

  3. (3)

    When \(N=3\) and \(w_A=w_B=1\), obtain \( \mathsf {P}(D=d)\) and \( \mathsf {P}(K=k)\).

Exercise 3.19

Assume the pdf \(f(x)=\frac{c}{x^3} u(x-1)\) of X.

  1. (1)

    Determine the constant c.

  2. (2)

    Obtain the mean \(\mathsf {E}\{X\}\) of X.

  3. (3)

    Show that the variance of X does not exist.

Exercise 3.20

Assume the cdf

$$\begin{aligned} F(x) \, = \ \left\{ \begin{array}{llll} 0, &{} x<0; &{} \quad \frac{1}{4}(x^2+1), &{} 0\le x<1;\\ \frac{1}{4}(x+2), &{} 1\le x <2; &{} \quad 1, &{} x \ge 2 \end{array} \right. \end{aligned}$$
(3.E.6)

of X.

  1. (1)

    Obtain \( \mathsf {P}(0<X<1)\) and \( \mathsf {P}(1 \le X <1.5)\).

  2. (2)

    Obtain the mean \(\mu = \mathsf {E}\{X\}\) and variance \(\sigma ^2 = \mathsf {Var}\{X\}\) of X.

Exercise 3.21

For \(Y \sim U[0,1)\) and \(a<b\), consider \(W = a+(b-a)Y\).

  1. (1)

    Obtain the cdf of W.

  2. (2)

    Obtain the distribution of W.

Exercise 3.22

Obtain the pdf of \(Y=\frac{X}{1+X}\) when \(X \sim U[0, 1)\).

Exercise 3.23

Express the pdf \(f_Y\) of \(Y=\frac{X}{1-X}\) in terms of the pdf \(f_X\) of X, and obtain \(f_Y\) when \(X \sim U[0, 1)\).

Exercise 3.24

Consider \(Y=\frac{X}{1+X}\) and \(Z=\frac{Y}{1-Y}\). Then, for the pdf’s \(f_Z\) and \(f_X\) of Z and X, respectively, it should hold true that \(f_Z (z) = f_X (z)\) because \(Z = \frac{Y}{1-Y} = \frac{\frac{X}{1+X}}{1-\frac{X}{1+X}} = X\). Confirm this fact from the results of Exercises 3.22 and 3.23.

Exercise 3.25

Obtain the pdf of \(Z= \frac{X}{1-X}\) when the pdf of X is \(f_X (y) = \frac{1}{(1-y)^2} \left\{ u(y)-u\left( y- \frac{1}{2} \right) \right\} \).

Exercise 3.26

When \(X \sim U[0, 1)\), obtain the pdf of \(Y = \frac{1+X}{1-X}\) and the pdf of \(Z = \frac{X-1}{X+1}\).

Exercise 3.27

Assuming the pdf

$$\begin{aligned} f_{X}(x) \, = \ \left\{ \begin{array}{ll} 1+x,&{} -1 \le x< 0 , \\ 1-x,&{} 0 \le x< 1 , \\ 0,&{} x < -1 \text{or } x \ge 1 \end{array} \right. \end{aligned}$$
(3.E.7)

of X, obtain the pdf of \(Y=|X|\).

Exercise 3.28

Consider \(Y=a\cos (X+\theta )\) for a random variable X, where \(a>0\) and \(\theta \) are constants.

  1. (1)

    Obtain the pdf \(f_Y\) of Y when \(X \sim U[-\pi , \pi )\).

  2. (2)

    Obtain the pdf \(f_Y\) of Y when \(X \sim U\left( -\frac{\pi }{2}, \frac{\pi }{2}\right) \), \(a=1\), and \(\theta =0\).

  3. (3)

    Obtain the cdf \(F_{Y}\) of Y when \(X \sim U\left( 0, \frac{3}{2}\pi \right) \).

Exercise 3.29

Consider \(Y=\tan X \) for a random variable X.

  1. (1)

    Express the pdf \(f_Y\) of Y in terms of the pdf \(f_X\) of X.

  2. (2)

    Obtain the pdf \(f_Y\) of Y when \(X \sim U \left( - \frac{\pi }{2}, \frac{\pi }{2} \right) \).

  3. (3)

    Obtain the pdf \(f_Y\) of Y when \(X \sim U \left( 0, 2\pi \right) \).

Exercise 3.30

Find a function g for \(Y=g(X)\) with which we can obtain the exponential random variable \(Y \sim f_Y (y) = \lambda e^{- \lambda y} u(y)\) from the uniform random variable \(X \sim U[0, 1)\).  

Exercise 3.31

For a random variable X with pmf \(p_{X}(v) = \frac{1}{6}\) for \(v = 1, 2, \ldots , 6\), obtain the expected value, mode, and median of X.

Exercise 3.32

Assume \(p_{X} (k) = \frac{1}{7}\) for \( k \in \{1, 2, 3, 5, 15, 25, 50\}\). Show that the value of c that minimizes \(\mathsf {E}\{|X-c|\}\) is 5. Compare this value with the value of b that minimizes \(\mathsf {E}\left\{ (X-b)^2 \right\} \).

Exercise 3.33

Obtain the mean and variance of a random variable X with pdf \(f(x) = \frac{\lambda }{2} e^{- \lambda | x| }\), where \(\lambda >0\).    

Exercise 3.34

For a random variable X with pdf \(f(r)= \frac{r^{\alpha -1} (1-r)^{\beta -1}}{\tilde{B}(\alpha , \beta )} u(r) u(1-r)\), show that the k-th moment is

$$\begin{aligned} \mathsf {E}\left\{ X^k\right\}= & {} \frac{\varGamma (\alpha +k)\varGamma (\alpha +\beta )}{\varGamma (\alpha +\beta +k)\varGamma (\alpha )} \end{aligned}$$
(3.E.8)

and obtain the mean and variance of X.

Exercise 3.35

Show that \(\mathsf {E}\{X\} = R^{\prime }(0)\) and \(\mathsf {Var}(X) = R^{\prime \prime }(0)\), where \(R(t)=\ln M(t)\) and M(t) is the mgf of X.

Exercise 3.36

Obtain the pdf’s \(f_Y\), \(f_Z\), and \(f_W\) of \(Y=X-2\), \(Z=2Y\), and \(W=Z+1\), respectively, when the pdf of X is \(f_X (x)=u(x)-u(x-1)\).

Exercise 3.37

Obtain the pmf’s \(p_Y\), \(p_Z\), and \(p_W\) of \(Y = X+2\), \(Z=X-2\), and \(W = \frac{X-2}{X+2}\), respectively, when the pmf of X is \(p_{X} (k) = \frac{1}{4}\) for \(k=1, 2, 3, 4\).

Exercise 3.38

For a random variable X such that \( \mathsf {P}(0\le X \le a)=1\), show the following:

  1. (1)

    \(\mathsf {E}\left\{ X^2 \right\} \le a\mathsf {E}\{X\}\). Specify when the equality holds true.

  2. (2)

    \(\mathsf {Var}\{X\} \le \mathsf {E}\{X\}(a-\mathsf {E}\{X\})\).

  3. (3)

    \(\mathsf {Var}\{X\} \le \frac{a^2}{4}\). Specify when the equality holds true.

Exercise 3.39

The value of a random variable X is not less than 0.

  1. (1)

    When X can assume values in \(\{0, 1, 2, \ldots \}\), show that the expected value \(\mathsf {E}\{X\} = \sum \limits _{n=0}^{\infty } n \mathsf {P}(X = n)\) is

    $$\begin{aligned} \mathsf {E}\{X\}= & {} \sum _{n=0}^{\infty } \mathsf {P}(X>n)\nonumber \\= & {} \sum _{n=1}^{\infty } \mathsf {P}(X \ge n) . \end{aligned}$$
    (3.E.9)
  2. (2)

    Express \(\mathsf {E}\left\{ X_c^- \right\} \) in terms of \(F_X(x)= \mathsf {P}(X \le x)\), where \( X_c^- = \min (X, c)\) for a constant c.

  3. (3)

    Express \(\mathsf {E}\left\{ X_c^+ \right\} \) in terms of \(F_X(x)\), where \( X_c^+ =\max (X, c)\) for a constant c.

Exercise 3.40

Assume the cf \(\varphi _X (\omega ) = \exp \left( \mu \left[ \exp \left\{ \lambda \left( e^{j \omega }-1 \right) \right\} -1 \right] \right) \) of X, where \(\lambda >0\) and \(\mu > 0\).

  1. (1)

    Obtain \(\mathsf {E}\{X\}\) and \(\mathsf {Var}\{X\}\).

  2. (2)

    Show that \( \mathsf {P}(X=0)=\exp \left\{ -\mu \left( 1-e^{-\lambda } \right) \right\} \).

Exercise 3.41

Let N be the number of hydrogen molecules in a sphere of radius r with volume \(V= \frac{4\pi }{3} r^3\). Assuming that N has the Poisson pmf \(p_N (n) = \frac{1}{n!} e^{-\rho V} (\rho V)^n\) for \(n= 0, 1, 2, \ldots \), obtain the pdf of the distance X from the center of the sphere to the closest hydrogen molecule. (Hint. Try to express \( \mathsf {P}(X>x)\) via \(p_N\).)

Exercise 3.42

Determine the constant A when

$$\begin{aligned} f(x) \, = \ \left\{ \begin{array}{ll} Ax ,\quad &{} 0 \le x \le 4, \\ A(8-x) , \quad &{} 4 \le x \le 8, \\ 0 ,&{} \text{otherwise }\end{array} \right. \end{aligned}$$
(3.E.10)

is the pdf of X. Sketch the cdf and pdf, and obtain \( \mathsf {P}(X \le 6)\).

Exercise 3.43

The median \(\alpha \) of a continuous random variable X can be defined via \(\int _{-\infty }^{\alpha }f_X(x)dx = \int _{\alpha }^{\infty }f_X(x)dx = \frac{1}{2} \). Show that the value of b minimizing \(\mathsf {E}\{ |X-b| \}\) is \(\alpha \).

Exercise 3.44

When F is the cdf of a continuous random variable X, obtain the expected value \(\mathsf {E}\{F(X)\}\) of F(X).

Exercise 3.45

Obtain the mgf and cf of a negative exponential random variable X with pdf \(f_X(x) = e^x u(-x)\). Using the mgf, obtain the first four moments. Compare the results with those obtained directly.

Exercise 3.46

Show that \(f_X(x) = \frac{1}{\cosh (\pi x)}\) can be a pdf. Obtain the corresponding mgf.

Exercise 3.47

When \(f(x) = \frac{\alpha }{\cosh ^n (\beta x)}\) is a pdf, determine the value of \(\alpha \) in terms of \(\beta \). Note that this pdf is the same as the logistic pdf (2.5.30) when \(n=2\) and \(\beta = \frac{k}{2}\).

Exercise 3.48

Show that the mgf is as shown in (3.A.75) for the Rayleigh random variable with pdf \(f(x) = \frac{x}{\alpha ^2} \exp \left( - \frac{x^2}{2 \alpha ^2} \right) u(x)\).

Exercise 3.49

For \(X \sim b(n,p)\) with \(q=1-p\), show

$$\begin{aligned} \mathsf {P}(X=\text{an } \text{even } \text{number}) \, = \ \frac{1}{2} \left\{ 1+(q-p)^n \right\} \end{aligned}$$
(3.E.11)

and the recurrence formula

$$\begin{aligned} \mathsf {P}(X=k+1) \, = \ \frac{(n-k)p}{(k+1)q} \mathsf {P}(X=k) \end{aligned}$$
(3.E.12)

for \(k= 0, 1, \ldots , n-1\).

Exercise 3.50

When \(X \sim P (\lambda )\), show

$$\begin{aligned} \mathsf {P}(X=\text{an } \text{even } \text{number}) \, = \ \frac{1}{2} \left( 1+e^{-2\lambda }\right) \end{aligned}$$
(3.E.13)

and the recurrence formula

$$\begin{aligned} \mathsf {P}(X = k+1) \, = \ \frac{\lambda }{k+1} \mathsf {P}(X=k) \end{aligned}$$
(3.E.14)

for \(k= 0, 1, 2, \ldots \).

Exercise 3.51

When the pdf of X is

$$\begin{aligned} f_X(x) \, = \ \left\{ \begin{array}{lr} \frac{1}{2} ,&{} -1<x\le 0 , \\ \frac{1}{4}(2-x), &{} 0<x\le 2 , \\ 0 , &{} \text{otherwise }, \end{array} \right. \end{aligned}$$
(3.E.15)

obtain the pdf \(f_Y\) of \(Y=|X|\).

Exercise 3.52

Find a cdf that has infinitely many jumps in the finite interval (a, b).

Exercise 3.53

Assume that

$$\begin{aligned} F_X ( x ) \, = \ \left\{ \begin{array}{llll} 0, &{} x< -2; &{} \quad \frac{ 1 }{3}(x + 2), &{} -2 \le x< -1; \\ ax + b, &{} -1 \le x < 3; &{} \quad 1, &{} x \ge 3 \end{array} \right. \end{aligned}$$
(3.E.16)

is the cdf of X.

  1. (1)

    Obtain the condition that the two constants a and b should satisfy and sketch the region of the condition on a plane with the a-b coordinates.

  2. (2)

    When \(a= \frac{1}{8}\) and \(b= \frac{5}{8}\), obtain the cdf of \(Y = X^2\), \( \mathsf {P}( Y=1 )\), and \( \mathsf {P}( Y=4 )\).

Exercise 3.54

Obtain the cdf of \(Y = X^2\) for the cdf

$$\begin{aligned} F_X ( x ) \, = \ \left\{ \begin{array}{llll} 0, &{} x< -1; &{} \quad \frac{ 1 }{2}(x+1), &{} -1 \le x< 0;\\ \frac{ 1 }{8}(x+4), &{} 0 \le x < 4; &{} \quad 1, &{} x \ge 4 \end{array}\right. \end{aligned}$$
(3.E.17)

of X.

Exercise 3.55

Show that

$$\begin{aligned} F_Y ( y ) \, = \ \left\{ \begin{array}{ll} F_X ( \alpha ), &{} {y< -2 \text{or } y >2 ,}\\ F_X ( -2 ) + \mathsf {P}( X= 1 ), &{} {y = -2 , } \\ F_X \left( \beta _3 \right) -F_X \left( \beta _2 \right) \\ \quad +F_X \left( \beta _1 \right) + \mathsf {P}\left( X= \beta _2 \right) , &{} {-2< y < 2, }\\ F_X ( 2 ), &{} {y =2} \end{array} \right. \end{aligned}$$
(3.E.18)

is the cdf of \(Y=X^3-3X\) expressed in terms of the cdf \(F_X\) of X. Here, \(\alpha \) is the only real root of the equation \(y=x^3-3x\) when \(y > 2\) or \(y < -2\), and \(\beta _1< \beta _2 < \beta _3\) are the three real roots of the equation when \(-2< y < 2\) with \(\alpha \), \(\beta _1\), \(\beta _2\), and \(\beta _3\) being all functions of y.

Exercise 3.56

Discuss the continuity of the cdf \(F_Y (y)\) shown in (3.E.18) when X is a continuous random variable.

Exercise 3.57

For a random variable X with pmf

$$\begin{aligned} p_{X}(k) \, =\ \mathsf {P}(X=k), \quad k \in \{\ldots , -1, 0, 1, 2, \ldots \}, \end{aligned}$$
(3.E.19)

obtain the cdf \(F_Y\) of \(Y=X^3-3X\) and discuss its continuity.

Exercise 3.58

Obtain the cdf of \(Y = X^2\) for the cdf

$$\begin{aligned} F_X ( x ) \, = \ \left\{ \begin{array}{ll} 0, &{} x< -1, \\ - \frac{1}{2} \left( x^2 -1 \right) , &{} -1 \le x< 0,\\ \frac{1}{2} \left( x^2 +1 \right) , &{} 0 \le x < 1, \\ 1, &{} x \ge 1 \end{array}\right. \end{aligned}$$
(3.E.20)

of X.

Exercise 3.59

Assume the cf \(\varphi ( \omega ) = \frac{1}{2 \pi } \int _0^{2 \pi } \exp \left\{ - \frac{\omega ^2 }{2} \alpha (\theta ) \right\} d \theta \) of a random variable.

  1. (1)

    Obtain the cf’s by completing the integral when \(\alpha (\theta ) = \frac{1}{2} \) and when \(\alpha (\theta ) = \cos ^2 \theta \).

  (2)

    When \(\alpha (\theta ) = \frac{1}{2} \), obtain the pdf \(f_1\).

  (3)

    Show that

    $$\begin{aligned} f_2 (x) \, = \ \frac{1}{ \sqrt{ 2 \pi ^3 } } \exp \left( - \frac{x^2}{4} \right) K_0 \left( \frac{x^2}{4} \right) \end{aligned}$$
    (3.E.21)

    is the pdf when \(\alpha (\theta ) = \cos ^2 \theta \). Here,

    $$\begin{aligned} K_0 (x) \, = \ \int _0^{\infty } \frac{ \cos (xt) }{ \sqrt{1 + t^2 } } \, dt \, u(x) \end{aligned}$$
    (3.E.22)

    is the zeroth-order modified Bessel function of the second kind.

  (4)

    Obtain the mean and variance for \(f_1\) and those for \(f_2\).

  (5)

    Show that the pdf \(f_2\) is heavier-tailed than \(f_1\): that is, \(\frac{f_2 (x)}{f_1 (x)} > 1\) when x is sufficiently large.

Exercise 3.60

From (3.3.31), we have

$$\begin{aligned} \mathsf {E}\left\{ X^n \right\} \, = \ \left\{ \begin{array}{ll} 0, &{} ~ n \text{is } \text{odd },\\ 1\times 3 \times \cdots \times (n-1), &{} ~ n \text{is } \text{even } \end{array} \right. \end{aligned}$$
(3.E.23)

when \(X \sim \mathcal {N}(0,1)\). Show that

$$\begin{aligned} \mathsf {E}\left\{ Y^p \right\} \, = \ m^p \sum \limits _{n=0}^{\lfloor \frac{p}{2}\rfloor } \frac{p!}{2^{n} n! (p-2n)!} \left( \frac{\sigma }{m} \right) ^{2n} \end{aligned}$$
(3.E.24)

for \(p=0, 1, \ldots \) when \(Y \sim \mathcal {N}\left( m, \sigma ^2 \right) \). The result (3.E.24) implies that we have \(\mathsf {E}\left\{ Y^0 \right\} =1\), \(\mathsf {E}\left\{ Y^1 \right\} =m\), \(\mathsf {E}\left\{ Y^2 \right\} =m^2 + \sigma ^2\), \(\mathsf {E}\left\{ Y^3 \right\} =m^3 + 3 m \sigma ^2\), \(\mathsf {E}\left\{ Y^4 \right\} =m^4 + 6 m^2 \sigma ^2 + 3 \sigma ^4\), \(\mathsf {E}\left\{ Y^5 \right\} =m^5 + 10 m^3 \sigma ^2 + 15 m \sigma ^4\), \(\ldots \) when \(Y \sim \mathcal {N}\left( m, \sigma ^2 \right) \). (Hint. Note that \(Y= \sigma X + m\).)
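The moment formula (3.E.24) is easy to check against library values. A minimal sketch, assuming illustrative parameters \(m = 1.5\) and \(\sigma = 0.7\) of my choosing (the sum as written presumes \(m \ne 0\)):

```python
from math import factorial
from scipy.stats import norm

m, sigma = 1.5, 0.7                     # illustrative parameters

def moment_formula(p):                  # right-hand side of (3.E.24)
    return m**p * sum(factorial(p) / (2**n * factorial(n) * factorial(p - 2 * n))
                      * (sigma / m)**(2 * n) for n in range(p // 2 + 1))

for p in range(1, 6):
    print(p, moment_formula(p), norm(loc=m, scale=sigma).moment(p))
```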

Exercise 3.61

Show that

$$\begin{aligned} \mathsf {E}\left\{ X^n \right\} \, = \ \left\{ \begin{array}{ll} 1\times 3 \times \cdots \times n \alpha ^n \sqrt{\frac{\pi }{2}}, &{} ~ n=2k+1, \\ 2^k k! \alpha ^{2k}, &{} ~ n=2k \end{array} \right. \end{aligned}$$
(3.E.25)

and obtain the mean and variance for a Rayleigh random variable X with pdf \(f_X(x) = \frac{x}{\alpha ^2} \exp \left( -\frac{x^2}{2\alpha ^2} \right) u(x)\).
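The claimed moments can be verified by numerical integration against the Rayleigh pdf. A sketch, assuming an illustrative scale \(\alpha = 1.3\):

```python
from math import factorial, pi, sqrt
import numpy as np
from scipy.integrate import quad

alpha = 1.3                                         # illustrative scale

f = lambda x: x / alpha**2 * np.exp(-x**2 / (2 * alpha**2))

def moment_formula(n):                              # right-hand side of (3.E.25)
    if n % 2:                                       # n = 2k + 1
        odd = int(np.prod(np.arange(1, n + 1, 2)))  # 1 * 3 * ... * n
        return odd * alpha**n * sqrt(pi / 2)
    k = n // 2                                      # n = 2k
    return 2**k * factorial(k) * alpha**(2 * k)

for n in range(6):
    numeric, _ = quad(lambda x: x**n * f(x), 0, np.inf)
    print(n, moment_formula(n), numeric)
```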

Exercise 3.62

Show that

$$\begin{aligned} \mathsf {E}\left\{ X^n \right\} \, = \ \left\{ \begin{array}{ll} 1\times 3 \times \cdots \times (n+1) \alpha ^n , &{} ~ n=2k, \\ 2^k k! \alpha ^{2k-1}\sqrt{\frac{2}{\pi }}, &{} ~ n=2k-1 \end{array} \right. \end{aligned}$$
(3.E.26)

for a random variable X with pdf \(f_X(x) = \frac{\sqrt{2} x^2}{\alpha ^3 \sqrt{\pi }} \exp \left( -\frac{x^2}{2\alpha ^2} \right) u(x)\).

Exercise 3.63

For a random variable with pdf \(f_X(x) = \frac{1}{2} u(x)u(\pi -x)\sin x\), obtain the mean and second moment.

Exercise 3.64

Show that the mean is \(\mathsf {E}\{ X \} = \frac{1-\alpha }{\alpha } r\) and variance is \(\mathsf {Var}\{ X \} = \frac{1-\alpha }{\alpha ^2} r\) for the NB random variable X with the pmf \(p( x) = \, _{-r}\text{C}_x \alpha ^r (\alpha -1)^x\), \(x \in \mathbb {J}_0\) shown in (2.5.13). When the pmf of Y is \(p_Y (y) = \, _{y-1}\text{C}_{r-1} \alpha ^r (1-\alpha )^{y-r}\) for \(y =r, r+1, \ldots \) as shown in (2.5.17), show that

$$\begin{aligned} \mathsf {E}\{ Y \} \, = \ \frac{r}{\alpha } \end{aligned}$$
(3.E.27)

and \(\mathsf {Var}\{ Y \} = \frac{(1-\alpha ) r}{\alpha ^2}\).
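A simulation sketch for the second pmf, assuming illustrative values \(r = 4\) and \(\alpha = 0.35\); note that numpy's negative binomial sampler counts failures, so the number of trials is obtained by adding r:

```python
import numpy as np

rng = np.random.default_rng(2)
r, a = 4, 0.35                          # illustrative r and alpha

# numpy's sampler returns the number of failures X before the r-th success,
# so the number of trials up to and including the r-th success is Y = X + r.
y = rng.negative_binomial(r, a, size=1_000_000) + r

print(y.mean(), r / a)                  # (3.E.27)
print(y.var(), r * (1 - a) / a**2)      # Var{Y} = (1 - alpha) r / alpha**2
```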

Exercise 3.65

Consider the absolute value \(Y = |X|\) for a continuous random variable X with pdf \(f_X\). If we consider the half mean

$$\begin{aligned} m_X^{\pm } \, = \ \int _{- \infty }^{\infty } x f_X (x)u(\pm x) dx , \end{aligned}$$
(3.E.28)

then the mean of X can be expressed as \(m_X \, = \ m_X^+ + m_X^-\). Show that the mean of \(Y = |X|\) can be expressed as

$$\begin{aligned} \mathsf {E}\{ |X| \} \, = \ m_X^+ - m_X^- , \end{aligned}$$
(3.E.29)

and obtain the variance of \(Y = |X|\) in terms of the variance and half means of X. 

Exercise 3.66

For a continuous random variable X with cdf \(F_X\), show that

$$\begin{aligned} \mathsf {E}\{ Z \} \, = \ 1 - 2 F_X(0) \end{aligned}$$
(3.E.30)

and

$$\begin{aligned} \mathsf {Var}\{ Z \} \, = \ 4 F_X(0) \left\{ 1 - F_X(0) \right\} \end{aligned}$$
(3.E.31)

are the mean and variance, respectively, of \(Z = \mathrm {sgn}(X)\).
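Both formulas depend on X only through \(F_X(0)\). A simulation sketch, assuming for illustration that X is normal with a nonzero mean:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
m, s = 0.4, 1.0                          # illustrative N(m, s**2) for X
F0 = norm.cdf(0, loc=m, scale=s)         # F_X(0)

z = np.sign(rng.normal(m, s, size=1_000_000))
print(z.mean(), 1 - 2 * F0)              # (3.E.30)
print(z.var(), 4 * F0 * (1 - F0))        # (3.E.31)
```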

Exercise 3.67

For a random variable X with pmf \(p(k) = (1-\alpha )^{k}\alpha \) for \(k \in \{ 0, 1, \ldots \}\), obtain the mean and variance. For a random variable X with pmf \(p(k) = (1-\alpha )^{k-1}\alpha \) for \(k \in \{ 1, 2, \ldots \}\), obtain the mean and variance.

Exercise 3.68

Consider a hypergeometric random variable X with pmf

$$\begin{aligned} p_{X} (x) \, = \ \frac{ \, _{\alpha }\text{C}_{\gamma -x} \; \, _{\beta }\text{C}_{x} }{ \, _{\alpha +\beta }\text{C}_{\gamma } } , \end{aligned}$$
(3.E.32)

where \(\alpha \), \(\beta \), and \(\gamma \) are natural numbers such that \(\alpha +\beta \ge \gamma \). Note that \(\min (\beta , \gamma ) \ge \max (0, \gamma -\alpha )\) and that the pmf (3.E.32) is zero when \(x \notin \left\{ \max (0, \gamma - \alpha ), \max (0, \gamma -\alpha )+1, \ldots , \min (\beta , \gamma )\right\} \). Obtain the mean and variance of X. Show that the \(\nu \)-th moment of X is

$$\begin{aligned} \mathsf {E}\left\{ X^{\nu } \right\} \, = \ \sum \limits _{k=0}^{\nu } S(\nu , k) \, \frac{ \beta ^{\underline{k}} \, \gamma ^{\underline{k}} }{ (\alpha +\beta )^{\underline{k}} } . \end{aligned}$$
(3.E.33)

Here, \(x^{\underline{k}} \, = \ x(x-1) \cdots (x-k+1)\) denotes the falling factorial,

$$\begin{aligned} S(\nu , k) \, = \ \frac{1}{k!} \sum \limits _{j=0}^{k} (-1)^{k-j} \, _{k}\text{C}_{j} \, j^{\nu } \end{aligned}$$
(3.E.34)

is the Stirling number of the second kind, and \(\alpha +\beta \), \(\beta \), and \(\gamma \) represent the size of the group, the number of ‘successes’, and the number of trials, respectively.
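The moment expression can be checked against direct summation over the pmf (3.E.32). A sketch, assuming illustrative \(\alpha = 7\), \(\beta = 5\), \(\gamma = 6\); the helper names stirling2 and falling are mine:

```python
from math import comb, factorial

a, b, g = 7, 5, 6                        # illustrative alpha, beta, gamma

def pmf(x):                              # (3.E.32)
    return comb(a, g - x) * comb(b, x) / comb(a + b, g)

def stirling2(nu, k):                    # (3.E.34); the sum is divisible by k!
    return sum((-1)**(k - j) * comb(k, j) * j**nu
               for j in range(k + 1)) // factorial(k)

def falling(x, k):                       # x(x-1)...(x-k+1)
    out = 1
    for i in range(k):
        out *= x - i
    return out

def moment(nu):                          # (3.E.33)
    return sum(stirling2(nu, k) * falling(b, k) * falling(g, k) / falling(a + b, k)
               for k in range(nu + 1))

for nu in range(1, 5):
    direct = sum(x**nu * pmf(x) for x in range(max(0, g - a), min(b, g) + 1))
    print(nu, moment(nu), direct)
```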

       

Exercise 3.69

For a random variable X with pdf \(f (x) = \frac{ke^{-kx}}{\left( 1+e^{-kx} \right) ^2} \), show that \(\mathsf {E}\left\{ X^2 \right\} = \frac{\pi ^2}{3k^2}\), \(\mathsf {E}\left\{ X^4 \right\} = \frac{7\pi ^4}{15k^4}\), and \(m_L^+ = \frac{\ln 2}{k} = - m_L^-\), where the half means \(m_L^+\) and \(m_L^-\) are defined in (3.E.28).        

Exercise 3.70

Obtain the mgf, expected value, and variance of a random variable Y with pdf \(f_Y(x) = \frac{\lambda ^n }{(n-1)!} x^{n-1} e^{-\lambda x}u(x)\).

Exercise 3.71

A coin with probability p of heads is tossed twice in each trial. Define \(X_n\) as

$$\begin{aligned} X_n \, = \ \left\{ \begin{array}{ll} 1, &{} \text{if } \text{the } \text{outcome } \text{is } {} { head} \text{ and } \text{then } {} { tail},\\ -1, &{} \text{if } \text{the } \text{outcome } \text{is } {} { tail} \text{ and } \text{then } {} { head},\\ 0, &{} \text{if } \text{the } \text{two } \text{outcomes } \text{are } \text{the } \text{same } \end{array} \right. \end{aligned}$$
(3.E.35)

based on the two outcomes from the n-th trial. Obtain the cdf and mean of \(Y=\min \{n: \, n\ge 1, ~ X_n = 1 \text{ or } X_n = -1\}\).
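Each trial stops with probability \(2p(1-p)\), the probability that the two tosses differ, so Y should be geometric on \(\{1, 2, \ldots \}\). A simulation sketch, assuming an illustrative \(p = 0.3\):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 0.3, 1_000_000                    # illustrative p and number of trials

# X_n is nonzero exactly when the two tosses of trial n differ.
first = rng.uniform(size=n) < p
second = rng.uniform(size=n) < p
stop = first != second

print(stop.mean(), 2 * p * (1 - p))             # per-trial stopping probability
print(1 / stop.mean(), 1 / (2 * p * (1 - p)))   # E{Y} for a geometric Y
```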

Exercise 3.72

Assume a cdf F such that \(F(x) =0\) for \(x<0\), \(F(x)<1\) for \(0 \le x < \infty \), and

$$\begin{aligned} \frac{1-F(x+y)}{1-F(y)} \, = \ 1-F(x) \end{aligned}$$
(3.E.36)

for \(0 \le x < \infty \) and \(0 \le y < \infty \). Show that there exists a positive number \(\beta \) such that \(1-F(x) = \exp \left( - \frac{x}{\beta }\right) \) for \(x \ge 0\), i.e., \(F(x) = \left\{ 1 - \exp \left( - \frac{x}{\beta } \right) \right\} u(x)\).

Exercise 3.73

For a geometric random variable X, show

$$\begin{aligned} \mathsf {P}(X>m+n | X> m ) \, =\ \mathsf {P}( X > n ) . \end{aligned}$$
(3.E.37)

Exercise 3.74

In the distribution \(b \left( 10, \frac{1}{3} \right) \), at which value of k is \(P_{10} (k)\) the largest? At which value(s) of k is \(P_{11} (k)\) the largest in the distribution \(b \left( 11, \frac{1}{2} \right) \)?
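A brute-force sketch for locating the mode; the helper name binomial_mode is mine, and note that when \((n+1)p\) is an integer there are tied modes, of which max below reports only the smaller:

```python
from math import comb

def binomial_mode(n, p):
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    return max(range(n + 1), key=pmf.__getitem__)

print(binomial_mode(10, 1 / 3))   # compare with floor((n + 1) p)
print(binomial_mode(11, 1 / 2))   # (n + 1) p is an integer here: tied modes
```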

Exercise 3.75

The probability of a side effect from a flu shot is 0.005. When 1000 people get the flu shot, obtain the following probabilities and their approximate values:

  (1)

    The probability \(P_{01}\) that at most one person experiences the side effect.

  (2)

    The probability \(P_{456}\) that four, five, or six persons experience the side effect.
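With \(n = 1000\) and \(p = 0.005\), the natural approximation is a Poisson distribution with rate \(\lambda = np = 5\). A sketch comparing the exact binomial values with that approximation (the two columns should agree closely):

```python
from math import comb, exp, factorial

n, p = 1000, 0.005
lam = n * p                                 # Poisson rate for the approximation

binom = lambda k: comb(n, k) * p**k * (1 - p)**(n - k)
poiss = lambda k: exp(-lam) * lam**k / factorial(k)

print(binom(0) + binom(1), poiss(0) + poiss(1))                            # P_01
print(sum(binom(k) for k in (4, 5, 6)), sum(poiss(k) for k in (4, 5, 6)))  # P_456
```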

Exercise 3.76

Show that the skewness and kurtosis of b(n, p) are \(\frac{1-2p}{\sqrt{np (1-p)}}\) and \(\frac{3(n-2)}{n} + \frac{1}{np(1-p)}\), respectively, based on Definition 3.3.10.

Exercise 3.77

  Consider a Poisson random variable X with rate \(\lambda \).

  (1)

    Show that \(\mathsf {E}\left\{ X^3 \right\} = \lambda + 3\lambda ^2 + \lambda ^3\), \(\mathsf {E}\left\{ X^4 \right\} = \lambda + 7\lambda ^2 + 6\lambda ^3 + \lambda ^4\), \(\mu _2 = \mu _3 = \lambda \), and \(\mu _4 = \lambda +3\lambda ^2\).

  (2)

    Obtain the coefficient of variation.

  (3)

    Obtain the skewness and kurtosis, and compare them with those of the normal distribution.

Exercise 3.78

When X is an exponential random variable with parameter \(\lambda \), show that \(Y= \sqrt{ 2 \sigma ^2 \lambda X}\) is a Rayleigh random variable with parameter \(\sigma \).
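A simulation sketch of the transformation, assuming illustrative \(\lambda = 0.8\) and \(\sigma = 1.5\), checking the Rayleigh tail \( \mathsf {P}(Y > t) = \exp \left( - \frac{t^2}{2 \sigma ^2} \right) \) at a few points:

```python
import numpy as np

rng = np.random.default_rng(6)
lam, sigma = 0.8, 1.5                                # illustrative parameters

x = rng.exponential(scale=1 / lam, size=1_000_000)   # X ~ exponential(lambda)
y = np.sqrt(2 * sigma**2 * lam * x)

for t in (0.5, 1.0, 2.0, 3.0):
    print(t, (y > t).mean(), np.exp(-t**2 / (2 * sigma**2)))
```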

Exercise 3.79

Consider the continuous cdf

$$\begin{aligned} F_3(x) \, = \ \left\{ \begin{array}{llll} 0, &{} x \le 0; &{} \quad \frac{x}{2}, &{} 0 \le x \le 1; \\ \frac{1}{2} , &{} 1 \le x \le 2; &{} \quad \frac{1}{2} (x-1), &{} 2 \le x \le 3; \\ 1, &{} x \ge 3 . \end{array} \right. \end{aligned}$$
(3.E.38)

Confirm that \(F_3\left( F_3^{-1}(u) \right) = u\), \(F_3^{-1}\left( F_3(x) \right) \le x \), and \( \mathsf {P}\left( F_3^{-1} \left( F_3(X) \right) \ne X \right) = 0\) when \(X \sim F_3\). Sketch \(F_3(x)\), \(F_3^{-1}(u)\), and \(F_3^{-1} \left( F_3(x) \right) \).
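A sketch of the first two checks, using the generalized inverse \(F_3^{-1}(u) = \inf \{x : F_3(x) \ge u\}\); the closed form below is my own computation from (3.E.38), valid for \(0 < u \le 1\):

```python
import numpy as np

def F3(x):                                   # the cdf (3.E.38)
    x = np.asarray(x, dtype=float)
    return np.select([x <= 0, x <= 1, x <= 2, x <= 3],
                     [0.0, x / 2, 0.5, (x - 1) / 2], default=1.0)

def F3_inv(u):                               # inf{x : F3(x) >= u}, for 0 < u <= 1
    u = np.asarray(u, dtype=float)
    return np.where(u <= 0.5, 2 * u, 2 * u + 1)

u = np.linspace(0.01, 1.0, 100)
print(np.max(np.abs(F3(F3_inv(u)) - u)))     # F3(F3^{-1}(u)) = u

x = np.linspace(0.0, 3.5, 200)
print(np.all(F3_inv(F3(x)) <= x + 1e-12))    # F3^{-1}(F3(x)) <= x
```

The third identity then follows because \(X \sim F_3\) assigns zero probability to the flat piece \((1, 2]\), the only region where \(F_3^{-1}\left( F_3(x) \right) < x\).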

Exercise 3.80

Consider the continuous cdf

$$\begin{aligned} F_4(x) \, = \ \left\{ \begin{array}{llll} 0, &{} x \le 0; &{} \quad \frac{2x}{3}, &{} 0 \le x \le 1; \\ \frac{2}{3}, &{} 1 \le x \le 2; &{} \quad \frac{x}{3}, &{} 2 \le x \le 3; \\ 1, &{} x \ge 3 . \end{array} \right. \end{aligned}$$
(3.E.39)

Confirm that \(F_4\left( F_4^{-1}(u) \right) = u\), \(F_4^{-1}\left( F_4(x) \right) \le x \), and \( \mathsf {P}\left( F_4^{-1} \left( F_4(X) \right) \ne X \right) =0\) when \(X \sim F_4\). Sketch \(F_4(x)\), \(F_4^{-1}(u)\), and \(F_4^{-1} \left( F_4(x) \right) \).

Exercise 3.81

When the pdf of X is

$$\begin{aligned} f_{X}(x)= & {} \frac{1}{2} u(x) u(1-x) + \frac{1}{4} u(x-1)u (3-x) \nonumber \\= & {} \left\{ \begin{array}{ll} \frac{1}{2} ,&{} 0 \le x< 1 , \\ \frac{1}{4},&{} 1 \le x < 3 , \\ 0,&{} \text{otherwise }, \end{array} \right. \end{aligned}$$
(3.E.40)

obtain and sketch the pdf of \(Y=\sqrt{X}\).
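By the change of variables \(f_{\sqrt{X}}(y) = 2y f_X \left( y^2 \right) \), the candidate answer is \(f_Y(y) = 2y f_X\left( y^2 \right) \) on \(0< y < \sqrt{3}\). A Monte Carlo sketch (the inverse-cdf sampler and bin count are my own choices):

```python
import numpy as np

rng = np.random.default_rng(4)

def f_X(x):                                  # the pdf (3.E.40)
    x = np.asarray(x, dtype=float)
    return np.where((x >= 0) & (x < 1), 0.5,
                    np.where((x >= 1) & (x < 3), 0.25, 0.0))

def f_Y(y):                                  # candidate: f_Y(y) = 2 y f_X(y**2)
    return 2 * y * f_X(y**2)

# Inverse-cdf sampling: F_X(x) = x/2 on [0, 1), 1/2 + (x - 1)/4 on [1, 3).
u = rng.uniform(size=1_000_000)
x = np.where(u < 0.5, 2 * u, 4 * u - 1)

hist, edges = np.histogram(np.sqrt(x), bins=60, range=(0, np.sqrt(3)), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - f_Y(mid))))       # small except in the bin at y = 1
```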


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this chapter

Song, I., Park, S.R., Yoon, S. (2022). Random Variables. In: Probability and Random Variables: Theory and Applications. Springer, Cham. https://doi.org/10.1007/978-3-030-97679-8_3
