Rank Statistics

A chapter in Fundamentals of Order and Rank Statistics

Abstract

Ranks and magnitude ranks are closely related to the order and magnitude order statistics discussed in Chap. 2, and are often used together with sign statistics. In this chapter, we first address the distributions of ranks and magnitude ranks in Sect. 3.1. The notion of score functions is then considered in Sect. 3.2, focusing mostly on locally optimum score functions. In Sect. 3.3, we consider the correlation coefficients among various statistics obtained from i.i.d. random vectors.


Notes

  1.

    As already discussed for the joint pdf (1.2.58) in Chap. 1, it is usually assumed implicitly that \(i \ne j\) in \(p_{R_i, R_j}\) when we say ‘joint’ pmf in the strict sense. In this chapter, we will in many cases adopt ‘joint’ in a wider sense, as in the joint pmf (3.1.4), taking the case \(i=j\) into account as well.

  2.

    Here, let \(t= F^{-1}\left (\frac {1+v}{2}\right )\). Then, \(v= 2F(t)-1 = G_F(t)\) and, consequently, \(t= G_F^{-1}(v)= F^{-1}\left (\frac {1+v}{2}\right )\) when the pdf \(f(x)\) is an even symmetric function of x.

  3.

    Refer to Definition 3.A3.2 for more detail.

  4.

    More specifically, the function \(\gamma _L (\alpha ,x)= \int _0^x e^{-t}t^{\alpha -1} dt\) is called the lower incomplete gamma function and the function \(\gamma _U (\alpha ,x)= \int _x^{\infty } e^{-t}t^{\alpha -1} dt\) is called the upper incomplete gamma function.

  5.

    The rising factorial is also called the ascending factorial, rising sequential product, upper factorial, Pochhammer’s symbol, Pochhammer function, or Pochhammer polynomial, and is the same as Appell’s symbol \((z,n)\).



Appendices

Appendix 1: Proofs of Theorems

3.1.1 Proof of Theorem 3.2.2

Proof

Using \(\sum \limits _{k=1}^n \, {{ }_{n-1}\mbox{C}}_{k-1} w^{k-1} (1-w)^{n-k} =1\) shown in (1.A3.8) and based on (3.A3.16), we can show the sum (3.2.41) of the score function \(a_1 (i)\) as \(\sum \limits _{i=1}^n a_1 (i) = \int _{0}^{1} \tilde {A}_{11} (w)\sum \limits _{i=1}^n n \, {{ }_{n-1}\mbox{C}}_{i-1} w^{i-1} (1 -w)^{n-i} dw = n \int _{0}^{1} \tilde {A}_{11} (w) dw = 0 \), and easily obtain the sum (3.2.43) of the score function \(a_0 (i, i)\). Similarly, the sum (3.2.42) of the score function \(b_1 (i)\) can be shown as \(\sum \limits _{i=1}^n b_1 (i) = n \int _{0}^{1} \tilde {A}_{21} (w) dw = 0 \) based on (3.A3.17).

Next, when the pdf \(f(x)\) is an even symmetric function of x, based on (1.A3.8) and (3.A3.21), we can show the sum (3.2.45) of the score function \(c_1 (i)\) as \(\sum \limits _{i=1}^n c_1 (i) = \int _{0}^{1} \tilde {A}_{12} (w) \sum \limits _{i=1}^n n \, {{ }_{n-1}\mbox{C}}_{i-1} w^{i-1} (1-w)^{n-i} dw = n \int _{0}^{1} \tilde {A}_{12} (w) dw = 2nf (0) \), and the sum (3.2.47) of the score function \(c_0 (i,i)\) as \(\sum \limits _{i=1}^n c_0 (i,i) = n \int _{0}^{1} \tilde {A}_{12}^2 (w) dw = n I_1(f)\). We can also show the sum (3.2.46) of the score function \(d_1 (i)\) as \(\sum \limits _{i=1}^n d_1 (i) = n \int _{0}^{1} \tilde {A}_{22} (w) dw = 0 \) based on (1.A3.8) and (3.A3.22).

Next, recollect the equalities \(\tilde {C}_{n,k,i} = n(n-1) \, {{ }_{n-2}\mbox{C}}_{k-2} \, {{ }_{k-2}\mbox{C}}_{i-1}\) from (2.3.29) and

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{k=2}^n \, {{}_{n-2}\mbox{C}}_{k-2} (1-v)^{n-k}(v-w)^{k-1} \sum_{i=1}^{k-1} \, {{}_{k-2}\mbox{C}}_{i-1} w^{i-1} (v-w)^{-i} \ = \ 1 \qquad \qquad \quad \end{array} \end{aligned} $$
(3.A1.1)

shown in (2.E.18) and (2.E.19). Then, based on the sum (3.2.43) and (3.A3.20), we get the sum (3.2.44) of the score function \(a_0 (k,i)\) as \(\sum \limits _{k=1}^n \sum \limits _{i=1}^n a_0 (k,i) = \sum \limits _{i=1}^{n} a_0 (i,i) + 2 \sum \limits _{k=2}^n \sum \limits _{i=1}^{k-1} a_0 (k,i) = 2 \int _{0}^{1} \int _0^v \tilde {A}_{11} ( w) \tilde {A}_{11} ( v) \sum \limits _{k=2}^n \sum \limits _{i=1}^{k-1} \tilde {C}_{n,k,i} w^{i-1} (v-w)^{k-i-1} (1-v)^{n-k}dw dv + n I_1(f) = 2n(n-1) \int _{0}^{1} \int _0^v \tilde {A}_{11} ( w) \tilde {A}_{11} ( v) dw dv + n I_1(f) = n I_1(f) \). Similarly, when the pdf \(f(x)\) is an even symmetric function of x, based on the sum (3.2.47) and (3.A3.25), we can get the sum (3.2.48) of the score function \(c_0 (k,i)\) as \(\sum \limits _{k=1}^n \sum \limits _{i=1}^n c_0 (k,i) = 2n(n-1) \int _{0}^{1} \int _0^v \tilde {A}_{12} (w) \tilde {A}_{12} (v) dw dv + n I_1(f) = 4n(n-1)f^2(0) + n I_1(f)\). \({\spadesuit }\)

3.1.2 Proof of Theorem 3.3.8

Proof

Let the median of \(X_i\) be a so that \(F(a) = \frac {1}{2} \). Then,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} \{2F(t+a)-1\} f(t+a) dt & =&\displaystyle \int_{0}^{1} (2u-1) du \\ & =&\displaystyle 0 . \end{array} \end{aligned} $$
(3.A1.2)

Thus, we have \( \int _{-\infty }^{\infty } x \{2F(x)-1\} f(x) dx = \int _{-\infty }^{\infty } (t+a) \{2F(t+a)-1\} f(t+a) dt = a \int _{-\infty }^{\infty } \{2F(t+a)-1\} f(t+a) dt + \int _{- \infty }^0 t \{2F(t+a)-1\} f(t+a) dt + \int _{0}^{\infty } t \{2F(t+a)-1\} f(t+a) dt\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} x \{2F(x)-1\} f(x) dx & =&\displaystyle \int_{-\infty}^{0} t \{2F(t+a)-1\} f(t+a) dt \\ & &\displaystyle \quad + \int_{0}^{\infty} t \{2F(t+a)-1\} f(t+a) dt . {} \end{array} \end{aligned} $$
(3.A1.3)

The first term on the right-hand side of (3.A1.3) is no smaller than 0 because \(2F(t+a) -1 \leq 0\) for \(t \leq 0 \) and \(f(t+a) \ge 0\). In addition, the second term on the right-hand side of (3.A1.3) is also no smaller than 0 because \(2F(t+a) -1 \geq 0\) for \(t \geq 0 \) and \(f(t+a) \ge 0\). In other words,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} x \{2F(x)-1\} f(x) dx \ \geq \ 0 , {} \end{array} \end{aligned} $$
(3.A1.4)

resulting in \(\rho _{X_i R_i} \geq 0\).
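
As a numerical sanity check of (3.A1.4) (an illustrative sketch, not part of the text), the expectation \(\mathsf {E}[X\{2F(X)-1\}]\) can be estimated by Monte Carlo; the standard normal choice of \(F\) here is an assumption made only for the example.

```python
# Monte Carlo check of (3.A1.4): E[X{2F(X)-1}] >= 0.
# The standard normal F is an illustrative assumption; for N(0,1)
# the exact value is 1/sqrt(pi) ~ 0.5642 (by Stein's identity).
from statistics import NormalDist

nd = NormalDist()                      # F = nd.cdf
xs = nd.samples(200_000, seed=42)
estimate = sum(x * (2 * nd.cdf(x) - 1) for x in xs) / len(xs)
print(estimate)                        # nonnegative, as (3.A1.4) asserts
```

Any other continuous distribution could be substituted for the normal; the inequality itself is distribution-free.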

Next, recollect \(\lim \limits _{x \to \infty } xF(-x) =0\) shown in (1.E.2), \(F(x)-F(-x) \geq 0\) for \(x \geq 0\), \( \int _{0}^{\infty } x f(x)F(x) dx \geq 0\), and \(-[F(-x)\{F(x)-F(-x)\}]^{\prime } = f(-x)F(x) -f(x)F(-x)-2f(-x)F(-x)\). From integration by parts, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{\infty} x \{ f(-x) F(x) - f(x)F(-x) -2f(-x)F(-x) \} dx & =&\displaystyle \Big[ -x F(-x)\{F(x)-F(-x)\} \Big]_0^{\infty} \\ & &\displaystyle \quad + \int_{0}^{\infty} F(-x)\{F(x)-F(-x) \} dx \\ & =&\displaystyle \int_{0}^{\infty} F(-x)\{F(x)-F(-x) \} dx \\ & \geq &\displaystyle 0 , {} \end{array} \end{aligned} $$
(3.A1.5)

confirming the inequality (3.3.53). \({\spadesuit }\)

3.1.3 Proof of Theorem 3.3.9

Proof

First, note that \(-\frac {d}{dx} [ F(-x)\{ F(x) -F(-x) \} ] = f(-x) F(x) - f(x)F(-x) -2f(-x)F(-x)\). Then, the inequality (3.3.55) can be proved as

$$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{\rho}_{\left | X_i \right | Q_i} - \tilde{\rho}_{\left | X_i \right | R_i} & =&\displaystyle \frac{2\sqrt{3}}{ \sqrt{ \sigma_X^2+4m_X^+m_X^-}} \int_{0}^{\infty} F(-x)\{F(x)-F(-x) \} dx \\ & \ge&\displaystyle 0 \end{array} \end{aligned} $$
(3.A1.6)

based on \( \int _{-\infty }^{\infty } |x| f(x) \left \{G_F(|x|)-F(x) \right \} dx = \int _{0}^{\infty } x \{ f(-x) F(x) - f(x)F(-x) -2f(-x)F(-x) \} dx = \int _{0}^{\infty } F(-x)\{F(x)-F(-x) \} dx\).

Let us next show the inequality (3.3.54) by two methods.

(Method 1) Choose a number b such that \(G_F(b) = \frac {1}{2} \); then \(b >0\) because \(G_F(0) =0\) and \(G_F(x)\) is a non-decreasing function. Recollecting that \(G_F(0) =0\), \(G_F(\infty ) =1\), and \(G_F\) is non-decreasing, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-b}^{\infty} \left\{ 2G_F(t+b)-1 \right\} g_F(t+b) dt & =&\displaystyle \int_{0}^{1} (2u-1) du \\ & =&\displaystyle 0 . {} \end{array} \end{aligned} $$
(3.A1.7)

From (3.A1.7) and noting that \(g_F(y)\) is an even symmetric function of y, we have \( \frac {1}{2} \int _{-\infty }^{\infty } |y| \left \{ 2G_F(|y|) -1 \right \}g_F(y)dy = \int _{0}^{\infty } y \left \{ 2G_F(y)-1 \right \} g_F(y) dy = \int _{-b}^{\infty } (t+b) \big \{ 2G_F(t +b)-1 \big \} g_F(t+b) dt = b \int _{-b}^{\infty } \left \{ 2G_F(t+b)-1 \right \} g_F(t+b) dt + \int _{-b}^0 t \big \{ 2G_F (t+b)-1 \big \} g_F(t+b) dt+ \int _{0}^{\infty } t \left \{ 2G_F(t+b)-1 \right \} g_F(t+b) dt\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{1}{2} \int_{-\infty}^{\infty} |y| \left\{ 2G_F(|y|)-1 \right\} g_F(y) dy & =&\displaystyle \int_{-b}^{0} t \left\{ 2G_F(t+b)-1 \right\} g_F(t+b) dt \\ & &\displaystyle \quad + \int_{0}^{\infty} t \left\{ 2G_F(t+b)-1 \right\} g_F(t+b) dt . {} \end{array} \end{aligned} $$
(3.A1.8)

The first term on the right-hand side of (3.A1.8) is no smaller than 0 because \(2G_F(t+b) -1 \leq 0\) for \(t \leq 0 \) and \(g_F(t+b) \geq 0\). In addition, the second term is no smaller than 0 because \(2G_F(t+b) -1 \geq 0\) for \(t \geq 0 \). Consequently, \(\rho _{ \left | X_i \right | Q_i} \geq 0\).

(Method 2) What we have shown in (3.A1.4) is essentially equivalent to \(\int _{a}^{b} x \{2H(x) -1\} h(x) dx = \int _{0}^{1} (2u-1) H^{-1}(u) du \ge 0\) when a function H, with the derivative \(h(x) = \frac {d}{dx}H(x)\), is non-decreasing and satisfies \(0 \le H(x) \le 1\), \(H(a)=0\), and \(H(b)=1\). Therefore, letting \(a=0\) and \(b=\infty \), we have \( \int _{-\infty }^{\infty } |y|\left \{ 2G_F(|y|)-1 \right \}f(y)dy = \int _{0}^{\infty } y \left \{ 2G_F(y) -1 \right \} g_F(y) dy = \int _{0}^{1} (2u-1) G_F^{-1}(u) du \ge 0\). \({\spadesuit }\)

3.1.4 Proof of Theorem 3.3.10

Proof

First, the inequality (3.3.56) can be shown easily from the scaled correlation coefficient \(\tilde {\rho }_{Z_i R_i} = \sqrt { 3 F(0) \left \{ 1- F(0) \right \} }\) shown in (3.3.19). Next, because \(0 \leq F(-x) \leq F(0) \leq F(x) \leq 1\) for \(x \geq 0\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} 0 \ \leq \ \int_{0}^{\infty} F(-x)f(x) dx \ \leq \ F(0) \int_{0}^{\infty} f(x) dx {} \end{array} \end{aligned} $$
(3.A1.9)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} F(0) \int_{0}^{\infty} f(-x) dx \ \leq \ \int_{0}^{\infty} F(x)f(-x) dx \ \leq \ \int_{0}^{\infty} f(-x) dx . {}\qquad \qquad \end{array} \end{aligned} $$
(3.A1.10)

Noting that \( \int _{0}^{\infty } f(-x) dx = F(0)\), \( \int _{0}^{\infty } f(x) dx = 1-F(0)\), and

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} F(-x)f(x) dx \ =\ \int_{0}^{\infty} \{ F(-x)f(x) + F(x)f(-x) \} dx ,\qquad \qquad \end{array} \end{aligned} $$
(3.A1.11)

we get \(F^2(0) \leq \int _{-\infty }^{\infty } F(-x)f(x) dx \leq 2F(0)- F^2(0)\) from (3.A1.9) and (3.A1.10), and thus

$$\displaystyle \begin{aligned} \begin{array}{rcl} -F(0)\{1-F(0)\} & \leq&\displaystyle F(0) - \int_{-\infty}^{\infty} F(-x)f(x) dx \\ & \leq&\displaystyle F(0)\{1-F(0)\} , {} \end{array} \end{aligned} $$
(3.A1.12)

which can be used in (3.3.34) to obtain the inequality (3.3.57). Finally, recollecting the scaled correlation coefficients (3.3.19) and (3.3.34), we get the inequality (3.3.58) from the second inequality of (3.A1.12). \({\spadesuit }\)
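
As an illustration (not from the text) of the bounds \(F^2(0) \leq \int _{-\infty }^{\infty } F(-x)f(x) dx \leq 2F(0)- F^2(0)\) obtained above, here is a Monte Carlo sketch for an asymmetric case; the choice \(X \sim N(0.5, 1)\) is an assumption made only for the example.

```python
# Monte Carlo check of F^2(0) <= E[F(-X)] <= 2F(0) - F^2(0)
# for a pdf not symmetric about zero: X ~ N(0.5, 1) (assumed example).
from statistics import NormalDist

nd = NormalDist(mu=0.5, sigma=1.0)
F0 = nd.cdf(0.0)
xs = nd.samples(100_000, seed=3)
val = sum(nd.cdf(-x) for x in xs) / len(xs)   # estimates E[F(-X)]
print(F0 * F0, val, 2 * F0 - F0 * F0)         # lower bound, estimate, upper bound
```

For this example the exact value is \(\varPhi (-\sqrt {2}\,\mu ) \approx 0.24\), comfortably inside the two bounds.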

Appendix 2: Derivations of Probability Functions

3.2.1 Conditional pmf of \(R_i\) when \(X_i = x\)

(Method 1) Using the pdmf (3.1.53), we can obtain the conditional pmf \(p_{R_i | X_i}(r|x) = \frac { \tilde {f}_{X_i , R_i}(x,r)} {f_{X_i} (x)}\) of \(R_i\) when \(X_i = x\) for \(i \in \mathbb {J}_{1,n}\) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{R_i | X_i}(r|x) & =&\displaystyle \frac{ f_{X_{[r]}} (x) }{ n \, f(x) } . {} \end{array} \end{aligned} $$
(3.A2.1)

(Method 2) The conditional pmf \(p_{R_i | X_i}(r|x)\) for \(i \in \mathbb {J}_{1,n}\) can also be obtained, using the pmf’s (3.1.31) and (3.1.32), as \(p_{R_i | X_i}(r|x) = \mathsf {P} \Big ( R_1 =r\) when \(\left \{ X_j \right \}_{j=1}^n\) are i.s.i.d. with the pdf pair \( ( \delta (t-x), f(t) ) \Big )\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{R_i | X_i}(r|x) & =&\displaystyle {{}_{n-1}\mbox{C}}_{r-1} F^{r-1} (x)\{1-F(x)\}^{n-r} . {} \end{array} \end{aligned} $$
(3.A2.2)
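
The conditional pmf (3.A2.2) says that, given \(X_1 = x\), the rank \(R_1\) is \(1\) plus a binomial \((n-1, F(x))\) variable. A small simulation sketch (illustrative only; the standard normal pdf is an assumption for the example):

```python
# Simulation sketch of (3.A2.2): given X_1 = x, the rank R_1 among n
# i.i.d. samples satisfies p(r|x) = C(n-1, r-1) F(x)^{r-1} {1-F(x)}^{n-r}.
# The standard normal f is an illustrative assumption.
from math import comb
from statistics import NormalDist
import random

nd = NormalDist()
n, x, trials = 5, 0.3, 100_000
rng = random.Random(0)
counts = [0] * (n + 1)

for _ in range(trials):
    others = [rng.gauss(0.0, 1.0) for _ in range(n - 1)]
    r = 1 + sum(y < x for y in others)   # rank of the fixed value x
    counts[r] += 1

p = nd.cdf(x)
for r in range(1, n + 1):
    theory = comb(n - 1, r - 1) * p ** (r - 1) * (1 - p) ** (n - r)
    print(r, counts[r] / trials, round(theory, 4))
```

The empirical frequencies agree with the shifted-binomial pmf to Monte Carlo accuracy.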

3.2.2 Conditional pmf of \(R_i\) when \(\left | X_i \right | = y\)

Using the pdmf (3.1.55) and \(f_{\left | X_i \right |} (y) = \{f(y)+f(-y)\}u(y)= g_F(y) u(y)\), we get the conditional pmf of \(R_i\) when \(\left | X_i \right | = y\) as \(p_{R_i \big | \left | X_i \right |}(r|y) = \frac { \tilde {f}_{\left | X_i \right | , R_i}(y,r)} {f_{\left | X_i \right |} (y)}\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{R_i \big | \left | X_i \right |}(r|y) & =&\displaystyle \frac{ f_{X_{[r]}} (y) + f_{X_{[r]}} (-y) } { n \, g_F(y) } u(y) {} \end{array} \end{aligned} $$
(3.A2.3)

for \(i, r \in \mathbb {J}_{1,n}\). Noting that the conditional pdf of \(X_i\) when \(\left | X_i \right | = y\) is

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{X_i \big| \left | X_i \right |} (x | y) \ = \ \frac{f(x)}{g_F(x)} \{ u(x)\delta (x-y) + u(-x)\delta (x+y)\} \end{array} \end{aligned} $$
(3.A2.4)

as observed in (1.E.25), the conditional pmf (3.A2.3) can also be obtained as \(p_{R_i \big | \left | X_i \right |}(r|y) = \mathsf {P} \left ( R_i =r \big | \left | X_i \right | =y \right ) = \mathsf {P} \bigg ( R_1 =r \) when \(\left \{ X_j \right \}_{j=1}^n\) are i.s.i.d. with the pdf pair \(\left ( f_{X_i \big | \left |X_i \right |} (t | y), f(t) \right ) \bigg ) = \ {{ }_{n-1}\mbox{C}}_{r-1} \int _{-\infty }^{\infty } F^{r-1} (t)\{1 -F(t)\}^{n-r} \frac {f(t)}{g_F(t)} \{u(-t)\delta (t+y)+ u(t)\delta (t-y)\} dt\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{R_i \big | \left | X_i \right |}(r|y) & =&\displaystyle \frac{ {{}_{n-1}\mbox{C}}_{r-1} }{ g_F(y) } \Big[ F^{r-1} (y)\{1-F(y)\}^{n-r} f(y) \\ & &\displaystyle \quad + F^{r-1} (-y)\{1-F(-y)\}^{n-r} f(-y) \Big] u(y) {} \end{array} \end{aligned} $$
(3.A2.5)

for \(i, r \in \mathbb {J}_{1,n}\) using the pmf’s (3.1.31) and (3.1.32) and \(g_F(-y) = g_F (y)\).

3.2.3 The Second Method for Obtaining the Joint pmf of Rank and Sign

The joint pmf \(p_{Z_i,R_i} (z,r)\) of the sign and rank of a random variable has already been obtained in (3.3.16). Here, we derive the result again based on the relation \(p_{Z_i,R_i} (z,r) =p_{R_i | Z_i} (r|z)p_{Z_i} (z)\).

First, when \(Z_i = 1 \), the conditional pdf of \(X_i\) is \(f_{X_i | Z_i} (x|1) = \frac {f(x)}{1-F(0)}u(x)\) and thus the random variables \(\left \{ X_j \right \}_{j=1}^{n}\) are i.s.i.d. with the pdf pair \( \left ( \frac {f(x)u(x)}{1-F(0)}, f(x) \right ) \). Then, we get \(p_{R_i | Z_i }(r|1)= \mathsf {P} \big ( R_i =r \big | Z_i =1 \big ) = \mathsf {P} \Big ( R_1 =r\) when \(\left \{X_j \right \}_{j=1}^{n}\) are i.s.i.d. with the pdf pair \(\left ( \frac {f(x)u(x)}{1-F(0)}, f(x)\right ) \Big )\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{R_i | Z_i }(r|1) & =&\displaystyle \int_{-\infty}^{\infty} \, {{}_{n-1}\mbox{C}}_{r-1} F^{r-1} (x)\{1-F(x)\}^{n-r} \frac{f(x)u(x)}{1-F(0)} dx \\ & =&\displaystyle \frac{1}{n\{1-F(0)\}}\int_0^{\infty} f_{X_{[r]}} (x)dx {} \end{array} \end{aligned} $$
(3.A2.6)

for \(i, r \in \mathbb {J}_{1,n}\) using the pmf’s (3.1.31) and (3.1.32). Subsequently, recollecting \(p_{Z_i , R_i }(1,r) = p_{R_i | Z_i }(r|1)p_{Z_i}(1)\) and \( p_{Z_i}(1)=1-F(0)\), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{Z_i , R_i }(1,r) \ = \ \frac{1}{n}\int_0^{\infty} f_{X_{[r]}} (x)dx \end{array} \end{aligned} $$
(3.A2.7)

for \(i, r \in \mathbb {J}_{1,n}\).

Similarly, when \( Z_i = -1\), the conditional pdf of \(X_i\) is \(f_{X_i | Z_i} (x|-1) = \frac {f(x)}{F(0)} u(-x)\) and thus \(\left \{ X_j \right \}_{j=1}^{n}\) are i.s.i.d. with the pdf pair \( \left ( \frac {f(x)u(-x)}{F(0)}, f(x) \right ) \). Using the pmf’s (3.1.31) and (3.1.32) again, we get \(p_{R_i | Z_i }(r|-1) = \mathsf {P} \left ( R_i =r \big | Z_i =-1\right ) = \mathsf {P} \Big ( R_1 =r\) when \(\left \{X_j \right \}_{j=1}^{n}\) are i.s.i.d. with the pdf pair \(\left ( \frac {f(x)u(-x)}{F(0)}, f(x) \right )\Big )\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{R_i | Z_i }(r|-1) & =&\displaystyle \frac{1}{n F(0)} \int_{-\infty}^{0} f_{X_{[r]} }(x)dx {} \end{array} \end{aligned} $$
(3.A2.8)

for \(i, r \in \mathbb {J}_{1,n}\). Recollecting \(p_{Z_i }(-1) = F(0)\), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{Z_i , R_i }(-1, r) \ = \ \frac{1}{n} \int_{-\infty}^{0} f_{X_{[r]}} (x)dx {} \end{array} \end{aligned} $$
(3.A2.9)

for \(i, r \in \mathbb {J}_{1,n}\) from (3.A2.8).
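The joint pmf \(p_{Z_i , R_i }(1,r) = \frac {1}{n}\int _0^{\infty } f_{X_{[r]}} (x)dx\) in (3.A2.7) can be read, with \(f_{X_{[r]}}\) the pdf of the r-th order statistic, as \(\frac {1}{n}\mathsf {P}\big (X_{(r)} > 0\big )\). A simulation sketch of this reading (illustrative only; the standard normal sample, for which \(F(0)=\frac {1}{2}\), is an assumption):

```python
# Simulation sketch of (3.A2.7): P(Z_1 = +1, R_1 = r) = (1/n) P(X_(r) > 0).
# For F(0) = 1/2 (standard normal assumed), P(X_(r) > 0) is the binomial
# tail P(#{negative samples} <= r - 1) = sum_{j<r} C(n,j) (1/2)^n.
from math import comb
import random

n, trials = 5, 100_000
rng = random.Random(1)
hits = [0] * (n + 1)          # hits[r] counts the event {Z_1 = +1, R_1 = r}

for _ in range(trials):
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    if xs[0] > 0:
        r = 1 + sum(y < xs[0] for y in xs[1:])   # rank of X_1 (no ties a.s.)
        hits[r] += 1

F0 = 0.5
for r in range(1, n + 1):
    tail = sum(comb(n, j) * F0 ** n for j in range(r))   # (1/2)^j (1/2)^{n-j}
    print(r, hits[r] / trials, round(tail / n, 5))
```

Summing the theoretical values over r recovers \(p_{Z_i}(1) = 1-F(0) = \frac {1}{2}\), as it must.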

3.2.4 Conditional pmf of \(Q_i\) when \(X_i = x\)

The conditional pmf (3.1.56) of \(Q_i\) when \(X_i = x\) can also be obtained using the pmf’s (3.1.37) and (3.1.38) as \( p_{Q_i | X_i}(q|x) = \mathsf {P} \left ( Q_i =q \big | X_i =x \right ) = \mathsf {P} \Big ( Q_1 =q\) when \(\left \{ X_j \right \}_{j=1}^n\) are i.s.i.d. with the pdf pair \( ( \delta (t-x), f(t) ) \Big ) = \, {{ }_{n-1}\mbox{C}}_{q-1} \int _{-\infty }^{\infty } \left \{ 1-G_F(t) \right \}^{n-q} G_F^{q-1} (t) \{ \delta (t-x) + \delta (-t-x) \} u(t) dt = \, {{ }_{n-1}\mbox{C}}_{q-1} \Big [ G_F^{q-1} (x)\left \{ 1-G_F(x) \right \}^{n-q} u(x) + G_F^{q-1}(-x) \big \{ 1-G_F (-x) \big \}^{n-q} u(-x)\Big ]\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{Q_i | X_i}(q|x) & =&\displaystyle {{}_{n-1}\mbox{C}}_{q-1} G_F^{q-1}(|x|) \left\{ 1-G_F(|x|) \right\}^{n-q} {} \end{array} \end{aligned} $$
(3.A2.10)

for \(i, q \in \mathbb {J}_{1,n}\).

3.2.5 Conditional pmf of \(Q_i\) when \(\left | X_i \right | = y\)

First, from \(u(y)u(-y)=0\) for \(y \ne 0\) and \(G_F(-y)=0\) for \(y=0\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} G_F^{q-1} (-y)\{1-G_F(-y)\}^{n-q} \frac{f( \pm y)u(y)u(-y)}{f(y) + f(-y)} \ = \ 0 . {} \end{array} \end{aligned} $$
(3.A2.11)

Using the pmf’s (3.1.37) and (3.1.38), and noting (3.A2.11), we can obtain the conditional pmf of \(Q_i\) when \(\left | X_i \right | = y\) as \(p_{Q_i \big | \left | X_i \right |}(q|y) = \mathsf {P} \big ( Q_i =q \big | \left | X_i \right | =y \big ) = \mathsf {P} \bigg ( Q_1 =q \) when \(\left \{ X_j \right \}_{j=1}^n\) are i.s.i.d. with the pdf pair \(\Big ( f_{X \big | |X| } (t | y), f(t) \Big ) \bigg )\) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{Q_i \big | \left | X_i \right |}(q|y) & =&\displaystyle {{}_{n-1}\mbox{C}}_{q-1} G_F^{q-1} (y) \left\{ 1-G_F(y) \right\}^{n-q} u(y) {} \end{array} \end{aligned} $$
(3.A2.12)

for \(i, q \in \mathbb {J}_{1,n}\).

3.2.6 Joint pmf of Magnitude Rank and Sign

As mentioned in section “The Second Method for Obtaining the Joint pmf of Rank and Sign” in Appendix 2, Chap. 3, when \(Z_i = 1\), the conditional pdf of \(X_i\) is \(\frac {f(x)u(x)}{1-F(0)}\) and thus \(\left \{ X_j \right \}_{j=1}^{n}\) are i.s.i.d. with the pdf pair \( \left ( \frac {f(x)u(x)}{1-F(0)} , f(x) \right ) \). Therefore, from the pmf’s (3.1.37) and (3.1.38), we get the joint pmf of \(Z_i\) and \(Q_i\) as \(p_{ Z_i, Q_i }(1, q) = \mathsf {P} \left (Z_i =1 , Q_i =q \right ) = \mathsf {P} \left (Q_i =q \big |Z_i =1 \right )\mathsf {P} \left (Z_i =1\right ) = \{ 1-F(0) \} \mathsf {P} \Big ( Q_1 =q\) when \(\left \{X_j \right \}_{j=1}^{n}\) are i.s.i.d. with the pdf pair \(\left (\frac {f(x)u(x)}{1-F(0)} , f(x)\right ) \Big ) = \{ 1-F(0) \} \, {{ }_{n-1}\mbox{C}}_{q-1} \int _{-\infty }^{\infty } G_F^{q-1}(x)\left \{ 1-G_F(x) \right \}^{n-q} \frac { \{f(x)u(x)+f(-x)u(-x) \}u(x)}{1-F(0)} dx = \ {{ }_{n-1}\mbox{C}}_{q-1} \int _{0}^{\infty } G_F^{q-1} (x) \left \{ 1-G_F(x) \right \}^{n-q} f(x)dx \), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{ Z_i, Q_i }(1,q) \ = \ \frac{1}{n} \int_{0}^{\infty} f_{|X|{}_{[q]}} (x) \ \frac{f(x)}{g_F(x)} dx {} \end{array} \end{aligned} $$
(3.A2.13)

for \(i, q \in \mathbb {J}_{1,n}\). In obtaining (3.A2.13), we have used that \(\{f(x)u(x)+f(-x)u(-x) \} u(x) = f(x)u(x)\) except possibly at \(x = 0\).

Similarly, when \(Z_i= -1\), \(\left \{ X_j \right \}_{j=1}^{n}\) are i.s.i.d. with the pdf pair \( \Big ( \frac {f(x)}{F(0)}u(-x) , f(x) \Big ) \), and thus we get \(p_{ Z_i, Q_i }(-1, q) = \mathsf {P} \left (Z_i =-1 , Q_i =q \right ) = \mathsf {P} \Big ( Q_1 =q\) when \(\left \{X_j \right \}_{j=1}^{n}\) are i.s.i.d. with the pdf pair \( \left ( \frac {f(x)}{F(0)}u(-x) , f(x) \right )\Big ) F(0) = \ {{ }_{n-1}\mbox{C}}_{q-1} \int _{0}^{\infty } G_F^{q-1}(x) \left \{ 1-G_F(x) \right \}^{n-q}f(-x)dx\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{ Z_i, Q_i }(-1, q) & =&\displaystyle \frac{1}{n} \int_{0}^{\infty} f_{|X|{}_{[q]}} (x) \ \frac{f(-x)}{g_F(x)} dx {} \end{array} \end{aligned} $$
(3.A2.14)

for \(i, q \in \mathbb {J}_{1,n}\) from the pmf’s (3.1.37) and (3.1.38). The joint pmf of \(Z_i\) and \(Q_i\) for \(i \in \mathbb {J}_{1,n}\) can be expressed as (3.3.32) by combining (3.A2.13) and (3.A2.14).

Appendix 3: Miscellaneous Topics

3.3.1 Calculations of Restricted Sums

The restricted sums \({\underset {i \ne j}{\sum \limits _{i=1}^n\sum \limits _{j=1}^n}} a_i b_j\), \({\underset {i \ne j \ne k \ne i}{\sum \limits _{i=1}^n\sum \limits _{j=1}^n \sum \limits _{k=1}^n}} a_i b_j c_k\), and \({\underset {i, j, k, l \mbox{ all distinct}} {\sum \limits _{i=1}^n \sum \limits _{j=1}^n\sum \limits _{k=1}^n\sum \limits _{l=1}^n}} a_i b_j c_k d_l\) of sequences \(\left \{ a_i \right \}_{i=1}^{n}\), \(\left \{ b_j \right \}_{j=1}^{n}\), \(\left \{ c_k \right \}_{k=1}^{n}\), and \(\left \{ d_l \right \}_{l=1}^{n}\) are here addressed and expressed in terms of the unrestricted sums \(\sum \limits _{i=1}^n a_i^{m_a} b_i^{m_b} c_i^{m_c} d_i^{m_d}\) for \(m_a, m_b, m_c, m_d \in \{ 0,1 \}\). Let us use the notations, for instance, \({\hat S}_a=\sum \limits _{i=1}^n a_i\), \({\hat S}_{ab}=\sum \limits _{i=1}^n a_i b_i\), \({\hat S}_{abc}=\sum \limits _{i=1}^n a_i b_i c_i\), and \({\hat S}_{abcd}=\sum \limits _{i=1}^n a_i b_i c_i d_i\) for convenience.

Theorem 3.A3.1

We have

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\underset{i \ne j}{\sum_{i=1}^n\sum_{j=1}^n}} a_i b_j \ = \ {\hat S}_a {\hat S}_b - {\hat S}_{ab} , {} \end{array} \end{aligned} $$
(3.A3.1)
$$\displaystyle \begin{aligned} \begin{array}{rcl} {\underset{i \ne j \ne k \ne i} {\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n}} a_i b_j c_k & =&\displaystyle {\hat S}_a {\hat S}_b {\hat S}_c - {\hat S}_{ab}{\hat S}_c - {\hat S}_{bc}{\hat S}_a - {\hat S}_{ca}{\hat S}_b + 2 {\hat S}_{abc} , \qquad {} \end{array} \end{aligned} $$
(3.A3.2)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\underset{i, j, k, l \mbox{ all distinct}} {\sum_{i=1}^n \sum_{j=1}^n\sum_{k=1}^n\sum_{l=1}^n}} a_i b_j c_k d_l & =&\displaystyle {\hat S}_a {\hat S}_b {\hat S}_c {\hat S}_d - {\hat S}_{ab}{\hat S}_c {\hat S}_d - {\hat S}_{ac}{\hat S}_b {\hat S}_d - {\hat S}_{ad}{\hat S}_b {\hat S}_c \\ & &\displaystyle \ - {\hat S}_{bc}{\hat S}_a {\hat S}_d - {\hat S}_{bd}{\hat S}_a {\hat S}_c - {\hat S}_{cd}{\hat S}_a {\hat S}_b \\ & &\displaystyle \ + {\hat S}_{ab}{\hat S}_{cd} + {\hat S}_{ac}{\hat S}_{bd} + {\hat S}_{ad}{\hat S}_{bc} \\ & &\displaystyle \ + 2 \big( {\hat S}_{abc}{\hat S}_d + {\hat S}_{abd}{\hat S}_c + {\hat S}_{acd}{\hat S}_b + {\hat S}_{bcd}{\hat S}_a \big) - 6 {\hat S}_{abcd} . {} \end{array} \end{aligned} $$
(3.A3.3)

Proof

First, from \( {\hat S}_a {\hat S}_b = \left (\sum \limits _{i=1}^n a_i \right )\left ( \sum \limits _{j=1}^n b_j \right ) = {\underset {i \ne j}{\sum \limits _{i=1}^n\sum \limits _{j=1}^n}} a_i b_j + {\underset {i =j }{\sum \limits _{i=1}^n\sum \limits _{j=1}^n}} a_i b_j\), we easily have (3.A3.1).

Next, the product \( {\hat S}_a {\hat S}_b {\hat S}_c = \left ( \sum \limits _{i=1}^n a_i \right ) \left ( \sum \limits _{j=1}^n b_j \right ) \left ( \sum \limits _{k=1}^n c_k \right )\) can be expanded as

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\hat S}_a {\hat S}_b {\hat S}_c & =&\displaystyle {\underset{i \ne j \ne k \ne i} {\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n}} a_i b_j c_k + {\underset{i=j \ne k}{\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n}} a_i b_j c_k + {\underset{j=k \ne i}{\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n}} a_i b_j c_k \\ & &\displaystyle \quad + {\underset{k=i \ne j}{\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n}} a_i b_j c_k + {\underset{i=j=k}{\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n}} a_i b_j c_k , {} \end{array} \end{aligned} $$
(3.A3.4)

which can clearly be written as

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\hat S}_a {\hat S}_b {\hat S}_c & =&\displaystyle {\underset{i \ne j \ne k \ne i} {\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n}} a_i b_j c_k + {\underset{j \ne k}{\sum_{j=1}^n\sum_{k=1}^n}} a_j b_j c_k + {\underset{k \ne i}{\sum_{k=1}^n\sum_{i=1}^n}} a_i b_k c_k \\ & &\displaystyle \quad + {\underset{i \ne j}{\sum_{i=1}^n\sum_{j=1}^n}} a_i b_j c_i + {\hat S}_{abc} . {} \end{array} \end{aligned} $$
(3.A3.5)

Using (3.A3.1) in the second–fourth terms on the right-hand side, (3.A3.5) can be written as \({\hat S}_a {\hat S}_b {\hat S}_c = {\underset {i \ne j \ne k \ne i} {\sum \limits _{i=1}^n\sum \limits _{j=1}^n\sum \limits _{k=1}^n}} a_i b_j c_k + {\hat S}_{ab}{\hat S}_c + {\hat S}_{bc}{\hat S}_a + {\hat S}_{ca}{\hat S}_b -2 {\hat S}_{abc}\), from which we can obtain (3.A3.2).

The product \( {\hat S}_a {\hat S}_b {\hat S}_c {\hat S}_d = \sum \limits _{i=1}^n \sum \limits _{j=1}^n \sum \limits _{k=1}^n \sum \limits _{l=1}^n a_i b_j c_k d_l\) can be expanded as

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\hat S}_a {\hat S}_b {\hat S}_c {\hat S}_d & =&\displaystyle {\underset{i, j, k, l \mbox{ all distinct}} {\sum_{i=1}^n \sum_{j=1}^n\sum_{k=1}^n\sum_{l=1}^n}} a_i b_j c_k d_l + \{6\}{\underset{i \ne k \ne l \ne i} {\sum_{i=1}^n \sum_{k=1}^n\sum_{l=1}^n}} \left\{ a_i b_i c_k d_l \right\} \\ & &\displaystyle \quad + \{3\}{\underset{i \ne k } {\sum_{i=1}^n \sum_{k=1}^n}} \left\{ a_i b_i c_k d_k \right\} + \{4\} {\underset{i \ne l} {\sum_{i=1}^n \sum_{l=1}^n}} \left\{ a_i b_i c_i d_l \right\} \\ & &\displaystyle \quad + \sum_{i=1}^n a_i b_i c_i d_i , {} \end{array} \end{aligned} $$
(3.A3.6)

where \(\{m\}{\underset {i \ne k \ne l \ne i} {\sum \limits _{i=1}^n \sum \limits _{k=1}^n \sum \limits _{l=1}^n}} \left \{ e_{ikl} \right \}\) and \(\{m\}{\underset {i \ne k } {\sum \limits _{i=1}^n \sum \limits _{k=1}^n}} \left \{ g_{ik} \right \}\) denote groups of m sums with \(e_{ikl}\) and \(g_{ik}\), respectively, the representative terms in the sums: for example, the third term \(\{3\}{\underset {i \ne k } {\sum \limits _{i=1}^n \sum \limits _{k=1}^n}} \left \{ a_i b_i c_k d_k\right \}\) on the right-hand side denotes the group \({\underset {i \ne k } {\sum \limits _{i=1}^n \sum \limits _{k=1}^n}} \left ( a_i b_i c_k d_k + a_i b_k c_i d_k +a_i b_k c_k d_i \right )\) of three sums. Now, using (3.A3.2) in the first group of sums on the right-hand side of (3.A3.6), we can rewrite each of the six sums in the group: for instance, \({\underset {i \ne k \ne l \ne i} {\sum \limits _{i=1}^n \sum \limits _{k=1}^n\sum \limits _{l=1}^n}} a_i b_i c_k d_l\) can be rewritten as \({\hat S}_{ab}{\hat S}_{c}{\hat S}_{d} - {\hat S}_{abc}{\hat S}_{d}- {\hat S}_{ab}{\hat S}_{cd}- {\hat S}_{abd}{\hat S}_{c} + 2{\hat S}_{abcd}\). Using (3.A3.1) in the second and third groups of sums on the right-hand side of (3.A3.6) in addition, (3.A3.6) can be written specifically as

(3.A3.7)

which is equivalent to (3.A3.3). \({\spadesuit }\)

Let us note in passing that the left-hand side of (3.A3.6) represents a sum of \(n^4\) terms. On the right-hand side, the five groups comprise \(\frac {\, {{ }_n\mbox{P}}_4}{4!}\frac {4!}{1!1!1!1!}= n(n-1)(n-2)(n-3) = n^4-6n^3+11n^2-6n\), \(\frac {\, {{ }_n\mbox{P}}_3}{2!}\frac {4!}{2!1!1!}= 6n(n-1)(n-2) = 6n^3-18n^2+12n\), \(\frac {\, {{ }_n\mbox{P}}_2}{2!}\frac {4!}{2!2!}= 3n(n-1) = 3n^2-3n\), \(\, {{ }_n\mbox{P}}_2\frac {4!}{3!1!}= 4n(n-1) = 4n^2-4n\), and \(\, {{ }_n\mbox{P}}_1\frac {4!}{4!}= n\) terms, respectively, again a total of \(n^4\) terms.
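
The identities (3.A3.1) and (3.A3.2) are easy to verify by brute force on small sequences; the following is an illustrative sketch, not part of the text, using random sequences of length 6.

```python
# Brute-force verification of the restricted-sum identities (3.A3.1)
# and (3.A3.2) on small random sequences; an illustrative sketch only.
import random

rng = random.Random(7)
n = 6
a = [rng.uniform(-1, 1) for _ in range(n)]
b = [rng.uniform(-1, 1) for _ in range(n)]
c = [rng.uniform(-1, 1) for _ in range(n)]

Sa, Sb, Sc = sum(a), sum(b), sum(c)
Sab = sum(x * y for x, y in zip(a, b))
Sbc = sum(y * z for y, z in zip(b, c))
Sca = sum(z * x for z, x in zip(c, a))
Sabc = sum(x * y * z for x, y, z in zip(a, b, c))

# Direct restricted sums
lhs2 = sum(a[i] * b[j] for i in range(n) for j in range(n) if i != j)
lhs3 = sum(a[i] * b[j] * c[k]
           for i in range(n) for j in range(n) for k in range(n)
           if i != j and j != k and k != i)

assert abs(lhs2 - (Sa * Sb - Sab)) < 1e-9                                # (3.A3.1)
assert abs(lhs3 - (Sa * Sb * Sc - Sab * Sc - Sbc * Sa - Sca * Sb
                   + 2 * Sabc)) < 1e-9                                   # (3.A3.2)
print("restricted-sum identities verified")
```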

Remark 3.A3.1

The restricted sums of one sequence \(\left \{a_i\right \}_{i=1}^{n}\) can specifically be expressed as

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\underset{i \ne j}{\sum_{i=1}^n\sum_{j=1}^n}} a_i a_j \ = \ {\hat S}_a^2 - {\hat S}_{a^2} , {} \end{array} \end{aligned} $$
(3.A3.8)
$$\displaystyle \begin{aligned} \begin{array}{rcl} {\underset{i \ne j \ne k \ne i} {\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n}} a_i a_j a_k & =&\displaystyle {\hat S}_a^3 - 3 {\hat S}_{a^2}{\hat S}_{a} + 2 {\hat S}_{a^3} , {} \end{array} \end{aligned} $$
(3.A3.9)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\underset{i, j, k, l \mbox{ all distinct}} {\sum_{i=1}^n \sum_{j=1}^n\sum_{k=1}^n\sum_{l=1}^n}} a_i a_j a_k a_l & =&\displaystyle {\hat S}_a^4 - 6 {\hat S}_{a^2}{\hat S}_a^2 + 3 {\hat S}_{a^2}^2 + 8 {\hat S}_{a^3}{\hat S}_a - 6 {\hat S}_{a^4} {} \end{array} \end{aligned} $$
(3.A3.10)

from (3.A3.1), (3.A3.2), and (3.A3.3), respectively. \({\clubsuit }\)

Example 3.A3.1

When \(a_i = i\) for \(i \in \mathbb {J}_{1,n}\), we have \({\underset {i \ne j}{\sum \limits _{i=1}^n\sum \limits _{j=1}^n}} ij = \left (\sum \limits _{i=1}^n i \right )^2 - \sum \limits _{i=1}^n i^2 = \frac {1}{4}n^2(n+1)^2 - \frac {1}{6}n(n+1)(2n+1)\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\underset{i \ne j}{\sum_{i=1}^n\sum_{j=1}^n}} ij & =&\displaystyle \frac{1}{12}(n-1)n(n+1)(3n+2) {} \end{array} \end{aligned} $$
(3.A3.11)

and \({\underset {i \ne j \ne k \ne i} {\sum \limits _{i=1}^n\sum \limits _{j=1}^n\sum \limits _{k=1}^n}} ijk = \left (\sum \limits _{i=1}^n i \right )^3 - 3 \left (\sum \limits _{i=1}^n i \right )\left (\sum \limits _{j=1}^n j^2 \right ) + 2\sum \limits _{i=1}^n i^3 = \frac {1}{8}n^3(n+1)^3 - \frac {1}{4} n^2(n+1)^2(2n+1) + \frac {1}{2}n^2(n+1)^2\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\underset{i \ne j \ne k \ne i} {\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n}} ijk & =&\displaystyle \frac{1}{8}(n-2)(n-1)n^2(n+1)^2 {} \end{array} \end{aligned} $$
(3.A3.12)

from (3.A3.8) and (3.A3.9), respectively. We also have \({\underset {i, j, k, l \mbox{ all distinct}} {\sum \limits _{i=1}^n \sum \limits _{j=1}^n\sum \limits _{k=1}^n\sum \limits _{l=1}^n}} ijkl = \left ( \sum \limits _{i=1}^n i \right )^4 - 6 \left ( \sum \limits _{i=1}^n i^2 \right )\left ( \sum \limits _{i=1}^n i \right )^2 + 3 \left ( \sum \limits _{i=1}^n i^2 \right )^2 +8 \left ( \sum \limits _{i=1}^n i^3 \right )\left ( \sum \limits _{i=1}^n i \right ) -6 \sum \limits _{i=1}^n i^4\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\underset{i, j, k, l \mbox{ all distinct}} {\sum_{i=1}^n \sum_{j=1}^n\sum_{k=1}^n\sum_{l=1}^n}} ijkl & =&\displaystyle \frac{n}{240}(n+1) \big\{ 15n^3(n+1)^3 - 60 n^2(n+1)^2(2n+1) \\ & &\displaystyle \ + 240 n^2(n+1)^2 + 20 n(n+1) (2n+1)^2 \\ & &\displaystyle \ - 48(2n+1) \big(3n^2+3n-1\big) \big\} {} \end{array} \end{aligned} $$
(3.A3.13)

from (3.A3.10). Writing the right-hand side of (3.A3.13) as \(\frac {n}{240}(n+1) C_n\), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\underset{i, j, k, l \mbox{ all distinct}} {\sum_{i=1}^n \sum_{j=1}^n\sum_{k=1}^n\sum_{l=1}^n}} ijkl & =&\displaystyle \frac{n}{240}(n+1)(n-1)(n-2)(n-3) \\ & &\displaystyle \ \times \big(15n^3+15n^2-10n-8\big ) {} \end{array} \end{aligned} $$
(3.A3.14)

from \(C_n = 15n^3(n+1)^3 - 60 n^2(n+1)^2(2n+1) + 240 n^2(n+1)^2 + 20 n(n+1) (2n+1)^2 - 48(2n+1) \big (3n^2+3n-1\big ) = 15n^2(n+1)^2 \big ( n^2 - 7n +12 \big ) +4 (n- 3)(2n+1) \big (10n^2+9n-4\big ) =(n-3) \Big \{ 15n^2(n+1)^2 ( n-4) +4(2n+1) \big (10n^2+ 9n-4\big )\Big \}=(n-1)(n-2)(n-3) \big (15n^3+15n^2-10n-8\big )\). \({\diamondsuit }\)
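The closed forms (3.A3.11), (3.A3.12), and (3.A3.14) are easy to confirm by brute-force enumeration for a small n. The following sketch does exactly that; the helper `distinct_sum` is ours, not from the text:

```python
from itertools import product

def distinct_sum(n, m):
    # Sum of i1*i2*...*im over all m-tuples of pairwise
    # distinct indices drawn from {1, ..., n}.
    total = 0
    for idx in product(range(1, n + 1), repeat=m):
        if len(set(idx)) == m:
            p = 1
            for i in idx:
                p *= i
            total += p
    return total

n = 7
# (3.A3.11): sum over i != j of i*j
assert distinct_sum(n, 2) == (n - 1) * n * (n + 1) * (3 * n + 2) // 12
# (3.A3.12): sum over pairwise distinct i, j, k of i*j*k
assert distinct_sum(n, 3) == (n - 2) * (n - 1) * n**2 * (n + 1)**2 // 8
# (3.A3.14): sum over pairwise distinct i, j, k, l of i*j*k*l
rhs = n * (n + 1) * (n - 1) * (n - 2) * (n - 3) \
    * (15 * n**3 + 15 * n**2 - 10 * n - 8) // 240
assert distinct_sum(n, 4) == rhs
```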

3.1.2 Some Results for Locally Optimum Score Functions

Assume that the pdf \(f(x) = \frac {d}{dx} F(x)\) is absolutely continuous and satisfies

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} \left | f^{\prime} (x) \right | dx \ < \ \infty . {} \end{array} \end{aligned} $$
(3.A3.15)

(1) General Distributions

First, we can obtain \( \int _{0}^{1} \tilde {A}_{11}^k (w) dw = \int _{-\infty }^{\infty } g_{LO}^k(x) f(x) dx = \mathsf {E} \left \{ g_{LO}^k(X)\right \}\) for \(k=1, 2\) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{1} \tilde {A}_{11}^k (w) dw &=& \left\{ \begin{array}{ll} - \int_{-\infty}^{\infty} f^{\prime}(x) dx, & k = 1 ,\\ I_1 (f), & k = 2 \end{array} \right. \\ &=& \left\{ \begin{array}{ll} 0, & k = 1,\\ I_1 (f), & k = 2 . \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.16)

Here, \( \int _{0}^{1} \tilde {A}_{11} (v) dv =0\) can be shown also as \( \int _{0}^{1} \tilde {A}_{11} (v) dv = \int _{0}^{1} \left \{ -\frac {f^{\prime }\left (F^{-1}(v) \right )}{f\left (F^{-1}(v) \right )} \right \} dv = - \int _{0}^{1} f^{\prime }\left (F^{-1}(v) \right ) d F^{-1}(v) = -f \left (F^{-1}(1) \right ) + f \left (F^{-1}(0) \right ) = - f (\infty ) + f (- \infty )= 0\) based on \(d F^{-1}(v) = \frac {dv}{ f \left (F^{-1}(v)\right ) } \) from (1.A1.15) and \(f(\pm \infty )=0\) as mentioned in (3.2.7). We also get \( \int _{0}^{1} \tilde {A}_{21} (w) dw = \int _{-\infty }^{\infty } h_{LO}(x) f(x) dx = \mathsf {E} \left \{ h_{LO}(X)\right \}= \int _{-\infty }^{\infty } f^{\prime \prime }(x) dx = f^{\prime }(\infty ) - f^{\prime }(-\infty )\) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{1} \tilde {A}_{21} (w) dw & =&\displaystyle 0 {} \end{array} \end{aligned} $$
(3.A3.17)

using (3.2.8).

Next, \( \int _{0}^{1} w^k \tilde {A}_{11}(w) dw = - \int _{-\infty }^{\infty } F^k(x) f^{\prime }(x) dx = - F^k (x)f(x) \big |{ }_{-\infty }^{\infty } + \int _{-\infty }^{\infty } k F^{k-1}(x) f^2(x) dx\) for \(k=1, 2\) can be obtained as

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{1} w^k \tilde {A}_{11} (w) dw &=& \left\{ \begin{array}{ll} \int_{-\infty}^{\infty} f^2(x) dx, & k = 1, \\ 2 \int_{-\infty}^{\infty} F(x) f^2(x)dx, & k = 2 \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.18)

and \( \int _{0}^{1} w^k \tilde {A}_{21} (w) dw = \int _{-\infty }^{\infty } F^k(x)f^{\prime \prime }(x) dx = F^k (x)f^{\prime }(x) \big |{ }_{-\infty }^{\infty }- \int _{-\infty }^{\infty } k f^{\prime }(x) f(x) F^{k-1}(x) dx\) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{1} w^k \tilde {A}_{21} (w) dw &=& \left\{ \begin{array}{ll} 0, & k = 1, \\ \int_{-\infty}^{\infty} f^3 (x) dx, & k = 2 . \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.19)

In (3.A3.19), we have used \( \int _{-\infty }^{\infty } \left \{ 2 f^{\prime }(x) f(x) \right \} F(x) dx = F (x)f^2(x) \big |{ }_{-\infty }^{\infty } - \int _{-\infty }^{\infty } f^{3}(x) dx\) for \(k=2\).

Now, letting \(B_1 (v) = \int _{0}^{v} \tilde {A}_{11} (w) dw\) and noting \( B_1 (1) = 0\) from (3.A3.16) and \(B_1 (0) =0\), we have \( \int _{0}^{1} \int _{0}^{v} \tilde {A}_{11} (w) \tilde {A}_{11}(v) dw dv = B_1 (v)B_1 (v) \Big |{ }_{0}^{1} - \int _{0}^{1} \int _{0}^{v} \tilde {A}_{11} (v) \tilde {A}_{11}(w) dw dv = - \int _{0}^{1} \int _{0}^{v} \tilde {A}_{11} (w) \tilde {A}_{11} (v) dw dv\) from integration by parts: this result implies

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{1} \int_{0}^{v} \tilde {A}_{11} (w) \tilde {A}_{11} (v) dw dv \ = \ 0 . {} \end{array} \end{aligned} $$
(3.A3.20)
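For a concrete check of (3.A3.16), (3.A3.18), and (3.A3.20), the logistic pdf is convenient: there \(g_{LO}(x) = 2F(x)-1\), so \(\tilde{A}_{11}(v) = 2v-1\), \(I_1(f) = \frac{1}{3}\), and \(\int_{-\infty}^{\infty} f^2(x) dx = \frac{1}{6}\). A minimal numerical sketch (midpoint rule; illustrative only, not from the text):

```python
def A11(v):
    # logistic score: A11(v) = g_LO(F^{-1}(v)) = 2v - 1
    return 2.0 * v - 1.0

def integrate(g, a, b, n=20000):
    # midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# (3.A3.16): zero mean and second moment I_1(f) = 1/3
assert abs(integrate(A11, 0.0, 1.0)) < 1e-9
assert abs(integrate(lambda v: A11(v) ** 2, 0.0, 1.0) - 1.0 / 3.0) < 1e-6
# (3.A3.18), k = 1: int v A11(v) dv = int f^2 dx = 1/6 for the logistic
assert abs(integrate(lambda v: v * A11(v), 0.0, 1.0) - 1.0 / 6.0) < 1e-6
# (3.A3.20): the inner integral of A11 is v^2 - v, and the double integral is 0
assert abs(integrate(lambda v: (v * v - v) * A11(v), 0.0, 1.0)) < 1e-8
```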

(2) Symmetric Distributions

We now assume that the pdf \(f(x)\) is an even symmetric function of x. First, recollect (3.2.16) and (3.2.18). Then, letting \(F^{-1}\left ( \frac {1+w}{2} \right )=t\), we get \( \int _{0}^{1} \tilde {A}_{12}(w) dw = \int _{0}^{1} \tilde {A}_{11} \left ( \frac {1+w}{2}\right ) dw = 2 \int _{0}^{\infty } g_{LO}(t) f(t) dt = - 2 \int _{0}^{\infty } f^{\prime }(t) dt = 2f(0)\) from \(f(\infty ) =0\). We also get \( \int _{0}^{1} \tilde {A}_{12}^2(w) dw = 2 \int _{0}^{\infty } g_{LO}^2(t) f(t) dt = I_1 (f)\). In short, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{1} \tilde {A}_{12}^k (w) dw \ = \ \left\{ \begin{array}{ll} 2f (0), & k = 1 , \\ I_1 (f), & k = 2 . \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.21)

In addition, we get \( \int _{0}^{1} \tilde {A}_{22}(w) dw = 2 \int _{0}^{\infty } h_{LO}(t) f(t) dt = 2 \int _{0}^{\infty } f^{\prime \prime }(t) dt = - 2f^{\prime }(0)\), that is,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{1} \tilde {A}_{22} (w) dw \ = \ 0 {} \end{array} \end{aligned} $$
(3.A3.22)

by noting that \(f^{\prime }(\infty ) =0\) and that \(f^{\prime }(0) =0\) when \(f(x)\) is an even symmetric function of x and is differentiable at \(x=0\). When the pdf \(f(x)\) is an even symmetric function of x but is not differentiable at \(x=0\) as, for instance, in the case of the double-exponential pdf, we let \(f^{\prime } (0) =0\) based on \(f^{\prime } \left (0^+ \right ) + f^{\prime } \left (0^- \right )=0\), and again have \( \int _{0}^{1} \tilde {A}_{22} (w) dw=0\).

Next, we get \( \int _{0}^{1} w \tilde {A}_{12}(w) dw = \int _{0}^{1} w \tilde {A}_{11}\left ( \frac {1+w}{2}\right ) dw = 2 \int _{0}^{\infty } \{2F(t) -1 \} g_{LO}(t) f(t) dt = - 2 \int _{0}^{\infty } \big \{2F(t)f^{\prime }(t) - f^{\prime }(t) \big \} dt =4 \int _{0}^{\infty } f^2(t) dt\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{1} w \tilde {A}_{12}(w) dw \ = \ 2 \int_{-\infty}^{\infty} f^2(t) dt {} \end{array} \end{aligned} $$
(3.A3.23)

by noting that \(F(0)= \frac {1}{2} \) and \(f(\infty ) =0\). Similarly, we get \( \int _{0}^{1} w \tilde {A}_{22}(w) dw = 2 \int _{0}^{\infty } \left \{2F(t)f^{\prime \prime }(t) - f^{\prime \prime }(t) \right \} dt \), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{1} w \tilde {A}_{22}(w) dw \ = \ 2 f^2(0) {} \end{array} \end{aligned} $$
(3.A3.24)

by noting that \(f^{\prime }(0)=0\) and \(f^{\prime }(\infty )=0\).

Finally, letting \(B_2 (v) = \int _{0}^{v} \tilde {A}_{12} (w) dw\) and noting \(B_2 (1) = 2f (0)\) from (3.A3.21) and \(B_2 (0) = 0\), we get \( \int _{0}^{1} \int _{0}^{v} \tilde {A}_{12} (w) \tilde {A}_{12}(v) dw dv = 4f^2 (0) - \int _{0}^{1} \int _{0}^{v} \tilde {A}_{12} (v) \tilde {A}_{12}(w) dw dv\) from integration by parts. Thus, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{1} \int_0^v \tilde {A}_{12} (w) \tilde {A}_{12} (v) dw dv \ = \ 2f^{2}(0) . {} \end{array} \end{aligned} $$
(3.A3.25)
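For the standard normal density these symmetric-case integrals have simple closed forms (\(g_{LO}(x) = x\), \(I_1(f) = 1\), \(f(0) = \frac{1}{\sqrt{2\pi}}\)), so (3.A3.21) and (3.A3.23) can be checked numerically. The sketch below uses only the Python standard library and is illustrative, not from the text:

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal cdf

def f(x):
    # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def integrate(g, a, b, n=20000):
    # midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# (3.A3.21) via t = F^{-1}((1+w)/2):
#   int_0^1 A12(w) dw   = 2 int_0^inf g_LO(t) f(t) dt = 2 f(0)
#   int_0^1 A12^2(w) dw = 2 int_0^inf g_LO^2(t) f(t) dt = I_1(f) = 1
assert abs(2.0 * integrate(lambda t: t * f(t), 0.0, 10.0) - 2.0 * f(0.0)) < 1e-6
assert abs(2.0 * integrate(lambda t: t * t * f(t), 0.0, 10.0) - 1.0) < 1e-6
# (3.A3.23): int_0^1 w A12(w) dw = 2 int f^2 dt = 1/sqrt(pi) for the normal
lhs = 2.0 * integrate(lambda t: (2.0 * Phi(t) - 1.0) * t * f(t), 0.0, 10.0)
assert abs(lhs - 1.0 / math.sqrt(math.pi)) < 1e-6
```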

3.1.3 Examples of Results in the Uniform Distribution \(U(a,b)\)

Assume the uniform distribution \(U(a,b)\), where \(a < b\), and recollect \(B_{ab} = \max \{a, -b\}\), \(D_{ab} = \mathrm {sgn}(b) - \mathrm {sgn}(a)\), and \(S_{ab} = \min \{a, -b\}\) defined in (3.3.12), (3.3.13), and (3.3.14), respectively.

First, recollect the pdf \(f(x) = \frac {1}{b-a}\{u(x-a) - u(x-b)\}\) and cdf \(F(x) = \frac {x-a}{b-a}\{u(x-a) - u(x-b)\} + u(x-b)\). We then get the half means

$$\displaystyle \begin{aligned} \begin{array}{rcl} m_X^{-} &=& \left\{ \begin{array}{ll} \frac{a+b}{2}, & a < b \le 0, \\ \frac{-a^2}{2(b-a)}, & a < 0 < b, \\ 0, & 0 \le a < b \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.26)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} m_X^{+} &=& \left\{ \begin{array}{ll} 0, & a < b \le 0,\\ \frac{b^2}{2(b-a)}, & a < 0 < b, \\ \frac{a+b}{2}, & 0 \le a < b \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.27)

and, using \(\sigma _X^2=\frac {(b-a)^2}{12}\) shown in (1.1.72), the quantity

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sqrt{\sigma_X^2 + 4m_X^+ m_X^-} &=& \left\{ \begin{array}{ll} \frac{b-a}{\sqrt{12}}, & a < b \le 0 \mbox{ or } 0 \le a < b, \\ \frac{b-a}{\sqrt{12}}\sqrt{1- \frac{12a^2b^2}{(b-a)^4}}, & a <0< b \end{array} \right. \\ &=& \frac{\sqrt{ (b-a)^4 - 6a^2b^2 D_{ab}}} {\sqrt{12} (b-a)} . {} \end{array} \end{aligned} $$
(3.A3.28)
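As a cross-check of the three cases in (3.A3.26) and (3.A3.27) and of the closed form (3.A3.28), the half means can be computed numerically from the uniform density. The sketch below (the helper `half_means` is ours, not from the text) uses the midpoint rule:

```python
import math

def half_means(a, b, n=200000):
    # m_minus = E{X 1(X<0)} and m_plus = E{X 1(X>0)} for X ~ U(a, b),
    # computed by the midpoint rule on the density 1/(b-a)
    h = (b - a) / n
    m_minus = m_plus = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        if x < 0.0:
            m_minus += x
        else:
            m_plus += x
    return m_minus * h / (b - a), m_plus * h / (b - a)

sgn = lambda x: (x > 0) - (x < 0)
for a, b in [(-3.0, -1.0), (-2.0, 5.0), (1.0, 4.0)]:  # the three cases
    mm, mp = half_means(a, b)
    if b <= 0.0:
        em, ep = (a + b) / 2.0, 0.0
    elif a >= 0.0:
        em, ep = 0.0, (a + b) / 2.0
    else:
        em, ep = -a * a / (2.0 * (b - a)), b * b / (2.0 * (b - a))
    assert abs(mm - em) < 1e-6 and abs(mp - ep) < 1e-6  # (3.A3.26), (3.A3.27)
    # (3.A3.28): sqrt(sigma^2 + 4 m+ m-) in closed form via D_ab
    D = sgn(b) - sgn(a)
    lhs = math.sqrt((b - a) ** 2 / 12.0 + 4.0 * mp * mm)
    rhs = math.sqrt((b - a) ** 4 - 6.0 * a * a * b * b * D) / (math.sqrt(12.0) * (b - a))
    assert abs(lhs - rhs) < 1e-5
```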

We also get the integral \( \int _{-\infty }^{\infty } F(-x) f(x) dx= \frac {1}{b-a} \int _{a}^{b} F(-x) dx\) as

(3.A3.29)

the integral \( \int _{-\infty }^{\infty } x \left \{ 2F(x)-1 \right \} f(x) dx = \frac {1}{(b-a)^2}\int _a^b x (2x-a-b) dx = \frac {1}{(b-a)^2} \Big \{ \frac {2}{3} \left ( b^3 - a^3 \right ) - \frac {1}{2} (b+a) \left ( b^2 - a^2 \right ) \Big \}\) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} x \left\{ 2F(x)-1 \right\} f(x) dx & =&\displaystyle \frac{b-a}{6} , {} \end{array} \end{aligned} $$
(3.A3.30)

and the integral \( \int _{-\infty }^{\infty } |x| \left \{ 2F(x)-1 \right \} f(x) dx= \frac {1}{(b-a)^2}\int _{a}^{b} |x| (2x-a-b) dx\) as

(3.A3.31)

Note that the third line after the first equality in (3.A3.31) is equal to the fifth line. Specifically, \(-\frac {b-a}{6} \left \{ 1+ \frac {2b^2(3a-b)}{(b-a)^3} \right \} = \frac {b-a}{6} \frac {b^3 -3 ab^2 -3 a^2b + a^3}{(b-a)^3} = \frac {(b-a)}{6}(b+a) \frac {b^2 -4 ab + a^2}{(b-a)^3} =\frac {b-a}{6} \left \{ 1+\frac {2a^2(a-3b)}{(b-a)^3} \right \}\).

Next, when \(a < b \le 0\) or \(0 \le a < b\), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} G_F (x) \ = \ \left\{ \begin{array}{ll} -1 , & x \le S_{ab}, \\ \frac{x+B_{ab}}{b-a}, & S_{ab} \le x \le -B_{ab}, \\ 0 , & -B_{ab} \le x \le B_{ab}, \\ \frac{x-B_{ab}}{b-a}, & B_{ab} \le x \le -S_{ab}, \\ 1 , & x \ge -S_{ab} . \\ \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.32)

Similarly, when \(a \le -b < 0\) or \(-b \le a <0\), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} G_F (x) \ = \ \left\{ \begin{array}{ll} -1 , & x \le S_{ab}, \\ \frac{x+B_{ab}}{b-a}, & S_{ab} \le x \le B_{ab}, \\ \frac{2x}{b-a} , & B_{ab} \le x \le -B_{ab}, \\ \frac{x-B_{ab}}{b-a}, & -B_{ab} \le x \le -S_{ab}, \\ 1 , & x \ge -S_{ab}. \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.33)

Using (3.A3.32) and (3.A3.33), the integral \( \int _{-\infty }^{\infty } x \left \{ 2G_F(|x|)-1 \right \} f(x) dx = \int _{a}^{b} x \left \{ 2G_F(|x|)-1 \right \} f(x) dx\) can be obtained as

(3.A3.34)

the integral \( \int _{-\infty }^{\infty } |x| \left \{ 2G_F(|x|)-1 \right \} f(x) dx = \int _{a}^{b} |x| \left \{ 2G_F(|x|)-1 \right \} f(x) dx\) as

(3.A3.35)

and the integral \( \int _{-\infty }^{\infty } G_F(|x|)F(x) f(x) dx = \int _{a}^{b} G_F(|x|)F(x) f(x) dx\) as

(3.A3.36)

Note that \(4 B_{ab}^3 D_{ab} = \left ( B_{ab} D_{ab} \right )^3\) in (3.A3.34) because \(D_{ab} \in \{0, 1, 2\}\) and \(B_{ab} =0\) when \(D_{ab} =1\). In addition, we have \(8 B_{ab}^3 = \left ( B_{ab} D_{ab} \right )^3\) for \(a < 0 < b\).

3.1.4 Fisher Information

Definition 3.A3.1 (Fisher Information)

When the function \(f(x)\) is absolutely continuous and

$$\displaystyle \begin{aligned} \begin{array}{rcl} I_1 ( f) \ = \ \int_{-\infty}^{\infty} \left\{ \frac{ f^\prime (x)}{f(x)} \right\}^2 f(x) dx {} \end{array} \end{aligned} $$
(3.A3.37)

is finite, the quantity \(I_1 (f)\) is called the Fisher information of \(f(x)\). When the function \(f(x)\) is not absolutely continuous, we let \(I_1 ( f) = \infty \). \({\heartsuit }\)

Example 3.A3.2

For the uniform pdf \(f_U\), the Fisher information is \(I_1 \left ( f_U \right )= 0\). For the exponential pdf \(f_E (x) = \lambda e^{- \lambda x } u(x)\), the Fisher information is \(I_1 \left ( f_E \right ) =\lambda ^2\). In addition, for the double-exponential pdf \(f_{DE}(x) = \frac {\lambda }{2} e^{- \lambda | x | }\), the Fisher information is \(I_1 \left ( f_{DE} \right ) = \int _{0}^{\infty } (-\lambda )^2 \frac {\lambda }{2} e^{-\lambda x} dx + \int _{-\infty }^{0} \lambda ^2 \frac {\lambda }{2} e^{\lambda x} dx = \lambda ^2\). \({\diamondsuit }\)
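The exponential and double-exponential values can be confirmed by direct numerical integration of (3.A3.37); the following sketch (the helper `fisher_info` is ours, illustrative only) evaluates the integral by the midpoint rule:

```python
import math

def fisher_info(f, fprime, a, b, n=200000):
    # I_1(f) = int (f'(x)/f(x))^2 f(x) dx by the midpoint rule over (a, b)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        fx = f(x)
        if fx > 0.0:
            total += (fprime(x) ** 2) / fx
    return total * h

lam = 1.5
# exponential: f(x) = lam*exp(-lam*x) on x > 0, so I_1 = lam^2
I_exp = fisher_info(lambda x: lam * math.exp(-lam * x),
                    lambda x: -lam * lam * math.exp(-lam * x), 0.0, 30.0)
assert abs(I_exp - lam * lam) < 1e-3
# double exponential: f(x) = (lam/2)*exp(-lam*|x|), by symmetry I_1 = lam^2
I_de = 2.0 * fisher_info(lambda x: 0.5 * lam * math.exp(-lam * x),
                         lambda x: -0.5 * lam * lam * math.exp(-lam * x), 0.0, 30.0)
assert abs(I_de - lam * lam) < 1e-3
```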

Definition 3.A3.2 (Strongly Unimodal Function)

Assume an open interval \((a,b)\) such that \(-\infty \leq a < b \leq \infty \). A function \(f(x)\) is called strongly unimodal in the interval \((a,b)\) if \(- \log f(x)\) is a convex function and \(\int _a^b f(x) dx =1\). \({\heartsuit }\)

Example 3.A3.3

The pdf \(f(x) = \exp \left (x-e^x \right )\) and the normal, double-exponential, exponential, logistic, and uniform pdf’s are all strongly unimodal functions. \({\diamondsuit }\)

Example 3.A3.4

The Cauchy pdf is not a strongly unimodal function. \({\diamondsuit }\)

Theorem 3.A3.2

The convolution of two strongly unimodal pdf’s results in a strongly unimodal pdf.

A strongly unimodal function is absolutely continuous, but the converse does not necessarily hold true. When a pdf \(f(x)\) is strongly unimodal in an interval, it is absolutely continuous in the interval and the locally optimum nonlinearity \(g_{LO} (x) = - \frac {f^{\prime }(x)}{f(x)}\) discussed in (3.2.1) is a non-decreasing function.

Theorem 3.A3.3

The Fisher information\(I_1 (f)\)can be written as

$$\displaystyle \begin{aligned} \begin{array}{rcl} I_1 (f) \ = \ \int_{0}^{1} \tilde {A}_{11} ^2 (v) dv \end{array} \end{aligned} $$
(3.A3.38)

if it is finite.

3.1.5 Examples of Results in the \(t(k)\) Distribution

From the pdf \(f_T (x) = \frac {\varGamma \left ( \frac {k+1}{2} \right )}{ \sqrt {k \pi } \varGamma \left (\frac {k}{2}\right ) } \left ( 1 + \frac { x^2} {k}\right )^{-\frac {k+1}{2} }\) of the \(t(k)\) distribution introduced in (3.2.20), we can easily get [20] the locally optimum nonlinearities

$$\displaystyle \begin{aligned} \begin{array}{rcl} g_{LO} (x) \ = \ \frac{(k+1)x}{x^2+k} \end{array} \end{aligned} $$
(3.A3.39)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} h_{LO} (x) \ = \ \frac{(k+1)\left\{ (k+2)x^2 -k \right\}}{\left(x^2+k \right)^2} \end{array} \end{aligned} $$
(3.A3.40)

and the Fisher information

$$\displaystyle \begin{aligned} \begin{array}{rcl} I_1 \left( f_T \right) \ = \ \frac{k+1}{k+3}. \end{array} \end{aligned} $$
(3.A3.41)

We also have \( \int _{-\infty }^{\infty } f_T^2 (x) dx = \mathsf {E} \left \{f_T (X) \right \} = \frac {2\varGamma ^2 \left ( \frac {k+1}{2} \right )}{ k \pi \varGamma ^2 \left (\frac {k}{2}\right ) } \int _{0}^{\infty } \left ( 1 + \frac { x^2} {k}\right )^{-(k+1)} dx = \frac {2\varGamma ^2 \left ( \frac {k+1}{2} \right )}{ \sqrt {k} \pi \varGamma ^2 \left (\frac {k}{2}\right ) } \int _{0}^{\frac {\pi }{2}} \cos ^{2k} \theta d \theta \), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} f_T^2 (x) dx \ = \ \frac{(2k-1)!!\varGamma^2 \left( \frac{k+1}{2}\right)} {\sqrt{k}\,(2k)!!\varGamma^2 \left( \frac{k}{2}\right)} \end{array} \end{aligned} $$
(3.A3.42)

using [10] \(\int _{0}^{\frac {\pi }{2}} \cos ^{2k} \theta d \theta = \frac {(2k-1)!!}{(2k)!!} \frac {\pi }{2}\), where \((2k-1) !! = (2k-1) (2k-3) \times \cdots \times 3 \times 1\) as introduced in (1.A2.18) and \((2k)!! = 2 \times 4 \times \cdots \times (2k)\) for \(k \in \mathbb {J}_{1, \infty }\).
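Formula (3.A3.42) can be checked against direct numerical integration of \(f_T^2\) for the first few degrees of freedom; a sketch (the helpers `integrate` and `dfact` are ours, not from the text):

```python
import math

def integrate(g, a, b, n=100000):
    # midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def dfact(m):
    # double factorial m!!
    p = 1
    while m > 1:
        p *= m
        m -= 2
    return p

for k in (1, 2, 3, 4):
    c = math.gamma((k + 1) / 2.0) / (math.sqrt(k * math.pi) * math.gamma(k / 2.0))
    fT = lambda x, c=c, k=k: c * (1.0 + x * x / k) ** (-(k + 1) / 2.0)  # t(k) pdf
    num = integrate(lambda x: fT(x) ** 2, -200.0, 200.0)
    rhs = dfact(2 * k - 1) * math.gamma((k + 1) / 2.0) ** 2 / (
        math.sqrt(k) * dfact(2 * k) * math.gamma(k / 2.0) ** 2)
    assert abs(num - rhs) < 1e-4  # (3.A3.42)
```

For \(k=1\) this reproduces the Cauchy value \(\int f_C^2 = \frac{1}{2\pi}\).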

Noting that \(F_T (x) = \int _{-\infty }^x \frac {\varGamma \left ( \frac {k+1}{2} \right )}{ \sqrt {k \pi } \varGamma \left (\frac {k}{2}\right ) } \left ( 1 + \frac { y^2} {k}\right )^{-\frac {k+1}{2} } dy = \frac {\varGamma \left ( \frac {k+1} {2} \right )} {\sqrt {\pi } \varGamma \left (\frac {k}{2}\right ) } \int _{-\frac {\pi }{2}}^{\tan ^{-1}\frac {x}{\sqrt {k}} } \cos ^{k-1} y \, dy\), the cdf of the \(t(k)\) distribution can be obtained as

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_T (x) \ = \ \left\{ \begin{array}{ll} \frac{1}{2} + \frac{1}{\pi} \tan^{-1}x , & k=1 , \\ \frac{1}{2} + \frac{x}{2\sqrt{x^2 +2}} , & k=2 , \\ \frac{1}{2} + \frac{\sqrt{3} x}{\pi \left(x^2 +3\right)} + \frac{1}{\pi}\tan^{-1} \frac{x}{\sqrt{3}} , & k=3 \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.43)

for \(k=1, 2, 3\), and subsequently \(G_{F_T} (x) = 2F_T (x)-1\) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} G_{F_T} (x) \ = \ \left\{ \begin{array}{ll} \frac{2}{\pi} \tan^{-1}x , & k=1 , \\ \frac{x}{\sqrt{x^2 +2}} , & k=2 , \\ \frac{2\sqrt{3} x}{\pi \left(x^2 +3\right)} + \frac{2}{\pi}\tan^{-1} \frac{x}{\sqrt{3}} , & k=3 . \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.44)

We also have the inverse cdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_T^{-1} (w) &=& \left\{ \begin{array}{ll} \tan \left\{ \frac{1}{2} (2w - 1) \pi \right\} , & k=1 , \\ \frac{2w-1}{\sqrt{2w(1-w)}} , & k=2 \end{array} \right. \\ &=& \left\{ \begin{array}{ll} -\cot (w \pi ), & k=1 , \\ \frac{2w-1}{\sqrt{2w(1-w)}} , & k=2 \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.45)

and the inverse

$$\displaystyle \begin{aligned} \begin{array}{rcl} G_{F_T}^{-1} (v) \ = \ \left\{ \begin{array}{ll} \tan \left( \frac{\pi}{2} v \right) , & k=1 , \\ \frac{\sqrt{2}v}{\sqrt{1-v^2}} , & k=2 \end{array} \right. {} \end{array} \end{aligned} $$
(3.A3.46)

of \(G_{F_T}\). Figure 3.14 shows the cdf \(F_T(x)\) of the \(t(k)\) distribution shown in (3.A3.43).
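The expressions (3.A3.43) and (3.A3.45) can be verified to be mutual inverses for \(k=1, 2\), and the two \(k=1\) forms in (3.A3.45) to agree, with a short numerical sketch (function names are ours, not from the text):

```python
import math

def F_T(x, k):
    # cdf (3.A3.43) for k = 1, 2
    if k == 1:
        return 0.5 + math.atan(x) / math.pi
    return 0.5 + x / (2.0 * math.sqrt(x * x + 2.0))

def F_T_inv(w, k):
    # inverse cdf (3.A3.45)
    if k == 1:
        return -1.0 / math.tan(w * math.pi)
    return (2.0 * w - 1.0) / math.sqrt(2.0 * w * (1.0 - w))

for k in (1, 2):
    for i in range(1, 20):
        w = i / 20.0
        assert abs(F_T(F_T_inv(w, k), k) - w) < 1e-9
# the two k = 1 forms in (3.A3.45) agree: tan((2w-1)pi/2) = -cot(w*pi)
for i in range(1, 20):
    w = i / 20.0
    assert abs(math.tan((2.0 * w - 1.0) * math.pi / 2.0)
               + 1.0 / math.tan(w * math.pi)) < 1e-9
```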

Fig. 3.14

The cdf \(F_T (x)\) of the \(t(k)\) distribution for \(k = 1, 2, 3\)

3.1.6 Examples of Results in Generalized Normal and Cauchy Distributions

The pdf \(f_{GG} (x) = \frac {k}{2 A_G(k) \varGamma \left ( \frac {1}{k} \right )} \exp \left \{ - \left ( \frac {|x|}{A_G (k)}\right )^k \right \}\), introduced in (3.3.68), of the generalized normal distribution is an even symmetric and unimodal pdf parameterized by two constants, the rate \(k >0\) of exponential decay of the pdf and the variance \(\sigma _G^2\). The generalized normal pdf is useful in representing various pdf’s. For example, the generalized normal pdf is clearly a normal pdf when \(k = 2\), and is the double-exponential pdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{DE} ( x ) = \frac{1}{2 \sigma} \exp\left( - \frac{|x|}{\sigma} \right) {} \end{array} \end{aligned} $$
(3.A3.47)

when \(k = 1\), where \(\sigma = \frac {\sigma _G}{\sqrt {2}}\).

Recollecting [22] \(\lim \limits _{x \to 0} x \varGamma (x) = \lim \limits _{x \to 0}\varGamma (x+1)\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim\limits_{x \to 0} x \varGamma(x) & =&\displaystyle 1 , {} \end{array} \end{aligned} $$
(3.A3.48)

we get \(\lim \limits _{k \to \infty } \frac {3}{k} \varGamma \left ( \frac {3}{k} \right ) =1\) and \(\lim \limits _{k \to \infty } \frac {1}{k} \varGamma \left ( \frac {1}{k} \right ) =1\). Therefore, \(\lim \limits _{k \to \infty } A_G (k) = \lim \limits _{k \to \infty } \sqrt { \frac {\sigma _G^2 \varGamma \left ( \frac {1}{k} \right )} {\varGamma \left ( \frac {3}{k} \right )}}\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim\limits_{k \to \infty} A_G (k) \ = \ \sqrt{3} \sigma_G {} \end{array} \end{aligned} $$
(3.A3.49)

and \(\lim \limits _{k \to \infty } \frac {k}{2A_G (k)\varGamma \left (\frac {1}{k}\right )} = \frac {1}{2\sqrt {3} \sigma _G}\). Then, for \(k \to \infty \), the limit of the exponential function in the generalized normal pdf \(f_{GG} (x)\) is 1 when \(|x| \le A_G(k)\), or equivalently when \(|x| \le \sqrt {3} \sigma _G\), and 0 when \(|x| > A_G(k)\). Therefore, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim\limits_{k \to \infty} f_{GG} (x) \ = \ \frac{1}{2\sqrt{3} \sigma_G} u \left( \sqrt{3} \sigma_G - |x| \right) . \end{array} \end{aligned} $$
(3.A3.50)

In other words, the limit of the generalized normal pdf as \(k \to \infty \) is a uniform pdf.
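The limit (3.A3.49) and the normal and double-exponential special cases can be checked numerically from \(A_G(k) = \sqrt{\sigma_G^2 \varGamma\left(\frac{1}{k}\right) / \varGamma\left(\frac{3}{k}\right)}\); a sketch with \(\sigma_G = 1\) (illustrative only):

```python
import math

def A_G(k, sigma_G=1.0):
    # A_G(k) = sqrt(sigma_G^2 * Gamma(1/k) / Gamma(3/k))
    return math.sqrt(sigma_G * sigma_G * math.gamma(1.0 / k) / math.gamma(3.0 / k))

# k = 2 (normal): A_G = sigma_G*sqrt(2); k = 1 (double exponential): A_G = sigma_G/sqrt(2)
assert abs(A_G(2.0) - math.sqrt(2.0)) < 1e-12
assert abs(A_G(1.0) - 1.0 / math.sqrt(2.0)) < 1e-12
# (3.A3.49): A_G(k) -> sqrt(3)*sigma_G as k -> infinity
assert abs(A_G(1.0e6) - math.sqrt(3.0)) < 1e-4
```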

We can obtain [20] the locally optimum nonlinearities

$$\displaystyle \begin{aligned} \begin{array}{rcl} g_{LO} ( x ) & =&\displaystyle \frac{k|x|{}^{k-1} \mathrm{sgn} ( x ) } {A_G^k (k)} {} \end{array} \end{aligned} $$
(3.A3.51)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} h_{LO} ( x ) & =&\displaystyle \frac{k|x|{}^{k-2} }{A_G^{2k} (k)} \left\{k|x|{}^k - (k-1)A_G^k(k) \right\} \end{array} \end{aligned} $$
(3.A3.52)

and the Fisher information

$$\displaystyle \begin{aligned} \begin{array}{rcl} I_1 \left(f_{GG}\right) & =&\displaystyle \frac{k^2 \varGamma \left( \frac{3}{k} \right) \varGamma \left( 2- \frac{1}{k} \right) } { \sigma_G^2 \varGamma^2 \left( \frac{1}{k} \right)} \end{array} \end{aligned} $$
(3.A3.53)

for \(k > \frac {1}{2} \). Specifically, we have \(g_{LO} (x) = \frac {x}{\sigma _G^2}\), \(h_{LO} (x) = \frac {1}{\sigma _G^4} \left (x^2 -\sigma _G^2 \right )\), and \(I_1 \left (f_{GG} \right ) = \frac {1}{\sigma _G^2}\) when \(k=2\); and we get \(g_{LO} (x) = \frac {1}{\sigma }\mathrm {sgn} (x)\), \(h_{LO} (x) = \frac {1}{\sigma ^2} \{ 1-2 \sigma \delta (x) \}\), and \(I_1 \left (f_{DE} \right ) = \frac {1}{\sigma ^2}\) when \(k \to 1\). Here, we have used \(\lim \limits _{k \to 1} (k-1) |x|{ }^{k-2} = 2\delta (x)\) as discussed in Exercise 1.16.

The pdf \(f_{GC} (x) = \frac {\tilde {B}_C(k,v)} {\tilde {D}_C^{v+\frac {1}{k}} (x) }\), shown in (3.3.69), of the generalized Cauchy distribution has an algebraic tail behavior: the tails of the pdf decay in proportion to \(|x|{ }^{-(kv+1)}\), which is slower than the exponential decay, when \(|x|\) is large. If \(k = 2\) and 2v is an integer, the generalized Cauchy pdf is a scaled \(t(2v)\) pdf. The generalized Cauchy pdf becomes the Cauchy pdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_C ( x ) & =&\displaystyle \frac{\sigma_G} { \pi \left(x^2 + \sigma_G^2 \right)} \end{array} \end{aligned} $$
(3.A3.54)

if \(k = 2\) and \(v = \frac {1}{2} \).

We have \(\lim \limits _{v \to \infty }\tilde {D}_C^{v+ \frac {1}{k}} (x) = \lim \limits _{v \to \infty } \bigg [ 1 + \frac {1}{v} \left \{ \frac {|x| } {A_G (k)} \right \}^k \bigg ]^{v+ \frac {1}{k}}\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim\limits_{v \to \infty}\tilde{D}_C^{v+ \frac{1}{k}} (x) & =&\displaystyle \exp \left[ \left\{ \frac{|x|}{ A_G (k)}\right\}^k\right] \end{array} \end{aligned} $$
(3.A3.55)

when the parameters \(\sigma _G^2\) and k are fixed. In addition, \(\lim \limits _{v \to \infty } \tilde {B}_C (k,v) = \frac {k} {2A_G(k)\varGamma \left ( \frac {1}{k} \right ) } \lim \limits _{v \to \infty } \frac {\varGamma \left ( v+\frac {1}{k} \right )} {v^{\frac {1}{k}}\varGamma (v) }\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim\limits_{v \to \infty} \tilde{B}_C (k,v) \ = \ \frac{k} {2A_G(k)\varGamma \left( \frac{1}{k} \right) } \end{array} \end{aligned} $$
(3.A3.56)

because \(\lim \limits _{v \to \infty } \frac {\varGamma \left ( v+\frac {1}{k} \right )} {v^{\frac {1}{k}}\varGamma (v)}=1\) from [22] \(\lim \limits _{n \to \infty } (cn)^{b-a} \frac {\varGamma ( cn+a)} {\varGamma (cn+b)}= \lim \limits _{n \to \infty } (cn)^{b-a} ( cn+a -1)( cn+a -2) \cdots ( cn+b ) = \lim \limits _{n \to \infty } (cn)^{b-a} ( cn )^ {a-b} \), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim\limits_{n \to \infty} (cn)^{b-a} \frac{\varGamma( cn+a)} {\varGamma(cn+b)} & =&\displaystyle 1. {} \end{array} \end{aligned} $$
(3.A3.57)

Thus, for \(v \to \infty \), the generalized Cauchy pdf converges to the generalized normal pdf. When \(k=2\) and \(v \to \infty \), the generalized Cauchy pdf is a normal pdf.

Next, we get \(\lim \limits _{k \to \infty } \tilde {B}_C (k,v) = \lim \limits _{k \to \infty } \frac { \varGamma (v)}{2 A_G(k)\varGamma (v) \frac {1}{k}\varGamma \left (\frac {1}{k}\right )}\), i.e.,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \lim\limits_{k \to \infty} \tilde{B}_C (k,v) \ = \ \frac{1}{2 \sqrt{3} \sigma_G} \end{array} \end{aligned} $$
(3.A3.58)

when v is fixed, using (3.A3.48) and (3.A3.49). In addition, \(\lim \limits _{k \to \infty } \tilde {D}_C(x) = 1 \) when \(|x| < \sqrt {3} \sigma _G\) and \(\lim \limits _{k \to \infty } \tilde {D}_C(x) = \infty \) when \(|x| > \sqrt {3} \sigma _G\). In short, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{GC} (x) \ \to \ \frac{1}{2\sqrt{3} \sigma_G} \, u \left( \sqrt{3} \sigma_G - |x| \right) , \end{array} \end{aligned} $$
(3.A3.59)

a uniform pdf, when v is fixed and \(k \to \infty \).

For the generalized Cauchy pdf, we have [20] the variance \(\sigma _{GC}^2 = \sigma _G^2 v^{\frac {2}{k}} \frac {\varGamma \left (v- \frac {2}{k}\right )} {\varGamma (v)}\) for \(kv >2\), the locally optimum nonlinearities

$$\displaystyle \begin{aligned} \begin{array}{rcl} g_{LO} ( x ) & =&\displaystyle (kv+1) \frac{|x|{}^{k-1} \mathrm{sgn} ( x ) }{vA_G^k (k) + |x|{}^k} {} \end{array} \end{aligned} $$
(3.A3.60)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} h_{LO} (x) & =&\displaystyle \frac{(kv+1)|x|{}^{k-2}} { \{vA_G^k (k)\tilde{D}_C(x) \}^2} \tilde{D}_{LO,k} (x) \end{array} \end{aligned} $$
(3.A3.61)

and the Fisher information

$$\displaystyle \begin{aligned} \begin{array}{rcl} I_1 \left(f_{GC}\right) & =&\displaystyle \frac{ (kv+1)^2 \varGamma \left( \frac{3}{k} \right) \varGamma \left(v+\frac{1}{k} \right) \varGamma \left(v+\frac{2}{k} \right) \varGamma \left( 2- \frac{1}{k} \right) } { \sigma_G^2 v^{\frac{2}{k}} \varGamma^2 \left( \frac{1}{k} \right) \varGamma (v) \varGamma \left(v+2+\frac{1}{k} \right)} \qquad \qquad \quad \end{array} \end{aligned} $$
(3.A3.62)

for \(k > \frac {1}{2} \), where \(\tilde {D}_{LO,k} (x) = (kv+k+1) |x|{ }^k - (k-1)v A_G^k(k) \tilde {D}_C(x)\). For instance, we have \(g_{LO} (x) = \frac {2x}{x^2+ \sigma _G^2}\), \(h_{LO} (x) = \frac {2\left (3x^2 - \sigma _G^2 \right )}{\left (x^2 + \sigma _G^2 \right )^2}\), and \(I_1 \left (f_{GC} \right ) = \frac {1}{2 \sigma _G^2}\) when \(k=2\) and \(v = \frac {1}{2} \).
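As a check of the \(k=2\), \(v=\frac{1}{2}\) special case, the generic nonlinearity (3.A3.60) should reduce to \(\frac{2x}{x^2+\sigma_G^2}\) and the Fisher information integral should return \(\frac{1}{2\sigma_G^2}\); a numerical sketch (function names are ours, midpoint rule, illustrative only):

```python
import math

sigma = 2.0  # sigma_G

def f_C(x):
    # Cauchy pdf (generalized Cauchy with k = 2, v = 1/2)
    return sigma / (math.pi * (x * x + sigma * sigma))

def g_LO(x):
    # locally optimum nonlinearity for the Cauchy pdf
    return 2.0 * x / (x * x + sigma * sigma)

def g_GC(x):
    # (3.A3.60) at k = 2, v = 1/2: (kv+1)|x|sgn(x)/(v A_G^2 + x^2), A_G^2 = 2 sigma^2
    return 2.0 * abs(x) * math.copysign(1.0, x) / (0.5 * 2.0 * sigma * sigma + x * x)

assert abs(g_GC(1.7) - g_LO(1.7)) < 1e-12  # the generic form reduces as claimed

def integrate(g, a, b, n=200000):
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# I_1(f_C) = int g_LO(x)^2 f_C(x) dx = 1/(2*sigma^2)
I1 = integrate(lambda x: g_LO(x) ** 2 * f_C(x), -5000.0, 5000.0)
assert abs(I1 - 1.0 / (2.0 * sigma * sigma)) < 1e-4
```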

Exercises

Exercise 3.1

Consider a continuous i.i.d. random vector \(\boldsymbol {X} = \left ( X_1 , X_2 \right )\) with the marginal pdf f. Let \(\boldsymbol {Q}\) and \(\boldsymbol {Z}\) be the magnitude rank and sign vectors, respectively, of \(\boldsymbol {X}\). When the pdf \(f(x)\) is an even symmetric function of x, obtain the value of \(\int _{B} f(x) f(y) dx dy\), where \(B= \left \{ (x,y) | \boldsymbol {Q} = \left ( q_1 , q_2 \right ) , \, \boldsymbol {Z} = \left ( z_1 , z_2 \right ) \right \}\). The value will be the same for any \(\left ( q_1 , q_2 \right ) \in \{(1,2), (2,1) \}\) and \(\left ( z_1 , z_2 \right ) \in \{(-1, -1), (-1, 1), (1, -1), (1, 1)\}\).

Exercise 3.2

For a continuous i.i.d. random vector \( \left ( X_1 , X_2 , \, \cdots , X_n \right )\), obtain the expected values \(\mathsf {E} \left \{ R_i R_j R_k \right \}\) and \(\mathsf {E} \left \{ R_i R_j R_k R_l \right \}\), where \(R_i\) is the rank of \(X_i\).

Exercise 3.3

Obtain \(\sum \limits _{i=1}^n \left ( i- \frac {n+1}{2} \right )^4\).

Exercise 3.4

Obtain \( \int _{-\infty }^{\infty } F^{r-1}(x) \{1-F(x)\}^{n-r} f(x)dx\) and show

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} G_F^{q-1}(y) \left\{ 1-G_F(y) \right\}^{n-q}g_F(y)u(y)dy \ = \ \frac{1}{n\, {{}_{n-1}\mbox{C}}_{q-1}} {} \end{array} \end{aligned} $$
(3.E.1)

for a cdf F and the function \(G_F(x) = F(x) -F(-x)\) defined in (1.3.18).

Exercise 3.5

Show

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} F(-x)f(x) dx & =&\displaystyle \frac{1}{2} , \end{array} \end{aligned} $$
(3.E.2)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{\infty} G_F(x)f(x) dx & =&\displaystyle \frac{1}{4}, \end{array} \end{aligned} $$
(3.E.3)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} G_F(|x|) F(x) f(x) dx & =&\displaystyle \frac{1}{4} {} \end{array} \end{aligned} $$
(3.E.4)

when the pdf \(f(x) = \frac {d}{dx} F(x)\) is an even symmetric function of x.

Exercise 3.6

For a cdf F, pdf \(f(x) = \frac {d}{dx}F(x)\), and the function \(G_F(x)= F(x) -F(-x)\) defined in (1.3.18), show the following results.

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} F(x) f(x) dx &=& \frac{1}{2} .{} \end{array} \end{aligned} $$
(3.E.5)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} F(-x) f(x) dx &=&\left\{ \begin{array}{ll} 0, & F(0)= 0,\\ 1, & F(0)= 1. \end{array} \right. \end{array} \end{aligned} $$
(3.E.6)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{\infty} \{ F(x) f(-x) - F(-x) f(x) \} dx &=& F^2(0) . \end{array} \end{aligned} $$
(3.E.7)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{0}^{\infty} F(x)F(-x)\{f(x)-f(-x)\}dx &\leq& \frac{1}{3} \left\{1-2F^3(0) \right\} . \end{array} \end{aligned} $$
(3.E.8)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} G_F(|x|) f(x) dx &=& \frac{1}{2} . {} \end{array} \end{aligned} $$
(3.E.9)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{-\infty}^{\infty} G_F(|x|) F(x) f(x) dx &=&\left\{ \begin{array}{ll} \frac{1}{3}, & F(0)= 0,\\ \\ \frac{1}{6}, & F(0)= 1. \end{array} \right. \end{array} \end{aligned} $$
(3.E.10)

Exercise 3.7

Assume the logistic pdf \(f_L (x) = \frac {e^{-x}}{(1+e^{-x} )^2}\). Obtain the score functions \(b_1\) and \(d_1\). Approximate \(b_1\) and \(d_1\) in terms of the score functions \(a_1\) and \(c_1\), respectively. Obtain the score functions \(a_0\) and \(c_0\). Approximate \(a_0\) and \(c_0\) in terms of \(a_1\) and \(c_1\), respectively.

Exercise 3.8

Assume the double-exponential pdf \(f (x) = \frac {1}{2} e^{-|x|}\). Obtain the score functions \(a_0\), \(c_1\), \(c_0\), \(b_1\), and \(d_1\). (Hint. For \(i=1\), we have \(d_1 (1) = 1- 2^{n+1} n \int _{0}^{\infty } \delta (x) f(x) \{1-F(x) \}^{n-1} dx\). Regarding the lower limit 0 of this integration as \(0^-\) and \(0^+\), we get \(d_1 (1) = 1- 2^{n+1} \frac {n}{2} \frac {1}{2^{n-1}} = 1-2n\) and \(d_1 (1)=1-0 = 1\), respectively: we can put \(d_1 (1) = \frac {1}{2} \{ (1-2n) +1 \} = 1-n\) as in \(f^{\prime } (0) = \frac {1}{2} \left \{ f^{\prime } \left (0^{-} \right ) + f^{\prime } \left (0^{+} \right ) \right \}\) = 0.)

Exercise 3.9

Recollect the pdmf (3.1.53) of \(\left ( X_i, R_i \right )\) for a continuous i.i.d. random vector \(\left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal cdf F and pdf f.

  1. (1)

    Confirm that the pdf of \(X_i\) is \(f(x)\) and that the pmf of \(R_i\) is \(p_{R_i} (r) = \frac {1}{n}\) for \(i, r \in \mathbb {J}_{1,n}\).

  2. (2)

    Show the joint moment \(\mathsf {E} \left \{ X_i R_i \right \} = \int _{-\infty }^{\infty } x \{ 1+ (n-1) F(x)\} f(x)dx\) and the scaled correlation coefficient

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{\rho}_{X_i R_i} & =&\displaystyle \frac{\sqrt{3}}{\sigma_X} \int_{-\infty}^{\infty} x \{2F(x)-1\} f(x) dx. \end{array} \end{aligned} $$
    (3.E.11)

Exercise 3.10

Recollect the pdmf (3.1.55) of \(\left (\left | X_i \right |, R_i\right )\) for a continuous i.i.d. random vector \( \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal cdf F and pdf f.

  1. (1)

    Show that the pdf of \(\left | X_i \right |\) is \(f_{ \left | X_i \right |} (y) = \{f(y) + f(-y) \}u(y)\) and that the pmf of \(R_i\) is \(p_{R_i} (r) = \frac {1}{n}\) for \(i, r \in \mathbb {J}_{1,n}\).

  2. (2)

    Show the joint moment

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathsf{E} \{ | X_i | R_i \} & =&\displaystyle \int_{-\infty}^{\infty} |y| \{ 1+(n-1)F(y) \} f(y) dy {} \end{array} \end{aligned} $$
    (3.E.12)

    and the scaled correlation coefficient

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{\rho}_{\left | X_i \right | R_i} & =&\displaystyle \frac{ \sqrt{3}}{\sigma_{|X|}} \int_{-\infty}^{\infty} |x|\{2F(x)-1\}f(x)dx {} \end{array} \end{aligned} $$
    (3.E.13)

    between the magnitude \(\left | X_i \right |\) and rank \(R_i\) of \(X_i\).

Exercise 3.11

Recollect the pdmf

$$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{f}_{ X_i, Q_i} (x, q) \ = \ \left\{ \begin{array}{ll} \frac{1}{n} \ f_{|X|{}_{[q]}} (|x|) \ \frac{f(x)}{g_F(|x|)}, & q \in \mathbb{J}_{1,n}, \\ 0, & q \notin \mathbb{J}_{1,n} \end{array} \right. {} \end{array} \end{aligned} $$
(3.E.14)

of \(\left (X_i, Q_i\right )\) for a continuous i.i.d. random vector \( \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal cdf F and pdf f as shown in (3.1.57).

(1)

    Based on (3.E.14), confirm that the pdf of \(X_i\) is \(f(x)\) and that the pmf of \(Q_i\) is \(p_{Q_i} (q) = \frac {1}{n}\) for \(i, q \in \mathbb {J}_{1,n}\).

(2)

    Obtain the conditional pdf \(f_{\left . X_i \right | Q_i} (x|q)\) of \(X_i\) when \(Q_i = q\).

Exercise 3.12

For a continuous i.i.d. random vector \(\boldsymbol {X} = \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal cdf F and pdf f, recollect the pdmf

$$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{f}_{\left | X_i \right | , Q_i}(y,q) \ = \ \left\{ \begin{array}{ll} \frac{1}{n} f_{|X|{}_{[q]}}(y), & q \in \mathbb{J}_{1,n}, \\ 0, & q \notin \mathbb{J}_{1,n} \end{array} \right. {} \end{array} \end{aligned} $$
(3.E.15)

of \(\left (\left |X_i\right |, Q_i\right )\) shown in (3.1.61).

(1)

    Based on the pdmf (3.E.15), show that the pdf of \(\left | X_i \right |\) is \(f_{ \left | X_i \right |} (y)= \{f(y) + f(-y) \}u(y)\) and that the pmf of \(Q_i\) is \(p_{Q_i} (q) = \frac {1}{n}\) for \(i, q \in \mathbb {J}_{1,n}\).

(2)

    Confirm the joint moment

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathsf{E} \{\left | X_i \right |Q_i\} \ = \ \int_{-\infty}^{\infty} |y| \left\{1+(n-1)G_F(|y|) \right\}f(y)dy {} \end{array} \end{aligned} $$
    (3.E.16)

    and the scaled correlation coefficient

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{\rho}_{ \left | X_i \right | Q_i} \ = \ \frac{\sqrt{3}}{\sigma_{|X|}} \int_{-\infty}^{\infty} |y|\left\{ 2G_F(|y|)-1 \right\}f(y)dy {} \end{array} \end{aligned} $$
    (3.E.17)

    between the magnitude \(\left | X_i \right |\) and magnitude rank \(Q_i\) of \(X_i\).
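Exercise 3.12 admits the same kind of numerical cross-check. The sketch below assumes a standard normal marginal, for which \(G_F(|y|) = \mathrm{erf}\left(|y|/\sqrt{2}\right)\), and compares a Monte Carlo estimate of the correlation between \(\left|X_i\right|\) and \(Q_i\) with a grid evaluation of (3.E.17); the sample sizes and grid bounds are arbitrary choices.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(1)
n, trials = 2000, 200

# Empirical correlation between |X_i| and its magnitude rank Q_i within
# each i.i.d. N(0,1) sample, averaged over independent trials.
corrs = []
for _ in range(trials):
    ax = np.abs(rng.standard_normal(n))
    q = ax.argsort().argsort() + 1         # Q_1, ..., Q_n
    corrs.append(np.corrcoef(ax, q)[0, 1])
emp = np.mean(corrs)

# Grid evaluation of (3.E.17); for N(0,1), G_F(|y|) = erf(|y| / sqrt(2)).
y = np.linspace(0.0, 10.0, 200_001)
dy = y[1] - y[0]
f = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)     # N(0,1) pdf on y >= 0
G = erf(y / np.sqrt(2))
m = 2 * np.sum(y * f) * dy                     # E|X|
v = 2 * np.sum(y**2 * f) * dy - m**2           # Var|X|
theory = np.sqrt(3 / v) * 2 * np.sum(y * (2 * G - 1) * f) * dy
print(round(emp, 3), round(theory, 3))
```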

Exercise 3.13

For a continuous i.i.d. random vector \( \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal cdf F and pdf f, consider the joint pmf

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{Z_i , R_i} (z,r) \ = \ \left\{ \begin{array}{ll} \frac{1}{n} \int_{-\infty}^{\infty} f_{X_{[r]} }(x)u\left( z x\right)dx, \\ \qquad r \in \mathbb{J}_{1,n}, z \in \{-1, 1\}, \\ 0, \quad \ \mbox{otherwise} \end{array} \right. {} \end{array} \end{aligned} $$
(3.E.18)

of \(\left ( Z_i, R_i \right )\) as shown in (3.3.16).

(1)

    Based on (3.E.18), show that the pmf of \(Z_i\) is \(p_{Z_i}(z) = \int _{-\infty }^{\infty } f(x)u(zx) dx\) for \(z \in \{-1, 1\}\) and the pmf of \(R_i\) is \(p_{R_i}(r) = \frac {1}{n}\) for \(i, r \in \mathbb {J}_{1,n}\).

(2)

    Show that the covariance between the rank and sign of a random variable is

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathsf{Cov} \left(Z_i , R_i \right) \ = \ (n-1) F(0)\{1-F(0)\} {} \end{array} \end{aligned} $$
    (3.E.19)

    based on the joint pmf \(p_{ Z_i, R_i}\) shown in (3.E.18).
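A quick simulation, assuming a standard normal marginal and an arbitrary \(n = 10\), illustrates (3.E.19): since \(F(0) = \frac{1}{2}\) for a symmetric pdf, the formula predicts \(\mathsf{Cov}\left(Z_i, R_i\right) = (n-1)/4 = 2.25\).

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 10, 200_000

# Monte Carlo estimate of Cov(Z_1, R_1) over i.i.d. N(0,1) samples of
# size n = 10; (3.E.19) with F(0) = 1/2 predicts (n - 1)/4 = 2.25.
x = rng.standard_normal((trials, n))
z = np.sign(x[:, 0])                             # sign Z_1
r = x.argsort(axis=1).argsort(axis=1)[:, 0] + 1  # rank R_1
cov = np.mean(z * r) - np.mean(z) * np.mean(r)
print(round(cov, 2))
```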

Exercise 3.14

For a continuous i.i.d. random vector \(\boldsymbol {X} = \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal cdf F and pdf f, the joint pmf of \(\left ( Z_i, Q_i\right )\) is, as shown in (3.3.32),

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{Z_i, Q_i}(z,q) \ = \ \frac{1}{n} \int_{0}^{\infty} f_{|X|{}_{[q]}} (x) \ \frac{f(zx)}{g_F(x)} dx {} \end{array} \end{aligned} $$
(3.E.20)

for \(z \in \{-1, 1 \}\) and \(q \in \mathbb {J}_{1,n}\), and 0 otherwise.

(1)

    Obtain the conditional probabilities \(\mathsf {P} \big (Q_i =q \big | Z_i = z \big )\) for \(z \in \{-1, 1\}\).

(2)

    Obtain the conditional probabilities \(\mathsf {P} \big (Z_i = z \big | Q_i =q \big )\) for \(z \in \{-1, 1\}\).

(3)

    Based on (3.E.20), show that the pmf of \(Z_i\) is \(p_{Z_i}(z) = \int _{-\infty }^{\infty } f(x)u(zx) dx\) for \(z \in \{-1, 1\}\) and the pmf of \(Q_i\) is \(p_{Q_i}(q) = \frac {1}{n}\) for \(i, q \in \mathbb {J}_{1,n}\).

(4)

As already observed in (3.3.33), \(Q_i\) and \(Z_i\) are not in general independent even when \(\boldsymbol {X} \) is a continuous i.i.d. random vector. Show that \(Q_i\) and \(Z_i\) are independent when the pdf \(f(x)\) is an even symmetric function of x.

(5)

    Confirm the joint moment

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathsf{E} \{Z_i Q_i \} & =&\displaystyle \int_{-\infty}^{\infty} \mathrm{sgn} (x) \left\{1+(n-1) G_F(|x|) \right\} f(x) dx {} \end{array} \end{aligned} $$
    (3.E.21)

    and the scaled correlation coefficient

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{\rho}_{ Z_i Q_i} & =&\displaystyle \frac{\sqrt{3} \left\{ F(0) - \int_{-\infty}^{\infty} F(-x) f(x)dx \right\} } {\sqrt{ F(0) \{ 1-F(0) \} }} {} \end{array} \end{aligned} $$
    (3.E.22)

    for \(F(0) \ne 0, 1\) between the sign \(Z_i\) and magnitude rank \(Q_i\) of \(X_i\).
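Because (3.E.22) vanishes for a symmetric pdf, a numerical illustration needs an asymmetric marginal. The sketch below assumes \(X = E - 1\) with \(E \sim \mathrm{Exp}(1)\), for which \(F(0) = 1 - e^{-1}\) and \(\int_{-\infty}^{\infty} F(-x)f(x)dx = \mathsf{P}\left(X_1 + X_2 < 0\right) = \mathsf{P}\left(E_1 + E_2 < 2\right) = 1 - 3e^{-2}\) (these constants are my own evaluations, offered as a cross-check, not values from the text).

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 2000, 200

# Skewed marginal X = E - 1, E ~ Exp(1): F(0) = 1 - 1/e, and
# int F(-x) f(x) dx = P(X1 + X2 < 0) = P(E1 + E2 < 2) = 1 - 3 e^{-2}.
corrs = []
for _ in range(trials):
    x = rng.exponential(size=n) - 1.0
    z = np.sign(x)
    q = np.abs(x).argsort().argsort() + 1
    corrs.append(np.corrcoef(z, q)[0, 1])
emp = np.mean(corrs)

F0 = 1 - np.exp(-1)
inner = 1 - 3 * np.exp(-2)
theory = np.sqrt(3) * (F0 - inner) / np.sqrt(F0 * (1 - F0))  # (3.E.22)
print(round(emp, 2), round(theory, 2))
```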

Exercise 3.15

For a continuous i.i.d. random vector \(\boldsymbol {X} = \left ( X_1 , X_2 , \, \cdots , X_n \right )\) with the marginal cdf F and pdf f, assume \(F(0) \ne 0, 1\). Recollect the joint pmf

$$\displaystyle \begin{aligned} \begin{array}{rcl} p_{R_i, Q_i} (r,q)\ =\ \frac{1}{n^2} \int_{-\infty}^{\infty} \frac{f_{|X|{}_{[q]}}(x)}{g_F(x)} \left\{f_{X_{[r]}}(x)+f_{X_{[r]}}(-x) \right\} dx {} \end{array} \end{aligned} $$
(3.E.23)

of \(\left ( R_i , Q_i \right )\) shown in (3.3.45).

(1)

    Based on (3.E.23), show that the pmf of \(R_i\) is \(p_{R_i} (r) = \frac {1}{n}\) for \(r \in \mathbb {J}_{1,n}\) and that the pmf of \(Q_i\) is \(p_{Q_i} (q) = \frac {1}{n}\) for \(q \in \mathbb {J}_{1,n}\).

(2)

    Show that the conditional pmf \(p_{Q_i | R_i} (q|r)\) of \(Q_i\) when \(R_i=r\) and the conditional pmf \(p_{R_i | Q_i} (r|q)\) of \(R_i\) when \(Q_i=q\) are

    $$\displaystyle \begin{aligned} \begin{array}{rcl} p_{Q_i | R_i} (q|r) & =&\displaystyle p_{R_i | Q_i} (r|q) \\ & =&\displaystyle \frac{1}{n} \int_{-\infty}^{\infty} \frac{f_{|X|{}_{[q]}}(x)}{g_F(x)} \left\{f_{X_{[r]}}(x)+f_{X_{[r]}}(-x) \right\} dx .\qquad \end{array} \end{aligned} $$
    (3.E.24)
(3)

    When \(0 < F(0)< 1\), show the joint moment

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathsf{E} \left\{ R_i Q_i \right\} & =&\displaystyle \int_{-\infty}^{\infty} \left\{1 +(n-1)G_F(|x|) \right\} \{1 +(n-1)F(x) \} f(x) dx \qquad \qquad {} \end{array} \end{aligned} $$
    (3.E.25)

    and the correlation coefficient

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \rho_{R_i Q_i} & =&\displaystyle \frac{12(n-1)}{n+1} \left\{ \int_{-\infty}^{\infty} G_F(|x|)F(x) f(x) dx -\frac{1}{4} \right\} {} \end{array} \end{aligned} $$
    (3.E.26)

    between the rank \(R_i\) and magnitude rank \(Q_i\) of a random variable \(X_i\).

Exercise 3.16

Obtain the scaled correlation coefficient \(\tilde {\rho }_{X_i R_i}\) when the marginal pdf is the Rayleigh pdf \(f(x) = \frac {x}{\alpha ^2} \exp \left (-\frac {x^2}{2\alpha ^2}\right )u(x)\) for an n-dimensional i.i.d. random vector.
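A grid evaluation sketches the answer numerically (one can check that the scale \(\alpha\) cancels in \(\tilde{\rho}_{X_i R_i}\), so \(\alpha = 1\) is assumed; the grid bounds are arbitrary):

```python
import numpy as np

# Grid evaluation of the scaled correlation coefficient for the
# Rayleigh pdf; alpha cancels out, so alpha = 1 is assumed.
x = np.linspace(0.0, 12.0, 200_001)
dx = x[1] - x[0]
f = x * np.exp(-x**2 / 2)          # Rayleigh pdf, alpha = 1
F = 1 - np.exp(-x**2 / 2)          # Rayleigh cdf

m = np.sum(x * f) * dx
v = np.sum(x**2 * f) * dx - m**2
rho = np.sqrt(3 / v) * np.sum(x * (2 * F - 1) * f) * dx
print(round(rho, 4))
```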

Exercise 3.17

For an i.i.d. random vector of size n with the marginal logistic pdf \(f_L (x) = \frac {be^{-bx}}{(1+e^{-bx} )^2}\), obtain the scaled correlation coefficients \(\tilde {\rho }_{X_i R_i}\) and \(\tilde {\rho }_{\left | X_i \right | Q_i}\).

Exercise 3.18

Assume the marginal pdf and cdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} f_a(x) \ = \ \left\{ \begin{array}{ll} \frac{a}{a+1} , & \quad -1 \leq x < 0, \\ \frac{1}{a(a+1)}, & \quad 0 \leq x < a, \\ 0, & \quad x < -1 \ \mbox{or} \ x \ge a \end{array} \right. {} \end{array} \end{aligned} $$
(3.E.27)

shown in (3.3.70) and

$$\displaystyle \begin{aligned} \begin{array}{rcl} F_a(x) \ = \ \left\{ \begin{array}{ll} 0, & \quad x < -1,\\ \frac{a}{a+1} (x+1) , & \quad -1 \leq x < 0,\\ \frac{x+a^2}{a(a+1)} , & \quad 0 \leq x < a,\\ 1, & \quad x \geq a \end{array} \right. {} \end{array} \end{aligned} $$
(3.E.28)

shown in (3.3.71), respectively, for an i.i.d. random vector \( \left ( X_1 , X_2 , \, \cdots , X_n \right )\).

(1)

    Confirm and sketch

    $$\displaystyle \begin{aligned} \begin{array}{rcl} && G_{F_a}(|y|) \ = \ \left\{ \begin{array}{ll} \frac{a^2+1}{a(a+1)} |y| ,\\ \qquad 0 \leq |y| \leq \min \{a,1\}, \\ \\ \frac{\min^2\{a,1\}}{a(a+1)} |y| + \frac{\max \{a,1\}}{a+1},\\ \qquad \min \{a,1\} \leq |y| \leq \max \{a,1\}, \\ \\ 1, \quad |y| \geq \max \{a,1\}. \end{array} \right. {} \end{array} \end{aligned} $$
    (3.E.29)
(2)

    Evaluate \(F_a(0) - \int _{-\infty }^{\infty } F_a(-x) f_a (x) dx\), \( \int _{-\infty }^{\infty } x \left \{ 2F_a(x)-1 \right \} f_a(x) dx\), \( \int _{-\infty }^{\infty } |y| \left \{2G_{F_a}(|y|)-1 \right \} f_a(y) dy\), \( \int _{-\infty }^{\infty } |x| F_a(x) f_a(x) dx\), \( \int _{-\infty }^{\infty } y G_{F_a}(|y|) f_a(y) dy \), and \( \int _{-\infty }^{\infty } G_{F_a}(|y|)F_a(y) f_a(y) dy -\frac {1}{4}\). Confirm

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{\rho}_{X_i R_i} & =&\displaystyle \frac{2\sqrt{a}}{a+1} {} \end{array} \end{aligned} $$
    (3.E.30)
    $$\displaystyle \begin{aligned} \begin{array}{rcl} & =&\displaystyle \frac{2}{\sqrt{3}} {\tilde \rho}_{Z_i R_i} , {} \end{array} \end{aligned} $$
    (3.E.31)
    $$\displaystyle \begin{aligned} \begin{array}{rcl} {\tilde \rho}_{\left | X_i \right | R_i} & =&\displaystyle \frac{ (a-1)\sqrt{a} }{ (a+1) \sqrt{a^2 - a + 1} } , {} \end{array} \end{aligned} $$
    (3.E.32)
    $$\displaystyle \begin{aligned} \begin{array}{rcl} {\tilde \rho}_{X_i Q_i} & =&\displaystyle \frac{a-1}{a+1}\frac{ (3v+1) \sqrt{v} }{v+1} , {} \end{array} \end{aligned} $$
    (3.E.33)
    $$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{\rho}_{\left | X_i \right | Q_i} & =&\displaystyle \frac{2\sqrt{v}}{v+1} \frac{1 - \frac{v}{2} + \frac{v^2}{2} } {\sqrt{1-v+v^2}} , {} \end{array} \end{aligned} $$
    (3.E.34)
    $$\displaystyle \begin{aligned} \begin{array}{rcl} {\tilde \rho}_{Z_i Q_i} & =&\displaystyle \frac{ (a-1) \sqrt{3v} }{a+1} , {} \end{array} \end{aligned} $$
    (3.E.35)

    and

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{n+1}{n-1} \ \rho_{R_i Q_i} & =&\displaystyle \mathrm{sgn}(1-a) \frac{(v-1)\left(2v^3 + v^2 + 2v -1 \right)}{(v+1)^3} , {} \end{array} \end{aligned} $$
    (3.E.36)

    where \(v = \min \left \{ a, \frac {1}{a} \right \}\).

(3)

    Sketch the scaled correlation coefficients (3.E.30), (3.E.32)–(3.E.36) as a function of a.

Note: The scaled correlation coefficient \({\tilde \rho }_{X_i R_i}\) shown in (3.E.30) remains the same when a is replaced with \(\frac {1}{a}\), attains its maximum value 1 at \(a=1\), and tends to 0 as \(a \to 0\) and \(a \to \infty \).

The scaled correlation coefficient \({\tilde \rho }_{\left | X_i \right | R_i}\) shown in (3.E.32) has the maximum \(\frac {1}{3}\) at \(a=2+\sqrt {3} \approx 3.7321\) and the minimum \(-\frac {1}{3}\) at \(a=2-\sqrt {3} =\frac {1}{2+\sqrt {3}}\approx 0.2679\).

The scaled correlation coefficient \({\tilde \rho }_{X_i Q_i}\) shown in (3.E.33) has the maximum \(\frac {\sqrt {3}}{4} \approx 0.4330\) at \(a=3\) and the minimum \(-\frac {\sqrt {3}}{4} \approx -0.4330\) at \(a=\frac {1}{3}\).

The scaled correlation coefficient \({\tilde \rho }_{\left | X_i \right | Q_i}\) shown in (3.E.34) has the maximum 1 at \(a=1\), and \(\tilde {\rho }_{\left | X_i \right | Q_i} \to 0\) as \(a \to 0\) and \(a \to \infty \). Meanwhile, for \(0 \le v \le 1\), the factor \(\frac {1 - \frac {v}{2} + \frac {v^2}{2} } {\sqrt {1-v+v^2}}\) attains its maximum \(\frac {1 - \frac {1}{4} + \frac {1}{8} } {\sqrt {1- \frac {1}{2} + \frac {1}{4}}} = \frac {7\sqrt {3}}{12} \approx 1.0103\) at \(v=\frac{1}{2}\) and its minimum 1 at \(v=0\) and \(v=1\). Thus, from (3.E.30) and (3.E.34), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{\rho}_{\left | X_i \right | Q_i} & \approx&\displaystyle \frac{2\sqrt{a}}{a+1} \\ & =&\displaystyle \tilde{\rho}_{X_i R_i}. \end{array} \end{aligned} $$
(3.E.37)

The scaled correlation coefficient \({\tilde \rho }_{Z_i Q_i}\) shown in (3.E.35) has the maximum \(\frac {\sqrt {3}}{2}\left (\sqrt {5}-1 \right ) \sqrt {\sqrt {5}-2} \approx 0.5201\) at \(a=\sqrt {5}+2\approx 4.2361\) and the minimum \(-\frac {\sqrt {3}}{2} \left (\sqrt {5}-1 \right )\sqrt {\sqrt {5}-2} \approx -0.5201\) at \(a=\sqrt {5}-2 =\frac {1}{\sqrt {5}+2}\approx 0.2361\).

Finally, the scaled correlation coefficient \({\hat \rho }_{R_i Q_i} = \frac {n+1}{n-1}{\rho }_{R_i Q_i}\) shown in (3.E.36) has the lower extreme value \({\hat \rho }_{R_i Q_i} = \frac {1}{27}\left ( {112\sqrt {7} - 299} \right ) \approx -0.0991\) at \(a=\sqrt {7} -2 \approx 0.6458\), the upper extreme value \({\hat \rho }_{R_i Q_i} \approx 0.0991\) at \(a=\frac {\sqrt {7}+2}{3} =\frac {1}{\sqrt {7}-2} \approx 1.5486\), and the value 0 at \(a \approx 0.3761\), \(a=1\), and \(a \approx 2.6589\). Here, \(a=\sqrt {7} -2\) is the solution to \(\frac {d}{dv} {\hat \rho }_{R_i Q_i}= 0\) for \(0 < a < 1\): in other words, it is the solution between 0 and 1 among the solutions to \(a^4+4a^3-2a^2+4a-3 = \left ( a^4+4a^3-3a^2 \right ) + \left (a^2 +4a-3 \right )= \left ( a^2 +1 \right ) \left ( a^2+4a-3\right )=0\). The value \(a =\frac {1}{6}\left ( {\sqrt [3]{71+6\sqrt {177}} + \sqrt [3]{71-6\sqrt {177}} -1} \right ) = 0.3761\cdots = \frac {1}{2.659\cdots }\) denotes the solution between 0 and 1 among the solutions to \(2a^3 + a^2 + 2a -1=0\).
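Two of the claims in this note can be verified mechanically. The sketch below (grid range and resolution are arbitrary) checks the \(a \leftrightarrow \frac{1}{a}\) symmetry and the maximum of (3.E.30), and confirms that the bracketed factor of (3.E.36) evaluated at \(v = a = \sqrt{7}-2\) equals \(\frac{1}{27}\left(112\sqrt{7}-299\right)\).

```python
import numpy as np

# Check: (3.E.30) is invariant under a -> 1/a and has maximum 1 at a = 1;
# the interior extreme of (3.E.36) at a = sqrt(7) - 2 matches the note.
a = np.linspace(0.01, 100.0, 1_000_001)
rho_xr = 2 * np.sqrt(a) / (a + 1)
assert np.allclose(rho_xr, 2 * np.sqrt(1 / a) / (1 / a + 1))
assert abs(rho_xr.max() - 1.0) < 1e-6

v = np.sqrt(7) - 2                       # v = a for 0 < a < 1
val = (v - 1) * (2 * v**3 + v**2 + 2 * v - 1) / (v + 1)**3
print(round(val, 4), round((112 * np.sqrt(7) - 299) / 27, 4))
```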

Exercise 3.19

The incomplete gamma functionFootnote 4

$$\displaystyle \begin{aligned} \begin{array}{rcl} \gamma (\alpha,x)\ =\ \int_0^x e^{-t}t^{\alpha-1} dt \end{array} \end{aligned} $$
(3.E.38)

and hypergeometric function

$$\displaystyle \begin{aligned} \begin{array}{rcl} {{}_2F}_1 (\alpha, \beta; \gamma; z) & =&\displaystyle \sum_{k=0}^{\infty} \frac{(\alpha)_k (\beta)_k }{ (\gamma)_k} \frac{z^k}{k!} \\ & =&\displaystyle 1+ \frac{\alpha \beta}{\gamma} \frac{z}{1!} + \frac{\alpha (\alpha+1) \beta(\beta+1)} {\gamma(\gamma+1)}\frac{z^2}{2!} + \cdots , {} \end{array} \end{aligned} $$
(3.E.39)

also written as \(F (\alpha , \beta ; \gamma ; z)\), are related as [1, 10]

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle \int_{0}^{\infty} x^{\mu-1}e^{-\beta x} \gamma(\nu, \alpha x) dx \\ & &\displaystyle \quad = \ \frac{\alpha^\nu \varGamma(\mu + \nu)}{ (\alpha + \beta)^{\mu + \nu} \nu} ~ {{}_2F}_1 \left(1, \mu + \nu; \nu+1; \frac{\alpha}{\alpha+\beta} \right) {} \end{array} \end{aligned} $$
(3.E.40)

for \(\mbox{Re}(\alpha +\beta )>0\), \(\mbox{Re}(\beta )>0\), and \(\mbox{Re}(\mu +\nu )>0\). In (3.E.39),

$$\displaystyle \begin{aligned} \begin{array}{rcl} (z)_n &=& \left\{ \begin{array}{ll} 1, & n=0 ,\\ z(z+1) \cdots (z+n-1), & n \in \mathbb{J}_{1,\infty} \end{array} \right. {} \end{array} \end{aligned} $$
(3.E.41)

for a complex number z is called the rising factorial,Footnote 5 and can also be written as \((z)_n = \frac {\varGamma (z+n)}{\varGamma (z)}\).

Based on (3.E.40), show that the scaled correlation coefficient is

$$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{\rho}_{X_i R_i} \ = \ \sqrt{\frac{3}{\alpha}} \left\{ \frac{\varGamma (2\alpha) \, {{}_2 F}_1 \left(1, 2\alpha+1; \alpha+1; \frac{1}{2} \right)} {2^{2\alpha-1} \varGamma^2 (\alpha)} - \alpha \right\} {} \end{array} \end{aligned} $$
(3.E.42)

for the gamma pdf

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} f_G (x)\ =\ \frac{1}{\beta \varGamma(\alpha)} \left( \frac{x}{\beta}\right)^{\alpha -1} \exp \left( - \frac{x}{\beta} \right) u(x) , \end{array} \end{aligned} $$
(3.E.43)

where \(\alpha >0\) and \(\beta >0\). As shown in Fig. 3.11, the scaled correlation coefficient \(\tilde {\rho }_{X_i R_i}\) of (3.E.42) is an increasing function of \(\alpha \), with approximate values \(0.4837\), \(0.6942\), \(0.8261\), \(0.8660\), \(0.9186\), and \(0.9472\) for \(\alpha =0.1\), \(0.3\), \(0.7\), 1, 2, and 4, respectively.
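Expression (3.E.42) can be evaluated directly with SciPy's `gamma` and `hyp2f1` (a sketch, not code from the text) and compared with the values quoted above:

```python
import numpy as np
from scipy.special import gamma, hyp2f1

def rho_xr_gamma(alpha):
    """Scaled correlation coefficient (3.E.42) for the gamma pdf (3.E.43)."""
    term = gamma(2 * alpha) * hyp2f1(1.0, 2 * alpha + 1, alpha + 1, 0.5)
    term /= 2 ** (2 * alpha - 1) * gamma(alpha) ** 2
    return np.sqrt(3 / alpha) * (term - alpha)

for a in (0.1, 0.3, 0.7, 1, 2, 4):
    print(a, round(rho_xr_gamma(a), 4))
```

For \(\alpha = 1\) (the exponential pdf) the hypergeometric series sums to 3, so the value reduces to \(\sqrt{3}/2 \approx 0.8660\), matching the list above.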

Exercise 3.20

Obtain the Fisher information \(I_1 \left ( f_C \right )\) of the Cauchy pdf \(f_C(x) = \frac {\alpha }{\pi } \frac {1}{x^2 + \alpha ^2 }\).
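A numerical sketch (grid bounds and \(\alpha\) are arbitrary) cross-checks the answer against the standard closed form \(I_1\left(f_C\right) = \frac{1}{2\alpha^2}\):

```python
import numpy as np

# Numerical Fisher information of the Cauchy pdf (alpha = 2 chosen
# arbitrarily); the standard closed form is 1/(2 alpha^2).
alpha = 2.0
x = np.linspace(-500.0, 500.0, 1_000_001)
dx = x[1] - x[0]
f = (alpha / np.pi) / (x**2 + alpha**2)
score = -2.0 * x / (x**2 + alpha**2)       # d/dx log f(x)
fisher = np.sum(score**2 * f) * dx
print(round(fisher, 4), 1 / (2 * alpha**2))
```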

Exercise 3.21

Show that, for any cdf H, the Fisher information \(I_1 (g)\) of the pdf \(g(x) = \int _{-\infty }^{\infty } f(x-y)dH(y)\) is no larger than the Fisher information \(I_1 (f)\) of the pdf f.

Exercise 3.22

Obtain the limit of the \(t(k)\) pdf \(f(r) = \frac {\varGamma \left (\frac {k+1}{2} \right )} {\varGamma \left (\frac {k}{2} \right ) \sqrt {k \pi }} \left (1+ \frac {r^2}{k} \right )^{- \frac {k+1}{2} }\) shown in (3.2.20) as \(k \to \infty \).
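The expected limit is the standard normal pdf; a stdlib-only sketch shows the sup-norm gap over \([-5, 5]\) shrinking as \(k\) grows (the log-gamma form avoids overflow of \(\varGamma\left(\frac{k}{2}\right)\) for large \(k\)):

```python
from math import lgamma, exp, log, sqrt, pi

def t_pdf(r, k):
    # t(k) pdf of (3.2.20), via log-gamma for numerical stability
    log_c = lgamma((k + 1) / 2) - lgamma(k / 2) - 0.5 * log(k * pi)
    return exp(log_c - 0.5 * (k + 1) * log(1 + r * r / k))

def phi(r):
    # standard normal pdf, the expected limit
    return exp(-r * r / 2) / sqrt(2 * pi)

rs = [i / 100 for i in range(-500, 501)]
for k in (1, 10, 100, 1000):
    gap = max(abs(t_pdf(r, k) - phi(r)) for r in rs)
    print(k, round(gap, 5))
```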


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this chapter

Song, I., Park, S.R., Zhang, W., Lee, S. (2024). Rank Statistics. In: Fundamentals of Order and Rank Statistics. Springer, Cham. https://doi.org/10.1007/978-3-031-50601-7_3
