Properties and generation of representative points of the exponential distribution

Abstract

It is known that the exponential distribution has many nice properties. Graf and Luschgy (2000) pointed out that the mean squared error of the set of representative points of the exponential distribution is fully determined by the smallest representative point. In this paper we are concerned with the representative points of the exponential distribution and establish a number of new and interesting properties. A new algorithm is proposed to generate representative points of the exponential distribution efficiently. In addition, the performance of representative points of the exponential distribution is evaluated.

Notes

  1. f is log-concave if and only if, for all \(x_1,x_2\in X\) and all \(t\in [0,1]\), \(f(tx_1+(1-t)x_2)\ge f(x_1)^tf(x_2)^{1-t}\) holds, where X is the domain of f.

  2. The formal proof of Theorem 7 (i) can be found in Proposition 5.4 in Graf and Luschgy (2000).

References

  • Anderberg MR (1973) Cluster analysis for applications. Academic Press, San Diego

  • Bally V, Pagès G (2003) A quantization algorithm for solving discrete time multi-dimensional optimal stopping problems. Bernoulli 9(6):1003–1049

  • Bally V, Pagès G, Printems J (2005) A quantization tree method for pricing and hedging multi-dimensional American options. Math Financ 15(1):119–168

  • Bormetti G, Callegaro G, Livieri G, Pallavicini A (2018) A backward Monte Carlo approach to exotic option pricing. Eur J Appl Math 29(1):146–187

  • Callegaro G, Fiorin L, Grasselli M (2017) Pricing via recursive quantization in stochastic volatility models. Quant Financ 17(6):855–872

  • Chakraborty S, Roychowdhury MK, Sifuentes J (2020) High precision numerical computation of principal points for univariate distributions. Sankhya B:1–27

  • Chen WY, Mackey L, Gorham J, Briol FX, Oates C (2018) Stein points. In: International Conference on Machine Learning. PMLR, pp 844–853

  • Chow J (1982) On the uniqueness of best \(L_2[0, 1]\) approximation by piecewise polynomials with variable breakpoints. Math Comput 39(160):571–585

  • Cox DR (1957) Note on grouping. J Am Stat Assoc 52(280):543–547

  • Efron B, Tibshirani RJ (1994) An introduction to the bootstrap. CRC Press, Boca Raton

  • El Amri MR, Helbert C, Lepreux O, Zuniga MM, Prieur C, Sinoquet D (2020) Data-driven stochastic inversion via functional quantization. Stat Comput 30:525–541

  • Fang KT, He SD (1982) The problem of selecting a given number of representative points in a normal population and a generalized Mills’ ratio (No. TR-327). Technical Report. Department of Statistics, Stanford University

  • Fang KT, Wang Y (1994) Number-theoretic methods in statistics. Chapman & Hall, London

  • Fang KT, Yuan KH, Bentler PM (1994) Applications of number-theoretic methods to quantizers of elliptically contoured distributions. In: Multivariate analysis and its applications. IMS Lecture Notes—Monograph Series 24:211–225

  • Fang KT, Zhou M, Wang WJ (2014) Applications of the representative points in statistical simulations. Sci China Math 57(12):2609–2620

  • Fei RC (1990) The problem of selecting representative points from Pearson distributions population. J Wuxi Inst Light Ind 9:71–78 (in Chinese)

  • Flury B (1990) Principal points. Biometrika 77(1):33–41

  • Flury B (1993) Estimation of principal points. J R Stat Soc Ser C 42(1):139–151

  • Fu HH (1985) The problem of selecting a specified number of representative points from a Gamma population. J China Univ Min Technol 4:107–117 (in Chinese)

  • Fu HH (1993) The problem of the best representative points for Weibull population. J China Univ Min Technol 22:123–134 (in Chinese)

  • Gersho A, Gray RM (1991) Vector quantization and signal compression. Kluwer Academic Publishers, Dordrecht

  • Gobet E, Pagès G, Pham H, Printems J (2006) Discretization and simulation of the Zakai equation. SIAM J Numer Anal 44(6):2505–2538

  • Graf S, Luschgy H (2000) Foundations of quantization for probability distributions. Springer, New York

  • Graf S, Luschgy H, Pagès G (2012) The local quantization behavior of absolutely continuous probabilities. Ann Probab 40(4):1795–1828

  • Jiang JJ, He P, Fang KT (2015) An interesting property of the arcsine distribution and its applications. Stat Probab Lett 105:88–95

  • Joseph VR, Dasgupta T, Tuo R, Wu CJ (2015) Sequential exploration of complex surfaces using minimum energy designs. Technometrics 57(1):64–74

  • Joseph VR, Wang D, Gu L, Lyu S, Tuo R (2019) Deterministic sampling of expensive posteriors using minimum energy designs. Technometrics 61(3):297–308

  • Kurata H, Qiu DX (2011) Linear subspace spanned by principal points of a mixture of spherically symmetric distributions. Commun Stat Theory Methods 40:2737–2750

  • Lejay A, Reutenauer V (2012) A variance reduction technique using a quantized Brownian motion as a control variate. J Comput Financ 16(2):61–84

  • Lemaire V, Montes T, Pagès G (2020) New weak error bounds and expansions for optimal quantization. J Comput Appl Math 371

  • Linde Y, Buzo A, Gray R (1980) An algorithm for vector quantizer design. IEEE Trans Commun 28(1):84–95

  • Lloyd S (1982) Least squares quantization in PCM. IEEE Trans Inf Theory 28(2):129–137

  • Mak S, Joseph VR (2018) Support points. Ann Stat 46(6A):2562–2592

  • Matsuura S, Kurata H (2010) A principal subspace theorem for 2-principal points of general location mixtures of spherically symmetric distributions. Stat Probab Lett 80:1863–1869

  • Matsuura S, Kurata H (2011) Principal points of a multivariate mixture distribution. J Multivar Anal 102:213–224

  • Matsuura S, Kurata H (2014) Principal points for an allometric extension model. Stat Pap 55(3):853–870

  • Matsuura S, Kurata H, Tarpey T (2015) Optimal estimators of principal points for minimizing expected mean squared distance. J Stat Plan Inference 167:102–122

  • Matsuura S, Tarpey T (2020) Optimal principal points estimators of multivariate distributions of location-scale and location-scale-rotation families. Stat Pap 61:1629–1643

  • Max J (1960) Quantizing for minimum distortion. IEEE Trans Inf Theory 6(1):7–12

  • Pagès G (1998) A space quantization method for numerical integration. J Comput Appl Math 89(1):1–38

  • Pagès G, Printems J (2003) Optimal quadratic quantization for numerics: the Gaussian case. Monte Carlo Methods Appl MCMA 9(2):135–165

  • Pagès G, Sagna A (2012) Asymptotics of the maximal radius of an \(L^r\)-optimal sequence of quantizers. Bernoulli 18(1):360–389

  • Pagès G (2015) Introduction to vector quantization and its applications for numerics. ESAIM Proc Surv 48:29–79

  • Pagès G, Yu J (2016) Pointwise convergence of the Lloyd I algorithm in higher dimension. SIAM J Control Optim 54(5):2354–2382

  • Qi ZF, Zhou YD, Fang KT (2017) Representative points for location-biased datasets. Commun Stat Simul Comput 48(2):458–471

  • Tarpey T (1995) Principal points and self-consistent points of symmetrical multivariate distributions. J Multivar Anal 53(1):39–51

  • Tarpey T, Li L, Flury B (1995) Principal points and self-consistent points of elliptical distributions. Ann Stat 23:103–112

  • Tarpey T, Petkova E (2010) Principal point classification: applications to differentiating drug and placebo responses in longitudinal studies. J Stat Plan Inference 140:539–550

  • Trushkin A (1982) Sufficient conditions for uniqueness of a locally optimal quantizer for a class of convex error weighting functions. IEEE Trans Inf Theory 28(2):187–198

  • Zhou M, Wang WJ (2016) Representative points of Student's \(t_n\) distribution and their applications in statistical simulation. Acta Math Appl Sin 39:620–640 (in Chinese)

Acknowledgements

We thank the Editors and anonymous reviewers for their helpful comments. This work was partially supported by the UIC Grant (No. R201912 and No. R202010).

Author information

Correspondence to Ping He.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

In this appendix we provide proofs for all theorems (excluding Theorem 1 and Theorem 7), corollaries and lemmas listed in Sect. 3.

Proof of Corollary 1

Combining Theorem 1 with Lemma 2, it is easy to obtain

$$\begin{aligned} a_1^2&= E(X^2)-E(Y_{MSE}^2)\\&= E(X^2)-(E(X))^2-(E(Y_{MSE}^2)-(E(Y_{MSE}))^2)\\&= \text{ Var }(X)-\text{ Var }(Y_{MSE}). \end{aligned}$$

Thus,

$$\begin{aligned} \text{ Var }(Y_{MSE}) = \text{ Var }(X)-a_1^2 = \frac{1}{\lambda ^2}-a_1^2, \end{aligned}$$

which completes the proof.
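
For example, when \(n=1\) the single MSE representative point of \(EP(\lambda )\) is the mean, \(a_1=\frac{1}{\lambda }\), and the formula correctly gives \(\text{ Var }(Y_{MSE})=\frac{1}{\lambda ^2}-\frac{1}{\lambda ^2}=0\), since \(Y_{MSE}\) is then degenerate at the mean.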

Proof of Theorem 2

The last equation in (6) gives us the following equation

$$\begin{aligned} \int _{(a_{n-1}^{(n)}+a_n^{(n)})/2}^{\infty }(x-a_n^{(n)})p(x)dx=0. \end{aligned}$$
(10)

Then,

$$\begin{aligned}&\int _{(a_{n-1}^{(n)}+a_n^{(n)})/2}^{\infty }(x-a_n^{(n)})p(x)dx = \int _{(a_{n-1}^{(n)}+a_n^{(n)})/2}^{\infty }x\lambda e^{-\lambda x}dx-a_n^{(n)}\int _{(a_{n-1}^{(n)}+a_n^{(n)})/2}^{\infty }\lambda e^{-\lambda x}dx\\&\quad = -xe^{-\lambda x}|_{(a_{n-1}^{(n)}+a_n^{(n)})/2}^{\infty }+\frac{1}{\lambda }\int _{(a_{n-1}^{(n)}+a_n^{(n)})/2}^{\infty }\lambda e^{-\lambda x}dx-a_n^{(n)}\int _{(a_{n-1}^{(n)}+a_n^{(n)})/2}^{\infty }\lambda e^{-\lambda x}dx\\&\quad =\frac{1}{2}(a_{n-1}^{(n)}+a_n^{(n)})e^{-\frac{\lambda }{2}(a_{n-1}^{(n)}+a_n^{(n)})}+(\frac{1}{\lambda }-a_n^{(n)})\int _{(a_{n-1}^{(n)}+a_n^{(n)})/2}^{\infty }\lambda e^{-\lambda x}dx\\&\quad =\frac{1}{2}(a_{n-1}^{(n)}+a_n^{(n)})e^{-\frac{\lambda }{2}(a_{n-1}^{(n)}+a_n^{(n)})}+(\frac{1}{\lambda }-a_n^{(n)})[-e^{-\lambda x}]_{(a_{n-1}^{(n)}+a_n^{(n)})/2}^{\infty }\\&\quad =\frac{1}{2}(a_{n-1}^{(n)}+a_n^{(n)})e^{-\frac{\lambda }{2}(a_{n-1}^{(n)}+a_n^{(n)})}+(\frac{1}{\lambda }-a_n^{(n)})e^{-\frac{\lambda }{2}(a_{n-1}^{(n)}+a_n^{(n)})}\\&\quad =(\frac{a_{n-1}^{(n)}+a_n^{(n)}}{2}+\frac{1}{\lambda }-a_n^{(n)})e^{-\frac{\lambda }{2}(a_{n-1}^{(n)}+a_n^{(n)})}\\&\quad =\frac{1}{2}(a_{n-1}^{(n)}-a_n^{(n)}+\frac{2}{\lambda })e^{-\frac{\lambda }{2}(a_{n-1}^{(n)}+a_n^{(n)})}. \end{aligned}$$

Since \(e^{-\frac{\lambda }{2}(a_{n-1}^{(n)}+a_n^{(n)})}>0\), it follows that \(a_{n-1}^{(n)}-a_n^{(n)}+\frac{2}{\lambda }=0\), which completes the proof.
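
Theorem 2 (and Theorem 3 below) can be checked numerically. The following sketch, a Python illustration of our own rather than the algorithm proposed in the paper, computes the MSE RPs of \(EP(\lambda )\) by the classical Lloyd fixed-point iteration, replacing each point by the conditional mean of its cell, and then verifies that the last gap equals \(2/\lambda \); the function names are hypothetical.

```python
import numpy as np

def cond_mean(lam, lo, hi):
    """E[X | lo <= X < hi] for X ~ EP(lam), using closed-form integrals."""
    if np.isinf(hi):
        return lo + 1.0 / lam  # memoryless property: the tail mean is lo + 1/lam
    num = (lo + 1.0 / lam) * np.exp(-lam * lo) - (hi + 1.0 / lam) * np.exp(-lam * hi)
    den = np.exp(-lam * lo) - np.exp(-lam * hi)
    return num / den

def lloyd_exponential(lam, n, n_iter=1000):
    """Lloyd (self-consistency) iteration for the n MSE representative points of EP(lam)."""
    # start from the (i - 0.5)/n quantiles of the distribution
    a = -np.log(1.0 - (np.arange(1, n + 1) - 0.5) / n) / lam
    for _ in range(n_iter):
        mid = (a[:-1] + a[1:]) / 2.0              # cell boundaries (midpoints)
        lo = np.concatenate(([0.0], mid))
        hi = np.concatenate((mid, [np.inf]))
        a = np.array([cond_mean(lam, l, h) for l, h in zip(lo, hi)])
    return a

if __name__ == "__main__":
    lam, n = 1.0, 5
    a = lloyd_exponential(lam, n)
    print("RPs:", a)
    print("last gap:", a[-1] - a[-2], "(Theorem 2 predicts", 2.0 / lam, ")")
    print("gaps strictly increase (Theorem 3):", bool(np.all(np.diff(a, 2) > 0)))
```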

Remark 1

Letting \(y=x-a_n^{(n)}\) in (10), we obtain

$$\begin{aligned} \int _{(a_{n-1}^{(n)}-a_n^{(n)})/2}^{\infty }yp(y+a_n^{(n)})dy=0. \end{aligned}$$
(11)

From the memoryless property of the exponential distribution, we have

$$\begin{aligned} p(y+a_n^{(n)})=\lambda e^{-\lambda (y+a_n^{(n)})}=e^{-\lambda a_n^{(n)}}\lambda e^{-\lambda y}=e^{-\lambda a_n^{(n)}}p(y). \end{aligned}$$

Thus, equation (11) becomes

$$\begin{aligned} e^{-\lambda a_n^{(n)}}\int _{(a_{n-1}^{(n)}-a_n^{(n)})/2}^{\infty }yp(y)dy=0. \end{aligned}$$

Since \(e^{-\lambda a_n^{(n)}}>0\), it follows that

$$\begin{aligned} \int _{(a_{n-1}^{(n)}-a_n^{(n)})/2}^{\infty }yp(y)dy=0. \end{aligned}$$
(12)

Analysing (12) in the same way as (10), the proof is completed.

To prove Theorem 3 we need the following lemma.

Lemma 5

If \(0<\theta _1\le \frac{1}{\lambda }\) and \(\theta _2>0\) satisfy the following equation

$$\begin{aligned} e^{\lambda \theta _2}(\frac{1}{\lambda }-\theta _2)=(\theta _1+\frac{1}{\lambda })e^{-\lambda \theta _1}, \end{aligned}$$
(13)

then \(0<\theta _2<\theta _1\le \frac{1}{\lambda }\). Further, for a given \(\theta _1\), the value \(\theta _2\) is unique; namely, the following equation has a unique solution

$$\begin{aligned} e^{\lambda x}(\frac{1}{\lambda }-x)=(\theta _1+\frac{1}{\lambda })e^{-\lambda \theta _1}, \quad \text{ when } 0<x<\theta _1. \end{aligned}$$

Proof of Lemma 5

Let

$$\begin{aligned} f(x)=(x+\frac{1}{\lambda })e^{-\lambda x},x>0, \end{aligned}$$

then equation (13) becomes \(f(\theta _1)=f(-\theta _2)\). Define

$$\begin{aligned} h(x)&= f(x)-f(-x)\\&=(x+\frac{1}{\lambda })e^{-\lambda x}-(-x+\frac{1}{\lambda })e^{\lambda x},x>0. \end{aligned}$$

First, we show the monotonicity of h(x). The derivative of h(x) is given as follows

$$\begin{aligned} h'(x)&= e^{-\lambda x}+(x+\frac{1}{\lambda })(-\lambda )e^{-\lambda x}-(-e^{\lambda x}+(-x+\frac{1}{\lambda })(\lambda )e^{\lambda x})\\&= -\lambda xe^{-\lambda x}+\lambda xe^{\lambda x}= \lambda xe^{-\lambda x}(-1+e^{2\lambda x}). \end{aligned}$$

It follows that h(x) is strictly increasing on \((0,+\infty )\), since \(h'(x)>0\) when \(x>0\). Thus, \(h(x)>h(0)=0\), namely, \(f(x)>f(-x)\) when \(x>0\). As a consequence, \(f(\theta _1)>f(-\theta _1)\) when \(0<\theta _1\le \frac{1}{\lambda }\). Under the condition \(f(-\theta _2)=f(\theta _1)\), it is clear that \(f(-\theta _2)=f(\theta _1)>f(-\theta _1)\). Now, since \(f'(x)=e^{-\lambda x}+(x+\frac{1}{\lambda })(-\lambda )e^{-\lambda x}=-\lambda xe^{-\lambda x}>0\) when \(x<0\), it follows that f(x) is strictly increasing on \((-\infty ,0)\). Thus, \(f(-\theta _2)>f(-\theta _1)\Leftrightarrow -\theta _2>-\theta _1\), namely \(\theta _1>\theta _2\).

Next, we will show the uniqueness of the solution of \(g(x)=0\) on \((0,\theta _1)\), where \(0<\theta _1\le \frac{1}{\lambda }\) and

$$\begin{aligned} g(x)=e^{\lambda x}(\frac{1}{\lambda }-x)-(\theta _1+\frac{1}{\lambda })e^{-\lambda \theta _1}. \end{aligned}$$

Since \(g'(x) = \lambda e^{\lambda x}(\frac{1}{\lambda }-x)+e^{\lambda x}(-1)=-\lambda x e^{\lambda x}<0\) when \(0<x<\theta _1\le \frac{1}{\lambda }\), it follows that g(x) is strictly decreasing on \((0,\theta _1)\). The sign of g(0) can be determined as follows

$$\begin{aligned} g(0)&= \frac{1}{\lambda }-(\theta _1+\frac{1}{\lambda })e^{-\lambda \theta _1}\\&= \frac{1}{\lambda }e^{-\lambda \theta _1}(e^{\lambda \theta _1}-(\lambda \theta _1+1))\\&>0 \quad (\text{ since } e^x>x+1 \text{ for } x\in {\mathbb {R}}^+). \end{aligned}$$

Because g(x) is strictly decreasing on \((0,\theta _1)\) and \(g(0)>0\), if we can prove \(g(\theta _1)<0\), then \(g(x)=0\) must have a unique solution on \((0,\theta _1)\) by the intermediate value theorem. The value \(g(\theta _1)\) can be expressed as

$$\begin{aligned} g(\theta _1)&= e^{\lambda \theta _1}(\frac{1}{\lambda }-\theta _1)-(\theta _1+\frac{1}{\lambda })e^{-\lambda \theta _1}\\&=\frac{1}{\lambda }e^{-\lambda \theta _1}((1-\lambda \theta _1)e^{2\lambda \theta _1}-(\lambda \theta _1+1)). \end{aligned}$$

Proving \(g(\theta _1)<0\) is therefore equivalent to proving the following inequality:

$$\begin{aligned} (1-\lambda \theta _1)e^{2\lambda \theta _1}-(\lambda \theta _1+1)<0,\quad 0<\theta _1\le \frac{1}{\lambda }. \end{aligned}$$
(14)

Let \(k(x)=(1-x)e^{2x}-(x+1),0<x\le 1\). It is easy to calculate that

$$\begin{aligned} k'(x)&= -e^{2x}+2(1-x)e^{2x}-1= e^{2x}-2xe^{2x}-1,\\ k''(x)&= 2e^{2x}-2(e^{2x}+2xe^{2x})= -4xe^{2x}<0. \end{aligned}$$

Since \(k''(x)<0\) when \(0<x\le 1\), it follows that \(k'(x)\) is strictly decreasing on (0, 1], so \(k'(x)<k'(0)=0\) when \(0<x\le 1\). Hence k(x) is strictly decreasing on (0, 1], so \(k(x)<k(0)=0\) when \(0<x\le 1\); thus inequality (14) holds, i.e., \(g(\theta _1)<0\). By the intermediate value theorem, the proof is complete.
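
Because g is strictly decreasing on \((0,\theta _1)\) with \(g(0)>0>g(\theta _1)\), the unique root guaranteed by Lemma 5 can be located by plain bisection. A minimal Python sketch (our own helper with a hypothetical name, not code from the paper):

```python
import math

def solve_theta2(lam, theta1, iters=100):
    """Unique root of e^{lam x}(1/lam - x) = (theta1 + 1/lam) e^{-lam theta1}
    on (0, theta1); it exists and is unique by Lemma 5 when 0 < theta1 <= 1/lam."""
    rhs = (theta1 + 1.0 / lam) * math.exp(-lam * theta1)
    g = lambda x: math.exp(lam * x) * (1.0 / lam - x) - rhs
    lo, hi = 0.0, theta1        # g(lo) > 0 > g(hi), and g is strictly decreasing
    for _ in range(iters):      # bisection down to (essentially) machine precision
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With \(\theta _1=a_1^{(n)}\), the root returned is \(a_1^{(n+1)}\) by Corollary 2; this is the computational core of the recursion sketched after the proof of Corollary 2 below.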

Proof of Theorem 3

From equation (6), we can obtain

$$\begin{aligned} (\frac{1}{\lambda }+\frac{a_{i+1}^{(n)}-a_i^{(n)}}{2})e^{-\lambda \frac{a_{i+1}^{(n)}-a_i^{(n)}}{2}} =(\frac{1}{\lambda }-\frac{a_i^{(n)}-a_{i-1}^{(n)}}{2})e^{\lambda \frac{a_i^{(n)}-a_{i-1}^{(n)}}{2}} \end{aligned}$$
(15)

for \(i=2,...,n-1\). From Theorem 2, we know that the last gap \(a_{n}^{(n)}-a_{n-1}^{(n)}\) equals the constant \(\frac{2}{\lambda }\). Setting \(\theta _1=(a_{i+1}^{(n)}-a_i^{(n)})/2\) and \(\theta _2=(a_i^{(n)}-a_{i-1}^{(n)})/2\) in Lemma 5, whose condition is satisfied for each \(i=2,...,n-1\), we obtain by induction

$$\begin{aligned} \frac{a_{i+1}^{(n)}-a_i^{(n)}}{2}>\frac{a_i^{(n)}-a_{i-1}^{(n)}}{2},\quad i=2,...,n-1, \end{aligned}$$

which completes the proof.

Proof of Theorem 4

According to equation (6) and Theorem 2, we can obtain the following two systems of equations; the first is for the n MSE RPs and the second for the \(n+1\) case.

$$\begin{aligned}&\left\{ \begin{array}{lr} \displaystyle {(\frac{1}{\lambda }+\frac{a_{i+1}^{(n)}-a_i^{(n)}}{2})e^{-\lambda \frac{a_{i+1}^{(n)}-a_i^{(n)}}{2}} =(\frac{1}{\lambda }-\frac{a_i^{(n)}-a_{i-1}^{(n)}}{2})e^{\lambda \frac{a_i^{(n)}-a_{i-1}^{(n)}}{2}}, i=2,..,n-1}\\ \displaystyle {a_{n}^{(n)}-a_{n-1}^{(n)}=\frac{2}{\lambda }}. \end{array} \right. \\&\left\{ \begin{array}{lr} \displaystyle {(\frac{1}{\lambda }+\frac{a_{j+1}^{(n+1)}-a_j^{(n+1)}}{2})e^{-\lambda \frac{a_{j+1}^{(n+1)}-a_j^{(n+1)}}{2}} =(\frac{1}{\lambda }-\frac{a_j^{(n+1)}-a_{j-1}^{(n+1)}}{2})e^{\lambda \frac{a_j^{(n+1)}-a_{j-1}^{(n+1)}}{2}}, j=3,...,n-1}\\ \displaystyle {a_{n+1}^{(n+1)}-a_{n}^{(n+1)}=\frac{2}{\lambda }}. \end{array} \right. \end{aligned}$$

Comparing these two systems of equations for each corresponding pair i and j, the proof is completed by applying Theorem 3 and Lemma 5 directly.

Proof of Theorem 5

Set \(\delta =(a_1^{(n+1)}+a_2^{(n+1)})/2\) and \(\rho =1\) in Lemma 4; then \(X+\delta \) has the density function

$$\begin{aligned} q(x)=\frac{\lambda e^{-\lambda x}}{e^{-\lambda \delta }}{\mathbb {I}}_{(x\ge \delta )}, \end{aligned}$$

and its RPs are

$$\begin{aligned} \beta _i^{(n)}=a_i^{(n)}+\frac{a_1^{(n+1)}+a_2^{(n+1)}}{2}. \end{aligned}$$
(16)

Consider the \(n+1\) MSE RPs of \(EP(\lambda )\) and denote them by \(a_1^{(n+1)},...,a_{n+1}^{(n+1)}\) with \(a_1^{(n+1)}<...<a_{n+1}^{(n+1)}\). These points must satisfy

$$\begin{aligned} \left\{ \begin{array}{lr} \displaystyle {\int _{0}^{(a_1^{(n+1)}+a_2^{(n+1)})/2}(x-a_1^{(n+1)})p(x)dx=0}\\ \displaystyle {\int _{(a_{i-1}^{(n+1)}+a_i^{(n+1)})/2}^{(a_i^{(n+1)}+a_{i+1}^{(n+1)})/2}(x-a_i^{(n+1)})p(x)dx=0,\quad i=2,...,n,}\\ \displaystyle {\int _{(a_{n}^{(n+1)}+a_{n+1}^{(n+1)})/2}^{\infty }(x-a_{n+1}^{(n+1)})p(x)dx=0} \end{array} \right. \end{aligned}$$

where \(p(x)=\lambda e^{-\lambda x},x\ge 0\). After some simple calculation, and deleting the first of the original equations, we obtain the following n equations

$$\begin{aligned} \left\{ \begin{array}{lr} \displaystyle {(\frac{a_{i-1}^{(n+1)}-a_i^{(n+1)}}{2}+\frac{1}{\lambda })e^{-\frac{\lambda }{2}(a_{i-1}^{(n+1)}+a_i^{(n+1)})}=(\frac{a_{i+1}^{(n+1)}-a_i^{(n+1)}}{2}+\frac{1}{\lambda })e^{-\frac{\lambda }{2}(a_{i+1}^{(n+1)}+a_i^{(n+1)})}, i=2,...,n}\\ \displaystyle {a_{n+1}^{(n+1)}-a_n^{(n+1)}=\frac{2}{\lambda }} \end{array} \right. \end{aligned}$$
(17)

For q(x), since \(\beta _1^{(n)},...,\beta _n^{(n)}\) are its n MSE RPs, where \(\delta =(a_1^{(n+1)}+a_2^{(n+1)})/2\), we obtain the following n equations

$$\begin{aligned} \left\{ \begin{array}{lr} \displaystyle {(\frac{a_1^{(n+1)}-(2\beta _1^{(n)}-a_2^{(n+1)})}{2}+\frac{1}{\lambda })e^{-\frac{\lambda }{2}(a_1^{(n+1)}+a_2^{(n+1)})}=(\frac{\beta _2^{(n)}-\beta _1^{(n)}}{2}+\frac{1}{\lambda })e^{-\frac{\lambda }{2}(\beta _1^{(n)}+\beta _2^{(n)})}}\\ \displaystyle {(\frac{\beta _{i-1}^{(n)}-\beta _i^{(n)}}{2}+\frac{1}{\lambda })e^{-\frac{\lambda }{2}(\beta _{i-1}^{(n)}+\beta _i^{(n)})}=(\frac{\beta _{i+1}^{(n)}-\beta _{i}^{(n)}}{2}+\frac{1}{\lambda })e^{-\frac{\lambda }{2}(\beta _i^{(n)}+\beta _{i+1}^{(n)})},\quad i=2,...,n-1}\\ \displaystyle {\beta _{n}^{(n)}-\beta _{n-1}^{(n)}=\frac{2}{\lambda }} \end{array} \right. \end{aligned}$$
(18)

Based on Lemma 3, the solution of equations (18) must be unique. Thus, comparing equations (17) with (18), we obtain

$$\begin{aligned} \beta _i^{(n)}=a_{i+1}^{(n+1)},\quad i=1,...,n. \end{aligned}$$

With equation (16), we conclude

$$\begin{aligned} a_i^{(n)}+\frac{a_1^{(n+1)}+a_2^{(n+1)}}{2}=a_{i+1}^{(n+1)},\quad i=1,...,n. \end{aligned}$$

Particularly,

$$\begin{aligned} a_{2}^{(n+1)}=a_1^{(n)}+\frac{a_1^{(n+1)}+a_2^{(n+1)}}{2}, \end{aligned}$$

which completes the proof.

Proof of Corollary 2

For \(n+1\) MSE RPs, the first integral equation in (6) becomes

$$\begin{aligned} \int _0^{(a_1^{(n+1)}+a_2^{(n+1)})/2}(x-a_1^{(n+1)})p(x)dx=0. \end{aligned}$$

After simplification, we obtain

$$\begin{aligned} \frac{1}{\lambda }-a_1^{(n+1)}=(\frac{a_2^{(n+1)}-a_1^{(n+1)}}{2}+\frac{1}{\lambda })e^{-\frac{\lambda }{2}(a_1^{(n+1)}+a_2^{(n+1)})}. \end{aligned}$$

Multiplying both sides by \(e^{\lambda a_1^{(n+1)}}\), the above equation becomes

$$\begin{aligned} e^{\lambda a_1^{(n+1)}}(\frac{1}{\lambda }-a_1^{(n+1)})=(\frac{a_2^{(n+1)}-a_1^{(n+1)}}{2}+\frac{1}{\lambda })e^{-\frac{\lambda }{2}(a_2^{(n+1)}-a_1^{(n+1)})}. \end{aligned}$$

Substituting \(a_2^{(n+1)}-a_1^{(n+1)}=2a_1^{(n)}\), which follows from Theorem 5, we have

$$\begin{aligned} e^{\lambda a_1^{(n+1)}}(\frac{1}{\lambda }-a_1^{(n+1)})=(a_1^{(n)}+\frac{1}{\lambda })e^{-\lambda a_1^{(n)}}, \end{aligned}$$

which completes the proof.
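
Theorem 5 and Corollary 2 together suggest a simple recursive way to generate MSE RPs: given \(a_1^{(n)}\), solve the equation above for the unique \(a_1^{(n+1)}\in (0,a_1^{(n)})\) (Lemma 5), and then shift the n-point set by \((a_1^{(n+1)}+a_2^{(n+1)})/2=a_1^{(n)}+a_1^{(n+1)}\) to obtain the remaining points. The Python sketch below is our own illustration of this idea, starting from the one-point case \(a_1^{(1)}=1/\lambda \) (the mean of \(EP(\lambda )\)); the algorithm actually proposed in the paper may differ in its details.

```python
import math

def next_first_point(lam, a1_prev, iters=100):
    """Solve e^{lam x}(1/lam - x) = (a1_prev + 1/lam) e^{-lam a1_prev} for the unique
    root x in (0, a1_prev); this root is a_1^{(n+1)} by Corollary 2 and Lemma 5."""
    rhs = (a1_prev + 1.0 / lam) * math.exp(-lam * a1_prev)
    lo, hi = 0.0, a1_prev
    for _ in range(iters):      # bisection; the equation's left side is decreasing in x
        mid = 0.5 * (lo + hi)
        if math.exp(lam * mid) * (1.0 / lam - mid) > rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def exp_rep_points(lam, n):
    """n MSE representative points of EP(lam), built up recursively from n = 1."""
    pts = [1.0 / lam]                                 # the single MSE RP is the mean
    while len(pts) < n:
        a1_new = next_first_point(lam, pts[0])        # smallest point of the larger set
        shift = pts[0] + a1_new                       # = (a_1 + a_2)/2 of the larger set
        pts = [a1_new] + [p + shift for p in pts]     # Theorem 5: a_{i+1}^{new} = a_i^{old} + shift
    return pts

if __name__ == "__main__":
    print(exp_rep_points(1.0, 5))                     # should agree with a Lloyd-type check
```

Each step requires solving only one scalar equation, since all remaining points are obtained from the previous set by a common shift.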

Proof of Theorem 6

The proof is divided into two steps. First, from the proof of Theorem 5 we have

$$\begin{aligned} a_i^{(n)}+\frac{a_1^{(n+1)}+a_2^{(n+1)}}{2}=a_{i+1}^{(n+1)},\quad i=1,...,n. \end{aligned}$$

Thus, \(a_i^{(n)}<a_{i+1}^{(n+1)}\) for \(i=1,2,...,n\). Second, we consider the relationship between \(a_i^{(n+1)}\) and \(a_i^{(n)}\) for \(i=1,2,...,n\). For \(a_1^{(n+1)}\) and \(a_1^{(n)}\), it is easy to obtain \(a_1^{(n+1)}<a_1^{(n)}\) from Corollary 2 and Lemma 5. For \(a_2^{(n+1)}\) and \(a_2^{(n)}\), applying the argument in the proof of Corollary 2 to the n MSE RPs gives

$$\begin{aligned} e^{\lambda a_1^{(n)}}(\frac{1}{\lambda }-a_1^{(n)})=(\frac{a_2^{(n)}-a_1^{(n)}}{2}+\frac{1}{\lambda })e^{-\frac{\lambda }{2}(a_2^{(n)}-a_1^{(n)})}. \end{aligned}$$

So, from Lemma 5, we can obtain

$$\begin{aligned} \frac{a_2^{(n)}-a_1^{(n)}}{2}>a_1^{(n)}. \end{aligned}$$

Then,

$$\begin{aligned}&a_2^{(n)}-a_1^{(n)}>2a_1^{(n)}>a_1^{(n+1)}+a_1^{(n)}\\&\Rightarrow 2a_1^{(n)}+a_1^{(n+1)}<a_2^{(n)}\\&\Rightarrow a_2^{(n+1)}<a_2^{(n)}. \end{aligned}$$

For \(a_i^{(n+1)}\) and \(a_i^{(n)},i=3,...,n\), from Theorem 3 and the above results, we obtain

$$\begin{aligned}&a_1^{(n)}+a_1^{(n+1)}<a_2^{(n)}-a_1^{(n)}<a_3^{(n)}-a_2^{(n)}<...<a_i^{(n)}-a_{i-1}^{(n)},\quad i=3,...,n\\&\Rightarrow a_1^{(n)}+a_1^{(n+1)}<a_i^{(n)}-a_{i-1}^{(n)},\quad i=3,...,n\\&\Rightarrow \frac{a_1^{(n+1)}+a_2^{(n+1)}}{2}<a_i^{(n)}-a_{i-1}^{(n)},\quad i=3,...,n\\&\Rightarrow a_{i-1}^{(n)}+\frac{a_1^{(n+1)}+a_2^{(n+1)}}{2}<a_i^{(n)},\quad i=3,...,n\\&\Rightarrow a_{i}^{(n+1)}<a_i^{(n)},\quad i=3,...,n \end{aligned}$$

Thus, \(a_i^{(n+1)}<a_{i}^{(n)}\) for \(i=1,...,n\).

Cite this article

Xu, LH., Fang, KT. & He, P. Properties and generation of representative points of the exponential distribution. Stat Papers 63, 197–223 (2022). https://doi.org/10.1007/s00362-021-01236-1

