
Bayesian predictive density estimation with parametric constraints for the exponential distribution with unknown location

Published in Metrika.

Abstract

In this paper, we consider prediction for the exponential distribution with unknown location. For the most part, we treat the one-dimensional case and assume that the location parameter is restricted to an interval. The Bayesian predictive densities with respect to prior densities supported on the real line and the restricted space are compared under the Kullback–Leibler divergence. We first consider the case where the scale parameter is known. We obtain general dominance conditions and also minimaxity and admissibility results. Next, we treat the case of unknown scale. In this case, the location parameter is assumed to be less than a known constant and sufficient conditions for domination are obtained. Finally, we treat a multidimensional problem with known scale where the location parameter is restricted to a convex set. The performance of several Bayesian predictive densities is investigated through simulation. Some of the prediction methods are applied to real data.



Acknowledgements

We would like to thank the editor, the associate editor, and the three reviewers for many valuable comments and helpful suggestions that improved the paper. The research of the authors was supported in part by Grants-in-Aid for Scientific Research (20J10427, 18K11188) from the Japan Society for the Promotion of Science.


Corresponding author

Correspondence to Yasuyuki Hamura.

Ethics declarations

Conflicts of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.


Yasuyuki Hamura—JSPS Research Fellow.

Appendix

Here, we state and prove Lemmas 3, 4, and 5, and then give the proofs of Lemmas 1 and 2.

Lemma 3

Let \(\mu \in \mathbb {R}\) and \({\lambda }\in (0, \infty )\), and let \(h :\mathbb {R} \rightarrow \mathbb {R}\) be differentiable. Suppose that \(\lim _{z \rightarrow \infty } e^{- {\lambda }z} z h(z) = 0\). Then we have

$$\begin{aligned} E_{( \mu , {\lambda })}^{Z \sim {\mathrm{Ex}} ( \mu , {\lambda })} \left[ \left( Z - \mu - {1 \over {\lambda }} \right) h(Z) \right] = E_{( \mu , {\lambda })}^{Z \sim {\mathrm{Ex}} ( \mu , {\lambda })} \left[ {Z - \mu \over {\lambda }} h' (Z) \right] \text {.} \end{aligned}$$

Proof

By integration by parts,

$$\begin{aligned}&E_{( \mu , {\lambda })}^{Z \sim {\mathrm{Ex}} ( \mu , {\lambda })} [ (Z - \mu ) h(Z) ] \\&\quad = \int _{\mu }^{\infty } {\lambda }e^{- {\lambda }(z - \mu )} (z - \mu ) h(z) dz \\&\quad = \left[ - e^{- {\lambda }(z - \mu )} (z - \mu ) h(z) \right] _{\mu }^{\infty } + \int _{\mu }^{\infty } e^{- {\lambda }(z - \mu )} \{ h(z) + (z - \mu ) h' (z) \} dz \\&\quad = {1 \over {\lambda }} E_{( \mu , {\lambda })}^{Z \sim {\mathrm{Ex}} ( \mu , {\lambda })} [ h(Z) + (Z - \mu ) h' (Z) ] \text {,} \end{aligned}$$

which proves the result. \(\square \)
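As a sanity check, the identity of Lemma 3 can be verified numerically. The sketch below (plain Python, with illustrative values \(\mu = 0.5\), \({\lambda }= 2\) and the test function \(h(z) = z^2\), which satisfies the decay condition; any differentiable \(h\) with that decay would do) integrates both sides against the \({\mathrm{Ex}} ( \mu , {\lambda })\) density:

```python
import math

def ex_pdf(z, mu, lam):
    # density of Ex(mu, lam): lam * exp(-lam * (z - mu)) on (mu, infinity)
    return lam * math.exp(-lam * (z - mu)) if z >= mu else 0.0

def trapezoid(f, a, b, n=200_000):
    # composite trapezoidal rule; adequate for these smooth, rapidly decaying integrands
    h = (b - a) / n
    return ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))) * h

mu, lam = 0.5, 2.0
h_fun = lambda z: z ** 2      # test function h
dh_fun = lambda z: 2 * z      # its derivative h'

upper = mu + 50.0 / lam       # truncation point; the tail beyond it is negligible
lhs = trapezoid(lambda z: (z - mu - 1 / lam) * h_fun(z) * ex_pdf(z, mu, lam), mu, upper)
rhs = trapezoid(lambda z: (z - mu) / lam * dh_fun(z) * ex_pdf(z, mu, lam), mu, upper)
print(abs(lhs - rhs))  # small, up to discretization error
```

Both sides agree up to the discretization error of the quadrature, as Lemma 3 predicts.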

Lemma 4

Let \(\mu , c \in \mathbb {R}\) and \({\lambda }\in (0, \infty )\) and let \(h :\mathbb {R} \rightarrow [0, \infty )\). Suppose that \(c > \mu \). Then

$$\begin{aligned} E_{( \mu , {\lambda })}^{Z \sim {\mathrm{Ex}} ( \mu , {\lambda })} [ h( {\lambda }(Z - c)) 1_{(c, \infty )} (Z) ] = e^{{\lambda }( \mu - c)} E^{Z \sim {\mathrm{Ex}} (0, 1)} [ h(Z) ] \text {.} \end{aligned}$$

Proof

We have

$$\begin{aligned} E_{( \mu , {\lambda })}^{Z \sim {\mathrm{Ex}} ( \mu , {\lambda })} [ h( {\lambda }(Z - c)) 1_{(c, \infty )} (Z) ]&= \int _{c}^{\infty } {\lambda }e^{- {\lambda }(z - \mu )} h( {\lambda }(z - c)) dz \\&= e^{{\lambda }( \mu - c)} \int _{0}^{\infty } {\lambda }e^{- {\lambda }z} h( {\lambda }z) dz \\&= e^{{\lambda }( \mu - c)} \int _{0}^{\infty } e^{- z} h(z) dz \\&= e^{{\lambda }( \mu - c)} E^{Z \sim {\mathrm{Ex}} (0, 1)} [ h(Z) ] \text {,} \end{aligned}$$

which proves Lemma 4. \(\square \)
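Lemma 4 likewise admits a direct numerical check. The sketch below uses \(h(z) = z\), for which \(E^{Z \sim {\mathrm{Ex}} (0, 1)} [ h(Z) ] = 1\), together with illustrative values \(\mu = 0\), \({\lambda }= 1.5\), \(c = 1 > \mu \):

```python
import math

def trapezoid(f, a, b, n=200_000):
    # composite trapezoidal rule
    h = (b - a) / n
    return ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))) * h

mu, lam, c = 0.0, 1.5, 1.0    # illustrative values with c > mu
h_fun = lambda z: z           # E[h(Z)] = 1 for Z ~ Ex(0, 1)

# left-hand side: the indicator restricts the integral to (c, infinity)
lhs = trapezoid(lambda z: lam * math.exp(-lam * (z - mu)) * h_fun(lam * (z - c)),
                c, c + 50.0 / lam)
rhs = math.exp(lam * (mu - c)) * 1.0   # right-hand side in closed form
print(abs(lhs - rhs))  # small, up to discretization error
```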

Part (ii) of the following lemma is used only to establish a strict inequality in the proof of Theorem 7. Although a proof of part (ii) is given below, other proofs are possible.

Lemma 5

Let \(p \in \mathbb {N}\) and let \(D \subsetneqq \mathbb {R} ^p\) be an open convex set containing the origin. Let \({\omega }_2> {\omega }_1 > 0\). Define \({\omega }D = \{ {\omega }\varvec{\xi }| \varvec{\xi }\in D \} \) for \({\omega }\in \{ {\omega }_1 , {\omega }_2 \} \). Then:

  1. (i)

    \({\omega }_1 D \subset {\omega }_2 D\).

  2. (ii)

    \(( {\omega }_2 D) \setminus ( {\omega }_1 D)\) has a nonempty interior.

Proof

Part (i) is trivial. We prove part (ii). By assumption, there exists \(\varvec{\xi }_0 \in \mathbb {R} ^p\) such that \(\varvec{\xi }_0 \notin D\). Clearly, \(\varvec{\xi }_0 \ne \varvec{0}\), where \(\varvec{0}\) denotes the origin \(\varvec{0} ^{(p)}\) in \(\mathbb {R} ^p\). Define \(I = \{ t \in [0, \infty ) | \tilde{t} \varvec{\xi }_0 \in D \text { for all } \tilde{t} \in [0, t] \} = \{ t \in [0, \infty ) | t \varvec{\xi }_0 \in D \} \), where the second equality follows from the convexity of \(D\). Let \(t_0 = \sup I\); since \(\varvec{\xi }_0 \notin D\), we have \(t_0 \le 1\). Since \(\varvec{0}\) is an interior point of \(D\), we have \(t_0 > 0\). By definition, \([0, t_0 ) \subset I \subset [0, t_0 ]\). Since \(D\) is an open set, \(t_0 \notin I\); otherwise a slightly larger \(t\) would also belong to \(I\), contradicting \(t_0 = \sup I\). Thus, \(I = [0, t_0 )\) and \(t_0 \varvec{\xi }_0 \notin D\).

Now, \({\omega }_1 D\) and \({\omega }_2 D\) are open convex sets containing \(\varvec{0}\). Let \(\varvec{\xi }_1 = ( {\omega }_1 t_0 ) \varvec{\xi }_0\) and \(\varvec{\xi }_2 = [ \{ ( {\omega }_1 + {\omega }_2 ) / 2 \} t_0 ] \varvec{\xi }_0\). Since \(\varvec{\xi }_1 \notin {\omega }_1 D\), there exists \(\varvec{0} \ne \varvec{a}\in \mathbb {R} ^p\) such that \(\varvec{a}^{\top } \varvec{\xi }\ge \varvec{a}^{\top } \varvec{\xi }_1\) for all \(\varvec{\xi }\in {\omega }_1 D\). Since \(\varvec{0}\) is an interior point of \({\omega }_1 D\), we have \(\varvec{a}^{\top } \varvec{\xi }< 0\) for some \(\varvec{\xi }\in {\omega }_1 D\). It follows that \(\varvec{a}^{\top } \varvec{\xi }_1 < 0\), that \(\varvec{a}^{\top } \varvec{\xi }_2 < \varvec{a}^{\top } \varvec{\xi }_1\), and that \(\varvec{\xi }_2\) is not a closure point of \({\omega }_1 D\). On the other hand, \(\varvec{\xi }_2 \in {\omega }_2 D\). Therefore, \(\varvec{\xi }_2\) is contained in the intersection of \({\omega }_2 D\) with the complement of the closure of \({\omega }_1 D\). Thus, since the intersection is open, we conclude that \(\varvec{\xi }_2\) is an interior point of \(( {\omega }_2 D) \setminus ( {\omega }_1 D)\). \(\square \)
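Lemma 5 can be illustrated concretely. In the sketch below, \(D\) is the open unit disk in \(\mathbb {R} ^2\) (an open convex set containing the origin with \(D \ne \mathbb {R} ^2\)), and \({\omega }_1 = 1\), \({\omega }_2 = 2\); random points in a small ball around a candidate point are checked to lie in \(( {\omega }_2 D) \setminus ( {\omega }_1 D)\), in line with part (ii):

```python
import math
import random

random.seed(0)

def in_scaled_disk(x, w):
    # membership in w * D, where D is the open unit disk in R^2
    return math.hypot(x[0], x[1]) < w

w1, w2 = 1.0, 2.0
xi2 = (1.5, 0.0)   # candidate interior point of (w2 D) \ (w1 D)
r = 0.1            # radius of a test ball around xi2; norms stay in [1.4, 1.6]

for _ in range(1000):
    ang = random.uniform(0.0, 2.0 * math.pi)
    s = r * math.sqrt(random.random())   # uniform point in the ball of radius r
    p = (xi2[0] + s * math.cos(ang), xi2[1] + s * math.sin(ang))
    assert in_scaled_disk(p, w2) and not in_scaled_disk(p, w1)
print("ok")
```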

Proof of Lemma 1

The expressions for \(R( \mu , {\hat{p}}^{( \pi _{{\beta }} )} )\) and for \(R( \mu , {\hat{p}}^{( {\pi _{{\beta }}}^{*} )} )\) for \(\underline{\mu } = - \infty \) are verified by direct calculation. When \(\overline{\mu } = \infty \), we have

$$\begin{aligned}&R\left( \mu , {\hat{p}}^{( {\pi _{{\beta }}}^{*} )} \right) = E_{\mu }^{( \varvec{Y}, \varvec{X})} \Bigg [ n \mu + \log {n + m + {\beta }\over m + {\beta }} - \log {e^{(n + m + {\beta }) ( \underline{Y} \wedge \underline{X} )} \over e^{(m + {\beta }) \underline{X}}} \Bigg .\\&\Bigg .\qquad + (- \log [ 1 - e^{(n + m + {\beta }) \{ \underline{\mu } - ( \underline{Y} \wedge \underline{X} ) \} } ]) - \Bigg [- \log \{ 1 - e^{(m + {\beta }) ( \underline{\mu } - \underline{X} )} \} \Bigg ] \Bigg ] \\&\quad = E_{\mu }^{( \varvec{Y}, \varvec{X})} \left[ n \mu + \log {n + m + {\beta }\over m + {\beta }} - \left\{ (n + m + {\beta }) ( \underline{Y} \wedge \underline{X} ) - (m + {\beta }) \underline{X} \right\} \right. \\&\Bigg .\qquad + \sum _{k = 1}^{\infty } {1 \over k} e^{k \left( n + m + {\beta }\right) \left\{ \underline{\mu } - ( \underline{Y} \wedge \underline{X} ) \right\} } - \sum _{k = 1}^{\infty } {1 \over k} e^{k (m + {\beta }) ( \underline{\mu } - \underline{X} )} \Bigg ] \text {,} \end{aligned}$$

from which the desired result follows. \(\square \)
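The series manipulation in the second equality of the preceding display is the standard Maclaurin expansion, applicable because both exponentials lie in \((0, 1)\) almost surely (as \(\underline{\mu } < \underline{Y} \wedge \underline{X}\) and \(\underline{\mu } < \underline{X}\)):

```latex
% Maclaurin series used in the proof of Lemma 1, valid for 0 \le x < 1
- \log (1 - x) = \sum_{k = 1}^{\infty} \frac{x^{k}}{k}
```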

Proof of Lemma 2

By direct calculation, we have that

$$\begin{aligned} \tilde{p} ^{( {\tilde{\pi }}_{{\alpha }} )} ( \varvec{y}; \varvec{X})&= E_{{\tilde{\pi }}_{{\alpha }}}^{( \mu , {\lambda }) | \varvec{X}} \left[ p( \varvec{y}| \mu , {\lambda }) | \varvec{X}\right] \\&= \frac{ \int _{0}^{\infty } \left\{ \int _{- \infty }^{\underline{y} \wedge \underline{X}} {\tilde{\pi }}_{{\alpha }} ( \mu , {\lambda }) {\lambda }^{n + m} e^{{\lambda }(n \mu - y_{\cdot } + m \mu - X_{\cdot } )} d\mu \right\} d{\lambda }}{ \int _{0}^{\infty } \left\{ \int _{- \infty }^{\underline{X}} {\tilde{\pi }}_{{\alpha }} ( \mu , {\lambda }) {\lambda }^m e^{{\lambda }(m \mu - X_{\cdot } )} d\mu \right\} d{\lambda }} \\&= \frac{ \int _{0}^{\infty } \left\{ \int _{- \infty }^{\underline{y} \wedge \underline{X}} {\lambda }^{n + m + {\alpha }- 1} e^{- {\lambda }( y_{\cdot } + X_{\cdot } )} e^{{\lambda }(n + m) \mu } d\mu \right\} d{\lambda }}{ \int _{0}^{\infty } \left( \int _{- \infty }^{\underline{X}} {\lambda }^{m + {\alpha }- 1} e^{- {\lambda }X_{\cdot }} e^{{\lambda }m \mu } d\mu \right) d{\lambda }} \\&= {{\varGamma }(n + m + {\alpha }- 1) / (n + m) \over {\varGamma }(m + {\alpha }- 1) / m} {( X_{\cdot } - m \underline{X} )^{m + {\alpha }- 1} \over \{ y_{\cdot } + X_{\cdot } - (n + m) ( \underline{y} \wedge \underline{X} ) \} ^{n + m + {\alpha }- 1}} \end{aligned}$$

and that

$$\begin{aligned} \tilde{p} ^{( {{\tilde{\pi }}_{{\alpha }}}{}^{*} )} ( \varvec{y}; \varvec{X})&= E_{{{\tilde{\pi }}_{{\alpha }}}{}^{*}}^{( \mu , {\lambda }) | \varvec{X}} [ p( \varvec{y}| \mu , {\lambda }) | \varvec{X}]\\&= \frac{ \int _{0}^{\infty } \left\{ \int _{- \infty }^{\underline{y} \wedge \underline{X}} {{\tilde{\pi }}_{{\alpha }}}{}^{*} ( \mu , {\lambda }) {\lambda }^{n + m} e^{{\lambda }(n \mu - y_{\cdot } + m \mu - X_{\cdot } )} d\mu \right\} d{\lambda }}{ \int _{0}^{\infty } \left\{ \int _{- \infty }^{\underline{X}} {{\tilde{\pi }}_{{\alpha }}}{}^{*} ( \mu , {\lambda }) {\lambda }^m e^{{\lambda }(m \mu - X_{\cdot } )} d\mu \right\} d{\lambda }} \\&= \frac{ \int _{0}^{\infty } \big \{ \int _{- \infty }^{\underline{y} \wedge \underline{X} \wedge \overline{\mu }} {\lambda }^{n + m + {\alpha }- 1} e^{- {\lambda }( y_{\cdot } + X_{\cdot } )} e^{{\lambda }(n + m) \mu } d\mu \big \} d{\lambda }}{ \int _{0}^{\infty } \big ( \int _{- \infty }^{\underline{X} \wedge \overline{\mu }} {\lambda }^{m + {\alpha }- 1} e^{- {\lambda }X_{\cdot }} e^{{\lambda }m \mu } d\mu \big ) d{\lambda }} \nonumber \\&= {{\varGamma }(n + m + {\alpha }- 1) / (n + m) \over {\varGamma }(m + {\alpha }- 1) / m} {\{ X_{\cdot } - m ( \underline{X} \wedge \overline{\mu } ) \} ^{m + {\alpha }- 1} \over \{ y_{\cdot } + X_{\cdot } - (n + m) ( \underline{y} \wedge \underline{X} \wedge \overline{\mu } ) \} ^{n + m + {\alpha }- 1}} \text {.} \end{aligned}$$

This completes the proof. \(\square \)
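To make the closed form in this proof concrete: for \(n = 1\) (a single future observation, so that \(y_{\cdot }\) and \(\underline{y}\) both reduce to the scalar \(y\)), the predictive density \(\tilde{p} ^{( {\tilde{\pi }}_{{\alpha }} )} ( y; \varvec{X})\) should integrate to one in \(y\). The sketch below checks this numerically; the sample \(\varvec{X}\) and the value \({\alpha }= 2\) are illustrative:

```python
import math

def trapezoid(f, a, b, n=200_000):
    # composite trapezoidal rule
    h = (b - a) / n
    return ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))) * h

X = [1.2, 0.8, 2.5]            # illustrative observed sample
m, alpha, n_fut = len(X), 2.0, 1
X_sum, X_min = sum(X), min(X)

# normalizing constant from the closed form in the proof of Lemma 2
c = (math.gamma(n_fut + m + alpha - 1) / (n_fut + m)) / (math.gamma(m + alpha - 1) / m)

def p_tilde(y):
    # predictive density for a single future observation y (so y_. = min(y) = y)
    num = (X_sum - m * X_min) ** (m + alpha - 1)
    den = (y + X_sum - (n_fut + m) * min(y, X_min)) ** (n_fut + m + alpha - 1)
    return c * num / den

# integrate over y, splitting at the kink y = X_min; the polynomial tails are negligible
total = trapezoid(p_tilde, X_min - 1000.0, X_min) + trapezoid(p_tilde, X_min, X_min + 1000.0)
print(total)  # close to 1
```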


Cite this article

Hamura, Y., Kubokawa, T. Bayesian predictive density estimation with parametric constraints for the exponential distribution with unknown location. Metrika 85, 515–536 (2022). https://doi.org/10.1007/s00184-021-00840-3
