
On the distribution of burr with applications

Published in: Sankhya B

Abstract

Under certain conditions, the distribution of burr is shown to follow an extreme value distribution. In this context, a result on extremal processes based on stationary sequences is proved. Some data sets are analyzed, and applications of the results are indicated.


References

  • Dasgupta, R. 2006. Modeling of material wastage by Ornstein–Uhlenbeck process. Calcutta Statistical Association Bulletin 58:15–35.


  • Dasgupta, R., J.K. Ghosh, and N.T.V. Ranga Rao. 1981. A cutting model and distribution of ovality and related topics. In Proc. of the ISI golden jubilee conference, 182–204.

  • Galambos, J. 1987. The asymptotic theory of extreme order statistics, 2nd edn. Krieger.

  • Hüsler, J., and L. Peng. 2008. Review of testing issues in extremes: In honor of Professor Laurens de Haan. Extremes 11:99–111. doi:10.1007/s10687-007-0052-0.


  • Johnson, N., S. Kotz, and N. Balakrishnan. 1995. Continuous univariate distributions vol. 2. New York: Wiley.


  • Karlin, S., and H.M. Taylor. 1981. A second course in stochastic processes. London: Academic.


  • Marks, N.B. 2007. Kolmogorov-Smirnov test statistic and critical values for the Erlang-3 and Erlang-4 distributions. Journal of Applied Statistics 34(8):899–906.


  • Skrotzki, W., K. Kegler, R. Tamm, and C.-G. Oertel. 2005. Grain structure and texture of cast iron aluminides. Crystal Research and Technology 40(1/2):90–94. doi:10.1002/crat.200410311.


  • Zeevi, A., and P. Glynn. 2004. Estimating tail decay for stationary sequences via extreme values. Advances in Applied Probability 36(1):198–226.



Acknowledgements

Thanks are due to Professor J.K. Ghosh for interesting discussions, to Professor Debasis Sengupta for help with computer programming, to Mr. N.T.V. Ranga Rao for suggesting the problem, and to Mr. E.M. Vyasa for providing data. The referee’s suggestions improved the presentation.

Author information

Corresponding author

Correspondence to Ratan Dasgupta.

Appendix

Here we prove Theorem 2, extending A3 of Dasgupta et al. (1981). The earlier proof of Dasgupta et al. (1981) has to be suitably modified so as to extend the stated results from ‘iid continuous random variables’ to ‘stationary random variables’. The modified proof is elaborated below.

Assume that the distribution of the stationary sequence \(X_i\) is nondegenerate, as the proof is trivial otherwise.

Consider a fixed sequence \(i_o = i_o(m) = o(m)\). If the value of \(\max_{1\leq j\leq m} X_j\) is attained at a single index of X, then from stationarity of the random variables, \(P \{\cup_{i = 1}^{i_o}( X_i = \max_{1\leq j\leq m} X_j)\} = \frac{i_o}{m},\) since the index of X attaining the maximum value of X is uniformly distributed on the set \(\{1, \cdots, m\}\).

In a similar fashion, if the maximum is attained at two distinct values of the index, then the probability that both indices lie in the set \(\{1, \cdots, i_o\}\) is \(\frac{i_o(i_o-1)}{m (m-1)}\leq \frac{i_o}{m},\) and so on.

Thus, in general, the probability that at least one index with maximum value of X will lie within the set \(\{i_o + 1, \cdots, m\}\) is at least \((1 - \frac{i_o}{m}).\) In other words,

$$ P \{\cup_{i = i_o +1}^m( X_i = \max\nolimits_{1\leq j\leq m} X_j)\}\geq 1 - \frac{i_o}{m} \rightarrow 1, \;\mbox{as}\; m \rightarrow \infty. $$
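The uniform-index fact behind this bound can be checked numerically. The sketch below is an illustration only, not part of the proof: it simulates an iid Gaussian sequence (a special case of a stationary sequence with continuous marginals) and estimates the probability that the maximizing index falls in the initial block \(\{1, \cdots, i_o\}\), which should be close to \(i_o/m\). The function name is ours, chosen for the illustration.

```python
import random

def prob_argmax_in_prefix(m, i_o, trials=20000, seed=0):
    """Estimate P{argmax of X_1,...,X_m lies in {1,...,i_o}} by simulation.

    For an exchangeable (here: iid) sequence with continuous marginals,
    the maximizing index is uniform on {1,...,m}, so this probability
    equals i_o/m.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(m)]
        # 0-based argmax < i_o  <=>  1-based maximizing index <= i_o
        if xs.index(max(xs)) < i_o:
            hits += 1
    return hits / trials

est = prob_argmax_in_prefix(m=100, i_o=10)
print(est)  # close to i_o/m = 0.1
```

With \(i_o = o(m)\) this prefix probability vanishes, which is exactly what the displayed bound above uses.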

Now using the fact that {c i } is a nondecreasing sequence, one gets

$$ P \{\cup_{i = i_o +1}^m( c_i X_i = \max\nolimits_{1\leq j\leq m} c_j X_j)\}\geq 1 - \frac{i_o}{m} \rightarrow 1. $$

On the intersection of the above two events, we have from Eq. 4.1

$$ c_{i_o}\max\nolimits_{1\leq i\leq m} X_i\leq Y_m \leq \max\nolimits_{1\leq i\leq m} X_i. $$

Thus it suffices to show that

  (1)

    \(b_m^{-1}(1 - c_{i_o}) \max_{1\leq i\leq m} X_i\rightarrow 0,\;\;\; \mbox {in distribution}.\)

    Now from Eq. 4.2, \(Z_m = b_m^{-1}( \max_{1\leq i\leq m} X_i - a_m)\) is bounded in probability. Write,

    $$ b_m^{-1}(1 - c_{i_o}) \max\nolimits_{1\leq i\leq m} X_i = (1 - c_{i_o})(a_m b_m ^{-1}+ Z_m). $$

    Since \(c_{i_o}\rightarrow 1,\) it is sufficient to show that,

    $$ (1 - c_{i_o})|a_m b_m ^{-1}|\rightarrow 0, $$

    i.e., \( c(i_o(m)) = 1- o(|\;a_m^{-1}b_m|).\) Hence the first part of the theorem follows.

    To prove the second part, note that since (1 − c(k))f(k) = o(1), there exists a sequence h(k)→ ∞ such that

  (2)

    (1 − c(k))f(k)h(k) = o(1). (E.g., if (1 − c(k))f(k) ≤ ε k →0, then one may take \(h(k)= \epsilon_k^{-1/2},\) so that the product is at most \(\epsilon_k^{1/2}\rightarrow 0.\))

    Let n = n(k) be such that

  (3)

    f(n(k)) < f(k − 1)h(k − 1) ≤ f(n(k) + 1).

    Such a choice of n = n(k) is possible as \(f(i) \rightarrow \infty, \;\mbox {as}\;i \rightarrow \infty.\)

    (For a fixed k the middle term above is finite, and since f(n(k))→ ∞ forces n(k)→ ∞, the choice in (3) is ensured.)

    From the assumption \( \overline{\lim}_{i \rightarrow \infty}\frac{f(i\delta)}{f(i)} < \infty,\) for every fixed δ > 0, it follows that

  (4)

    \(k^{-1}n(k)\rightarrow \infty.\)

    To see this, suppose to the contrary that \(n(k)= O(k)\); divide the terms of (3) by f(k). Then both the l.h.s. and r.h.s. terms of (3) are bounded above in view of \( \overline{\lim}_{i \rightarrow \infty}\frac{f(i\delta)}{f(i)} < \infty,\) whereas the middle term → ∞, leading to a contradiction.

    Next, for m = 1, 2, ⋯ define k(m) = k if n(k) ≤ m < n(k + 1). Then

  (5)

    f(m) < f(n(k + 1)) < f(k)h(k); see (3). Now from (2),

  (6)

    \((1 - c_{k(m)})f(m)\rightarrow 0,\) where \(\frac{k(m)}{m}\leq \frac{k(m)}{n(k(m))}\rightarrow 0,\) from (4).

Hence the second part.
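The construction in (2)–(4) can be made concrete. The sketch below is an illustration only, with assumed choices f(i) = i and h(k) = √k (neither is prescribed by the proof): it computes n(k) from the defining inequality (3) and shows \(k^{-1}n(k)\) growing, consistent with (4). The helper name `n_of_k` is ours.

```python
import math

def n_of_k(k, f, h):
    """Return the n satisfying (3): f(n) < f(k-1)h(k-1) <= f(n+1),
    for a strictly increasing f with f(i) -> infinity."""
    target = f(k - 1) * h(k - 1)
    n = 1
    while f(n + 1) < target:
        n += 1
    return n

f = lambda i: float(i)       # assumed: f increasing, f(i) -> infinity
h = lambda k: math.sqrt(k)   # assumed: h(k) -> infinity

for k in [10, 100, 1000]:
    n = n_of_k(k, f, h)
    print(k, n, n / k)       # n(k)/k grows (here roughly like h(k))
```

For these choices n(k) ≈ (k − 1)√(k − 1), so the ratio n(k)/k diverges and hence k(m)/n(k(m)) → 0, as used in (6).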

We have a different bound for burr Y given in Eq. 4.6. The proof of Theorem 2 remains valid for every fixed j = 1, ⋯ ,p, as the sequence of constants \(\{c_{1j}, \cdots, c_{mj}\}\) satisfies the conditions therein. Theorem 2 therefore holds for the burr Y satisfying the bound of Eq. 4.6, as the minimum is taken over j, a finite number p of terms.


About this article

Cite this article

Dasgupta, R. On the distribution of burr with applications. Sankhya B 73, 1–19 (2011). https://doi.org/10.1007/s13571-011-0015-y

