
Identifiability issues in dynamic stress–strength modeling

Annals of the Institute of Statistical Mathematics

Abstract

In many real-life scenarios, system reliability depends on dynamic stress–strength interference, where strength degrades and stress accumulates concurrently over time. In other cases, shocks appear at random time points, causing damage that is effective only at the instant of shock arrival. In this paper, we consider the identifiability problem for a system under deterministic strength degradation and stochastic damage due to shocks arriving according to a homogeneous Poisson process. We provide conditions under which the models are identifiable with respect to lifetime data alone. We also consider current status data, suggest collecting additional information, and discuss the issues of model identifiability under different data configurations.
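The shock-driven model class described in the abstract can be made concrete with a small simulation. The sketch below is illustrative only and is not taken from the paper: it assumes linear deterministic strength degradation and iid exponential shock damages (both hypothetical choices), with shocks arriving as a homogeneous Poisson process and each shock's damage acting only at its arrival instant.

```python
import random

def simulate_failure_time(strength0=10.0, degradation_rate=0.1,
                          shock_rate=1.0, mean_damage=2.0,
                          horizon=100.0, rng=random):
    """One system lifetime: strength degrades linearly (an illustrative
    choice) while shocks arrive as a homogeneous Poisson process; each
    shock's exponential damage acts only at its arrival instant."""
    t = 0.0
    while t < horizon:
        t += rng.expovariate(shock_rate)           # next shock arrival
        strength = max(strength0 - degradation_rate * t, 0.0)
        if rng.expovariate(1.0 / mean_damage) >= strength:
            return t                               # stress >= strength
    return float('inf')                            # survived the horizon

def estimate_reliability(t0, trials=20000, seed=0):
    """Monte Carlo estimate of P(system survives beyond t0)."""
    rng = random.Random(seed)
    return sum(simulate_failure_time(rng=rng) > t0
               for _ in range(trials)) / trials
```

Since the same seed produces the same sample paths, the estimated reliability is non-increasing in `t0` by construction, matching the survival-function interpretation.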


References

  • Bhuyan, P., Dewanji, A. (2014). Dynamic stress–strength modeling with cumulative stress and strength degradation. http://www.isical.ac.in/~asu/TR/TechRepASU201407.pdf

  • Bhuyan, P., Dewanji, A. (2015). Reliability computation under dynamic stress strength modeling with cumulative stress and strength degradation. Communications in Statistics - Simulation and Computation. doi:10.1080/03610918.2015.1057288

  • Clifford, P. (1972). Non-threshold models of the survival of bacteria after irradiation. In Proceedings of 6th Berkeley Symposium on Mathematical Statistics and Probability (pp. 265–286). University of California Press.

  • Cornell, C. A. (1968). Engineering seismic risk analysis. Bulletin of the Seismological Society of America, 58, 1583–1606.


  • Esary, J. D., Marshall, A. W., Proschan, F. (1973). Shock models and wear processes. The Annals of Probability, 1, 617–649.

  • Gertsbakh, I. B., Kordonskiy, K. B. (1969). Models of Failure. New York: Springer.

  • Gil-Pelaez, J. (1951). Note on the inversion theorem. Biometrika, 38, 481–482.


  • Gupta, R. C., Ramakrishnan, S., Zhou, X. (1999). Point and interval estimation of \(P(X<Y)\): The normal case with common coefficient of variation. Annals of the Institute of Statistical Mathematics, 51, 571–584.

  • Guttman, I., Johnson, R. A., Bhattacharyya, G. K., Reiser, B. (1988). Confidence limits for stress–strength models with explanatory variables. Technometrics, 30, 166–168.

  • Kaplan, S. (1981). On the method of discrete probability distributions in risk and reliability calculations—application to seismic risk assessment. Risk Analysis, 1, 189–196.


  • Kapur, K. C., Lamberson, L. R. (1977). Reliability in Engineering Design. New York: John Wiley & Sons, Inc.

  • Kotz, S., Lumelskii, Y., Pensky, M. (2003). The Stress–Strength Model and its Generalizations. Singapore: World Scientific Publishing Co. Pte. Ltd.

  • Nakagawa, T. (2007). Shock and Damage Models in Reliability Theory. London: Springer.

  • Nakagawa, T., Osaki, S. (1974). Some aspects of damage models. Microelectronics and Reliability, 13, 253–257.

  • Puri, P. S. (1977). On certain problems involving nonidentifiability of distributions arising in stochastic modeling. Proceedings of an International Conference on Optimizing Methods in Statistics. Indian Institute of Technology, Bombay.

  • Puri, P. S. (1983). On identifiability problems among some stochastic models in reliability theory. Proceedings of Neyman–Kiefer Conference. University of California.

  • Ross, S. M. (1996). Stochastic Processes. New York: John Wiley & Sons Inc.


  • Rudin, W. (1976). Principles of Mathematical Analysis. New York: McGraw-Hill.


  • Shaked, M., Shanthikumar, J. G. (2007). Stochastic Orders. New York: Springer Science & Business Media LLC.

  • Simonoff, J. S., Hochberg, Y., Reiser, B. (1986). Alternative estimation procedure for \({P(X \le Y)}\) in categorized data. Biometrics, 42, 895–907.

  • Snyder, D. L., Miller, M. I. (1991). Random Point Processes in Time and Space. New York: Springer.

  • Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics Bulletin, 1, 80–83.


  • Xue, J., Yang, K. (1997). Upper and lower bounds of stress–strength interference reliability with random strength degradation. IEEE Transactions on Reliability, 46, 142–145.


Acknowledgments

The authors are thankful to Professor Subir Kumar Bhandari, and also the anonymous Associate Editor and Reviewer, for many helpful comments and suggestions.

Corresponding author

Correspondence to Prajamitra Bhuyan.

Appendix

Lemma 1

Suppose \(g(\cdot )\) is either a right-continuous or a left-continuous function, \(h(0)\ge 0\), and \(h(t)>0\) for all \(t>0\). Then \(\int \nolimits _{0}^{v}g(t) h(t)\mathrm{{d}}t=0\) for all \(v> 0\) implies \(g(t)=0\) for all \(t\ge 0\).

Proof

Let us first suppose that g(t) is a right-continuous function. Suppose, if possible, that \(g(t_{0})\ne 0 \) for some \(t_{0} \ge 0\); without loss of generality, let \(g(t_{0}) > 0\). Then, by right-continuity, there exists \(\delta > 0\) such that \(g(t) >0\) for all \(t\in \left[ t_{0},t_{0}+\delta \right) \). Then, from the following equation:

$$\begin{aligned} \int \nolimits _{0}^{t_{0}+\delta }g(t)h(t)\mathrm{{d}}t=\int \nolimits _{0}^{t_{0}}g(t)h(t)\mathrm{{d}}t+\int \nolimits _{t_{0}}^{t_{0}+\delta }g(t)h(t)\mathrm{{d}}t, \end{aligned}$$

we have \(\int \nolimits _{t_{0}}^{t_{0}+\delta }g(t)h(t)\mathrm{{d}}t=0\), which is a contradiction, since \(h(t) >0\) for all \(t>0\), and \(g(t) >0\), for all \(t\in \left[ t_{0},t_{0}+\delta \right) \). Hence, \(g(t)=0\), for all \(t\ge 0\). The proof is similar when g(t) is a left-continuous function. \(\square \)

Lemma 2

If f is a right-continuous function and g is a non-increasing left-continuous function, then \(f\circ g\) is left-continuous.

Proof

Let \(\{x_n\}\) be a sequence, such that \(x_n \rightarrow x\) as \(n \rightarrow \infty \) and \(x_n\le x\), for all \(n=1,2,\ldots \). Then, since g is left-continuous, \(g(x_n)\rightarrow g(x)\). In addition, since g is a non-increasing function, \(g(x_n)\ge g(x)\), for all \(n=1,2,\ldots \). Therefore, \(f(g(x_n))\rightarrow f(g(x))\), since f is a right-continuous function. By the sequential characterization of continuity (Rudin 1976, Theorem 4.2, p. 84), we conclude that \(f\circ g\) is left-continuous. \(\square \)
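A quick numerical illustration of Lemma 2, with hypothetical choices of f and g that are not from the paper: take f to be the floor function, which is right-continuous, and \(g(x)=1-x\), which is non-increasing and continuous (hence left-continuous). The composition jumps at \(x=0\) but remains left-continuous there, as the lemma predicts.

```python
import math

f = math.floor            # a right-continuous step function
g = lambda x: 1.0 - x     # non-increasing and (left-)continuous

def h(x):                 # f o g; Lemma 2 says this is left-continuous
    return f(g(x))

# h jumps at x = 0: h(0) = floor(1) = 1, while h(x) = 0 just to the right.
# Approaching 0 from the left keeps h at 1, matching h(0): left-continuity.
left_limits = [h(-10.0**-k) for k in range(3, 9)]
```

So h is left-continuous at the jump point even though it fails to be right-continuous there, which is exactly the one-sided continuity the lemma asserts.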

Result 1

If f is a left-continuous function and g is a non-increasing right-continuous function, then \(f\circ g\) is right-continuous.

Proof

Similar to the proof of Lemma 2 (see Bhuyan and Dewanji 2014). \(\square \)

Lemma 3

If V and W are two non-negative random variables, such that \(V <_{st} W\), and \(\eta (\cdot )\ge 0\) is a strictly decreasing function, then \(E[\eta (V)]>E[\eta (W)]\).

Proof

We first show that \(\eta (W) <_{st} \eta (V)\). We know that \({P[W>x]\ge P[V>x]}\), for all \({x\ge 0}\), and \(P[W>x_{0}]>P[V>x_{0}]\), for some \({x_{0}\ge } 0\). Now \({P[\eta (W)< x]}={P[W>\eta ^{-1}(x)]}\ge P[V>\eta ^{-1}(x)]=P{[\eta (V)<x]}\), for all \(x\ge 0\), and \(P[\eta (W)< \eta (x_{0})]=P[W>x_{0}]>P[V>x_{0}]=P{[\eta (V)<\eta (x_{0})]}\). Hence, \(\eta (W) <_{st} \eta (V)\).

Since \(\eta (W)\) and \(\eta (V)\) are non-negative random variables, we can write \(E[\eta (V)]-E[\eta (W)]=\int \nolimits _{0}^{\infty }\{P[\eta (V)>x]-P[\eta (W)>x]\}\mathrm{{d}}x\). We know that \({P[\eta (V)>x]-}P[\eta (W)>x]\) is a right-continuous function and \(P[\eta (V)>y_{0}]-P[\eta (W)>y_{0}]>0\) for some \(y_{0} \ge 0\). Therefore, there exists \(\delta >0\), such that \(P[\eta (V)>x]-P[\eta (W)>x]>0\) for all \(y_{0}\le x<y_{0}+\delta \). Hence, \(E[\eta (V)]>E[\eta (W)]\). \(\square \)
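Lemma 3 can be sanity-checked numerically with hypothetical choices not taken from the paper: \(V \sim \mathrm{Exp}(2)\) is stochastically smaller than \(W \sim \mathrm{Exp}(1)\), since \(P(V>x)=e^{-2x}<e^{-x}=P(W>x)\) for \(x>0\), and \(\eta (t)=e^{-t}\) is non-negative and strictly decreasing. The closed forms give \(E[\eta (V)]=2/3>1/2=E[\eta (W)]\), and a Monte Carlo estimate agrees.

```python
import math
import random

def mean_eta_exp(rate, trials=100_000, seed=1):
    """Monte Carlo estimate of E[exp(-X)] for X ~ Exponential(rate);
    the closed form is rate / (rate + 1)."""
    rng = random.Random(seed)
    return sum(math.exp(-rng.expovariate(rate))
               for _ in range(trials)) / trials

e_eta_V = mean_eta_exp(2.0)  # closed form 2/3; V is st. smaller than W
e_eta_W = mean_eta_exp(1.0)  # closed form 1/2
```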

Lemma 4

Suppose X is a non-negative random variable such that \(G(x-)=P(X<x)\) is a strictly increasing function of \(x>0\). Then \(G^{(n)}(x-)\), where \(G^{(n)}\) denotes the \(n\)-fold convolution of G, is a strictly decreasing function of n, for all \({x>0}\), where \(n=0,1,\ldots \).

Proof

We prove this result by the method of induction. Let us first fix some arbitrary \({x_{0}>0}\). Since \(G(x-)\) is strictly increasing in x and \(\lim \nolimits _{x\rightarrow \infty }G(x-)=1\), \({G(x_{0}-) < G^{(0)}(x_{0}-)=1}\). Now

$$\begin{aligned} G^{(2)}(x_{0}-) &= \int \nolimits _{[0,x_{0})}G(x_{0}-t-)dG(t) \\ &< \int \nolimits _{[0,x_{0})}G(x_{0}-)dG(t) \\ &= \{G(x_{0}-)\}^{2} \\ &\le G(x_{0}-). \end{aligned}$$

By definition, \(G^{(2)}(x-)\) is also strictly increasing in \(x>0\), since \(G(x-)\) is. Similarly, it is easy to see that \(G^{(n)}(x-)\) is also strictly increasing in \(x>0\), for all \(n=1,2,\ldots \). Then

$$\begin{aligned} G^{(n+1)}(x_{0}-) &= \int \nolimits _{[0,x_{0})}G^{(n)}(x_{0}-t-)dG(t) \\ &< \int \nolimits _{[0,x_{0})}G^{(n)}(x_{0}-)dG(t) \\ &= G^{(n)}(x_{0}-)G(x_{0}-) \\ &\le G^{(n)}(x_{0}-). \end{aligned}$$

Therefore, by induction, we conclude that \(G^{(n)}(x-)\) is a strictly decreasing function of n, for all \(x>0\). \(\square \)
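As an illustrative check of Lemma 4 on a hypothetical distribution not taken from the paper: for iid Uniform(0, 1) summands and \(x\in (0,1]\), the Irwin–Hall formula gives \(G^{(n)}(x-)=x^{n}/n!\), which is strictly decreasing in n for each fixed such x (with the convention \(G^{(0)}(x-)=1\)). The check is confined to the sub-range \((0,1]\), where \(G(x-)=x\) is strictly increasing.

```python
import math

def Gn_minus(x, n):
    """G^(n)(x-) = P(X_1 + ... + X_n < x) for iid X_i ~ Uniform(0, 1),
    valid for 0 < x <= 1 (Irwin-Hall); G^(0)(x-) = 1 by convention."""
    return 1.0 if n == 0 else x ** n / math.factorial(n)

x0 = 0.8
values = [Gn_minus(x0, n) for n in range(6)]
# Lemma 4 predicts a strictly decreasing sequence in n for fixed x0 > 0.
```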

Theorem 1

Let \(X_{1},X_{2},\ldots \) and \(Y_{1},Y_{2},\ldots \) be two sequences of iid non-negative random variables with common cdfs \(F\in \varPi ^{d}\) and \(H\in \varPi ^{d}\), respectively. If \(X_{1}\ne _{st}Y_{1}\), then, for all \(\alpha >0\), either there exists \(u_{0} \in (0,\alpha )\), such that, without loss of generality, \(P[X_{1}<u_{0}]>P[Y_{1}<u_{0}]\) and \(P[\sum \nolimits _{i=1}^{n} X_{i}<u_{0}]\ge P[\sum \nolimits _{i=1}^{n} Y_{i}<u_{0}]\) for all \(n=2,3,\ldots \), or for all \(u \in (0,\alpha ]\), \(P[Y_{1}<u]=P[X_{1}<u]\).

Proof

Fix \(\alpha >0\). Let \(X_{1}\) and \(Y_{1}\) take values \(x_{1}, x_{2},\ldots \) and \(y_{1}, y_{2},\ldots \), respectively. Let us write \({v_{0}=\inf \{x \ge 0 :P(X_{1}<x)\ne P(Y_{1}<x)\}}\). Note that, since the mass points of both F and H are isolated, this \(v_{0}\) is a mass point of either F or H, but \(P[X_{1}<v_{0}]= P[Y_{1}<v_{0}]\). Therefore, \(v_0\) is the first point at which the masses of F and H differ, and all the mass points of F and H smaller than \(v_0\) are common with equal masses. Suppose \(z_1,\ldots ,z_k\) are the common mass points of F and H smaller than \(v_0\). If \(v_{0}\ge \alpha \), then \({P[Y_{1}<u]=P[X_{1}<u]}\) for all \(u \in (0,\alpha ]\). If \(v_{0} < \alpha \), then we define \({u_{0}=v_{0}+w_{0}}\), where \(w_{0}=\left[ \left\{ \min \left( \left( v_{0},\infty \right) \cap \{x_{i},y_{i}:i=1,2,\ldots \}\right) \wedge \alpha \right\} -v_{0}\right] /2\). Note that the set \({\{x_{i},y_{i}:i=1,2,\ldots \}}\) has no limit point; hence, \((v_{0},\infty )\cap \{x_{i},y_{i}:i=1,2,\ldots \}\) is a closed and non-empty set, and the minimum is well defined. Since \(u_0>v_0\), we have \(P[X_{1}<u_{0}]\ne P[Y_{1}<u_{0}]\). We assume, without loss of generality, \({P[X_{1}<u_{0}]>P[Y_{1}<u_{0}]}\); therefore, \(P[X_1=v_0]>P[Y_1=v_0]\). Let us consider the set \(S=\{z_1,\ldots ,z_k,v_0\}\). Then,

$$\begin{aligned} P\left[ \sum _{i=1}^{n} X_{i}<u_{0}\right] &= \sum \limits _{\{(l_{1},\ldots , l_{n}):\,l_{i}\in S,\; l_{1}+\cdots +l_{n}<u_{0}\}}\prod \limits _{i=1}^{n}P[X_{i}=l_{i}] \\ &\ge \sum \limits _{\{(l_{1},\ldots , l_{n}):\,l_{i}\in S,\; l_{1}+\cdots +l_{n}<u_{0}\}}\prod \limits _{i=1}^{n}P[Y_{i}=l_{i}] \\ &= P\left[ \sum _{i=1}^{n} Y_{i}<u_{0}\right] , \end{aligned}$$

for all \(n=2,3,\ldots \). Hence, the proof. \(\square \)
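Theorem 1 can be verified exactly for a small hypothetical example not taken from the paper: let X put masses 0.3, 0.4, 0.3 and Y put masses 0.3, 0.2, 0.5 on the common isolated support \(\{0,1,3\}\). Then \(v_{0}=1\) and, for large \(\alpha \), \(u_{0}=2\), with \(P[X_{1}<2]=0.7>0.5=P[Y_{1}<2]\); enumerating the support confirms the inequality for the partial sums.

```python
from itertools import product

pX = {0: 0.3, 1: 0.4, 3: 0.3}   # hypothetical masses on isolated points
pY = {0: 0.3, 1: 0.2, 3: 0.5}   # first differing mass point is v0 = 1

def p_sum_below(p, n, u):
    """Exact P(X_1 + ... + X_n < u) by enumerating the finite support."""
    total = 0.0
    for vals in product(p, repeat=n):
        if sum(vals) < u:
            prob = 1.0
            for v in vals:
                prob *= p[v]
            total += prob
    return total

u0 = 2.0   # u0 = v0 + w0 with v0 = 1 and next support point 3
```

For n = 2 the exact values are 0.33 for the X-sum and 0.21 for the Y-sum, so the inequality is strict here rather than merely non-strict.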

Theorem 2

Let \(X_{1},X_{2},\ldots \) and \(Y_{1},Y_{2},\ldots \) be two sequences of iid non-negative random variables with common cdfs \(F(\cdot )\in \varPi ^{C}\) and \(G(\cdot )\in \varPi ^{C}\), respectively. If \(X_{1}\ne _{st}Y_{1}\), then there exists \(x_{0}>0\), such that, for all \(u \in (0,x_{0})\), without loss of generality, \(P[X_{1}<u]>P[Y_{1}<u]\) and \(P[\sum \nolimits _{i=1}^{n} X_{i}<u]\ge P[\sum \nolimits _{i=1}^{n} Y_{i}<u]\), for all \(n=2,3,\ldots \).

Proof

If \(E_{G,H}\), as defined above, is empty, then, without loss of generality, we have \({P[X_{1}<x] > P[Y_{1}<x]}\) for all \({x>0}\); that is, \({X_{1}<_{st} Y_{1}}\). Now, by Theorem 1.A.3 of Shaked and Shanthikumar (2007, p. 6), we get \({\sum \nolimits _{i=1}^{n}X_{i}\le _{st} \sum \nolimits _{i=1}^{n}Y_{i}}\) for all \(n=2,3,\ldots \) and, hence, \({P[\sum \nolimits _{i=1}^{n}X_{i}< x] \ge P[\sum \nolimits _{i=1}^{n}Y_{i}<x]}\) for all \(x>0\) and for all \(n=2,3,\ldots \).

If \(E_{G,H}\) is non-empty, let us write \(x_{0}=\min \{x: x\in E_{G,H}\}>0\). Since \(E_{G,H}\) is a closed set, this minimum \(x_{0}\) exists. Let us define random variables \(X^{*}_{i}\) and \(Y^{*}_{i}\), \(i=1,2,\ldots \), with probability distributions given by \({P[X^{*}_{i}< x]=P[X_{i}< x]/P[X_{i}\le x_{0}]}\) and \({P[Y^{*}_{i}< x]=P[Y_{i}< x]/P[Y_{i}\le x_{0}]}\), respectively, for all \({0< x \le x_{0}}\), and \({P[X^{*}_{i}< x]=P[Y^{*}_{i}< x]=1}\) for all \(x> x_{0}\). Note that \({P[X_{1}\le x_{0}]=P[X_{1}<x_{0}]=P[Y_{1}< x_{0}]=P[Y_{1}\le x_{0}]>0}\). Since \(P[X_{1}<x]-P[Y_{1}<x]\) is a continuous function, using Theorem 4.23 of Rudin (1976, p. 93), without loss of generality, we have \({P[X_{1}<x] > P[Y_{1}<x]}\) for all \(x\in (0,x_{0})\). Hence, \({P[X^{*}_{1}<x] > P[Y^{*}_{1}<x]}\) for all \(x\in (0,x_{0})\), that is, \(X^{*}_{1}<_{st}Y^{*}_{1}\). Now, by Theorem 1.A.3 of Shaked and Shanthikumar (2007, p. 6), we get \(\sum \nolimits _{i=1}^{n}X^{*}_{i}\le _{st} \sum \nolimits _{i=1}^{n}Y^{*}_{i}\) for all \({n=2,3,\ldots }\), and hence, \({P[\sum \nolimits _{i=1}^{n}X^{*}_{i}< x] \ge P[\sum \nolimits _{i=1}^{n}Y^{*}_{i}<x]}\) for all \(x>0\) and for all \({n=2,3,\ldots }\). Note that this theorem is not applicable to the original variables, since \(P[X_{1}<x]>P[Y_{1}<x]\) holds for \(x\in (0,x_{0})\) only, not on the entire support. Now, \({P[\sum \nolimits _{i=1}^{n}X_{i}<x]=\{P[X_{1}\le x_{0}]\}^{n}P[\sum \nolimits _{i=1}^{n}X^{*}_{i}<x]}\) and \({P[\sum \nolimits _{i=1}^{n}Y_{i}<x]=\{P[Y_{1}\le x_{0}]\}^{n}P[\sum \nolimits _{i=1}^{n}Y^{*}_{i}<x]}\), for all \(0<x\le x_{0}\) and for all \({n=2,3,\ldots }\). Therefore, \({P[\sum \nolimits _{i=1}^{n}X_{i}< x] \ge P[\sum \nolimits _{i=1}^{n}Y_{i}<x]}\) for all \(x\in (0,x_{0})\) and for all \({n=2,3,\ldots }\). Hence, the proof. \(\square \)
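The conclusion of Theorem 2 can be checked by simulation on a hypothetical pair of continuous distributions whose cdfs cross (choices not from the paper): for \(X \sim \mathrm{Uniform}(0,1)\) and \(Y \sim \mathrm{Beta}(2,2)\), \(P[X<x]=x\) exceeds \(P[Y<x]=3x^{2}-2x^{3}\) exactly on \((0,1/2)\), so \(x_{0}=1/2\), and for any u in \((0,x_{0})\) the partial sums should be ordered as the theorem states.

```python
import random

def p_sum_below(sampler, n, u, trials=100_000, seed=2):
    """Monte Carlo estimate of P(X_1 + ... + X_n < u) for iid draws."""
    rng = random.Random(seed)
    return sum(sum(sampler(rng) for _ in range(n)) < u
               for _ in range(trials)) / trials

unif = lambda rng: rng.random()                  # X ~ Uniform(0, 1)
beta22 = lambda rng: rng.betavariate(2.0, 2.0)   # Y ~ Beta(2, 2)

u = 0.3                       # any u in (0, x0) = (0, 1/2) should work
p_x2 = p_sum_below(unif, 2, u)    # exact value u**2 / 2 = 0.045
p_y2 = p_sum_below(beta22, 2, u)  # strictly smaller near the origin
```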

Corollary 1

Let \(X_{1},X_{2},\ldots \) and \(Y_{1},Y_{2},\ldots \) be two sequences of iid non-negative random variables with common distributions \(P[X_{1}\le x]=\sum \nolimits _{j=0}^{\infty }\alpha _{j}x^{j}\) and \(P[Y_{1}\le x]=\sum \nolimits _{j=0}^{\infty }\beta _{j}x^{j}\), respectively. If \(X_{1}\ne _{st}Y_{1}\), then there exists \(x_{0}>0\), such that, for all \(u \in (0,x_{0})\), without loss of generality, \(P[X_{1}<u]>P[Y_{1}<u]\) and \(P[\sum \nolimits _{i=1}^{n} X_{i}<u]\ge P[\sum \nolimits _{i=1}^{n} Y_{i}<u]\), for all \(n=2,3,\ldots \).

Proof

Let us consider the set \(A=\{x>0 :P(X_{1}<x)=P(Y_{1}<x)\}\). Note that \(P[X_{1}<x]\) and \(P[Y_{1}<x]\) are continuous and strictly increasing functions for all \(x>0\).

If \(A=\emptyset \), then A is trivially closed. If \(A\ne \emptyset \) and the set of limit points of A is also non-empty, then, by Theorem 8.5 of Rudin (1976, p. 177), we have \(\alpha _{j}=\beta _{j}\) for all \(j=0,1,\ldots \), contradicting the fact that \(X_{1}\ne _{st}Y_{1}\). Therefore, A has no limit point, which implies that A is a closed set.

In both cases, the distributions of \(X_{1}\) and \(Y_{1}\) belong to \(\varPi ^{C}\), and Corollary 1 follows from Theorem 2. \(\square \)

About this article


Cite this article

Bhuyan, P., Mitra, M. & Dewanji, A. Identifiability issues in dynamic stress–strength modeling. Ann Inst Stat Math 70, 63–81 (2018). https://doi.org/10.1007/s10463-016-0579-4
