# Parameter inference from hitting times for perturbed Brownian motion


## Abstract

A latent internal process describes the state of some system, e.g. the social tension in a political conflict, the strength of an industrial component or the health status of a person. When this process reaches a predefined threshold, the process terminates and an observable event occurs, e.g. the political conflict finishes, the industrial component breaks down or the person dies. Imagine an intervention, e.g., a political decision, maintenance of a component or a medical treatment, is initiated to the process before the event occurs. How can we evaluate whether the intervention had an effect? To answer this question we describe the effect of the intervention through parameter changes of the law governing the internal process. Then, the time interval between the start of the process and the final event is divided into two subintervals: the time from the start to the instant of intervention, denoted by \(S\), and the time between the intervention and the threshold crossing, denoted by \(R\). The first question studied here is: What is the joint distribution of \((S,R)\)? The theoretical expressions are provided and serve as a basis to answer the main question: Can we estimate the parameters of the model from observations of \(S\) and \(R\) and compare them statistically? Maximum likelihood estimators are calculated and applied on simulated data under the assumption that the process before and after the intervention is described by the same type of model, i.e. a Brownian motion, but with different parameters. Also covariates and handling of censored observations are incorporated into the statistical model, and the method is illustrated on lung cancer data.

## Keywords

First passage times · Maximum likelihood estimation · Wiener process · Degradation process · Effect of intervention · Survival analysis

## 1 Introduction

Statistical inference for univariate stochastic processes from observations of hitting times, i.e. epochs when the process attains a boundary for the first time, is a common problem, see Lee and Whitmore (2006) and references therein. Here we investigate its specific variant for perturbed stochastic processes and discuss it in a general setting, presenting some of the fields in which this methodology can be applied. At a known time instant, either controlled by an experimentalist or induced by an independent external condition, an intervention is applied and the time to a given event following the intervention is measured. Assume that the intervention causes a change in the parameters of the underlying process. This scenario can be found in many fields, such as reliability theory, social sciences, finance, biology or medicine. The time course of the intervention can be interpreted as a time-varying explanatory factor in a threshold regression. Also constant and time-varying covariates can be incorporated into the underlying parametric model for the stochastic process, in the spirit of Lee et al. (2008, 2010).

A degradation process in a medical context is commonly modeled as an intrinsic, but not observable, diffusion stochastic process. With this interpretation, our model takes into account an abrupt change of medication or life style before an observable event takes place. For example, in Commenges and Hejblum (2013) the event is myocardial infarction or coronary heart disease and the degradation is the atheromatous process, which is modeled as a Brownian motion with drift, where the drift is a function of explanatory variables. Lee et al. (2008) use a time scale transformation to accommodate treatment switching in clinical trials: the total survival time from randomization is a linear combination of two event times, randomization-to-switch and switch-to-death. Here we keep the original times, but instead model the switching by a change in the drifts, which introduces a dependence structure between the two times. The interpretation in our model is that the underlying Wiener process is a model of a deterioration process, and the intervention either accelerates or slows down the risk process. Lee et al. (2010) propose a Markov threshold regression model for time-varying covariates. The model decomposes the complete longitudinal process of a subject into a series of shorter processes based on times at which observed covariates change in value. Between two consecutive measurements, the latent process describing the health status of a subject is then approximated by a function of the observed covariates. In this paper we do not assume access to the time-course of the covariates, and the latent process is estimated only through the observed times before and after the intervention.

Similarly to the survival context in medicine, for analysing the reliability of technical systems it is important to investigate damage processes. A common model is the Wiener process (Whitmore 1995; Whitmore and Schenkelberg 1997; Whitmore et al. 1998, 2012; Kahle and Lehmann 1998). In Pieper et al. (1997), changing drifts of Wiener processes describe various stress levels for a damage process. Doksum and Hoyland (1992) use a Gaussian process and the inverse Gaussian distribution (IGD) to discuss a lifetime model under a step-stress accelerated life test. Nelson (2008) discusses practical issues when conducting an accelerated life test. Yu (2003) proposes a systematic approach to the classification problem where the products’ degradation paths satisfy Wiener processes. Our model fits into the above framework as follows. The degradation of a component is modeled by a Wiener process with failure corresponding to the first crossing of a certain level. The time of maintenance is independent of the time since the last repair, and the maintenance changes the parameters of the Wiener process. Then, from measurements of the time from the last repair to the time of maintenance and from the maintenance to failure, we deduce the effect of the maintenance on the system.

Lancaster (1972) makes effective use of the IGD in describing data on duration of strikes in the UK between 1965 and 1972. The approach is via the first passage time (FPT) of an underlying Wiener process, which follows an IGD, and has also been used by Harrison and Stewart (1993) and Desmond and Yang (2011). Again, the model studied in this paper can fit this scenario. Imagine that during a strike an important offer towards the strikers is proposed. The process after the offer may then evolve on a different scale.

In neuroscience, the interval between two consecutive action potentials is often studied, as it relates to information transfer in neurons. The Wiener process is sometimes chosen to model the subthreshold membrane potential evolution of the neuron (Gerstein and Mandelbrot 1964), and parameter estimation has been investigated (Lansky and Ditlevsen 2008). In many experiments, a stimulation (the intervention) such as a sound or a visual image is presented and the changes in electrical activity of the neuron are measured. Estimation from observations of the last action potential before the intervention and the next one following it, also in the presence of a delayed response to the stimulus, has been investigated (Tamborrino et al. 2012, 2013). The current model also fits this framework.

The aim of this paper is to solve two problems. The first is the investigation of the joint distribution of the subintervals up to the instant of intervention, and between the intervention and the first crossing after it. This is needed for the second problem, namely the estimation of the parameters of the process before and after the intervention and testing their equality. This allows us to judge statistically whether an intervention has the intended or expected effect and to quantify its size, by comparing latent processes before and after intervention within subjects. The proposed modeling framework can then serve as an alternative to standard survival models, where placebo groups in a medical context have to be included in a randomized experiment to evaluate the effect of treatment. Obviously, in our model, the time to treatment and time to failure are dependent, and the statistical inference is complicated by not observing the position of the process at the time of intervention. Further complications arise in the presence of censoring or truncation. Right censoring occurs if the event does not happen before the end of the study, which often occurs in medical studies, as in the example above, where a patient does not die before the end of the study or is lost to follow-up. Also left censoring has to be accounted for if the time of diagnosis or disease onset is unknown. Another type of missing data can occur if the event happens before the intervention, e.g. a strike ends without any political intervention or a patient dies before the beginning of a treatment. With a slight abuse of notation we will call this truncation. These schemes can easily be incorporated into the likelihood, as long as data are available. This can be a problem under truncation: if the study is started at the time of intervention, then the study population is defined as those subjects who receive the intervention, and data from before are collected retrospectively. Then it is not well-defined how many study subjects have an event before the intervention. This can bias the estimates of parameters governing the process before intervention, as will be illustrated on a data set on lung cancer. This will typically be a problem in medical studies, but not in the strike example, where for example "strikes in the UK between 1965 and 1972" is well-defined. In the neuroscience example, neither censoring nor truncation will be relevant, because the observation period typically includes many spikes both before and after the intervention, and thus the interval containing the intervention is always fully observed.

The main contributions of the paper are the solutions to these questions in the case of a perturbed Brownian motion. A detailed guideline on how to carry out both simulation of the data and parameter estimation in the computing environment **R** (R Development Core Team 2011) is presented (see Appendices 2 and 3). Using the derived theoretical expressions, estimation could be carried out for more complicated diffusion processes.

In Sect. 2 the type of experimental data together with a description of the involved quantities and variables are presented. In Sect. 3 we describe the model, mathematically define the quantities of interest and derive the probability densities for a general diffusion process. The Brownian motion model under different assumptions on its parameters is treated in Sect. 4. The estimation procedure, accommodating covariates as well as right and left censored and truncated data, is described in Sect. 5. The performance of the maximum likelihood estimators and testing the difference between parameters are illustrated in Sect. 6 on simulated data, and finally the Veterans’ Administration lung cancer data set taken from Kalbfleisch and Prentice (1980) is analyzed in Sect. 7 and compared to previous analyses.

## 2 Data

## 3 Model and its properties

### 3.1 Probability densities of \(S\), \(X(0)\), \(R\) and \((S,R)\)

## 4 The Wiener process

### 4.1 Special case: squared diffusion coefficients proportional to the drifts

## 5 Parameter estimation

The maximum likelihood estimator \(\hat{\phi }=(\hat{\beta },\hat{\sigma }^2_1, \hat{\sigma }^2_2)\) is found by numerically maximizing (18) (see Appendix 3 for a detailed description). An approximate 95 % confidence interval (CI) for \(\phi _i\) is given by \(\hat{\phi }_i \pm 1.96\ \text{SE}(\hat{\phi }_i)\), where \(\text{SE}\) is the asymptotic standard error given by \(\text{SE}(\hat{\phi }_i)=\sqrt{I_{ii}(\hat{\phi })^{-1}/n}\) and \(I(\phi )\) is the Fisher information matrix (Cramer 1946), which we approximate numerically (see Appendix 3). To test the hypothesis \(H_0:\mu _1=\mu _2\) we perform a likelihood ratio test at a 5 % significance level, evaluating the test statistic in a chi-squared distribution with \(m\) degrees of freedom. The test statistic is \(-2\log [ L_0(\hat{\phi }_0)/L_\mathrm{full}(\hat{\phi })]\), where \(L_0\) and \(L_\mathrm{full}\) denote the likelihood functions of the null and full (alternative) model evaluated in the estimated parameters \(\hat{\phi }_0=(\hat{\mu },\hat{\sigma }_1^2,\hat{\sigma }^2_2)\) and \(\hat{\phi }=(\hat{\mu }_1,\hat{\mu }_2,\hat{\sigma }_1^2,\hat{\sigma }^2_2)\) under the hypotheses \(\mu =\mu _1=\mu _2\) (corresponding to \( \beta _{p+1}= \cdots = \beta _{p+m}=0\)) and \(\mu _1\ne \mu _2\), respectively.
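The workflow just described (numerical likelihood maximization, asymptotic SEs from the inverse of a numerically approximated information matrix, and a likelihood ratio test of \(H_0:\mu_1=\mu_2\)) can be sketched in code. The paper's own implementation is in **R** (Appendices 2 and 3); the following is only an illustrative Python sketch for the simpler case of two *independent* samples of first passage times of a Wiener process through a level \(b\), so it ignores the dependence between \(S\) and \(R\) in the actual model, and all parameter names and values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def ig_loglik(t, mu, sigma2, b=1.0):
    """Log-likelihood of FPTs of a Wiener process with drift mu > 0 and
    variance sigma2 through level b; T ~ IG(mean b/mu, shape b^2/sigma2)."""
    m, lam = b / mu, b**2 / sigma2
    return np.sum(0.5 * np.log(lam / (2 * np.pi * t**3))
                  - lam * (t - m)**2 / (2 * m**2 * t))

def fit_ig(t, b=1.0):
    """Closed-form MLEs of the IG parameters, mapped back to (mu, sigma2)."""
    m_hat = t.mean()
    lam_hat = len(t) / np.sum(1.0 / t - 1.0 / m_hat)
    return b / m_hat, b**2 / lam_hat

def sample_ig(m, lam, size, rng):
    """Michael-Schucany-Haas sampler for the inverse Gaussian distribution."""
    nu = rng.standard_normal(size)**2
    x = m + m**2 * nu / (2 * lam) - m / (2 * lam) * np.sqrt(
        4 * m * lam * nu + m**2 * nu**2)
    return np.where(rng.uniform(size=size) <= m / (m + x), x, m**2 / x)

def wald_se(t, mu_hat, s2_hat, b=1.0, h=1e-4):
    """Asymptotic SEs from the inverse observed information,
    with the Hessian of the log-likelihood approximated numerically."""
    def ll(p):
        return ig_loglik(t, p[0], p[1], b)
    p = np.array([mu_hat, s2_hat])
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            ei, ej = np.eye(2)[i] * h, np.eye(2)[j] * h
            H[i, j] = (ll(p + ei + ej) - ll(p + ei - ej)
                       - ll(p - ei + ej) + ll(p - ei - ej)) / (4 * h * h)
    return np.sqrt(np.diag(np.linalg.inv(-H)))

def lr_test_equal_drifts(t1, t2, b=1.0):
    """-2 log(L0/Lfull) for H0: mu1 = mu2 with separate variances;
    compare with the chi-squared(1) 95% quantile 3.84."""
    mu1, s1 = fit_ig(t1, b)
    mu2, s2 = fit_ig(t2, b)
    ll_full = ig_loglik(t1, mu1, s1, b) + ig_loglik(t2, mu2, s2, b)

    def nll0(p):  # common drift p[0], separate variances p[1], p[2]
        return -(ig_loglik(t1, p[0], p[1], b) + ig_loglik(t2, p[0], p[2], b))

    res = minimize(nll0, x0=[(mu1 + mu2) / 2, s1, s2],
                   bounds=[(1e-6, None)] * 3)
    return 2 * (ll_full + res.fun)
```

A Wald CI for \(\mu\) is then `mu_hat ± 1.96 * se`, mirroring the formula in the text; the LR statistic is referred to a chi-squared distribution whose degrees of freedom equal the number of drift restrictions.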

In the following, the performance of the estimators is checked on simulated data in a simple set-up, both without and with right censoring, and then on a data set with a more complicated structure incorporating covariate effects: the Veterans’ Administration lung cancer data set taken from Kalbfleisch and Prentice (1980), which is analyzed and whose results are compared with previous analyses.

## 6 Monte Carlo simulation study

Here we briefly summarize the main results from the simulation study. An extended treatment and further figures can be found in the online material accompanying the paper. In the simulations we are mainly concerned with illustrating the performance of the estimators. It is of interest to evaluate the effect of the variability and correlation of \(S\) and \(R\) on estimation, to evaluate sample sizes needed for the asymptotic results of tests and CIs to be valid, to illustrate different special submodels which simplify estimation, and finally to evaluate how much information is gained on parameters of \(S\) by taking into account observations of \(R\).

**Table 1** Averages, empirical and asymptotic SEs and CPs in percentage over 1,000 estimates of \(\phi =(\mu _1,\sigma _1^2,\mu _2,\sigma _2^2)\) for \(n=100\), when \(\mu _1=1, \sigma _1^2=0.4, \mu _2=0.1\), and \(\sigma _2^2 = 0.026, 0.059, 0.094\), or 0.131, yielding an approximate \(\text{CV}(R)=0.60, 0.65, 0.70\) or 0.75, respectively

| CV(R) | Average of \(\hat{\mu }_1\) | Empirical \(\text{SE}(\hat{\mu }_1)\) | Asymptotic \(\text{SE}(\hat{\mu }_1)\) | \(\text{CP}(\hat{\mu }_1)\) | Average of \(\hat{\sigma }_1^2\) | Empirical \(\text{SE}(\hat{\sigma }_1^2)\) | Asymptotic \(\text{SE}(\hat{\sigma }_1^2)\) | \(\text{CP}(\hat{\sigma }_1^2)\) |
|---|---|---|---|---|---|---|---|---|
| 0.60 | 0.9998 | 0.0405 | 0.0397 | 94.7 | 0.3996 | 0.1079 | 0.1027 | 91.6 |
| 0.65 | 1.0020 | 0.0438 | 0.0428 | 93.7 | 0.4016 | 0.1213 | 0.1154 | 91.3 |
| 0.70 | 1.0023 | 0.0468 | 0.0441 | 94.5 | 0.3983 | 0.1315 | 0.1198 | 91.8 |
| 0.75 | 1.0020 | 0.0458 | 0.0449 | 94.9 | 0.3989 | 0.1388 | 0.1251 | 91.4 |

| CV(R) | Average of \(\hat{\mu }_2\) | Empirical \(\text{SE}(\hat{\mu }_2)\) | Asymptotic \(\text{SE}(\hat{\mu }_2)\) | \(\text{CP}(\hat{\mu }_2)\) | Average of \(\hat{\sigma }_2^2\) | Empirical \(\text{SE}(\hat{\sigma }_2^2)\) | Asymptotic \(\text{SE}(\hat{\sigma }_2^2)\) | \(\text{CP}(\hat{\sigma }_2^2)\) |
|---|---|---|---|---|---|---|---|---|
| 0.60 | 0.1003 | 0.0032 | 0.0032 | 94.8 | 0.0256 | 0.0083 | 0.0080 | 92.7 |
| 0.65 | 0.1001 | 0.0044 | 0.0043 | 93.7 | 0.0578 | 0.0154 | 0.0145 | 91.9 |
| 0.70 | 0.1000 | 0.0053 | 0.0051 | 93.7 | 0.0926 | 0.0221 | 0.0212 | 92.1 |
| 0.75 | 0.1001 | 0.0058 | 0.0058 | 95.5 | 0.1290 | 0.0288 | 0.0278 | 92.9 |

*Parameters vary freely* Details about the settings of parameters, sample sizes and number of repetitions can be found in the online material, and are also given in Table 1, where averages and empirical SEs of the estimates, as well as medians of the asymptotic SEs and the coverage probabilities of the CIs are reported. All estimators appear unbiased and with acceptable SEs. Not surprisingly, the performance improves when the CV of \(R\) decreases. This holds also for \(\hat{\mu }_1\) and \(\hat{\sigma }_1^2\), highlighting the dependence between \(S\) and \(R\): a large variability after the intervention deteriorates estimation of parameters governing the process before the intervention. Coverage probabilities of drift parameters are close to the desired 95 %, whereas the diffusion parameters \(\sigma _1^2\) and \(\sigma _2^2\) need a larger \(n\).
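The coverage probabilities (CPs) reported in Table 1 are obtained by repeating estimation over many simulated data sets and counting how often the 95 % CI contains the true parameter. A minimal sketch of that procedure, assuming a single iid sample of first passage times (much simpler than the joint \((S,R)\) model of the paper) and a delta-method standard error for \(\hat{\mu }=b/\bar{T}\):

```python
import numpy as np

def sample_ig(m, lam, size, rng):
    # Michael-Schucany-Haas sampler for the inverse Gaussian distribution
    nu = rng.standard_normal(size)**2
    x = m + m**2 * nu / (2 * lam) - m / (2 * lam) * np.sqrt(
        4 * m * lam * nu + m**2 * nu**2)
    return np.where(rng.uniform(size=size) <= m / (m + x), x, m**2 / x)

def coverage(mu=1.0, sigma2=0.4, b=1.0, n=100, n_rep=500, seed=1):
    """Fraction of replications in which the 95% Wald CI for mu
    covers the true value."""
    rng = np.random.default_rng(seed)
    m, lam = b / mu, b**2 / sigma2
    hits = 0
    for _ in range(n_rep):
        t = sample_ig(m, lam, n, rng)
        mu_hat = b / t.mean()                                # MLE of the drift
        s2_hat = b**2 * np.sum(1.0 / t - 1.0 / t.mean()) / n  # MLE of sigma^2
        # Delta method: Var(mu_hat) ~ mu * sigma^2 / (b * n)
        se = np.sqrt(mu_hat * s2_hat / (b * n))
        hits += abs(mu_hat - mu) <= 1.96 * se
    return hits / n_rep
```

With a well-calibrated CI the returned fraction should be close to 0.95; values systematically below nominal, as for the diffusion parameters in Table 1, indicate that a larger \(n\) is needed for the asymptotics to hold.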

*Equal variances* When \(\sigma _1^2 = \sigma _2^2=\sigma ^2\), the behavior of the estimators is similar, and with equal variances we can more easily analyze the behavior of the drift estimators as functions of the parameters. All estimators improve when \(\sigma ^2\) decreases, since that reduces the variability of both \(S\) and \(R\). The performance of \(\hat{\mu }_i\) improves while that of \(\hat{\mu }_j\) gets worse when \(\mu _j\) increases, for \(i,j=1,2\) and \(i\ne j\). Interestingly, the performance of \(\hat{\sigma }^2\) seems to be constant with respect to \(\mu \), unless \(\sigma ^2\) is large. A likelihood ratio test for testing the hypothesis \(H_0:\mu _1=\mu _2\) performs well for Type I error when \(n=100\) for different sizes of \(\sigma ^2\). Not surprisingly, the power of the test decreases when \(\sigma ^2\) increases.

*Variance proportional to the mean* Assume \(\sigma _i^2 = k \mu _i\), for \(k>0\). As expected from the theoretical results in Sect. 4.1, the performance of \(\hat{\mu }_1\) and \(\hat{\mu }_2\) appears similar, and it does not depend on \(\mu _2\) and \(\mu _1\), respectively. Interestingly, the asymptotic SE of \(\hat{k}\) depends neither on \(\mu _1\) nor on \(\mu _2\), but only on \(k\). This may be due to the fact that neither the \(\text{ CVs }\) of \(S\) and \(R\) nor their correlation depend on \(\mu _1\) and \(\mu _2\), see Eqs. (13), (14) and (17).

*Right censoring* The effect of censoring on the estimation of \(\phi \) is illustrated in the online material, where boxplots of the estimates are reported for different percentages of right censored data and different sample sizes. As expected, the performance of \(\hat{\phi }\) gets worse when the percentage of right censored data increases, and thus a larger sample size is needed.
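The standard way to handle right censoring in the likelihood is to replace the density contribution of a censored observation by the survival function evaluated at the censoring time. A hedged sketch for plain inverse Gaussian first passage times (again simpler than the paper's joint \((S,R)\) likelihood; the indicator convention follows \(\delta^r\) in Sect. 7):

```python
import numpy as np
from math import erf, exp, log, pi, sqrt

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ig_cdf(t, m, lam):
    """CDF of the inverse Gaussian distribution IG(mean m, shape lam)."""
    a = sqrt(lam / t)
    return (norm_cdf(a * (t / m - 1.0))
            + exp(2.0 * lam / m) * norm_cdf(-a * (t / m + 1.0)))

def censored_loglik(t, delta, mu, sigma2, b=1.0):
    """Right-censored log-likelihood: density term if the event is observed
    (delta=1), survival term log(1 - F(t)) if censored at t (delta=0)."""
    m, lam = b / mu, b**2 / sigma2
    ll = 0.0
    for ti, di in zip(t, delta):
        if di:
            ll += (0.5 * log(lam / (2 * pi * ti**3))
                   - lam * (ti - m)**2 / (2 * m**2 * ti))
        else:
            ll += log(1.0 - ig_cdf(ti, m, lam))
    return ll
```

Maximizing this function instead of the uncensored likelihood recovers the estimators discussed above; as the censored fraction grows, less information per observation remains, which is why the boxplots widen.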

## 7 Veterans’ Administration lung cancer data

We analyze the *Veterans’ Administration lung cancer data set* from Kalbfleisch and Prentice (1980), available in the **R** package "survival" under the name "veteran". In this trial, males with advanced inoperable lung cancer were randomized to either a standard or a test chemotherapy. The randomization time is the time of intervention. The primary endpoint for therapy comparison was time to death. This is a standard survival analysis data set. The following variables were recorded:

1. Disease duration: time in months from diagnosis to randomization (observations of \(S\)). We transform to units of days by multiplying by 30.4.
2. Survival lifetime: time in days from randomization to death (observations of \(R\)).
3. Treatment: standard, test.
4. Histological type of tumor: squamous, small, adeno, large cell.
5. A measure at randomization of the patient’s performance status (Karnofsky rating): 10–30 completely hospitalized, 40–60 partial confinement, 70–99 able to care for oneself. We call it *karno* and transform it to 100 − karno.
6. Age in years of the patient.
7. Prior therapy: no, yes.
8. Indicator for right censoring (observations of \(\delta ^r\)).

The aim of the study is to compare types of treatment and histological types of tumor. A positive component for a given covariate means a higher \(\mu \) and thus increased risk; a negative component implies protection. Indeed, the best treatment and the least dangerous type of tumor should have the highest (expected) survival time and thus the lowest value of \(\mu _2\), since for \(X(0)=x\) we have \(\mathbb {E}[R|X(0)=x]=(B-x)/\mu _2\). Furthermore, it is of interest to compare treatment against no treatment, that is, the difference between \(\mu _1\) and \(\mu _2\); in particular, to judge whether either of the two treatments has an effect with respect to no treatment.
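The relation \(\mathbb {E}[R|X(0)=x]=(B-x)/\mu _2\) can be verified by direct simulation of the perturbed process from the intervention onwards: Euler paths of a Wiener process with drift \(\mu_2\) are run until they cross \(B\), and the hitting times are averaged. A sketch with purely hypothetical parameter values (the Euler scheme slightly overestimates the hitting time, since crossings between grid points are missed):

```python
import numpy as np

def mean_hitting_time(x0=0.0, B=1.0, mu2=0.5, sigma2=0.1,
                      n_paths=5000, dt=0.01, t_max=100.0, seed=3):
    """Euler scheme for X(t) = x0 + mu2*t + sigma*W(t); returns the Monte
    Carlo mean of the first time X crosses B, to compare with (B - x0)/mu2."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(sigma2)
    x = np.full(n_paths, x0, dtype=float)
    hit = np.full(n_paths, np.nan)   # NaN marks paths that have not crossed yet
    t = 0.0
    while np.isnan(hit).any() and t < t_max:
        t += dt
        x += mu2 * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        just_hit = np.isnan(hit) & (x >= B)
        hit[just_hit] = t
    return np.nanmean(hit)
```

With the defaults above, the theoretical value is \((B-x_0)/\mu_2 = 2\), and the simulated mean should land close to it, up to Monte Carlo error and the small positive discretization bias.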

The covariate effects (*changes* with respect to \(\mu _1\), i.e. with respect to no treatment) are added to \(\mu _2\), and thus \(m=3\). This implies an extra parameter compared to standard models, because the time before the intervention, corresponding to no treatment, is included. In standard models this would require inclusion of an extra randomized group with placebo. Estimates and \(\chi ^2\)-values are reported in Table 2.

**Table 2** Estimates of \(\beta \) for all regressor variables and asymptotic \(\chi ^2\) statistics

| Regressor variable | Full model \(\hat{\beta }\) | Full model \(\chi ^2\) value | Reduced model \(\hat{\beta }\) | Reduced model \(\chi ^2\) value |
|---|---|---|---|---|
| Performance status (100-karno) | 0.0014 | 23.12 | 0.0014 | 23.62 |
| Age (years) | 0.0001 | 0.30 | | |
| Prior therapy | \(-\)0.0146 | 17.09 | \(-\)0.0147 | 17.30 |
| Cell type: squamous | 0.0284 | | 0.0341 | |
| Cell type: small | 0.0379 | | 0.0431 | |
| Cell type: adeno | 0.0522 | | 0.0576 | |
| Cell type: large | 0.0346 | 17.13 | 0.0396 | 16.64 |
| Treatment (merged) | | | \(-\)0.0231 | 5.37 |
| Treatment: test | \(-\)0.0215 | | | |
| Treatment: standard | \(-\)0.0277 | 0.44 | | |

Since the treatment estimates are negative, treatment increases survival time. This information is missing in standard survival models, unless a placebo group is included in the study. A likelihood ratio test of \(H_0: \beta _{\mathrm{standard}}=\beta _{\mathrm{test}}\) shows no statistical difference between treatment types (\(p=0.51\)). Age is not statistically significant either, whereas histological cell types, performance status and prior therapy are statistically significant. Results for the reduced model, without age and merging the two treatment groups, are reported in Table 2. These results agree with those in Kalbfleisch and Prentice (1980). In their paper, Weibull and log-normal regression models were fitted to these data, with survival lifetime as dependent variable and disease duration prior to entry to the clinical trial, treatment (one category for the difference between test and standard treatment), cell type (large as reference level and three categories), age and prior therapy as covariates. An important difference is that they include disease duration (the variable \(S\)) as a covariate, whereas we include it as a driving part of the model to interpret the entire disease development. They do not find it statistically significant, whereas the test \(\mu _1=\mu _2\) (i.e. \(\beta _{karno}=\beta _{treatment}=0\)) is strongly significant (\(\chi ^2= 34.98\)). This might be due to the strong significance of performance status, but also a test only of treatment effect (i.e. \(\beta _{treatment}=0\)) yields \(\chi ^2= 5.37 \ (p=0.02)\). Furthermore, the estimate of \(\mu _1\) might be strongly downward biased due to unreported deaths before the beginning of the treatment, which might also bias the regression coefficients in the analysis by Kalbfleisch and Prentice (1980). If this is the case, the treatment effect is larger than what the study shows. This is a general problem of missing data when the amount of truncation is not reported. To fully evaluate the treatment effect this information (or an estimate thereof) is needed, or a placebo randomization group should be included in the study design. An important advantage of the present model is that it allows evaluation of the treatment effect as such, whereas the model of Kalbfleisch and Prentice (1980) only evaluates the difference between treatment types.

## 8 Conclusion

In any study where an intervention is applied, the most natural questions are whether it has an effect, whether the effect is the intended one, and how large it is. Here, the effect is reflected in the change of the time to an observable event. However, in many studies there is no information available about what this time would have been if no intervention had been applied. In this paper we solve the problem by comparing the time to the intervention and the time to the final event. The parameters of the underlying process are identified and statistically compared to judge the presence and size of an effect. The method represents a potential tool in all experimental or observational situations where direct measurements of the time course of the underlying process are not available, and qualitative changes are only observable through the times of observable events.

An essential assumption in our approach is that the intervention time is independent of the underlying process. This is a strong assumption and probably not fulfilled in many cases. It is difficult to avoid unless the dependence structure is specifically modeled, which is prone to imply even stronger assumptions that might be more difficult to check or fulfil. Nevertheless, in many applications we believe it to be reasonable. In the neuroscience example, when analysing neuronal spike data, the assumption is entirely reasonable, because the time of intervention (e.g. start of stimulation) is independent of the neuronal activity, and many spikes occur both before and after the intervention; in this case neither censoring nor truncation is relevant. The assumption will also often be reasonable for the reliability of technical systems, where an intervention is applied to the entire production at the same time, independently of how each component is evolving at that moment. However, in many medical contexts it will of course not be realistic that the intervention time is independent of disease status, and careful attention must be paid to possible bias in estimates. In some examples the assumption might be reasonable, though, or it might be possible to include some corrections at intervention time, as done in the data example. The analysis corrects both for prior therapy and for performance status at intervention time. This last covariate hopefully corrects for most of the dependence, as well as for unmeasured confounders, where the disease state might influence the decision of whether a patient should enter the study and thus be randomized to one of the treatments. In this application the most serious problem is that data from before the intervention are collected retrospectively from those patients having an intervention, and thus no information is available about possible deaths before the intervention time. We therefore expect the estimate of the drift before the intervention to be downward biased (only those surviving until the intervention are kept in the analysis), and the effect of treatment might be larger than the analysis shows. In other medical examples, the assumption is fully justified. For example, imagine a transplant intervention, where the start is defined by approval for a transplant, the final event is death, and the intervention is the transplant. The intervention time will then depend on when a matching organ is available, which is independent of the disease progress in a particular patient. Here truncation (death before the transplant) will probably be present, but it can easily be corrected for if data on deaths before the intervention are available, which is a reasonable assumption. The strike example is the most problematic, since a political decision of an intervention will likely depend on the status of the strike. In that case proper care should be taken to include possible covariates, such as media coverage or other social factors, which can hopefully correct for some of the incurred bias.

## Notes

### Acknowledgments

S.D. was supported by the Danish Council for Independent Research | Natural Sciences. P.L. was supported by grant No. RVO: 67985823. The work is part of the Dynamical Systems Interdisciplinary Network, University of Copenhagen.

## Supplementary material

## References

- Aalen OO, Gjessing HK (2001) Understanding the shape of the hazard rate: a process point of view. Stat Sci 16:1–22
- Chhikara RS, Folks JL (1989) The inverse Gaussian distribution: theory, methodology, and applications. Marcel Dekker, New York
- Commenges D, Hejblum BP (2013) Evidence synthesis through a degradation model applied to myocardial infarction. Lifetime Data Anal 19(1):1–18
- Cox DR, Lewis PAW (1966) The statistical analysis of series of events. Methuen, London
- Cox DR, Miller HD (1965) The theory of stochastic processes. Chapman and Hall, London
- Cramer H (1946) Mathematical methods of statistics. Princeton University Press, Princeton
- Desmond AF, Yang ZL (2011) Score tests for inverse Gaussian mixtures. Appl Stoch Models Bus Ind 27(6):633–648
- Doksum KA, Hoyland A (1992) Models for variable-stress accelerated life testing experiments based on Wiener processes and the inverse Gaussian distribution. Technometrics 34(1):74–82
- Gerstein GL, Mandelbrot B (1964) Random walk models for the spike activity of a single neuron. Biophys J 4:41–68
- Giraudo MT, Greenwood PE, Sacerdote L (2011) How sample paths of leaky integrate-and-fire models are influenced by the presence of a firing threshold. Neural Comput 23:1743–1767
- Harrison A, Stewart M (1993) Strike duration and strike size. Can J Econ 26(4):830–849
- Kahle W, Lehmann A (1998) Parameter estimation in damage processes: dependent observations of damage increments and first passage time. In: Advances in stochastic models for reliability, quality and safety. Birkhäuser, Boston, pp 139–152
- Kalbfleisch JD, Prentice RL (1980) The statistical analysis of failure time data. Wiley, New York
- Laming D (1986) Sensory analysis. Academic Press, London
- Lancaster T (1972) A stochastic model for the duration of a strike. J R Stat Soc Ser A 135:257–271
- Lansky P, Ditlevsen S (2008) A review of the methods for signal estimation in stochastic diffusion leaky integrate-and-fire neuronal models. Biol Cybern 99:253–262
- Lansky P, Sacerdote L (2001) The Ornstein–Uhlenbeck neuronal model with the signal-dependent noise. Phys Lett A 285:132–140
- Lee MLT, Chang M, Whitmore GA (2008) Threshold regression mixture model for assessing treatment efficacy in a multiple myeloma clinical trial. J Biopharm Stat 18:1136–1149
- Lee MLT, Whitmore GA, Rosner BA (2010) Threshold regression for survival data with time-varying covariates. Stat Med 29:896–905
- Lee MLT, Whitmore GA (2006) Threshold regression for survival analysis: modeling event times by a stochastic process reaching a boundary. Stat Sci 21(4):501–513
- Nelson W (2008) Accelerated degradation. Wiley, pp 521–548. doi:10.1002/9780470316795.ch11
- Pieper V, Domine M, Kurth P (1997) Level crossing problems and drift reliability. Math Methods Oper Res 45(3):347–354
- R Development Core Team (2011) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. http://www.R-project.org/
- Sacerdote L, Giraudo MT (2013) Leaky integrate and fire models: a review on mathematical methods and their applications. In: Stochastic biomathematical models with applications to neuronal modeling. Lecture notes in mathematics, vol 2058. Springer, pp 95–148
- Tamborrino M, Ditlevsen S, Lansky P (2012) Identification of noisy response latency. Phys Rev E 86:021128
- Tamborrino M, Ditlevsen S, Lansky P (2013) Parametric inference of neuronal response latency in presence of a background signal. BioSystems 112:249–257
- Whitmore GA, Ramsay T, Aaron SD (2012) Recurrent first hitting times in Wiener diffusion under several observation schemes. Lifetime Data Anal 18(2):157–176
- Whitmore GA (1995) Estimating degradation by a Wiener diffusion process subject to measurement error. Lifetime Data Anal 1:307–319
- Whitmore GA, Schenkelberg F (1997) Modelling accelerated degradation data using Wiener diffusion with a time scale transformation. Lifetime Data Anal 3:27–45
- Whitmore GA, Crowder MJ, Lawless JF (1998) Failure inference from a marker process based on a bivariate Wiener model. Lifetime Data Anal 4(3):229–251
- Yu HF (2003) Optimal classification of highly-reliable products whose degradation paths satisfy Wiener processes. Eng Optim 35(3):313–324

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.