A numerical method for computing interval distributions for an inhomogeneous Poisson point process modified by random dead times

The inhomogeneous Poisson point process is a common model for time series of discrete, stochastic events. When an event from a point process is detected, it may trigger a random dead time in the detector, during which subsequent events will fail to be detected. It can be difficult or impossible to obtain a closed-form expression for the distribution of intervals between detections, even when the rate function (often referred to as the intensity function) and the dead-time distribution are given. Here, a method is presented to numerically compute the interval distribution expected for any arbitrary inhomogeneous Poisson point process modified by dead times drawn from any arbitrary distribution. In neuroscience, such a point process is used to model trains of neuronal spikes triggered by the detection of excitatory events while the neuron is not refractory. The assumptions of the method are that the process is observed over a finite observation window and that the detector is not in a dead state at the start of the observation window. Simulations are used to verify the method for several example point processes. The method should be useful for modeling and understanding the relationships between the rate functions and interval distributions of the event and detection processes, and how these relationships depend on the dead-time distribution.


Introduction
Communicated by Benjamin Lindner.

The inhomogeneous Poisson point process is commonly used to model time series of discrete, stochastic events. It is applied to diverse phenomena in fields such as neuroscience (e.g., Siebert 1970; Srulovicz and Goldstein 1983; Brown et al. 1998; Liu et al. 2001; Amarasingham et al. 2006), optical communications (e.g., Vannucci and Teich 1978; Drost et al. 2015; Verma and Drost 2017), and particle physics (e.g., Müller 1981b). An inhomogeneous Poisson point process has a time-varying rate and generates a sequence of events that occur at random times (Snyder 1975; Cox and Isham 1980). In the ideal case, all of the events can be observed using a suitable detector. In reality, a subset of events often fails to be observed due to a dead time in the detector (Müller 1981a; Grupen and Shwartz 2008; Picinbono 2009). One must therefore distinguish between two point processes: one describes the "events" themselves, and the other describes the "detections" (i.e., the subset of events that is detected). An example from biology is the firing of spikes (i.e., action potentials) by a postsynaptic neuron in response to the release of excitatory neurotransmitter by a presynaptic neuron (with the further simplifying assumption that each neurotransmitter release event triggers a spike unless the postsynaptic neuron is in a dead state): the event process corresponds to the times at which neurotransmitter is released by the presynaptic neuron, the detection process corresponds to the times at which the released neurotransmitter triggers spikes in the postsynaptic neuron, and the dead time arises from a physiological property of neurons, known as the refractory period, that prevents spikes from being generated too soon after the previous spike (Hodgkin and Huxley 1952). An example from physics is the counting of photons by a sensor: the event process corresponds to the times at which photons arrive at the sensor, the detection process corresponds to the times at which the sensor detects photons, and the dead time arises from a property of the sensor that prevents photons from being detected if they arrive too soon after the previous detection.

For simplicity, it is often assumed that all dead times in a given process have the same fixed duration (e.g., Müller 1981a; Teich 1985; Teich and Khanna 1985; Picinbono 2009), but some processes are better described by dead times with random durations drawn from a fixed distribution (e.g., Teich et al. 1978; Müller 1981a; Young and Barta 1986; Deger et al. 2010; Peterson and Heil 2018). A distinction is made between "paralyzable" and "nonparalyzable" detectors (Müller 1981a). With a paralyzable detector, an event that occurs during a dead time causes the dead time to be prolonged. With a nonparalyzable detector, an event that occurs during a dead time simply fails to be detected, but the dead time is not prolonged. Only the nonparalyzable case, as would apply to neurons, is considered here. Although a detection is synonymous with a spike in the context of a neuronal spike train, the more general term "detection" is used here to emphasize the broader applicability of the work.

Figure 1 demonstrates how an inhomogeneous event process can be modified by random dead times to yield a detection process. Figure 1a shows the time-varying event rate that defines the process. This rate function (also known as the intensity function) determines the probability of an event occurring at each time point in the observation window. Figure 1b shows stochastic event times in such a process. Figure 1c shows random dead times and indicates the subset of events (marked by "x") that fall into each dead time and are therefore not detected. For illustrative purposes, the dead times are relatively long so that many events fail to be detected. Figure 1d shows all events that do not fall into a dead time and are therefore detected.
Each detection triggers the start of a new random dead time. Figure 1e shows the first-order intervals between detections. The statistical properties of point processes are often investigated by computing the distributions of such intervals (e.g., Johnson 1978;Wilbur and Rinzel 1983;Gummer 1991;Shcherbakov et al. 2005;Kroó et al. 2007;Nawrot et al. 2008;Arkani and Raisali 2015;Pommé et al. 2015). However, except in very specific cases, typically involving a homogeneous Poisson point process or other simple renewal processes, it is difficult or impossible to obtain a closed-form expression to describe the interval distribution (Picinbono 2009).
Here, a numerical method is presented for computing the interval distribution expected for any arbitrary inhomogeneous Poisson point process modified by dead times drawn from any arbitrary distribution. The interval distributions obtained with this method are intended to be analogous to distributions computed from experimental data. Because experimental data are collected over a finite observation window (or several repetitions of a finite window), empirical interval distributions are distorted by the fact that an interval between detections cannot exceed the time remaining in the observation window and therefore cannot be arbitrarily long. This distortion effect is known as right censoring and leads to an overrepresentation of short intervals and an underrepresentation of long intervals (e.g., Wiener 2003; Nawrot et al. 2008). Therefore, to maintain equivalence with experimentally obtained interval distributions, the numerical method must also reproduce the right censoring caused by the finite observation window. The numerical method presented here does so and is thus applicable to a variety of experimentally observed phenomena in several fields of study.

Methods
The numerical method presented below was implemented in MATLAB R2019b and is publicly available in an online code repository (Peterson 2020). The method has been divided into several algorithms, with the goal of presenting each algorithm as straightforwardly as possible without regard for computational demand. Those algorithms found to be computationally demanding were then reimplemented using more optimized but less comprehensible MATLAB code. One particularly demanding computation (Eq. 8) was reimplemented in C and compiled to a MATLAB MEX file to enable faster execution. All implementations of each algorithm are included in the online repository, along with a demonstration of their equivalence.
The following notations and conventions are used throughout:

1. The observation window of the point process spans the interval from t_min to t_max and is divided into bins of width Δt. Each bin is referenced by the time point at its right edge, denoted t_i, where i is the index of the bin within the observation window. For an observation window starting at t_min = 0 s, the time point marking the first bin is t_1 = Δt.
2. Durations in the dead-time distribution are denoted d_j, where j is the index of a bin within the distribution. For the numerical method presented here, the shortest duration that needs to be considered is d_1 = Δt.
3. Waiting times in the forward recurrence distributions or intervals in the interval distributions are denoted w_k, where k is the index of a bin within the distribution. For the numerical method presented here, the shortest waiting time or interval that needs to be considered is w_1 = Δt.
4. For simplicity, the bin indices i, j, and k are omitted from the figures, captions, and occasionally elsewhere.
Before describing each step of the numerical method below, it is necessary to define several prerequisite functions. For clarity, all prerequisites and algorithm steps are demonstrated using one example point process of events and one example dead-time distribution.

Prerequisite 1: The rate (or probability) function of the detection process
To compute the interval distribution expected for an inhomogeneous Poisson point process modified by random dead times, it is necessary to know the rate of the detection process over the observation window (i.e., the rate obtained from experimental measurements). The numerical method presented here will work for any arbitrary rate function. Figure 2 shows the example rate R detection (in detections/s) of the detection process used to demonstrate the method. Here, the observation window is 5 ms long and the time step is Δt = 0.1 ms. Note that multiplying the rate function by Δt (in seconds) will convert it to an equivalent probability function p detection , which is also shown (right axis).

Prerequisite 2: The distribution of dead times
To compute the interval distribution expected for an inhomogeneous Poisson point process modified by random dead times, it is also necessary to know the distribution from which the dead times are drawn. The numerical method presented here will work for any arbitrary distribution, as long as the random dead times are independent and drawn from the same distribution. Throughout the demonstrations below, the dead time is a random variable d_dead equal to the sum of a fixed duration d_fixed and a random duration d_rand drawn from the geometric distribution with a mean of μ_rand. This is the discrete equivalent of a continuous distribution of dead times consisting of a fixed portion and a random portion drawn from the exponential distribution, such as was used by Young and Barta (1986) and Peterson and Heil (2018). The cumulative distribution function (CDF) for d_dead (i.e., the probability that d_dead is shorter than or equal to duration d_j) is given by

G_dead(d_j) = 1 − (1 − Δt/μ_rand)^((d_j − d_fixed)/Δt) for d_j > d_fixed, and G_dead(d_j) = 0 otherwise. (1)

This distribution is equivalent to the cumulative geometric distribution G(k) = 1 − (1 − p)^k (with success probability p and trial k ∈ {1, 2, 3, …}) but is parameterized in terms of time and is delayed by d_fixed. Figure 3a shows the CDF for the example dead-time distribution (with d_fixed = 0.5 ms and μ_rand = 0.5 ms). The survivor function for d_dead (i.e., the probability that d_dead is longer than duration d_j) is given by the complement of the CDF,

S_dead(d_j) = 1 − G_dead(d_j). (2)

Figure 3b shows the survivor function for the example dead-time distribution.
The probability mass function (PMF) for d_dead (i.e., the probability that d_dead is equal to duration d_j) is given by

g_dead(d_j) = G_dead(d_j) − G_dead(d_{j−1}). (3)

Figure 3c shows the PMF for the example dead-time distribution. Note that, as the time step decreases toward 0 ms, the PMF approaches the continuous probability density function (PDF) for a dead time having a random portion drawn from the exponential distribution (not shown).
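For concreteness, the three dead-time functions above can be evaluated with a few lines of code. The following Python sketch is an illustration, not the paper's MATLAB implementation; the function names and the bin-rounding convention are assumptions. It evaluates the CDF (Eq. 1), survivor function (Eq. 2), and PMF (Eq. 3) of the example dead-time distribution on a discrete time grid:

```python
# Sketch of the discrete dead-time distribution: a fixed portion d_fixed
# plus a geometrically distributed random portion with mean mu_rand,
# evaluated on a grid with time step dt (all quantities in seconds).

def dead_time_cdf(d, d_fixed, mu_rand, dt):
    """G_dead(d): probability that the dead time is <= d (Eq. 1)."""
    k = round((d - d_fixed) / dt)   # bins of the random portion covered by d
    if k < 1:
        return 0.0
    p = dt / mu_rand                # per-bin success probability of the geometric part
    return 1.0 - (1.0 - p) ** k

def dead_time_survivor(d, d_fixed, mu_rand, dt):
    """S_dead(d): probability that the dead time is > d (Eq. 2)."""
    return 1.0 - dead_time_cdf(d, d_fixed, mu_rand, dt)

def dead_time_pmf(d, d_fixed, mu_rand, dt):
    """g_dead(d): probability that the dead time equals d (Eq. 3)."""
    return (dead_time_cdf(d, d_fixed, mu_rand, dt)
            - dead_time_cdf(d - dt, d_fixed, mu_rand, dt))
```

With the example values (d_fixed = 0.5 ms, μ_rand = 0.5 ms, Δt = 0.1 ms), the per-bin success probability is Δt/μ_rand = 0.2, so the first nonzero PMF value, at d = 0.6 ms, equals 0.2, and the PMF values sum to 1 over all durations.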
With the rate of the detection process and the dead-time distribution specified, it is now possible to proceed to the first step in the numerical method.

Step 1: Computing the rate (or probability) function of the event process
For the numerical method presented here, it is necessary to know the probability p_event of an event at each time point in the observation window. If the dead-time distribution is known, then it is possible to compute p_event from p_detection. This computation requires knowing the probability p_dead that the detector is in a dead state at each time point in the observation window, given by

p_dead(t_i) = Σ_{h=1}^{i−1} p_detection(t_h) · S_dead(d_{i−h}), (4)

where S_dead(d) is the survivor function for the dead times (i.e., the probability that d_dead is longer than duration d), t_h = h·Δt is a time point of a bin preceding bin i, and d_{i−h} = (i−h)·Δt is a dead-time duration. For simplicity, the process is assumed to be free from dead-time effects at the start of the observation window. Figure 4b shows the resultant p_dead(t_i) computed for the example process. Note that Eq. 4 is nothing other than the convolution of p_detection and S_dead and could be replaced by a more efficient algorithm such as convolution based on the fast Fourier transform.
Recall that two conditions must be met for a detection to occur: an event must occur, and the detector must not be in a dead state. The detection probability p_detection(t_i) is therefore given by the joint probability

p_detection(t_i) = p_event(t_i) · [1 − p_dead(t_i)]. (5)

Equation 5 can be rearranged to obtain the probability p_event of an event at each time point in the observation window,

p_event(t_i) = p_detection(t_i) / [1 − p_dead(t_i)]. (6)

Figure 4c shows p_event(t_i) computed for the example process. It is higher than p_detection(t_i) (Fig. 2) at all time points except the first one. At the first point, p_event and p_detection are identical because it is assumed that the detector is not in a dead state at the start of the observation window (i.e., it is assumed that p_dead(t_1) = 0). Note that Eq. 6 is undefined in the unlikely case that p_dead(t_i) = 1, which would occur only when t_i falls into the fixed portion of a dead time following a time point for which a detection is guaranteed to occur (i.e., for which p_detection = 1). In such a case, p_detection(t_i) = 0 regardless of the value of p_event(t_i), and the event probability would therefore be unknowable and unrecoverable.

Fig. 3 a The CDF for d_dead, given by Eq. 1. b The survivor function for d_dead, given by Eq. 2. c The PMF for d_dead, given by Eq. 3. For all three functions, d_fixed = 0.5 ms, μ_rand = 0.5 ms, and Δt = 0.1 ms
Note that if p event were known, rather than p detection , then p dead and p detection could be computed using Eqs. 4 and 5. For all time points t i following the initial point, the computation of p dead (Eq. 4) depends on the value of p detection from all previous time points t h , whereas the computation of p detection (Eq. 5) depends on p dead from the current time point t i . This interdependence of p dead and p detection requires that both values be computed at a given time point before proceeding to the next point. For the numerical method presented here, it is necessary to know both p event and p detection , no matter which of the two was known initially.
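As a concrete illustration of Step 1, the following Python sketch (an illustration with assumed names, not the paper's MATLAB code) recovers p_event from p_detection by evaluating Eqs. 4 and 6 bin by bin. Here s_dead is the dead-time survivor function tabulated in bins, i.e., s_dead[j] is the probability that a dead time lasts longer than j bins:

```python
def event_probabilities(p_detection, s_dead):
    """Recover p_event from p_detection via Eqs. 4 and 6 (0-based bins)."""
    p_event = []
    for i in range(len(p_detection)):
        # Eq. 4: convolve the earlier detection probabilities with the
        # dead-time survivor function to get the current dead probability.
        # The empty sum at i = 0 encodes the assumption that the detector
        # is not dead at the start of the observation window.
        p_dead = sum(p_detection[h] * s_dead[i - h] for h in range(i))
        # Eq. 6: divide out the probability of not being in a dead state
        p_event.append(p_detection[i] / (1.0 - p_dead))
    return p_event
```

For example, a dead time that always lasts exactly two bins has survivor function s_dead = [1, 1, 0, 0]; with p_detection = [0.5, 0.2, 0.1, 0.1], the recovered event probabilities are [0.5, 0.4, 0.125, 0.1/0.9].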

Step 2: Computing forward recurrence distributions for events
The numerical method presented here makes heavy use of forward recurrence distributions. The distribution of forward recurrence times, f_event(t_i, w_k), specifies the probability that, at time point t_i, the waiting time (or recurrence interval) to the next event is equal to w_k. More precisely, the probability that the next event will occur with a waiting time of w_k equals the joint probability that there is an event at time t_i + w_k and that there are no events between times t_i and t_i + w_k. At time point t_i in the observation window, the forward recurrence probability f_event that the waiting time to the next event is equal to w_k is given by

f_event(t_i, w_k) = p_event(t_i + w_k) · ∏_{m=1}^{k−1} [1 − p_event(t_i + w_m)]. (7)

Here, for each time point t_i, the product given by the ∏ notation computes the survivor function of the event process over the waiting times w_k. Figure 5 shows selected forward recurrence distributions computed for the example process in Fig. 2. Figure 5a shows the forward recurrence distribution f_event computed for the first time point (t_1 = 0.1 ms), Fig. 5b shows f_event computed for the tenth time point (t_10 = 1 ms), and Fig. 5c shows f_event computed for the twentieth time point (t_20 = 2 ms). For each recurrence distribution, potential waiting times range from as short as a single bin (w_1 = Δt) to as long as the total number of bins remaining in the observation window (w_max = t_max − t_i). Recurrence probabilities are not needed for waiting times that would extend the process beyond the edge of the observation window and are omitted; each distribution shown is therefore incomplete because the sum of its probabilities will be less than 1. For each recurrence distribution in Fig. 5, the upper time axis shows the corresponding time within the observation window. The forward recurrence distribution in Fig. 5a is one bin shorter than the observation window, with this edge marked by the gray shading beginning at w = 4.9 ms (i.e., t = 5 ms on the upper axis).
As t i increases, the corresponding forward recurrence distribution becomes increasingly incomplete. Although results are shown for only three example time points, the method requires f event to be computed for all time points in the observation window except for the final point (there is no recurrence possible after the final point, so there is no need to compute its recurrence distribution here). Note that when the event rate is periodic, as in the example (Fig. 4c), the distributions of forward recurrence times to the next event are identical for any two points separated by an integer number of periods, except that the distribution for the later point is more incomplete. This means that, for periodic event rates, the forward recurrence distributions only need to be computed for the points within the first cycle in the observation window; they can then be duplicated (incompletely) to obtain corresponding distributions for the points in each subsequent cycle. For aperiodic event rates, the forward recurrence distributions must be computed for all points in the observation window.
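In code, Eq. 7 amounts to a running product. The Python sketch below (an illustration with assumed names and 0-based bin indices, not the paper's MATLAB code) returns f_event(t_i, ·) for waiting times of 1, 2, … bins up to the end of the observation window:

```python
def event_recurrence(p_event, i):
    """f_event(t_i, w_k) for k = 1 .. n-1-i bins (Eq. 7)."""
    f = []
    survive = 1.0                            # P(no event in bins i+1 .. i+k-1)
    for k in range(1, len(p_event) - i):
        f.append(survive * p_event[i + k])   # next event exactly k bins ahead
        survive *= 1.0 - p_event[i + k]      # extend the event-free stretch
    return f                                 # incomplete: sums to < 1 in a finite window
```

For a constant per-bin event probability of 0.5 over four bins, event_recurrence(p, 0) gives [0.5, 0.25, 0.125], whose sum (0.875) is less than 1, illustrating the incompleteness caused by the finite window.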

Step 3: Computing forward recurrence distributions for detections
Due to the random dead times in the detection process, the distribution of forward recurrence times to the next detection cannot be computed as straightforwardly as the distribution of forward recurrence times to the next event.
In the method presented here, recurrence times to the next detection after time t_i are computed under the assumption that a detection has occurred at time t_i. Note that, with the assumption of a detection at t_i, the forward recurrence times actually represent intervals (unlike forward recurrence times in the event process, for which there was no assumption of an event at time t_i). Assuming a detection at time t_i, the distribution of forward recurrence times to the next detection can be computed by considering the effect of each possible dead-time duration. If the dead time following a detection at t_i ends after duration d_j (i.e., at time t_{i+j} = t_i + d_j), two facts are apparent about the distribution of forward recurrence times to the next detection, f_detection(t_i, w_k). First, for any waiting time w_k shorter than or equal to d_j, the recurrence probability f_detection = 0. Second, for any waiting time w_k longer than d_j, the recurrence probability f_detection must follow the time course of f_event (Eq. 7) computed for time point t_{i+j} (i.e., for the first point in the observation window free from the effects of the dead time). More specifically, the joint probability that a detection at time t_i is followed by a dead time with duration d_j (i.e., ending at time t_{i+j}) and that the next detection occurs after waiting time w_k following the detection is given by g_dead(d_j)·f_event(t_{i+j}, w_{k−j}). For each time point in the observation window, such a joint probability function is computed over all remaining bins. For time point t_i, the forward recurrence probability f_detection is obtained by summing the respective contributions from each of the joint probability functions,

f_detection(t_i, w_k) = Σ_{j=1}^{k−1} g_dead(d_j) · f_event(t_{i+j}, w_{k−j}). (8)

Here, w_k is the waiting time following a detection at time t_i and g_dead(d_j) is the probability that the dead time d_dead has duration d_j (Eq. 3 and Fig. 3c). Figure 6 shows selected results for the example process.
Figure 6a shows all joint probability functions contributing to the forward recurrence distribution for the first time point (t 1 = 0.1 ms), each of which originates at a time point marked by a symbol. Figure 6b shows the corresponding f detection , equivalent to the binwise sum of all functions in Fig. 6a (note that the vertical axes differ in scaling). Figure 6c, d shows the contributing joint probability functions and f detection for the tenth time point (t 10 = 1 ms), and Fig. 6e, f shows the contributing joint probability functions and f detection for the twentieth time point (t 20 = 2 ms). Although results are shown for only three example time points, the method requires f detection to be computed for all time points in the observation window except for the final point (there is no recurrence possible after the final point, so there is no need to compute its recurrence distribution here). Note that when the event rate is periodic, as in the example (Fig. 4c), the forward recurrence distributions only need to be computed for the points within the first cycle in the observation window and can then be duplicated (incompletely) to obtain corresponding distributions for the points in each subsequent cycle.
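Step 3 can be sketched by combining the dead-time PMF with shifted copies of f_event, as in Eq. 8. The Python illustration below is not the paper's MATLAB/MEX implementation; the names, 0-based bin indices, and the convention that the first detectable event falls at least one bin after the dead time ends are assumptions. Here g_dead[j] is the probability that a dead time lasts exactly j bins:

```python
def event_recurrence(p_event, i):
    """f_event(t_i, w_k) for k = 1 .. n-1-i bins (Eq. 7)."""
    f, survive = [], 1.0
    for k in range(1, len(p_event) - i):
        f.append(survive * p_event[i + k])
        survive *= 1.0 - p_event[i + k]
    return f

def detection_recurrence(p_event, g_dead, i):
    """f_detection(t_i, w_k), assuming a detection at bin i (Eq. 8)."""
    n = len(p_event)
    f_det = [0.0] * (n - 1 - i)                 # index k-1 holds waiting time k bins
    for j in range(1, len(g_dead)):             # dead time of j bins, ending at bin i+j
        if i + j >= n or g_dead[j] == 0.0:
            continue
        f_ev = event_recurrence(p_event, i + j)
        for m, prob in enumerate(f_ev):         # next event m+1 bins after the dead end
            f_det[j + m] += g_dead[j] * prob    # total waiting time is j+m+1 bins
    return f_det
```

For example, with a dead time of always exactly one bin (g_dead = [0, 1]) and a constant per-bin event probability of 0.5, a waiting time of one bin is impossible, and the longer waiting times follow f_event shifted by one bin: [0, 0.5, 0.25].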
Fig. 5 Example forward recurrence distributions, f_event(t, w), showing the probability that, at time point t in the process, the waiting time to the next event will equal w. a f_event for time t_1 = 0.1 ms. b f_event for time t_10 = 1 ms. c f_event for time t_20 = 2 ms. Each distribution is computed with Eq. 7. The top axis in each panel shows the time relative to the 5-ms observation window. Gray shading marks the region missing from each distribution (i.e., the region that would extend beyond the edge of the observation window). Each distribution is therefore incomplete

Step 4: Computing the interval distributions for the event process and for the detection process
It is now possible to compute the distribution of interevent intervals (IEIs) and the distribution of interdetection intervals (IDIs). The former requires knowing p_event(t_i) (Eq. 6 and Fig. 4c) and the distribution of forward recurrence times to the next event for each time point in the observation window, f_event(t_i, w_k) (Eq. 7 and Fig. 5). The latter requires knowing p_detection(t_i) (Eq. 5 and Fig. 2) and the distribution of forward recurrence times to the next detection for each time point in the observation window, f_detection(t_i, w_k) (Eq. 8 and Fig. 6). For an observation window having m time points, the probability p_IEI of observing an IEI with duration w_k is given by

p_IEI(w_k) = (1/n_IEIs) · Σ_{i=1}^{m−1} p_event(t_i) · f_event(t_i, w_k). (9)

Here, n_IEIs = n_events − 1 + p_zero is the expected number of IEIs per observation window, with n_events being the expected number of events per observation window and p_zero being the probability that an observation window contains zero events. Note that if every observation window were guaranteed to contain at least one event, then the expected number of IEIs would simply be n_IEIs = n_events − 1. However, observation windows containing zero events typically can occur and will yield zero IEIs rather than zero minus one IEIs, such that some portion of the 1 which was subtracted must be added back. This portion is equal to the probability that an observation window contains zero events. Here, n_events = (t_max − t_min)·p̄_event/Δt and p_zero = ∏_{i=1}^{m} [1 − p_event(t_i)], with t_max − t_min being the duration of the observation window (in seconds), p̄_event being the mean event probability in the observation window, and Δt being the time step (in seconds). Note that if p_event is a row vector with column index i and f_event is a matrix with row index i and column index k, then p_IEI in Eq. 9 is given by the matrix product of p_event and f_event, normalized by n_IEIs.

The corresponding probability p_IDI of observing an IDI with duration w_k is given by

p_IDI(w_k) = (1/n_IDIs) · Σ_{i=1}^{m−1} p_detection(t_i) · f_detection(t_i, w_k). (10)

Here, n_IDIs = n_detections − 1 + p_zero is the expected number of IDIs per observation window, with n_detections = (t_max − t_min)·p̄_detection/Δt. The assumption that the detector is not in a dead state at the start of the observation window means that the probability that an observation window contains zero detections is identical to the probability that it contains zero events, p_zero. Note that if p_detection is a row vector with column index i and f_detection is a matrix with row index i and column index k, then p_IDI in Eq. 10 is given by the matrix product of p_detection and f_detection, normalized by n_IDIs.

Fig. 6 Example forward recurrence distributions, f_detection(t, w), showing the probability that, given a detection at time point t in the process, the waiting time to the next detection will equal w. a Contributions to f_detection following a detection at time t_1 = 0.1 ms. Each possible waiting time (marked by symbols) contributes to f_detection for all time points that follow. The contribution of each waiting time w (gray lines) is given by the portion of f_event (Fig. 5a) over the interval [w, w_max], scaled by the probability g_dead that the dead time has duration d = w (Fig. 3c). b The overall f_detection function for t_1 = 0.1 ms, computed with Eq. 8, is equivalent to the binwise sum of all individual contributions in a. c Contributions to f_detection following a detection at time t_10 = 1 ms. d The overall f_detection function for t_10 = 1 ms. e Contributions to f_detection following a detection at time t_20 = 2 ms. f The overall f_detection function for t_20 = 2 ms. The top axis in each panel shows the time relative to the 5-ms observation window. Gray shading marks the region missing from each distribution (i.e., the region that would extend beyond the edge of the observation window). Each distribution is therefore incomplete
The IEI distribution (in the form of a probability mass function as in Eq. 9) can be converted to a rate R_IEI (in IEIs/s) by dividing each probability by the time step Δt (in seconds):

R_IEI(w_k) = p_IEI(w_k)/Δt. (11)

Naturally, the IDI distribution (Eq. 10) can be scaled in the same way to yield a rate R_IDI (in IDIs/s):

R_IDI(w_k) = p_IDI(w_k)/Δt. (12)

Numerically computed IEI and IDI distributions are shown below for the example process and several additional processes. Note that, in the context of a neuronal spike train, the IDI distribution can be equated with the interspike interval (ISI) distribution.
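Step 4 can then be sketched as a weighted sum of recurrence distributions followed by the normalization of Eqs. 9 and 10. The Python sketch below (an illustration with assumed names, not the paper's MATLAB code) works for either process: pass the per-bin point probabilities together with a function returning the matching forward recurrence distribution:

```python
def event_recurrence(p_event, i):
    """f_event(t_i, w_k) for k = 1 .. n-1-i bins (Eq. 7)."""
    f, survive = [], 1.0
    for k in range(1, len(p_event) - i):
        f.append(survive * p_event[i + k])
        survive *= 1.0 - p_event[i + k]
    return f

def interval_distribution(p_point, recurrence):
    """PMF over interval durations of 1, 2, ... bins (Eqs. 9 and 10)."""
    n = len(p_point)
    raw = [0.0] * (n - 1)
    for i in range(n - 1):
        for k, f in enumerate(recurrence(i)):
            raw[k] += p_point[i] * f          # joint: point at bin i, next one k+1 bins later
    n_points = sum(p_point)                   # expected number of points per window
    p_zero = 1.0
    for p in p_point:
        p_zero *= 1.0 - p                     # probability of an empty window
    n_intervals = n_points - 1.0 + p_zero     # expected number of intervals per window
    return [r / n_intervals for r in raw]

# IEI distribution for a toy constant-probability example
p_event = [0.5, 0.5, 0.5, 0.5]
p_iei = interval_distribution(p_event, lambda i: event_recurrence(p_event, i))
```

The resulting PMF sums to 1, since the unnormalized column sums equal the expected number of consecutive-point pairs per window, which is exactly n_points − 1 + p_zero.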

Results
The correctness of the method presented above will now be demonstrated for several point processes by comparing results obtained with the numerical method to results computed from stochastic simulations. Although relatively high rates are used in these demonstrations, the numerical method yields equally precise results for low rates.
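The stochastic simulations can be sketched as follows. This Python illustration uses assumed names and structure (the paper's simulations were implemented in MATLAB): it draws events bin by bin from p_event, applies a nonparalyzable dead time with a fixed part plus a geometric random part, and collects interdetection intervals in bins:

```python
import random

def simulate_idis(p_event, d_fixed_bins, p_geo, n_runs, seed=1):
    """Collect IDIs (in bins) from repeated simulated observation windows."""
    rng = random.Random(seed)
    idis = []
    for _ in range(n_runs):
        dead_until = -1                       # detector starts out not dead
        last_det = None
        for i, p in enumerate(p_event):
            # an event occurs regardless of the dead state, but it is
            # detected only if the detector is not currently dead
            if rng.random() < p and i > dead_until:
                if last_det is not None:      # record interval since previous detection
                    idis.append(i - last_det)
                last_det = i
                d = d_fixed_bins + 1          # random part lasts at least one bin
                while rng.random() >= p_geo:  # geometric part, mean 1/p_geo bins
                    d += 1
                dead_until = i + d            # nonparalyzable: set once, never extended
    return idis
```

As a sanity check, with p_event ≡ 1 and a deterministic three-bin dead time (d_fixed_bins = 2, p_geo = 1), every window yields detections exactly four bins apart.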

Inhomogeneous periodic Poisson point process modified by random dead times
For the example inhomogeneous Poisson point process above (see Methods and Figs. 2, 3, 4, 5, 6), the event rate R_event (in events/s) is given by a sinusoid passed through an exponential function,

R_event(t_i) = A·e^(B·sin(2πf·t_i)), (13)

where A is a scale factor (in events/s) that specifies the event rate when the instantaneous value of the sinusoid equals zero, B is a slope factor, f is the frequency (in hertz) of the sinusoid, and t_i is a time point (in seconds) in the observation window. For the example process in Fig. 4c, A = 600 events/s, B = 1, and f = 400 Hz. Figure 7a shows an IEI distribution computed from one million simulations of the event process (gray line) and the IEI distribution obtained using the numerical method (dashed black line). Figure 7b shows the corresponding IDI distributions of the detection process. The intermediate functions obtained using the numerical method (i.e., p_detection, p_dead, f_event, and f_detection) are in correspondingly close agreement with the simulation results (not shown). These results show that the numerical method works correctly for the example process having a periodic event rate.

Fig. 7 a IEI distributions computed from simulations (gray line) and from the numerical method (dashed black line). b IDI distributions computed from simulations and from the numerical method after taking the dead-time effects into consideration, with d_fixed = 0.5 ms and μ_rand = 0.5 ms

Figure 8 shows results for an inhomogeneous Poisson point process whose event rate over the observation window is aperiodic and equal to one realization of a random walk. Figure 8a shows the particular random walk used for this example. The initial rate at time t_1 was 600 events/s, and the rates at later time points were obtained by cumulatively summing random values drawn from the uniform distribution on [−75, 75] events/s. Figure 8b shows the detection rate expected if this process were modified by dead times drawn from the example distribution (Fig. 3c).
Figure 8c shows an IEI distribution computed from one million simulations of the event process (gray line) and the IEI distribution obtained using the numerical method (dashed black line). Figure 8d shows the corresponding IDI distributions. The numerical results and the simulation results are in close agreement.

Figure 9 shows results for a homogeneous Poisson point process. Figure 9a shows its constant event rate of 1000 events/s. Figure 9b shows the detection rate expected if the process were modified by dead times drawn from the example distribution (Fig. 3c). Figure 9c shows an IEI distribution computed from one million simulations of the event process (gray line) and the IEI distribution obtained using the numerical method (dashed black line). The IEIs in the homogeneous Poisson point process have a geometric distribution (or, equivalently, an exponential distribution in the case of a continuous-time process). The inset in Fig. 9c shows the IEI distribution from the example process plotted with a logarithmic rate axis. Due to right censoring caused by the finite observation window, this distribution deviates at longer IEIs from the straight line that would be expected for a geometric distribution. Figure 9d shows the corresponding IDI distributions. The numerical results and the simulation results are in close agreement. Although a closed-form expression is available for the continuous counterpart to this distribution (Young and Barta 1986; Heil et al. 2007; Neubauer et al. 2009), it was derived assuming an infinitely long observation window, which avoids the effects of right censoring. This closed-form solution would therefore account poorly for any experimentally obtained interval distributions that are strongly affected by right censoring.
As the observation window becomes shorter and right censoring becomes more pronounced, the mean and standard deviation of the observed intervals decrease relative to those expected for the underlying process (Nawrot et al. 2008). Because a distribution computed with the numerical method presented here includes the effects of right censoring, it can account for these effects.

Although a homogeneous event process (Fig. 9a) is in equilibrium, the detection process (Fig. 9b) that arises after applying random dead times is not. The nonequilibrium nature of the detection process manifests as a detection rate that initially equals the event rate but then decreases and settles into a lower steady-state rate. The higher detection rate at the start of the observation window results from the assumption that the detector is not initially in a dead state, such that the distribution of forward recurrence times to the first detection is identical to the distribution of forward recurrence times to the first event. The lower detection rate in the steady state results from the fact that, for all detections after the first one, dead-time effects cause the probability mass in the distribution of forward recurrence times to the next detection to be shifted toward later times than in the corresponding distribution of forward recurrence times to the next event. For a homogeneous event process, the steady-state detection rate equals the inverse of the sum of the mean IEI and the mean dead time. This yields a steady-state rate of 526.3 detections/s for the example process (Fig. 9b), which has a mean IEI of (1 − p_event)·(Δt/p_event) = 0.9 ms and a mean dead time of d_fixed + μ_rand = 1 ms.
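The steady-state calculation above can be checked directly. This short Python sketch uses the example values from Fig. 9 (variable names are assumptions):

```python
# Steady-state detection rate for the homogeneous example process
dt = 1e-4                                    # 0.1-ms time step, in seconds
p_event = 1000.0 * dt                        # per-bin event probability (rate * dt)
mean_iei = (1.0 - p_event) * dt / p_event    # mean IEI of the geometric distribution: 0.9 ms
mean_dead = 0.5e-3 + 0.5e-3                  # d_fixed + mu_rand = 1 ms
steady_rate = 1.0 / (mean_iei + mean_dead)   # about 526.3 detections/s
```

The computed steady_rate is 1/1.9 ms ≈ 526.3 detections/s, matching the value quoted for Fig. 9b.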

Discussion
The numerical method presented here yields the expected distribution of intervals between detections of events in an inhomogeneous Poisson point process, assuming the detector is nonparalyzable and each detection is followed by a dead time drawn from a fixed probability distribution. There are, however, several limitations and considerations to keep in mind, as described below.

The method models the process in discrete time
Although point processes in nature operate in continuous time, numerical computation methods necessarily treat them as though they operate in discrete time. For a stochastic process that operates in continuous time, the IDI distribution obtained with the discrete numerical method is therefore an estimate which converges to the exact solution as the time step approaches zero. There are, however, practical limits to how brief the time step can be in the numerical method. Shortening the time step or prolonging the rate function increases the number of time points in the observation window, and the computation time of the method grows superlinearly with the number of time points. The main computational bottleneck is Eq. 8, which has a computation time that grows approximately in proportion to the cube of the number of time points. Nonetheless, to obtain an adequate description of a continuous point process, the time step in the numerical method must be sufficiently brief. At a minimum, it should be brief enough that each time step would be expected to contain either zero events or one event, but not several events. It should also be brief enough that fluctuations in the rate function are not undersampled. For example, to model a process in which the rate fluctuates sinusoidally with a frequency of f Hz, the author suggests a correspondingly brief time step.

Fig. 9 Comparison of numerical and simulation results for a homogeneous event process. a The constant event rate of R_event = 1000 events/s. b The expected detection rate after taking the dead-time effects into consideration, with d_fixed = 0.5 ms and μ_rand = 0.5 ms. c IEI distributions computed from one million simulations (gray line) and from the numerical method (dashed black line). d IDI distributions computed from the simulations and from the numerical method. Insets in c and d show the same distributions with a logarithmic rate axis
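The two minimum requirements on the time step (at most about one expected event per step, and adequate sampling of rate fluctuations) can be sketched as a simple check. The numerical thresholds used here (`p_max`, `samples_per_cycle`) are illustrative assumptions, not the paper's prescription:

```python
# Sketch of a time-step adequacy check for the discrete method. The thresholds
# p_max and samples_per_cycle are illustrative assumptions.
def time_step_ok(dt, rate_max, f_mod=None, p_max=0.1, samples_per_cycle=20):
    """Check that dt is brief enough for event rates up to rate_max (1/s)
    and, optionally, for a sinusoidal rate modulation at f_mod (Hz)."""
    ok = rate_max * dt <= p_max           # expect at most ~one event per step
    if f_mod is not None:
        # require several samples per cycle of the rate fluctuation
        ok = ok and dt <= 1.0 / (samples_per_cycle * f_mod)
    return ok
```

For example, `time_step_ok(1e-5, 1000.0, f_mod=100.0)` passes, while a time step of 10 ms at the same rate fails the one-event-per-step criterion.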

The method describes a nonequilibrium detection process
It has been shown that the numerical method can be used to compute IDI distributions for either homogeneous (Fig. 9) or inhomogeneous (Figs. 1, 2, 3, 4, 5, 6, 7, 8) event processes. In either case, the detection process is not in equilibrium, because there are transient changes in the detection rate at the start of the process before it settles into a steady state (Deger et al. 2010). This effect is most apparent for the process with a constant event rate (Fig. 9a, b), although it also occurs for the process with a periodic event rate (Figs. 4c and 2). For a process to be in equilibrium at the start of the observation window, it must have both begun and settled into its steady state prior to the start of the observation window. However, if a process began prior to the start of the observation window, then it might be in a dead state at the start of the observation window, which violates an assumption of the numerical method presented here. Adapting the method to describe an equilibrium detection process would therefore require considering the probability that each time point in the observation window is in a dead state resulting from a detection occurring prior to the observation window.
The numerical method presented here is therefore suitable only for processes that begin at the start of the observation window or processes that have a negligible event probability prior to the start of the observation window. An example of such a process would be a train of spikes recorded from a neuron in response to a stimulus, provided the neuron fires no (or negligibly few) spikes in the absence of a stimulus prior to the observation window.
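The initial transient in the detection rate can be reproduced with a brief Monte Carlo sketch. The parameter values match the homogeneous example; the discrete simulation scheme itself is an assumption for illustration, not the paper's implementation:

```python
import random

# Monte Carlo sketch of the nonequilibrium detection process: a homogeneous
# event process (1000 events/s) modified by a dead time of d_fixed = 0.5 ms
# plus an exponentially distributed part with mean mu_rand = 0.5 ms.
def simulate_detection_rate(n_trials=2000, n_steps=300, dt=1e-4,
                            rate=1000.0, d_fixed=5e-4, mu_rand=5e-4, seed=0):
    rng = random.Random(seed)
    p_event = rate * dt
    counts = [0] * n_steps                  # detections per time step
    for _ in range(n_trials):
        dead_until = -1                     # detector starts out not dead
        for t in range(n_steps):
            if t > dead_until and rng.random() < p_event:
                counts[t] += 1
                dead = d_fixed + rng.expovariate(1.0 / mu_rand)
                dead_until = t + int(dead / dt)
    return [c / (n_trials * dt) for c in counts]    # detection rate (1/s)

r = simulate_detection_rate()
# r[0] is near the event rate; later values settle near the lower
# steady-state detection rate.
```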

The method requires that the detection probability be independent of the process history prior to the previous detection
The numerical method presented here works only for processes in which the instantaneous probability of a detection depends only on the current time point in the observation window and on the time elapsed since the most recent detection (necessary for knowing the probability that the process is still in a dead state). The detection process can be either an "ordinary renewal process" (which results if the underlying event process is homogeneous such that the intervals between detections are independent and identically distributed) or an "inhomogeneous renewal process" (which results if the underlying event process is inhomogeneous such that the distribution of intervals to the next detection will depend on the rate function following the most recent detection). The numerical method can be applied in either case, so long as the probability of detection is not influenced by the process history prior to the most recent detection. The method does not work for nonrenewal processes.
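The renewal assumption can be made concrete with a minimal sketch: given the previous detection at step s, the distribution of the interval to the next detection depends only on s, the per-step event probabilities, and the time elapsed in the dead state. The function below assumes a discretized dead-time distribution and is an illustration of this dependence structure, not the paper's Eq. 8:

```python
import numpy as np

# Minimal sketch: distribution of the interval to the next detection, given
# the previous detection at step s. p[t] is the per-step event probability;
# q[d] is the probability that the dead time lasts d steps (assumed
# discretization). Illustrates the renewal assumption only.
def next_detection_dist(p, q, s):
    n = len(p)
    f = np.zeros(n - s)                  # f[k] = P(next detection at s + k)
    for d, qd in enumerate(q):           # condition on the dead-time duration
        if qd == 0.0:
            continue
        alive = qd                       # P(dead time = d steps, no event yet)
        for k in range(d + 1, n - s):
            f[k] += alive * p[s + k]     # first post-dead-time event at s + k
            alive *= 1.0 - p[s + k]
    return f                             # may sum to < 1 (right censoring)

p = np.full(100, 0.1)                    # homogeneous example: p_event = 0.1
q = np.zeros(11); q[10] = 1.0            # fixed dead time of 10 steps
f = next_detection_dist(p, q, s=0)
```

Because `f` depends only on `s` and the elapsed time `k`, the same function applies whether the event process is homogeneous (ordinary renewal) or inhomogeneous (inhomogeneous renewal).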

The method operates in one direction only
Given a particular dead-time distribution, the numerical method can be used to work forward from the rate function to the IDI distribution. It is not possible, however, to work backward from the IDI distribution to the rate function that gave rise to it because the interval distribution contains little information about the timing of detections within the observation window (Turcott et al. 1994;Bi et al. 1988). Indeed, many different rate functions can give rise to virtually identical interval distributions. This can be demonstrated for a simple case by comparing a homogeneous Poisson point process to a "doubly stochastic" point process in which the event rate varies stochastically over time (Cox 1955). For sufficiently long observation windows, both processes can yield interval distributions that are essentially identical, despite the processes having different rate functions (Teich et al. 1990;Lowen and Teich 1991).

Previous approaches
Several investigators have presented results that relate, to varying degrees, to the present work. To the author's knowledge, however, none has presented a method to compute the IDI distribution expected for any arbitrary inhomogeneous Poisson point process of events modified by dead times drawn from any arbitrary distribution. For example, closed-form expressions have been presented to describe the IDI distribution for the dead-time-modified homogeneous (i.e., constant-rate) Poisson point process, which in the absence of right censoring is given by the convolution of the IEI distribution and the dead-time distribution (Young and Barta 1986; Li and Young 1993; Picinbono 2009; Prijs et al. 1993; Heil et al. 2007; Neubauer et al. 2009). Although Turcott et al. (1994) provide an expression for the IDI distribution of an inhomogeneous Poisson point process, it is for the very specific case in which the rate function decays exponentially and dead times are nonrandom and inversely proportional to the instantaneous detection rate. Deger et al. (2010) present a method to compute the time-dependent hazard function (their Eq. 22) for an inhomogeneous Poisson point process modified by random dead times. If converted to the corresponding probability distribution, this hazard function is equivalent to the distribution of forward recurrence times to the next detection obtained in the present study (Eq. 8).
However, Deger et al. (2010) did not proceed to compute the expected IDI distribution from this result. Yakovlev et al. (2005) present a method to compute the expected IEI distribution for any arbitrary inhomogeneous Poisson point