Generalizations of the ‘Linear Chain Trick’: incorporating more flexible dwell time distributions into mean field ODE models

In this paper we generalize the Linear Chain Trick (LCT; aka the Gamma Chain Trick) to help provide modelers more flexibility to incorporate appropriate dwell time assumptions into mean field ODEs, and to help clarify connections between individual-level stochastic model assumptions and the structure of corresponding mean field ODEs. The LCT is a technique used to construct mean field ODE models from continuous-time stochastic state transition models where the time an individual spends in a given state (i.e., the dwell time) is Erlang distributed (i.e., gamma distributed with integer shape parameter). Despite the LCT’s widespread use, we lack general theory to facilitate the easy application of this technique, especially for complex models. Modelers must therefore choose between constructing ODE models using heuristics with oversimplified dwell time assumptions, using time-consuming derivations from first principles, or instead using non-ODE models (like integro-differential or delay differential equations) which can be cumbersome to derive and analyze. Here, we provide analytical results that enable modelers to more efficiently construct ODE models using the LCT or related extensions. Specifically, we provide (1) novel LCT extensions for various scenarios found in applications, including conditional dwell time distributions; (2) formulations of these LCT extensions that bypass the need to derive ODEs from integral equations; and (3) a novel Generalized Linear Chain Trick (GLCT) framework that extends the LCT to a much broader set of possible dwell time distribution assumptions, including the flexible phase-type distributions, which can approximate distributions on ℝ⁺ and can be fit to data.


Introduction
Many scientific applications involve systems that can be framed as continuous time state transition models (e.g., see Strogatz 2014), and these are often modeled using mean field ordinary differential equations (ODEs) of the form

d/dt x(t) = f(x(t); θ),     (1)

where x(t) ∈ ℝ^n, parameters θ ∈ ℝ^p, and f : ℝ^n → ℝ^n is smooth. The abundance of such applications, and the accessibility of ODE models in terms of the analytical techniques and computational tools for building, simulating, and analyzing them [e.g., straightforward simulation methods and bifurcation analysis software like MatCont (Dhooge et al. 2003) and XPPAUT (Ermentrout 1987, 2019)], have made ODEs one of the most popular modeling frameworks in scientific applications. Despite their widespread use, one shortcoming of ODE models is their inflexibility when it comes to specifying probability distributions that describe the duration of time individuals spend in a given state. The basic options available for assuming specific dwell time distributions within an ODE framework can really be considered a single option: the 1st event time distribution for a (nonhomogeneous) Poisson process, which includes the exponential distribution as a special case.
To illustrate this, consider the following SIR model of infectious disease transmission by Kermack and McKendrick (1927),

d/dt S(t) = −λ(t) S(t),
d/dt I(t) = λ(t) S(t) − γ I(t),     (2)
d/dt R(t) = γ I(t),

where S(t), I(t), and R(t) correspond to the number of susceptible, infected, and recovered individuals in a closed population at time t and λ(t) ≡ β I(t) is the per capita infection rate (also called the force of infection by Anderson and May (1992) and others). This model can be thought of as the mean field model for some underlying stochastic state transition model where a large but finite number of individuals transition from state S to I to R [see the Appendix and Kermack and McKendrick (1927) for a derivation, and see Armbruster and Beck (2017), Banks et al. (2013), and references therein for examples of the convergence of stochastic models to mean field ODEs]. Although multiple stochastic models can yield the same mean field deterministic model, it is common to consider a stochastic model based on Poisson processes. For the SIR model above, for example, this stochastic analog would assume that, over the time interval [t, t + Δt] (for very small Δt), an individual in state S or I at time t transitions from S to I with probability λ(t) Δt, or from I to R with probability γ Δt, respectively. Taking Δt → 0 yields the desired continuous time stochastic model. Here, the linear rate of transitions from I to R (γ I(t)) arises from assuming the dwell time for an individual in the infected state (I) follows an exponential distribution with rate γ (i.e., the 1st event time distribution for a homogeneous Poisson process with rate γ). Similarly, assuming the time spent in state S follows the 1st event time distribution under a nonhomogeneous (also called inhomogeneous) Poisson process with rate λ(t) yields a time-varying per capita transition rate λ(t). This association of a mean field ODE with a specific underlying stochastic model provides very valuable intuition in an applied context.
For example, it allows modelers to ascribe application-specific (e.g., biological) interpretations to parameters and thus estimate parameter values (e.g., for γ above, the mean time spent infectious is 1/γ ), and it provides intuition and a clear mathematical foundation from which to construct and evaluate mean field ODE models based on individual-level, stochastic assumptions.
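To make the stochastic analog concrete, the following minimal Gillespie-style simulation sketch implements the S → I → R transition model described above (parameter values, function names, and the population size are illustrative choices, not from the original):

```python
import random

def gillespie_sir(beta, gamma, S, I, R, t_end, rng):
    """Exact stochastic simulation of the S -> I -> R transition model.

    Rates follow the Poisson process assumptions in the text:
    infections occur at total rate beta*I*S (per capita rate lambda = beta*I),
    recoveries occur at total rate gamma*I.
    """
    t = 0.0
    while I > 0 and t < t_end:
        rate_inf = beta * I * S
        rate_rec = gamma * I
        total = rate_inf + rate_rec
        t += rng.expovariate(total)          # exponential time to next event
        if t >= t_end:
            break
        if rng.random() < rate_inf / total:  # choose which event fired
            S, I = S - 1, I + 1              # S -> I transition
        else:
            I, R = I - 1, R + 1              # I -> R transition
    return S, I, R

rng = random.Random(1)
S_end, I_end, R_end = gillespie_sir(beta=0.002, gamma=1.0,
                                    S=999, I=1, R=0, t_end=50.0, rng=rng)
```

Averaging many such runs (for large populations) approximates the mean field trajectories of Eq. (2).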
To construct similar models using other dwell time distributions, a standard approach is to formulate a continuous time stochastic model and from it derive mean field distributed delay equations, typically represented as integro-differential equations (IDEs) or integral equations (IEs) (e.g., see Kermack and McKendrick 1927; Hethcote and Tudor 1980; Feng et al. 2007, 2016; or see the example derivation in the Appendix). Readers unfamiliar with IEs and IDEs are referred to Burton (2005) or similar texts. IEs and IDEs have proven to be quite useful models in biology, e.g., they have been used to model chemical kinetics (Roussel 1996), gene expression (Smolen et al. 2000; Takashima et al. 2011; Guan and Ling 2018), physiological processes such as glucose-insulin regulation (Makroglou et al. 2006, and references therein), cell proliferation and differentiation (Özbay et al. 2008; Clapp and Levy 2015; Yates et al. 2017), cancer biology and treatment (Piotrowska and Bodnar 2018; Krzyzanski et al. 2018; Câmara De Souza et al. 2018), pathogen and immune response dynamics (Fenton et al. 2006), infectious disease transmission (Anderson and Watson 1980; Lloyd 2001a, b; Feng and Thieme 2000; Wearing et al. 2005; Lloyd 2009; Feng et al. 2007; Ciaravino et al. 2018), and population dynamics (MacDonald 1978a; Blythe et al. 1984; Metz and Diekmann 1986; Boese 1989; Nisbet et al. 1989; Cushing 1994; Wolkowicz et al. 1997; Gyllenberg 2007; Wang and Han 2016; Lin et al. 2018; Robertson et al. 2018). See also Campbell and Jessop (2009) and the applications reviewed therein.
However, while distributed delay equations are very flexible, in that they can incorporate arbitrary dwell time distributions, compared to ODEs they can also be more challenging to derive, analyze mathematically, and simulate (Cushing 1994; Burton 2005). Thus, many modelers face a trade-off between building appropriate dwell time assumptions into their mean field models (i.e., opting for an IE or IDE model) and constructing parsimonious models that are more easily analyzed both mathematically and computationally (i.e., opting for an ODE model). For example, the following system of integral equations generalizes the SIR example above by incorporating an arbitrary distribution for the duration of infectiousness (i.e., the dwell time in state I):

S(t) = S(0)(1 − F_S(t))     (3a)
I(t) = I(0)(1 − F_I(t)) + ∫_0^t β I(u) S(u) (1 − F_I(t − u)) du     (3b)
R(t) = N − S(t) − I(t)     (3c)

where N = S(0) + I(0) + R(0), 1 − F_S(t) = exp(−∫_0^t β I(u) du) is the survival function for the distribution of time spent in susceptible state S [i.e., the 1st event time under a Poisson process with rate λ(t) = β I(t)], and 1 − F_I(t) = exp(−γt) is the survival function for the time spent in the infected state I (related models can be found in, e.g., Feng and Thieme 2000; Ma and Earn 2006; Krylova and Earn 2013; Champredon et al. 2018). A different choice of the CDF F_I allows us to generalize the SIR model to other dwell time distributions that describe the time individuals spend in the infected state. Integral equations like those above can also be differentiated (assuming the integrands are differentiable) and represented as integro-differential equations (e.g., as in Hethcote and Tudor 1980).
There have been some efforts in the past to identify which categories of integral and integro-differential equations can be reduced to systems of ODEs (e.g., MacDonald 1989; Metz and Diekmann 1991; Ponosov et al. 2002; Jacquez and Simon 2002; Burton 2005; Goltser and Domoshnitsky 2013; Diekmann et al. 2017, and references therein), but in practice the most well known case is the reduction of IEs and IDEs that assume Erlang distributed dwell times. This is done using what has become known as the Linear Chain Trick (LCT, also referred to as the Gamma Chain Trick; MacDonald 1978b; Smith 2010), which dates at least back to Fargue (1973) and earlier work by Theodore Vogel (e.g., Vogel 1961, 1965, according to Câmara De Souza et al. 2018). However, for more complex models that exceed the level of complexity that can be handled by existing "rules of thumb" like the LCT, the current approach is to derive mean field ODEs from mean field integral equations that might themselves first need to be derived from system-specific stochastic state transition models (e.g., Kermack and McKendrick 1927; Feng et al. 2007; Banks et al. 2013; Feng et al. 2016; and see the Appendix for an example). Unfortunately, modelers often avoid these extra (often laborious) steps in practice by assuming (sometimes only implicitly) very simplistic dwell time distributions based on Poisson process 1st event times, as in the SIR example above.
In light of the widespread use of ODE models, these challenges and trade-offs underscore a need for a more rigorous theoretical foundation to more effectively and more efficiently construct mean field ODE models that include more flexible dwell time distribution assumptions (Wearing et al. 2005;Feng et al. 2016;Robertson et al. 2018). The goal of this paper is to address these needs by (1) providing a theoretical foundation for constructing the desired system of ODEs directly from "first principles" (i.e., stochastic model assumptions), without the need to derive ODEs from intermediate IDEs or explicit stochastic models, and by (2) providing similar analytical results for novel extensions of the LCT which allow more flexible dwell time distributions, and conditional relationships among dwell time distributions, to be incorporated into ODE models. We also aim to clarify how underlying (often implicit) stochastic model assumptions are reflected in the structure of corresponding mean field ODE model equations.
The remainder of this paper is organized as follows. An intuitive description of the Linear Chain Trick (LCT) is given in Sect. 1.1 as a foundation for the extensions that follow. In Sect. 2 we review key notation and properties of Poisson processes and certain probability distributions needed for the results that follow. In Sect. 3.1.1 we highlight the association between Poisson process intensity functions and per capita rates in mean field ODEs, and in Sect. 3.1.2 we introduce what we call the weak memorylessness property of (nonhomogeneous) Poisson process 1st event time distributions. In Sects. 3.2 and 3.3 we give a formal statement of the LCT, and in Sect. 3.4 a generalization that allows time-varying rates in the underlying Poisson processes. We then provide similar LCT generalizations for more complex cases: In Sect. 3.5 we provide results for multiple ways to implement transitions from one state to multiple states (which arise from different stochastic model assumptions and lead to different systems of mean field ODEs), and we address dwell times that obey Erlang mixture distributions. In Sect. 3.6 we detail how to construct mean field ODEs in which a sub-state transition (e.g., from infected to quarantined) does or does not alter the overall dwell time distribution in that state (e.g., the duration of infection). Lastly, in Sect. 3.7 we present a Generalized Linear Chain Trick (GLCT) which describes how to construct mean field ODEs from first principles based on assuming a very flexible family of dwell time distributions that includes the phase-type distributions, i.e., hitting time distributions for continuous time Markov chains (Reinecke et al. 2012a; Horváth et al. 2016), which we address more specifically in Sect. 3.7.1. Tools for fitting phase-type distributions to data, or using them to approximate other distributions, are also mentioned in Sect. 3.7.1.

Intuitive description of the Linear Chain Trick
To begin, an intuitive understanding of the Linear Chain Trick (LCT), based on properties of Poisson processes, is helpful for drawing connections between underlying stochastic model assumptions and the structure of corresponding mean field ODEs.
Here we consider a very basic case (detailed in Sect. 3.2): the mean field ODE model for a stochastic process in which particles in state X remain there for an Erlang(r, k) distributed amount of time before exiting to some other state.
In short, the LCT exploits a natural stage structure within state X imposed by assuming an Erlang(r, k) distributed dwell time [i.e., a gamma(r, k) distribution with integer shape k]. Recall that the time to the kth event under a homogeneous Poisson process with rate r > 0 is Erlang(r, k) distributed. In that context, each event is preceded by a length of time that is exponentially distributed with rate r, and thus the time to the kth event is the sum of k independent and identically distributed exponential random variables [i.e., the sum of k iid exponential random variables with rate r is Erlang(r, k) distributed]. Particles in state X at a given time can therefore be classified by which event they are awaiting next, i.e., a particle is in state X_i if it is waiting for the ith event to occur, and X_1, …, X_k is a partition of X. The dwell time distribution for each sub-state X_i is exponential with rate r, and particles leave the last state X_k (and thus X) upon the occurrence of the kth event.
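This sum-of-exponentials characterization is easy to verify by simulation. The sketch below (illustrative parameters r = 2, k = 4) compares the empirical mean and variance of summed exponential waiting times to the Erlang values k/r and k/r²:

```python
import random

rng = random.Random(42)

def erlang_sample(r, k, rng):
    # Time to the kth Poisson process event: the sum of k iid
    # exponential(rate r) inter-event times, i.e., Erlang(r, k).
    return sum(rng.expovariate(r) for _ in range(k))

n = 100_000
samples = [erlang_sample(2.0, 4, rng) for _ in range(n)]
emp_mean = sum(samples) / n                              # theory: k/r = 2.0
emp_var = sum((s - emp_mean) ** 2 for s in samples) / n  # theory: k/r^2 = 1.0
```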
This sub-state partition is useful to impose on X because the corresponding mean field equations for these sub-states are linear (or nearly linear) ODEs. Specifically, if x_i(t) is the amount in X_i at time t and I(t) is the influx rate into X, these mean field ODEs are

d/dt x_1(t) = I(t) − r x_1(t),
d/dt x_j(t) = r x_{j−1}(t) − r x_j(t),  j = 2, …, k,

where the total amount in X at time t is x(t) = Σ_{j=1}^k x_j(t). As we show below, this Poisson process based perspective allows us to generalize the LCT to other more complex scenarios where we partition a focal state X in a similar fashion. Those results are further extended from exponential dwell times to 1st event time distributions under a nonhomogeneous Poisson process with time-varying rate r(t), allowing for time-varying (or state-dependent) dwell time distributions to be used in extensions of the LCT.

Model framework
The context in which we consider applications of the Linear Chain Trick (LCT) is the derivation of continuous time mean field model equations for stochastic state transition models with a distributed dwell time in a focal state, X. Such mean field models might otherwise be modeled as integral equations (IEs) or integro-differential equations (IDEs), and we seek to identify generalizations of the LCT that allow us to replace mean field integral equations with equivalent systems of 1st order ODEs.

Distributions and notation
Below we will extend the LCT from Erlang(r, k) distributions (i.e., kth event time distributions under homogeneous Poisson processes with rate r) to event time distributions under nonhomogeneous Poisson processes with time-varying rate r(t), and related distributions like the minimum of multiple Erlang random variables. In this section we first review some basic properties of these distributions.
Gamma distributions can be parameterized by two strictly positive quantities: rate r and shape k (sometimes denoted α and β, respectively). The Erlang family of distributions is the subset of gamma distributions with integer-valued shape parameters k ∈ ℤ⁺, or equivalently, the distributions resulting from sums of iid exponential random variables. That is, if a random variable T = Σ_{i=1}^k T_i, where the T_i are independent exponential random variables with rate r, then T is Erlang(r, k) distributed. Since the inter-event times under a homogeneous Poisson process are exponentially distributed, the time to the kth event is thus Erlang(r, k). This construction is foundational to a proper intuitive understanding of the LCT and its extensions.
If random variable T is gamma(r, k) distributed, then its mean μ, variance σ², and coefficient of variation c_v are given by

μ = k/r,   σ² = k/r²,   c_v = σ/μ = 1/√k.     (5)

Note that gamma distributions can be parameterized by their mean μ and variance σ² by rewriting rate r and shape k using Eq. (5):

r = μ/σ²,   k = μ²/σ².     (6)

However, to ensure this gamma distribution is also Erlang (i.e., to ensure the shape parameter k is an integer) one must adjust the assumed variance up or down by rounding the value of k in Eq. (6) down or up, respectively, to the nearest integer (see Appendix B for details, and alternatives). The Erlang density function (g), CDF (G), and survival function (S = 1 − G; also called the complementary CDF) are given by

g(t) = (r^k t^{k−1} / (k−1)!) e^{−rt},     (7a)
G(t) = 1 − Σ_{j=0}^{k−1} (1/j!) e^{−rt} (rt)^j,     (7b)
S(t) = Σ_{j=0}^{k−1} (1/j!) e^{−rt} (rt)^j.     (7c)

The results in Sect. 3 use (and generalize) the property of Erlang distributions detailed in Lemma 1 (eqs. 7.11 in Smith 2010, restated here without proof), which is the linchpin of the LCT.
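As a small worked example of this moment matching, the hypothetical helper below rounds the shape k to the nearest integer and then re-solves the rate so the mean is preserved exactly (one of several possible adjustments; see Appendix B of the original for the alternatives actually discussed):

```python
def erlang_from_moments(mu, sigma2):
    """Match a gamma(r, k) to a target mean mu and variance sigma2 (Eq. 6),
    then force an integer shape so the result is Erlang.

    Rounding k changes the variance slightly; re-solving r = k/mu keeps
    the mean exact (the realized variance becomes k/r^2 = mu^2/k).
    """
    k = max(1, round(mu * mu / sigma2))  # shape k = mu^2 / sigma^2, rounded
    r = k / mu                           # rate chosen so mean k/r = mu
    return r, k

r_hat, k_hat = erlang_from_moments(2.0, 1.0)  # exact match here: k = 4, r = 2
```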
Lemma 1 Erlang density functions g_r^j(t), with rate r and shape j, satisfy

d/dt g_r^1(t) = −r g_r^1(t),
d/dt g_r^j(t) = r ( g_r^{j−1}(t) − g_r^j(t) ),  for j ≥ 2.     (8)

Since homogeneous Poisson processes are a special case of nonhomogeneous Poisson processes, from here on we will use "Poisson process" or "Poisson process with rate r(t)" to refer to cases that apply to both homogeneous [i.e., r(t) = r constant] and nonhomogeneous Poisson processes.
The more general kth event time distribution under a Poisson process with rate r(t), starting from some time τ < t, has a density function (h_r^k), survival function (S_r^k), and CDF (H_r^k ≡ 1 − S_r^k) given by

h_r^k(t, τ) = r(t) ( m(t, τ)^{k−1} / (k−1)! ) e^{−m(t, τ)},     (9a)
S_r^k(t, τ) = Σ_{j=0}^{k−1} ( m(t, τ)^j / j! ) e^{−m(t, τ)},     (9b)
H_r^k(t, τ) = 1 − S_r^k(t, τ),     (9c)

where m(t, τ) ≡ ∫_τ^t r(u) du. For an arbitrary survival function starting at time τ (i.e., over the period [τ, t] where t ≥ τ) we use the notation S(t, τ), along with related shorthand where convenient. Lastly, in the context of state transition models, it is common to assume that, upon leaving a given state (e.g., state X) at time t, individuals are distributed across multiple recipient states according to a generalized Bernoulli distribution (also known as the categorical distribution, or the multinomial distribution with one trial) defined on the integers 1 through k, where the probability of a particle entering the jth of k recipient states (j ∈ {1, …, k}) is p_j(t); these probabilities define the probability vector p(t) = [p_1(t), …, p_k(t)]^T with Σ_{j=1}^k p_j(t) = 1.
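A generalized Bernoulli (categorical) draw of this kind takes only a few lines. The sketch below (illustrative, constant probabilities) distributes departing particles across k = 3 recipient states and checks the empirical proportions:

```python
import random

rng = random.Random(7)

def categorical(p, rng):
    # One draw from a generalized Bernoulli distribution with
    # probability vector p = [p_1, ..., p_k] (entries summing to 1);
    # returns the index j of the chosen recipient state.
    u, cum = rng.random(), 0.0
    for j, pj in enumerate(p, start=1):
        cum += pj
        if u < cum:
            return j
    return len(p)  # guard against floating point round-off

n = 30_000
counts = {1: 0, 2: 0, 3: 0}
for _ in range(n):
    counts[categorical([0.5, 0.3, 0.2], rng)] += 1
```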

Results
The results below focus on one or more states, within a potentially larger state transition model, for which we would like to assume a particular dwell time distribution and derive a corresponding system of mean field ODEs using the LCT or a generalization of the LCT. In particular, the results below describe how to construct those mean field ODEs directly from stochastic model assumptions without needing to derive them from equivalent mean field integral equations (which themselves might need to be derived from an explicit continuous-time stochastic model).

Preliminaries
Before presenting extensions of the LCT, we first illustrate in Sect. 3.1.1 how mean field ODEs include terms that reflect underlying Poisson process rates. In Sect. 3.1.2, we highlight a key property of these Poisson process 1st event time distributions that we refer to as a weak memorylessness property since it is a generalization of the well known memorylessness property of the exponential and geometric distributions.

Transition rates in ODEs reflect underlying Poisson process rates
To build upon the intuition spelled out in Sect. 1.1, where particles exit X following an exponentially distributed dwell time, we now assume that particles exit X following the 1st event time under a nonhomogeneous Poisson process with rate r(t) (recall the 1st event time distribution is exponential if r(t) = r is constant). As illustrated by the corresponding mean field equations given below, the rate function r(t) can be viewed either as the intensity function for the Poisson process governing when individuals leave state X, or as the (mean field) per-capita rate of loss from state X as shown in Eq. (11). This dual perspective provides valuable intuition for critically evaluating mean field ODEs.
Example 1 (Equivalence between Poisson process rates and per capita rates in mean field ODEs) Consider the scenario described above. The survival function for the dwell time distribution for a particle entering X at time τ is S(t, τ) = exp(−∫_τ^t r(u) du), and it follows that the expected proportion of such particles remaining in X at time t > τ is given by S(t, τ). Let x(t) be the total amount in state X at time t, x(0) = x₀, and assume that the influx rate I(t) and r(t) are integrable, non-negative functions of t. Then the corresponding mean field integral equation for this scenario is

x(t) = x₀ S(t, 0) + ∫_0^t I(s) S(t, s) ds,     (10)

and the integral equation (10) above is equivalent to

d/dt x(t) = I(t) − r(t) x(t).     (11)

Proof The Leibniz rule for differentiating integrals, Eq. (10), and ∂S(t, s)/∂t = −r(t) S(t, s) yield

d/dt x(t) = −r(t) x₀ S(t, 0) + I(t) S(t, t) − r(t) ∫_0^t I(s) S(t, s) ds = I(t) − r(t) x(t). ∎
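The equivalence in Example 1 can be checked numerically for a particular time-varying rate. In the sketch below, r(t) = 1 + 0.5 sin t and the influx I(t) = e^(−t) are illustrative choices (with m available in closed form), comparing Euler integration of the ODE against direct quadrature of the integral equation:

```python
import math

def r(t):                     # illustrative Poisson process rate r(t)
    return 1.0 + 0.5 * math.sin(t)

def influx(t):                # illustrative influx rate into X
    return math.exp(-t)

def m(tau, t):                # m = integral of r(u) du from tau to t (closed form)
    return (t - tau) + 0.5 * (math.cos(tau) - math.cos(t))

def survival(t, tau):         # S(t, tau) = exp(-m)
    return math.exp(-m(tau, t))

def x_ode(t_end, x0=1.0, dt=1e-4):
    # Euler integration of x'(t) = I(t) - r(t) x(t)   [the ODE form, Eq. (11)]
    x, t = x0, 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (influx(t) - r(t) * x)
        t += dt
    return x

def x_integral(t_end, x0=1.0, n=20_000):
    # Midpoint quadrature of x(t) = x0 S(t,0) + int_0^t I(s) S(t,s) ds  [Eq. (10)]
    ds = t_end / n
    integral = sum(influx((i + 0.5) * ds) * survival(t_end, (i + 0.5) * ds) * ds
                   for i in range(n))
    return x0 * survival(t_end, 0.0) + integral

ode_val, ie_val = x_ode(2.0), x_integral(2.0)
```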

Weak memorylessness property of Poisson process 1st event time distributions
The intuition behind the LCT, and the history-independent nature of ODEs, are related to the memorylessness property of exponential distributions. For example, when particles accumulate in a state with an exponentially distributed dwell time, then at any given time all particles currently in that state have iid exponentially distributed amounts of time left before they leave that state, regardless of the duration of time already spent in that state. Each of these can be viewed as a reflection of the history-independent nature of Poisson processes. Accordingly, the familiar memorylessness property of exponential and geometric distributions can, in a sense, be generalized to (nonhomogeneous) Poisson process 1st event time distributions. Recall that if an exponentially distributed (rate r) random variable T represents the time until some event, then if the event has not occurred by time s the remaining duration of time until the event occurs is also exponential with rate r. The analogous weak memorylessness property of nonhomogeneous Poisson process 1st event time distributions is detailed in the following definition.

Definition 1 (Weak memorylessness property of Poisson process 1st event times)
Assume T is a Poisson process 1st event time starting at time τ, with rate r(t) and CDF H_r^1(t, τ) = 1 − exp(−m(t, τ)) [see Eqs. (9) and (9c)]. If the event has not occurred by time s > τ, the distribution of the remaining time T_s ≡ T − s | T > s follows a shifted but otherwise identical Poisson process 1st event time distribution with CDF P(T_s ≤ t) = H_r^1(t + s, s). If r(t) = r is a positive constant, we recover the memorylessness property of the exponential distribution.

Proof
The CDF of T_s (for t ≥ 0) is given by

P(T_s ≤ t) = P(T ≤ s + t | T > s) = [H_r^1(s + t, τ) − H_r^1(s, τ)] / [1 − H_r^1(s, τ)]
= [e^{−m(s, τ)} − e^{−m(s+t, τ)}] / e^{−m(s, τ)} = 1 − e^{−m(s+t, s)} = H_r^1(t + s, s),

since m(s + t, τ) = m(s, τ) + m(s + t, s). ∎

In other words, Poisson process 1st event time distributions are memoryless up to a time shift in their rate functions. In the context of multiple particles entering a given state X at different times and leaving according to independent Poisson process 1st event times with identical rates r(t) (i.e., t is absolute time, not time since entry into X), then for all particles in state X at a given time the distribution of time remaining in state X is (1) independent of how much time each particle has already spent in X, and (2) follows iid Poisson process 1st event time distributions with rate r(t).

Simple case of the LCT
To illustrate how the LCT follows from Lemma 1, consider the simple case of the LCT illustrated in Fig. 1, where a higher dimensional model includes a state transition into, then out of, a focal state X. Assume the time spent in that state (T_X) is Erlang(r, k) distributed [i.e., T_X ∼ Erlang(r, k)]. Then the LCT provides a system of ODEs equivalent to the mean field integral equations for this process, as detailed in the following theorem.

[Fig. 1: Example diagram where state X has an Erlang(r, k) distributed dwell time, represented either as (a) a single state and corresponding integral equation, or (b) as a set of k sub-states, each with an exponential dwell time distribution, whose mean field equations can be represented as either integral equations or a system of ODEs (see Theorem 1). Rate I(t) is an integrable non-negative function describing the mean field influx rate into state X.]

Theorem 1 (Simple case of the LCT) Let x(t) be the (mean field) amount in state X at time t and assume x(0) = x₀.
The mean field integral equation for this scenario (see Fig. 1a) is

x(t) = x₀ S_r^k(t) + ∫_0^t I(s) S_r^k(t − s) ds,     (14)

where S_r^k is the Erlang survival function in Eq. (7c). State X can be partitioned (see Fig. 1b) into k sub-states X_i, i = 1, …, k, where particles in X_i are those awaiting the ith event as the next event under a homogeneous Poisson process with rate r. Let x_i(t) be the amount in X_i at time t, and x(t) = Σ_{j=1}^k x_j(t). Equation (14) is equivalent to the mean field ODEs

d/dt x_1(t) = I(t) − r x_1(t),
d/dt x_j(t) = r x_{j−1}(t) − r x_j(t),  j = 2, …, k,     (15)

with initial conditions x_1(0) = x₀, x_j(0) = 0 for j ≥ 2, and

x_j(t) = x₀ (1/r) g_r^j(t) + ∫_0^t I(s) (1/r) g_r^j(t − s) ds.     (16)

Proof Substituting Eq. (7c) into Eq. (14) and then substituting Eq. (16) yields x(t) = Σ_{j=1}^k x_j(t). Differentiating Eq. (16) (for j = 1, …, k) yields Eq. (15) as follows.
For j = 1, Eq. (16) reduces to

x_1(t) = x₀ e^{−rt} + ∫_0^t I(s) e^{−r(t−s)} ds,     (18)

since (1/r) g_r^1(t) = e^{−rt}. Differentiating x_1(t) using the Leibniz integral rule and substituting (18) yields

d/dt x_1(t) = I(t) − r x_1(t).

Similarly, for j ≥ 2, Lemma 1 yields

d/dt x_j(t) = r x_{j−1}(t) − r x_j(t).

The X_j dwell time distributions are exponential with rate r. To see why, let χ_i(t) (where 1 ≤ i ≤ k) be the (mean field) number of particles in state X at time t that have not reached the ith event. Then

χ_i(t) = x₀ S_r^i(t) + ∫_0^t I(s) S_r^i(t − s) ds,     (21)

and by Eq. (7) we see from Eqs. (18) and (21) that x_j(t) = χ_j(t) − χ_{j−1}(t) for j ≥ 2. That is, particles in state X_j are those for which the (j − 1)th event has occurred, but not the jth event. Thus, by properties of Poisson processes, the dwell time in state X_j must be exponential with rate r.
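With no influx [I(t) = 0] and x₁(0) = 1, the total x(t) from the chain ODEs should reproduce the Erlang(r, k) survival function. The sketch below (simple Euler integration, illustrative parameters) checks this consequence of Theorem 1:

```python
import math

def chain_survival(r, k, t_end, dt=1e-4):
    # Euler integration of the linear chain with a unit mass placed in X_1:
    #   x_1' = -r x_1,   x_j' = r x_{j-1} - r x_j   (j = 2..k)
    x = [1.0] + [0.0] * (k - 1)
    for _ in range(int(t_end / dt)):
        dx = [-r * x[0]] + [r * x[j - 1] - r * x[j] for j in range(1, k)]
        x = [xj + dt * dxj for xj, dxj in zip(x, dx)]
    return sum(x)  # total amount still in X, i.e., P(T_X > t_end)

def erlang_survival(r, k, t):
    # Eq. (7c): S(t) = sum_{j=0}^{k-1} e^{-rt} (rt)^j / j!
    return math.exp(-r * t) * sum((r * t) ** j / math.factorial(j)
                                  for j in range(k))

chain_val = chain_survival(2.0, 3, 1.0)
exact_val = erlang_survival(2.0, 3, 1.0)
```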

Standard LCT
The following theorem and corollary together provide a more general, formal statement of the standard Linear Chain Trick (LCT) as used in practice. These extend Theorem 1 (compare Figs. 1, 2) to explicitly include that particles leaving X enter state Y and then remain in Y according to an arbitrary distribution with survival function S, and to include transitions into Y from other sources. The mean field integral equations for this scenario combine the X equations from Theorem 1 with an equation for the amount y(t) in Y, in which the inflow to Y is the outflow r x_k(t) from X plus any influx from other sources, and the time spent in Y is governed by the survival function S. Proof Eqs. (23a), (23b) and (24) follow from Theorem 1. Equation (23c) follows from substituting (24) into (22b). The definition of x_j and initial condition x(0) = x₀ together imply x_1(0) = x₀ and x_j(0) = 0 for the remaining j ≥ 2.

As implied by 1 and 2 above, if the per-capita loss rate μ(t) = μ is constant, or the time spent in Y is otherwise exponentially distributed, then the mean field equation for Y reduces to an ODE with loss term −μ(t) y(t).
4. Any of the more general cases considered in the sections below.
Example 2 To illustrate how the Standard LCT can be used to replace an implicit exponential dwell time distribution with an Erlang distribution, consider the SIR example discussed in the Introduction [Eqs. (2) and (3); see also Anderson and Watson 1980; Lloyd 2001a, b], but assume the dwell time distribution for the infected state I is Erlang (still with mean 1/γ) with variance σ², i.e., by Eq. (6), Erlang with rate r = γ k and shape k = 1/(σ² γ²). By Theorem 2 and Corollary 1, with I(t) = Σ_{j=1}^k I_j(t), the resulting mean field model is

d/dt S(t) = −β I(t) S(t),
d/dt I_1(t) = β I(t) S(t) − r I_1(t),
d/dt I_j(t) = r I_{j−1}(t) − r I_j(t),  j = 2, …, k,     (28)
d/dt R(t) = r I_k(t).

Notice that if σ² = 1/γ² (i.e., k = 1), the dwell time in I is exponentially distributed with rate γ, I(t) = I_1(t), and Eq. (28) reduces to Eq. (2).
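A direct numerical sketch of this Erlang-SIR chain (Euler integration; parameter values illustrative, with state variables as proportions rather than counts). The telescoping chain structure conserves S + I + R exactly:

```python
def erlang_sir(beta, gamma, k, S0, I0, t_end, dt=1e-3):
    # SIR with an Erlang(r, k) infectious period via the linear chain,
    # where r = gamma*k preserves the mean infectious period 1/gamma.
    r = gamma * k
    S, Isub, R = S0, [I0] + [0.0] * (k - 1), 0.0
    for _ in range(int(t_end / dt)):
        I = sum(Isub)
        dS = -beta * I * S
        dI = [beta * I * S - r * Isub[0]] + \
             [r * Isub[j - 1] - r * Isub[j] for j in range(1, k)]
        dR = r * Isub[-1]
        S += dt * dS
        Isub = [x + dt * dx for x, dx in zip(Isub, dI)]
        R += dt * dR
    return S, sum(Isub), R

S_fin, I_fin, R_fin = erlang_sir(beta=0.5, gamma=1.0, k=3,
                                 S0=0.99, I0=0.01, t_end=10.0)
```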
This example nicely illustrates how using Theorem 2 to relax an exponential dwell time assumption implicit in a system of mean field ODEs is much more straightforward than constructing them after first deriving the integral equations, like Eq. (3), and then differentiating them using Lemma 1. In the sections below, we present similar theorems intended to be used for constructing mean field ODEs directly from stochastic model assumptions.

Extended LCT for Poisson process kth event time distributed dwell times
The Standard LCT assumes an Erlang(r, k) distributed dwell time, i.e., a kth event time distribution under a homogeneous Poisson process with rate r. Here we generalize the Standard LCT by assuming the X dwell time follows a more general kth event time distribution under a Poisson process with rate r(t).
First, observe that the equations in Lemma 1 [Eq. (8)] are more practical when written in terms of (1/r) g_r^j(t) (see the proof of Theorem 1):

d/dt [(1/r) g_r^1(t)] = −r [(1/r) g_r^1(t)],
d/dt [(1/r) g_r^j(t)] = r [(1/r) g_r^{j−1}(t) − (1/r) g_r^j(t)],  j ≥ 2.     (29)

Lemma 2 A similar relationship to Eq. (29) above (i.e., to Lemma 1) holds for the Poisson process jth event time density functions h_r^j given by Eq. (9a). Specifically, as shown in the proof below,

d/dt [h_r^1(t, τ)/r(t)] = −r(t) [h_r^1(t, τ)/r(t)],     (30)

where h_r^1(t, τ)/r(t) = e^{−m(t, τ)}. Likewise, for j ≥ 2, we have

d/dt [h_r^j(t, τ)/r(t)] = r(t) [h_r^{j−1}(t, τ)/r(t) − h_r^j(t, τ)/r(t)].     (31)

Lemma 2 allows us to generalize Erlang-based results like Theorem 2 to their time-varying counterparts with a time-dependent (or state-dependent) rate r(t), as in the following generalization of the Standard LCT (Theorem 2).
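Lemma 2 can be spot-checked numerically for a particular rate function. Below, r(t) = 1 + 0.5 sin t is an illustrative choice for which m(t, τ) has a closed form, and the j ≥ 2 identity is tested by central differences:

```python
import math

def r(t):
    return 1.0 + 0.5 * math.sin(t)      # illustrative rate function

def m(t, tau):
    # m(t, tau) = integral of r(u) du from tau to t, in closed form for this r
    return (t - tau) + 0.5 * (math.cos(tau) - math.cos(t))

def h_over_r(j, t, tau):
    # h_r^j(t,tau)/r(t) = m(t,tau)^(j-1)/(j-1)! * exp(-m(t,tau))  [from Eq. (9a)]
    mm = m(t, tau)
    return mm ** (j - 1) / math.factorial(j - 1) * math.exp(-mm)

# Central-difference check of d/dt [h^j/r] = r(t) (h^{j-1}/r - h^j/r), for j = 3
j, t, tau, eps = 3, 1.7, 0.2, 1e-6
lhs = (h_over_r(j, t + eps, tau) - h_over_r(j, t - eps, tau)) / (2 * eps)
rhs = r(t) * (h_over_r(j - 1, t, tau) - h_over_r(j, t, tau))
```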
Theorem 3 (Extended LCT for dwell times distributed as Poisson process kth event times) Consider the Standard LCT in Theorem 2, but assume the X dwell time is a Poisson process kth event time with rate r(t). The corresponding mean field integral equations, where h_r^j and S_r^j are given in Eq. (9), are

x_j(t) = x₀ h_r^j(t, 0)/r(t) + ∫_0^t I(s) h_r^j(t, s)/r(t) ds,  j = 1, …, k,
x(t) = Σ_{j=1}^k x_j(t),
y(t) = y(0) S(t, 0) + ∫_0^t r(s) x_k(s) S(t, s) ds,     (35c)

and the x_j obey the mean field ODEs d/dt x_1(t) = I(t) − r(t) x_1(t) and d/dt x_j(t) = r(t) [x_{j−1}(t) − x_j(t)] for j = 2, …, k. Equation (35c) may be further reduced to ODEs, e.g., via Corollary 1.
Having generalized the Standard LCT (Lemma 1 and Theorem 2) to include Poisson process kth event time distributed dwell times, we may now address more complex stochastic model assumptions and how they are reflected in the structure of corresponding mean field ODEs.

Transitions to multiple states
Modeling the transition from one state to multiple states following a distributed delay (as illustrated in Fig. 3) can be done under different sets of assumptions about the underlying stochastic processes, particularly with respect to the rules governing how individuals are distributed across multiple recipient states upon exiting X, and how those rules depend on the dwell time distribution(s) for individuals in that state. Importantly, those different sets of assumptions can yield very different mean field models (e.g., see Feng et al. 2016) and so care must be taken to make those assumptions appropriately for a given application.
While modelers have some flexibility to choose appropriate assumptions, in practice modelers sometimes make unintended and undesirable implicit assumptions, especially when constructing ODE models using "rules of thumb" instead of deriving them from first principles. In this section we present results aimed at helping guide (a) the process of picking appropriate dwell time distribution assumptions, and (b) directly constructing corresponding systems of ODEs without deriving them from explicit stochastic models or intermediate integral equations.
Each of the three cases detailed below yield different mean field ODE models for the scenario depicted in Fig. 3.
First, in Sect. 3.5.1, we consider the extension of Theorem 3 where upon leaving X particles are distributed across m ≥ 1 recipient states according to a generalized Bernoulli distribution with (potentially time varying) probabilities/proportions p_j(t), j = 1, …, m. Here the outcome of which state a particle transitions to is independent of the time spent in the first state.

[Fig. 3: Example diagram of transitions out of a given state (X) and into multiple states (Y_1 and Y_2). Different assumptions about (1) the dwell times in X, and (2) rules governing the subsequent transitions to Y_1 and/or Y_2 will lead to different sub-state partitions of X, and thus different mean field equations, as detailed in Sect. 3.5. Fortunately, different scenarios often encountered in applications can be reduced to ODEs by applying the results in Sect. 3.5, as detailed in Theorems 4, 5, and 6 and as illustrated in Figs. 4, 5, and 6.]

Second, in Sects. 3.5.2 and 3.5.3, particles entering the first state (X) do not all follow the same dwell time distribution in X. Instead, upon entering X they are distributed across n ≥ 2 sub-states X_i of X according to a generalized Bernoulli distribution, and each sub-state X_i has a dwell time given by a Poisson process k_i th event time distribution with rate r_i(t). That is, the X dwell time is a finite mixture of Poisson process event time distributions. Particles transition out of X into m subsequent states Y_j according to the probabilities/proportions p_ij(t), the probability of going to Y_j from X_i, i = 1, …, n and j = 1, …, m. Here the determination of which recipient state Y a particle transitions to depends on which sub-state of X the particle was assigned to upon entering X (see Fig. 5).
Third, in Sect. 3.5.4, the outcome of which recipient state a particle transitions to upon leaving X is determined by a "race" between multiple competing Poisson process kth event time distributions, and is therefore not independent of the time spent in the first state (as in Sect. 3.5.1), nor is it pre-determined upon entry into X (as in Sects. 3.5.2 and 3.5.3). This result is obtained using yet another novel extension of Lemma 1 in which the dwell time in state X is the minimum of n ≥ 2 independent Poisson process event time distributions.
Lastly (Sect. 3.5.5), we describe an equivalence between (1) the more complex case addressed in Sect. 3.5.4 assuming a dwell time that obeys the minimum of Poisson process 1st event times, before being distributed across m recipient states, and (2) the conceptually simpler case in Sect. 3.5.1 where the dwell time follows a single Poisson process 1st event time distribution before being distributed among m recipient states. This is key to understanding the scope of the Generalized Linear Chain Trick in Sect. 3.7.

Transition to multiple states independent of the X dwell time distribution
Here we extend the case in the previous section and assume that, upon leaving state X, particles can transition to one of m states (call them Y_i, i = 1, …, m), and that a particle leaving X at time t enters state Y_i with probability p_i(t), where $\sum_{i=1}^m p_i(t) = 1$ [i.e., particles are distributed across all Y_i following a generalized Bernoulli distribution with parameter vector p(t) = (p_1(t), …, p_m(t))]. See Fig. 4 for a simple example with constant p and m = 2. An important assumption in this case is that the determination about which state a particle goes to after leaving X is made once it leaves X, and thus the state it transitions to upon exiting X is determined independently of the dwell time in X. Examples from the literature include Model II in Feng et al. (2016), where infected individuals (state X) either recovered (Y_0) or died (Y_1) after an Erlang distributed time delay.
Theorem 4 (Extended LCT with proportional output to multiple states) Consider the case addressed by Theorem 3, and further assume particles go to one of m states (call them Y_j) with p_j(t) being the probability of going to Y_j. Let S_j be the survival functions for the dwell times in Y_j.
Fig. 4 The special case of Fig. 3 under the assumptions of Theorem 4 (Extended LCT with proportional outputs to multiple states; see Sect. 3.5.1). Specifically, this case assumes that (1) the dwell time distribution for X is Erlang(r, k), and (2) upon exiting X particles are distributed to multiple recipient states, here Y and Z, with probabilities p and 1 − p, respectively.

Fig. 5 The sub-state diagram (cf. Fig. 3) for Example 4 after applying Theorem 5 (Extended LCT for finite mixtures of Poisson process event time distributions and output to multiple states). Upon entering X, particles have an Erlang(r_i, k_i) dwell time in X with probability ρ_i, i = 1, 2, 3, i.e., the X dwell time follows an Erlang mixture distribution (see Sect. 3.5.3).

The mean field integral equations for this case, with x(0) = x_0 and y_j(0) = y_{j0}, are given by Eqs. (38). These integral equations are equivalent to the following system of equations:

$$\frac{d}{dt}x_1(t) = I_X(t) - r(t)\,x_1(t), \tag{39a}$$
$$\frac{d}{dt}x_i(t) = r(t)\,\big[x_{i-1}(t) - x_i(t)\big], \quad i = 2, \ldots, k, \tag{39b}$$
$$y_j(t) = y_{j0}\,S_j(t,0) + \int_0^t p_j(s)\,r(s)\,x_k(s)\,S_j(t,s)\,ds, \tag{39c}$$

where

$$x(t) = \sum_{i=1}^k x_i(t). \tag{40}$$

Equations (39c) may be further reduced to ODEs, e.g., via Corollary 1.
Proof Equations (39a), (39b) and (40) follow from Theorem 3. Equation (39c) follows from substitution of Eq. (40) into (38b). The derivation of Eq. (38b) is similar to the derivation in Appendix A.1, but accounts for the expected proportion entering each Y_j at time t being equal to p_j(t). □

Example 3 See Fig. 4, where the dwell time distribution for X is Erlang(r, k) and the dwell times in Y and Z follow 1st event times under nonhomogeneous Poisson processes with respective rates μ_Y(t) and μ_Z(t). By Theorem 4 the corresponding mean field ODEs are

$$\frac{d}{dt}x_1(t) = I_X(t) - r\,x_1(t),$$
$$\frac{d}{dt}x_i(t) = r\,\big[x_{i-1}(t) - x_i(t)\big], \quad i = 2, \ldots, k,$$
$$\frac{d}{dt}y(t) = p\,r\,x_k(t) - \mu_Y(t)\,y(t),$$
$$\frac{d}{dt}z(t) = (1-p)\,r\,x_k(t) - \mu_Z(t)\,z(t).$$
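As an illustrative numerical check of ODEs of this form (a sketch of our own, not from the paper; all parameter values are hypothetical), the following Euler integration confirms that at steady state each sub-state of the Erlang chain holds I/r and the outflow from X is split between Y and Z in proportions p and 1 − p:

```python
# A minimal sketch: Erlang(r, k) dwell time in X with constant inflow I,
# followed by proportional routing to Y (prob. p) and Z (prob. 1 - p),
# each with exponential dwell times (rates mu_Y, mu_Z). Parameters are
# illustrative choices, not taken from the source.

def simulate(I=1.0, r=2.0, k=3, p=0.3, mu_Y=0.5, mu_Z=0.8, dt=1e-3, T=200.0):
    x = [0.0] * k            # sub-states X_1, ..., X_k of the Erlang chain
    y = z = 0.0
    for _ in range(int(T / dt)):
        dx = [I - r * x[0]] + [r * (x[i - 1] - x[i]) for i in range(1, k)]
        dy = p * r * x[-1] - mu_Y * y          # gains p * (outflow of X)
        dz = (1 - p) * r * x[-1] - mu_Z * z    # gains (1 - p) * (outflow of X)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        y += dt * dy
        z += dt * dz
    return x, y, z

x, y, z = simulate()
# Steady state: x_i = I/r for all i, y = p*I/mu_Y, z = (1-p)*I/mu_Z,
# so the long-run outflow from X is split p : (1 - p) between Y and Z.
```

The steady state is reached because the equilibrium of the ODEs is also a fixed point of the Euler map, so the long integration converges to it exactly (up to floating point error).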

Transition from sub-states of X with differing dwell time distributions and differing output distributions across states Y_j
In this second case, particles in state X can be treated as belonging to a heterogeneous population, where each remains in that state according to one of N possible dwell time distributions, the ith of these being the k_i th event time distribution under a Poisson process with rate r_i(t). Each particle is assigned one of these N dwell time distributions (i.e., it is assigned to sub-state X_i) upon entry into X according to a generalized Bernoulli distribution with a (potentially time varying) probability vector ρ(t) = (ρ_1(t), …, ρ_N(t)), where $\sum_{i=1}^N \rho_i(t) = 1$. In contrast to the previous case, here the outcome of which recipient state a particle transitions to is not necessarily independent of the dwell time distribution.
Note that the dwell time distribution in this case is a finite mixture of N independent Poisson process event time distributions. If a random variable T is a mixture of Erlang distributions, or more generally a mixture of N independent Poisson process event time distributions, then the corresponding density function (f) and survival function (Φ) are the ρ_i-weighted sums

$$f(t,\tau) = \sum_{i=1}^N \rho_i(\tau)\, f^{k_i}_{r_i}(t,\tau), \tag{42a}$$
$$\Phi(t,\tau) = \sum_{i=1}^N \rho_i(\tau)\, S^{k_i}_{r_i}(t,\tau). \tag{42b}$$

Assume that particles leaving sub-state X_i then transition to state Y_j with probability p_{ij}(t), j = 1, …, m, where the duration of time spent in state Y_j follows a delay distribution given by survival function S_j. Then we can partition each X_i into X_{ij}, j = 1, …, k_i, according to Theorem 3, and let x(t), x_i(t), x_{ij}(t), and y_j(t) be the amounts in states X, X_i, X_{ij}, and Y_j at time t, respectively. Assume non-negative initial conditions. The mean field integral equations for this scenario are given by Eqs. (43), which are equivalent to the system (44). The amounts in X_i and X_{ij} are

$$x_i(t) = \sum_{j=1}^{k_i} x_{ij}(t), \qquad x(t) = \sum_{i=1}^{N} x_i(t). \tag{45}$$

Equations (44c) may be reduced to ODEs, e.g., via Corollary 1.
Proof Substituting Eq. (42b) into Eq. (43a) and then substituting Eq. (45) yields the result. □

Example 4 See Fig. 5, where particles entering state X at rate I_X(t) enter sub-state X_i with probability ρ_i, ρ_1 + ρ_2 + ρ_3 = 1, and the X_i dwell time is Erlang(r_i, k_i) distributed. Particles exiting X_1 and X_2 transition to Y with probability 1, while particles exiting X_3 transition either to state Y or Z with probabilities p_Y and p_Z = 1 − p_Y. Assume particles may also enter states Y and Z from sources other than state X (at rates I_Y(t) and I_Z(t), respectively), and that the dwell times in those two states follow the 1st event times of independent nonhomogeneous Poisson processes with rates μ_Y(t) and μ_Z(t). Theorem 5 yields the following mean field ODEs (see Fig. 5).
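As a quick, self-contained numerical check (our own sketch, with illustrative parameters) that the survival function of such a mixture is the ρ-weighted sum of Erlang survival functions, and that integrating it recovers the mixture mean Σ_i ρ_i k_i/r_i:

```python
from math import exp, factorial

def erlang_survival(t, r, k):
    # Erlang(r, k) survival: S(t) = sum_{j=0}^{k-1} (r t)^j e^{-r t} / j!
    return sum((r * t) ** j * exp(-r * t) / factorial(j) for j in range(k))

def mixture_survival(t, rho, rates, shapes):
    # Phi(t) = sum_i rho_i * S_i(t): rho-weighted sum of Erlang survivals
    return sum(p * erlang_survival(t, r, k)
               for p, r, k in zip(rho, rates, shapes))

# Illustrative mixture: three Erlang components (not from the paper).
rho, rates, shapes = [0.2, 0.5, 0.3], [1.0, 2.0, 4.0], [1, 3, 2]
# The mixture mean is E[T] = integral_0^inf Phi(t) dt = sum_i rho_i * k_i / r_i.
dt = 1e-3
mean_numeric = sum(mixture_survival((j + 0.5) * dt, rho, rates, shapes) * dt
                   for j in range(int(60 / dt)))   # midpoint rule on [0, 60]
mean_exact = sum(p * k / r for p, r, k in zip(rho, rates, shapes))  # = 1.1
```

The numerical integral of the survival function agrees with the closed-form mixture mean to within the quadrature error.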

Extended LCT for dwell times given by finite mixtures of Poisson process event time distributions
It's worth noting that in some applied contexts one may want to approximate a non-Erlang delay distribution with a mixture of Erlang distributions (see Sect. 3.7.1 and Appendix B for more details on making such approximations). Theorem 5 above details how assuming such a mixture distribution would be reflected in the structure of the corresponding mean field ODEs. This case can also be addressed in the more general context provided in Sect. 3.7.1.
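To make such an approximation concrete, here is an illustrative sketch (our own example, not the specific construction of Appendix B) of one classical way, often attributed to Tijms, to approximate an arbitrary distribution on ℝ+ by an Erlang mixture: use Erlang components with common rate 1/h and shapes k = 1, 2, …, with weights F(kh) − F((k−1)h), which converges weakly to F as h → 0. Here we approximate a Weibull distribution and compare means:

```python
from math import exp, gamma

# Target distribution to approximate: Weibull(shape=1.5, scale=1),
# chosen purely for illustration.
def F(t, shape=1.5, scale=1.0):
    return 1.0 - exp(-((t / scale) ** shape)) if t > 0 else 0.0

h, K = 0.01, 1000                  # component scale h; truncation at K*h = 10
weights = [F(k * h) - F((k - 1) * h) for k in range(1, K + 1)]
# The k-th mixture component is Erlang with shape k and rate 1/h (mean k*h).
approx_mean = sum(w * k * h for k, w in zip(range(1, K + 1), weights))
true_mean = gamma(1.0 + 1.0 / 1.5)   # Weibull mean, about 0.9027
# approx_mean overshoots by roughly h/2 and improves as h shrinks.
```

The fitted mixture could then be fed directly into Theorem 5 to obtain the corresponding mean field ODEs.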

Transition to multiple states following "competing" Poisson processes
We now consider the case where T, the time a particle spends in a given state X, follows the distribution given by T = min_i T_i, the minimum of n ≥ 2 independent random variables T_i, where T_i has either an Erlang(r_i, k_i) distribution or, more generally, a Poisson process k_i th event time distribution with rate r_i(t). Upon leaving state X, particles can transition to any of m recipient states Y_ℓ, ℓ = 1, …, m, where the probability of transitioning to state Y_ℓ depends on which of the n random variables T_i was the minimum. That is, if a particle leaves X at time T = T_i = t, then the probability of entering state Y_ℓ is p_{iℓ}(t). The distribution associated with T is not itself an Erlang distribution or a Poisson process event time distribution; however, its survival function is the product of such survival functions, i.e.,

$$S(t,\tau) = \prod_{i=1}^n S^{k_i}_{r_i}(t,\tau).$$

As detailed below, we can further generalize the recursion relation in Lemma 1 for the distributions just described, which can then be used to produce a mean field system of ODEs based on appropriately partitioning X into sub-states. Before considering this case in general, it is helpful to first describe the sub-states of X imposed by assuming the dwell time distribution described above, particularly in the case where the distribution for each T_i is based on 1st event times (i.e., all k_i = 1). Recall that the minimum of n exponential random variables (which we may think of as 1st event times under homogeneous Poisson processes) is exponential with a rate that is the sum of the individual rates, $r = \sum_{i=1}^n r_i$. More generally, the minimum of n 1st event times under independent Poisson processes with rates r_i(t) is itself distributed as the 1st event time under a single Poisson process with rate $r(t) \equiv \sum_{i=1}^n r_i(t)$; thus in this case $S(t,\tau) = \prod_{i=1}^n S^1_{r_i}(t,\tau) = S^1_r(t,\tau)$.
Additionally, if particles leaving state X are then distributed across the recipient states Y_ℓ as described above, then this scenario is equivalent to the proportional outputs case described in Theorem 4, with a dwell time that follows a Poisson process 1st event time distribution with rate $r(t) \equiv \sum_{i=1}^n r_i(t)$ and a probability vector p(t) with entries $p_\ell(t) = \sum_{i=1}^n p_{i\ell}(t)\, r_i(t)/r(t)$, since P(T = T_i) = r_i(T)/r(T). (The mean field equivalence of these two cases is detailed in Sect. 3.5.5.) Thus, the natural partitioning of X in this case is into sub-states with dwell times that follow iid 1st event time distributions with rate $r(t) \equiv \sum_{i=1}^n r_i(t)$. We may now describe the mean field ODEs for the more general case using the following notation. To index the sub-states of X, consider the ith Poisson process and its k_i th event time distribution, which defines the distribution of T_i. Let a_i ∈ {1, …, k_i} denote the event number a particle is awaiting under the ith Poisson process. Then we can describe the particle's progress through X according to its progress along each of these n Poisson processes using the index vector α ∈ K, where K = {(a_1, a_2, …, a_n) | a_j ∈ {1, …, k_j}}.
Let K_i ⊂ K denote the subset of indices where a_i = k_i (where we think of particles in these sub-states as being poised to reach the k_i th event of the ith Poisson process, and thus poised to transition out of state X).
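The two facts invoked above, that the minimum of independent exponential 1st event times is exponential with the summed rate r, and that P(T = T_i) = r_i/r, are easy to check by simulation (our own sketch; the rates below are illustrative):

```python
import random

# Monte Carlo check of two facts about competing exponentials:
# (1) E[min_i T_i] = 1/r with r = sum of the rates, and
# (2) P(T = T_i) = r_i / r.
random.seed(0)
rates = [0.5, 1.5, 3.0]          # illustrative rates
r = sum(rates)
n = 200_000
wins = [0] * len(rates)
total = 0.0
for _ in range(n):
    times = [random.expovariate(ri) for ri in rates]
    t_min = min(times)
    total += t_min
    wins[times.index(t_min)] += 1   # which T_i achieved the minimum
# Expect total/n close to 1/r, and wins[i]/n close to rates[i]/r.
```

With these rates, the empirical mean is close to 1/5 and the win frequencies are close to 0.1, 0.3, and 0.6.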
To extend Lemma 2 to these distributions, define u(t, τ, α) as in Eq. (50), where $m_i(t,\tau) = \exp\left(-\int_\tau^t r_i(s)\,ds\right)$. We will also refer to the quantities u and S with the jth element of each product (in u) removed, using the notation $u^{\setminus j}(t,\tau,\alpha)$ and $S^{\setminus j}(t,\tau)$. This brings us to the following lemma, which generalizes Lemmas 1 and 2 to distributions that are the minimum of n different (independent) Poisson process event times. As with the above lemmas, Lemma 3 will allow one to partition X into sub-states corresponding to each of the event indices in K, describing the various stages of progress along each Poisson process prior to the first of them reaching its target event number.
Proof Using the definition of u in Eq. (50) above, the result follows. □

The next theorem details the LCT extension that follows from Lemma 3.

Theorem 6 (Extended LCT for dwell times given by competing Poisson processes) Suppose the time T a particle spends in state X is given by T = min_i T_i, where the T_i are either Erlang(r_i, k_i) distributed or follow the more general (nonhomogeneous) Poisson process k_i th event time distribution with rate r_i(t).
Assume particles leaving X can enter one of m states Y_ℓ, ℓ = 1, …, m. If a particle leaves X at time T_i (i.e., T_i occurred first, so T = T_i), then the particle transitions into state Y_ℓ with probability p_{iℓ}(T). Let x(t) and y_ℓ(t) be the amount in each state, respectively, at time t, and assume non-negative initial conditions. The mean field integral equations for this scenario, for ℓ = 1, …, m and i = 1, …, n, are given by Eqs. (55), which are equivalent to the system (56). The y_ℓ(t) equations (56c) may be further reduced to a system of ODEs, e.g., via Corollary 1.
Proof Equations (55b) become (56c), where K_i = {α | α ∈ K, a_i = k_i}, by substituting Eqs. (52), (57), and $S^{\setminus i}(t,\tau)\, h^{k_i}_{r_i}(t,\tau) = \sum_{\alpha \in K_i} r_i(t)\, u(t,\tau,\alpha)$. □

Example 5 See Fig. 6. Suppose the X dwell time is T = min(T_1, T_2), where T_1 and T_2 are the k_1 th and k_2 th event time distributions under independent Poisson processes (call these PP1 and PP2) with rates r_1(t) and r_2(t), respectively. Assume that, upon leaving X, particles transition to Y_1 if T = T_1 or to Y_2 if T = T_2. By Theorem 6, we can partition X into sub-states defined by which event (under each Poisson process) particles are awaiting next. Upon entry into X, all particles enter sub-state X_{1,1}, where they each await the 1st events under PP1 and PP2. If the next event to occur for a given particle is from PP1, the particle transitions to X_{2,1}, where it awaits a 2nd event from PP1 or a 1st event from PP2 (hence the subscript notation). Likewise, if PP2's 1st event occurs before PP1's 1st event, the particle transitions to X_{1,2}, and so on. Particles leave these two states for X_{2,2}, Y_1, or Y_2, depending on which event occurs next. Under these assumptions, with k_1 = k_2 = 2 and exponential Y_i dwell times with rate μ, the corresponding mean field equations (using r(t) = r_1(t) + r_2(t)) are Eqs. (62).

Fig. 6 The sub-state diagram (cf. Fig. 3) for the scenario detailed in Example 5 after the application of Theorem 6 (extended LCT for dwell times given by competing Poisson processes). Here the X dwell time distribution is the minimum of two Erlang random variables T_i ∼ Erlang(r_i, k_i), i = 1, 2, which can be thought of as event time distributions under two homogeneous Poisson processes as detailed in the main text.
We here assume that whichever of these occurs first determines whether particles leaving X transition to Y_1 or Y_2, respectively.

It's worth pointing out that the dwell times for the above sub-states of X are all identically distributed Poisson process 1st event times (note the loss rates in Eqs. (62a)-(62d), and recall the weak memorylessness property from Sect. 3.1.2). All particles in an X sub-state at time τ will spend a remaining amount of time in that state that follows a 1st event time distribution under a Poisson process with rate r(t) = r_1(t) + r_2(t). This is a slight generalization of the familiar fact that the minimum of n independent exponentially distributed random variables (with respective rates r_i) is itself an exponential random variable (with rate $r \equiv \sum_{i=1}^n r_i$). The next section addresses the generality of this observation about the sub-states of X.

Mean field equivalence of proportional outputs and competing Poisson processes for 1st event time distributions
The scenarios described in Sect. 3.5.1 (proportional distribution across multiple states Y_ℓ after an Erlang dwell time in X) and Sect. 3.5.4 (distribution across multiple states based upon competing Poisson processes) can lead to equivalent mean field equations when the X dwell times follow Poisson process 1st event time distributions, as in Example 5. This equivalence is detailed in Theorem 7, and is an important aspect of the GLCT detailed in Sect. 3.7 because it helps show how sub-states with dwell times distributed as Poisson process 1st event times are the fundamental building blocks of the GLCT. Since all k_i = 1, the probability that T = T_i is r_i(T)/r(T); thus the probability that a particle leaving X at time t goes to Y_ℓ is $p_\ell(t) = \sum_{i=1}^n p_{i\ell}(t)\, r_i(t)/r(t)$.

Theorem 7 Suppose the X dwell time is T = min_i T_i, where the T_i follow Poisson process 1st event time distributions with rates r_i(t), i = 1, …, n, and particles transition to Y_ℓ with probability p_{iℓ}(T) when T = T_i. The corresponding mean field model is equivalent to the special case of Theorem 4 (the Extended LCT for multiple outputs) where the X dwell time is a Poisson process 1st event time distribution with rate $r(t) = \sum_{i=1}^n r_i(t)$, and the transition probability vector for leaving X and entering state Y_ℓ is given by $p_\ell(t) = \sum_{i=1}^n p_{i\ell}(t)\, r_i(t)/r(t)$.
Proof Substituting the above equalities into the mean field Eq. (56a) (where there is only one possible index, K = {(1, 1, …, 1)}) and into Eq. (56c) gives the mean field equations for the aforementioned special case of Theorem 4. □

Modeling intermediate state transitions: reset the clock, or not?
We next describe how to apply extensions of the LCT in two similar but distinctly different scenarios (see Fig. 7) where the transition to one or more intermediate substates either resets an individual's overall dwell time in state X (by assuming the time spent in an intermediate sub-state X I i is independent of time already spent in X 0 ; see Sect. 3.6.1), or instead leaves the overall dwell time distribution for X unchanged (by conditioning the time spent in intermediate state X I i on the time already spent in X 0 ; see Sect. 3.6.2).
An example of these different assumptions leading to important differences in practice comes from Feng et al. (2016), where individuals infected with Ebola can either leave the infected state (X) directly (to recovery or death), or do so after first transitioning to an intermediate hospitalized state (X_I), which needs to be explicitly modeled in order to incorporate a quarantine effect into the rates of disease transmission (i.e., the force of infection should depend on the number of non-quarantined individuals, X_0). As shown in Feng et al. (2016), the epidemic model output depends strongly upon whether or not it is assumed that moving into the hospitalized sub-state impacts the distribution of time spent in the infected state X. To most simply illustrate these two scenarios, consider the simple case in Fig. 7, where a single intermediate sub-state X_I is being modeled, and particles enter X into sub-state X_0 at rate I_X(t). Let X = X_0 ∪ X_I. Both cases assume particles transition out of X_0 either to sub-state X_I or leave state X directly and enter state Y.

Fig. 7 Should the overall dwell time distribution for state X be "reset" by the transition from base sub-state X_0 to intermediate sub-state X_I (i.e., should the dwell time in state X_I be independent of the time already spent in X_0?), or should the X_I dwell time be conditioned on the time already spent in X_0 so that the X_0 → X_I transition does not alter the overall dwell time in state X? How do these different assumptions alter the structure of the corresponding mean field ODEs? We answer these questions in Sect. 3.6, where we describe how to apply the LCT in scenarios with intermediate states, assuming in Sect. 3.6.1 that the dwell time distribution for X_I is independent of the amount of time spent in X_0, and assuming in Sect. 3.6.2 that the overall dwell time for X is unaffected by transitions from X_0 to X_I.
Both cases also assume the distribution of time spent in X_0 is T* = min(T_0, T_1), where particles transition to X_I if T_1 < T_0 (i.e., if T* = T_1) or to Y if T_0 < T_1, and where T_0 and T_1 are the k_0 th and k_1 th event times under Poisson processes with rates r_0(t) and r_1(t), respectively (see Sects. 3.5.4 and 3.5.5). Let T_I denote the time spent in intermediate state X_I. The first case assumes T_I is independent of the time spent in X_0 (i.e., the transition to X_I 'resets the clock'; see Sect. 3.6.1). The second case assumes T_I is conditioned on the time already spent in X_0 (call it t_0), such that the total amount of time spent in X, t_0 + T_I, is equivalent in distribution to T_0 (i.e., the transition to X_I does not change the overall distribution of time spent in X; see Sect. 3.6.2).
In the next two sections, we provide extensions of the LCT for generalizations of these two scenarios, extended to multiple possible intermediate states with eventual transitions out of X into multiple recipient states.

Intermediate states that reset dwell time distributions
First, we consider the case in which the time spent in the intermediate state X I is independent of the time already spent in X (i.e., in the base state X 0 ). This is arguably the more commonly encountered (implicit) assumption found in ODE models that aren't explicitly derived from a stochastic model and/or mean field integro-differential delay equations.
The construction of mean field ODEs for this case is a straightforward application of Theorem 6 from the previous section, combined with the extended LCT with output to multiple states (Theorem 4). Here we have extended this scenario to include M_X intermediate sub-states X_{I_j}, where the transition to those sub-states from base state X_0 is based on the outcome of N competing Poisson process event time distributions (T_i), and upon leaving the intermediate states particles transition out of state X into one of M_Y possible recipient states Y_ℓ.

Theorem 8 (Extended LCT with dwell time altering intermediate sub-state transitions)
Suppose particles enter X at rate I_X(t) into a base sub-state X_0. Assume particles remain in X_0 according to a dwell time distribution given by T, the minimum of N + 1 independent Poisson process k_i th event time distributions with rates r_i(t), i = 0, …, N, i.e., T = min_i(T_i). Particles leaving X_0 transition to one of M_X ≥ 1 intermediate sub-states X_{I_j} or to one of M_Y ≥ 1 recipient states Y_ℓ according to which T_i = T. If T_0 = T then the particle leaves X, and the probability of transitioning to Y_ℓ is p_{0ℓ}(T), where $\sum_{\ell=1}^{M_Y} p_{0\ell}(T) = 1$. If T_i = T for i ≥ 1 then the particle transitions to X_{I_j} with probability p_{ij}(T), where $\sum_{j=1}^{M_X} p_{ij}(T) = 1$.

Particles in intermediate state X_{I_j} remain there according to the κ_j th event times under a Poisson process with its own rate, and then transition to state Y_ℓ with probability q_{jℓ}(t), where (for fixed t) $\sum_{\ell=1}^{M_Y} q_{j\ell}(t) = 1$, and they remain in Y_ℓ according to a dwell time with survival function S_ℓ(t, τ).
In this case the corresponding mean field equations are given by Eqs. (64), where the amount in base sub-state X_0 is $x_0(t) = \sum_{\alpha \in K} x_{0\alpha}(t)$, the amount in the jth intermediate state X_{I_j} is $x_{I_j}(t) = \sum_{k=1}^{\kappa_j} x_{I_j k}(t)$ (see Theorem 6 for notation), the indices range over i = 0, …, N, j = 1, …, M_X, and ℓ = 1, …, M_Y, and in Eq. (64b) α = (a_0, …, a_N) ∈ K \ {(1, …, 1)}. Note that the y_ℓ(t) equations (64e) may be further reduced to a system of ODEs, e.g., via Corollary 1, and that more complicated distributions for dwell times in the intermediate states X_{I_j} (e.g., an Erlang mixture) could be similarly modeled according to other cases addressed in this manuscript.
Proof This follows from applying Theorem 6 to X_0, treating the intermediate states X_{I_j} as recipient states, and then applying Theorem 4 to each intermediate state to partition each X_{I_j} into X_{I_j k}, k = 1, …, κ_j, yielding Eq. (64). □
Example 6 Consider the scenario in Fig. 7. Let the X_0 dwell time be the minimum of T_0 ∼ Erlang(r_0, 2) and T_1 ∼ Erlang(r_1, 2), with an intermediate state dwell time T_{I_1} that is Erlang distributed with shape κ_1 = 3, and an exponential (rate μ) dwell time in Y. Assume the only inputs into X are into X_0 at rate I_X(t). By Theorem 8 the corresponding mean field ODEs follow (see Fig. 8).

Intermediate states that preserve dwell time distributions
In this section we address how to modify the outcome in the previous section to instead construct mean field ODE models that incorporate 'dwell time neutral' sub-state transitions, i.e., where the distribution of time spent in X is the same regardless of whether or not particles transition (within X) from some base sub-state X_0 to one or more intermediate sub-states X_{I_j}. This is done by conditioning the dwell time distributions in X_{I_j} on the time spent in X_0 in a way that leverages the weak memorylessness property discussed in Sect. 3.1.2. In applications, this case (in contrast to the previous one) is perhaps the more commonly desired assumption, since modelers often seek to partition states into sub-states where key characteristics, like the overall X dwell time distribution, remain unchanged, but where the different sub-states have functional differences elsewhere in the model (for example, transmission rate reductions from quarantine in SIR-type infectious disease models).
One approach to deriving such a model is to condition the dwell time distribution for an intermediate state X_{I_j} on the time already spent in X_0 (as in Feng et al. 2016). We take a slightly different approach and exploit the weak memorylessness property of Poisson process 1st event time distributions (see Theorem 1 in Sect. 3.1.2, and the notation used in the previous section) to instead condition the dwell time distribution for intermediate states X_{I_j} on how many of the k_0 events have already occurred when a particle transitions from X_0 to X_{I_j} (rather than conditioning on the exact elapsed time spent in X_0). In this case, since each sub-state of X_0 has an iid dwell time distribution that is a Poisson process 1st event time with rate $r(t) = \sum_{i=0}^N r_i(t)$, if k of the k_0 events had occurred prior to the transition out of X_0, then the weak memorylessness property of Poisson process 1st event time distributions implies that the remaining time spent in X_{I_j} should follow a (k_0 − k)th event time distribution under a Poisson process with rate r_0(t), thus ensuring that the total time spent in X follows a k_0 th event time distribution with rate r_0(t). With this realization in hand, one can then apply Theorems 6 and 4 as in the previous section to obtain the desired mean field ODEs, as detailed in the following theorem and as illustrated in Fig. 8.
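The event-counting argument above is easy to check by simulation. In this sketch (our own construction, with illustrative rates and shapes), a particle's base-state dwell is interrupted whenever the competing process PP1 fires first, yet counting only the remaining PP0 events in the intermediate state leaves the total X dwell time Erlang(r_0, k_0):

```python
import random

random.seed(1)

def dwell_in_X(r0=2.0, r1=1.0, k0=3):
    # Phase 1 (X_0): PP0 (rate r0) and PP1 (rate r1) race; each merged event
    # is a PP0 event with probability r0/(r0+r1). Leave X_0 when PP1 fires.
    t, events0 = 0.0, 0
    while events0 < k0:
        t += random.expovariate(r0 + r1)
        if random.random() < r0 / (r0 + r1):
            events0 += 1      # one more PP0 event observed
        else:
            break             # PP1 fired first: move to intermediate state
    # Phase 2 (X_I): wait only for the remaining k0 - events0 PP0 events,
    # each exponential with rate r0 (memorylessness of the exponential).
    for _ in range(k0 - events0):
        t += random.expovariate(r0)
    return t

n = 100_000
mean = sum(dwell_in_X() for _ in range(n)) / n
# Total time in X should be Erlang(r0=2, k0=3), with mean k0/r0 = 1.5.
```

Because PP0 runs at rate r_0 throughout both phases, the total time to its k_0 th event is Erlang(r_0, k_0) regardless of when, or whether, the interruption occurs.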

Theorem 9 (Extended LCT with dwell time preserving intermediate states)
Consider the mean field equations for a system of particles entering state X (into sub-state X_0) at rate I_X(t). As in the previous case, assume the time spent in X_0 follows the minimum of N + 1 independent Poisson process k_i th event time distributions with respective rates r_i(t), i = 0, …, N (i.e., T = min_i(T_i)). Particles leaving X_0 at time T transition to recipient state Y_ℓ with probability p_{0ℓ}(T) if T = T_0, or, if T = T_i (i = 1, …, N), to the jth of M_X intermediate sub-states, X_{I_j}, with probability p_{ij}(T). If T < T_0, we may define a random variable K ∈ {0, …, k_0 − 1} indicating how many events had occurred under the Poisson process associated with T_0 at the time of the transition out of X_0 (at time T). In order to ensure the overall time spent in X follows a Poisson process k_0 th event time distribution with rate r_0(t), particles entering state X_{I_j} will remain there for a duration of time that is conditioned on K = k, such that the conditional dwell time for that particle in X_{I_j} is given by a Poisson process (k_0 − k)th event time with rate r_0(t). Finally, assume that particles leaving X via intermediate sub-state X_{I_j} at time t transition to Y_ℓ with probability q_{jℓ}, where they remain according to a dwell time determined by survival function S_ℓ(t, τ).
The corresponding mean field equations are

Generalized Linear Chain Trick (GLCT)
In the preceding sections we have provided various extensions of the Linear Chain Trick (LCT) that describe how the structure of mean field ODE models reflects the assumptions that define corresponding continuous time stochastic state transition models. Each case above can be viewed as a special case of the following more general framework for constructing mean field ODEs, which we refer to as the Generalized Linear Chain Trick (GLCT). The cases we have addressed thus far share the following stochastic model assumptions, which constitute the major assumptions of the GLCT.

A1. A focal state (which we call state X) can be partitioned into a finite number of sub-states (e.g., X_1, …, X_n), each with independent (across states and particles) dwell time distributions that are either exponentially distributed with rates r_i or, more generally, are distributed as independent 1st event times under nonhomogeneous Poisson processes with rates r_i(t), i = 1, …, n. Recall the equivalence relation in Sect. 3.5.5.

A2. Inflow rates into X can be described by non-negative, integrable inflow rates into each of these sub-states (e.g., I_{X_1}(t), …, I_{X_n}(t)), some or all of which may be zero. This includes the case where particles enter X at rate Λ(t) and are distributed across sub-states X_i according to the probabilities ρ(t) = [ρ_1(t), …, ρ_n(t)]^T (i.e., we let I_{X_i}(t) = ρ_i(t) Λ(t)).

A3. Particles that transition out of a sub-state X_i at time t transition either into sub-state X_j with probability p_{ij}(t), or into one of the recipient states Y_ℓ, ℓ = 1, …, m, with probability p_{i,n+ℓ}(t). That is, let p_{ij}(t) denote the probability that a particle leaving state X_i at time t enters either X_j if j ≤ n or Y_{j−n} if j > n, where i = 1, …, n and j = 1, …, n + m.

A4. Recipient states Y_ℓ, ℓ = 1, …, m, also have dwell time distributions defined by survival functions S_{Y_ℓ}(t, τ) and integrable, non-negative inflow rates I_{Y_ℓ}(t) that describe inputs from all other non-X sources.
The GLCT (Theorem 10) describes how to construct mean field ODEs for states X and Y for state transition models satisfying the above assumptions.
Theorem 10 (Generalized Linear Chain Trick) Consider a stochastic, continuous time state transition model of particles entering state X and transitioning to states Y_ℓ, ℓ = 1, …, m, according to the above assumptions A1-A4. Then the corresponding mean field model is given by Eqs. (68), where we assume non-negative initial conditions x_i(0) = x_{i0}, y_ℓ(0) = y_{ℓ0}. Note that the y_ℓ(t) equations might be reducible to ODEs, e.g., via Corollary 1 or other results presented above.
Furthermore, Eq. (68a) may be written in vector form as

$$\frac{d}{dt}\mathbf{x}(t) = \mathbf{I}_X(t) + \big(P_X(t)^{\mathsf T} - \mathbb{I}\big)\big(\mathbf{R}(t) \bullet \mathbf{x}(t)\big),$$

where $\mathbf{R}(t) = [r_1(t), \ldots, r_n(t)]^{\mathsf T}$, P_X(t) = (p_{ij}(t)) (i, j ∈ {1, …, n}) is the n × n matrix of (potentially time-varying) probabilities describing which transitions out of X_i at time t go to X_j (likewise, one can define P_Y(t) = (p_{ij}(t)), i ∈ {1, …, n}, j ∈ {n + 1, …, n + m}, the n × m matrix of probabilities describing which transitions from X_i at time t go to Y_{j−n}), and • indicates the Hadamard (element-wise) product.
Proof The proof of the theorem above follows directly from applying Theorem 4 to each sub-state.
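As a concrete sketch of the vector form (our own illustrative check, with hypothetical parameters), take P_X to encode a simple chain of n sub-states, so the GLCT right-hand side should reduce to the familiar LCT chain ODEs for an Erlang(r, k) dwell time:

```python
import numpy as np

k, r, I = 4, 1.5, 2.0                 # chain length, rate, inflow (illustrative)
P_X = np.zeros((k, k))
for i in range(k - 1):
    P_X[i, i + 1] = 1.0               # leaving X_i -> X_{i+1}; X_k exits X
R = np.full(k, r)                     # all sub-states share rate r
I_X = np.zeros(k); I_X[0] = I         # all inflow enters the first sub-state

def glct_rhs(x):
    # GLCT vector form: dx/dt = I_X + (P_X^T - Id)(R o x)
    return I_X + (P_X.T - np.eye(k)) @ (R * x)

def chain_rhs(x):
    # The familiar LCT chain written out component-wise
    dx = np.empty(k)
    dx[0] = I - r * x[0]
    dx[1:] = r * (x[:-1] - x[1:])
    return dx

x = np.array([0.3, 0.1, 0.7, 0.2])
# glct_rhs(x) and chain_rhs(x) agree entry-by-entry, and the net change in
# the total amount in X equals inflow minus the outflow r * x_k.
```

The matrix form makes it easy to swap in other sub-state topologies (mixtures, competing processes, intermediate states) by changing only P_X, R, and ρ.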

Example 8 (Dwell time given by the maximum of independent Erlang random variables)
We here illustrate how the GLCT can provide a conceptually simpler framework for deriving ODEs, relative to derivation from mean field integral equations, by assuming the X dwell time obeys the maximum of multiple Erlang distributions. While the survival function for this distribution is not straightforward to write down, it is fairly straightforward to construct a Markov chain that yields such a dwell time distribution (see Fig. 9). Recall that, in Sect. 3.5.4, we considered a dwell time given by the minimum of N Erlang distributions. Here we instead consider the case where the dwell time distribution is given by the maximum of multiple Erlang distributions, T = max(T_1, T_2), where T_i ∼ Erlang(r_i, 2). For simplicity, assume the dwell time in a single recipient state Y is exponential with rate μ. We again partition X according to which events (under the two independent homogeneous Poisson processes associated with each of T_1 and T_2) particles are awaiting, and index those sub-states accordingly (see Fig. 9). These sub-states are X_{11}, X_{21}, X_{12}, X_{*1}, X_{22}, X_{1*}, X_{*2}, and X_{2*}, where a '*' in the ith index position indicates that particles in that sub-state have already had the ith Poisson process reach its k_i th event (in this case, the 2nd event). Each such sub-state has an exponentially distributed dwell time, but the rates for these dwell time distributions differ (unlike the cases in Sect. 3.5.4, where all sub-states had the same rate): the Poisson process rates for sub-states X_{11}, X_{21}, X_{12}, and X_{22} are r = r_1 + r_2 (see Fig. 9 and compare to Theorem 6 and Fig. 6), but the rate for states X_{1*} and X_{2*} (striped circles in Fig. 9) is r_1, and for X_{*1} and X_{*2} (shaded circles in Fig. 9) it is r_2.
In the context of the GLCT, let $\mathbf{x}(t) = [x_{11}(t), x_{21}(t), x_{12}(t), x_{*1}(t), x_{22}(t), x_{1*}(t), x_{*2}(t), x_{2*}(t)]^T$. Then, by the assumptions above, $R(t) = [r, r, r, r_2, r, r_1, r_2, r_1]^T$, $\rho(t) = [1, 0, \ldots, 0]^T$, and hence $\mathcal{I}_X(t) = [\mathcal{I}_X(t), 0, \ldots, 0]^T$. Denote $p_1 \equiv r_1/r$ and $p_2 \equiv r_2/r$ (à la Theorem 7 in Sect. 3.5.5). Then the first eight rows of the $9 \times 9$ matrix P (the ninth state being Y) are given by
$$\begin{bmatrix}
0 & p_1 & p_2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & p_1 & p_2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & p_1 & p_2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & p_1 & p_2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}.$$

Fig. 9 The X sub-state structure for Example 8, where the X dwell time distribution follows the maximum of two Erlang random variables with rates $r_1$ and $r_2$, respectively, and shape parameters $k_i = 2$. As in Sect. 3.5.4, where the minimum is assumed instead of the maximum, X can be partitioned using indices based on organizing particles by which events they are awaiting under each Poisson process associated with each Erlang distribution. Upon reaching the target event (here, the 2nd event) under any given Poisson process, particles transition to sub-states with an asterisk in the corresponding index position. In general, these sub-states all have dwell times given by the 1st event time under a Poisson process, but with differing rates (see Example 8): here they follow exponential distributions with either rate $r = r_1 + r_2$ (white backgrounds), rate $r_2$ (gray backgrounds), or rate $r_1$ (lined backgrounds)

Thus, by the GLCT (Theorem 10), the corresponding mean field ODEs are
$$\begin{aligned}
\frac{d}{dt}x_{11} &= \mathcal{I}_X(t) - r\,x_{11}, \\
\frac{d}{dt}x_{21} &= p_1 r\,x_{11} - r\,x_{21}, \\
\frac{d}{dt}x_{12} &= p_2 r\,x_{11} - r\,x_{12}, \\
\frac{d}{dt}x_{*1} &= p_1 r\,x_{21} - r_2\,x_{*1}, \\
\frac{d}{dt}x_{22} &= p_2 r\,x_{21} + p_1 r\,x_{12} - r\,x_{22}, \\
\frac{d}{dt}x_{1*} &= p_2 r\,x_{12} - r_1\,x_{1*}, \\
\frac{d}{dt}x_{*2} &= r_2\,x_{*1} + p_1 r\,x_{22} - r_2\,x_{*2}, \\
\frac{d}{dt}x_{2*} &= r_1\,x_{1*} + p_2 r\,x_{22} - r_1\,x_{2*}, \\
\frac{d}{dt}y &= r_2\,x_{*2} + r_1\,x_{2*} - \mu\,y.
\end{aligned}$$
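As a numerical check of this sub-state construction (a sketch using SciPy; the rate values $r_1 = 1$ and $r_2 = 0.5$ are illustrative choices, not from the text), the mean of the phase-type dwell time implied by the sub-state generator, $\rho^T(-G_X)^{-1}\mathbf{1}$, should match $E[\max(T_1, T_2)] = \int_0^\infty \big(1 - F_1(t)F_2(t)\big)\,dt$ computed directly from the two Erlang CDFs:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

r1, r2 = 1.0, 0.5      # illustrative Erlang rates
r = r1 + r2

# Generator G_X over sub-states ordered as in Example 8:
# X11, X21, X12, X*1, X22, X1*, X*2, X2*
G = np.zeros((8, 8))
G[0, 0] = -r;  G[0, 1] = r1; G[0, 2] = r2   # X11 -> X21 (proc. 1 event) or X12
G[1, 1] = -r;  G[1, 3] = r1; G[1, 4] = r2   # X21 -> X*1 or X22
G[2, 2] = -r;  G[2, 4] = r1; G[2, 5] = r2   # X12 -> X22 or X1*
G[3, 3] = -r2; G[3, 6] = r2                 # X*1 -> X*2 (only process 2 active)
G[4, 4] = -r;  G[4, 6] = r1; G[4, 7] = r2   # X22 -> X*2 or X2*
G[5, 5] = -r1; G[5, 7] = r1                 # X1* -> X2* (only process 1 active)
G[6, 6] = -r2                               # X*2 -> Y (absorption)
G[7, 7] = -r1                               # X2* -> Y (absorption)

rho = np.zeros(8); rho[0] = 1.0             # all particles enter via X11

# Mean of the phase-type dwell time: rho^T (-G_X)^{-1} 1
mean_ph = rho @ np.linalg.solve(-G, np.ones(8))

# Direct computation: E[max(T1, T2)] = int_0^inf (1 - F1(t) F2(t)) dt
F1 = lambda t: gamma.cdf(t, a=2, scale=1 / r1)
F2 = lambda t: gamma.cdf(t, a=2, scale=1 / r2)
mean_direct, _ = quad(lambda t: 1 - F1(t) * F2(t), 0, np.inf)
```

The two means agree, confirming that the hitting time of this eight-sub-state chain is indeed distributed as the maximum of the two Erlang random variables.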
Consider the assumptions of Theorem 10 above. Assume the vectors $\rho(t) = \rho$ and $R(t) = R$, and the matrices $P_X(t) = P_X$ and $P_Y(t) = P_Y$, are all constant. As above, assume the probability of entering states in Y is zero; thus our initial distribution vector for this CTMC (with $n + m$ states) is fully determined by the (length $n$) vector $\rho$. Also assume (just to define the CTMC that describes transitions among transient states X up to, but not after, entering states Y) that each state in Y is absorbing (i.e., $p_{ii} = 1$, $i > n$). Then the X dwell time distribution follows the hitting time distribution for a CTMC with initial distribution vector $\rho$ and $(n+m) \times (n+m)$ transition probability matrix
$$P = \begin{bmatrix} P_X & P_Y \\ 0 & I \end{bmatrix}.$$
To clearly state the GLCT for phase-type distributions, we must reparameterize the above CTMC. First, there is an equivalent parameterization of this CTMC which corresponds to thinning the underlying Poisson processes so that we only track transitions between distinct states, ignoring when an individual leaves and instantly returns to its current state (this thinned process is often called the embedded jump process).
The rate for the thinned process that determines a particle's dwell time in transient state $i$ goes from $r_i$ to $\lambda_i = r_i(1 - p_{ii})$. If $0 \le p_{ii} < 1$, the transition probabilities out of state $i$ are then normalized to $\tilde p_{ij} = p_{ij}/(1 - p_{ii})$ for $j \ne i$, with $\tilde p_{ii} = 0$.
The rows for absorbing states (Y) remain unchanged, i.e., $\tilde p_{ij} = p_{ij}$ for $i > n$. The resulting transition probability matrix $\tilde P$ and rate vector $\lambda$ define the embedded jump process description of the CTMC with transition probability matrix P and rate vector R (the initial probability vector $\rho$ is the same for both representations of this CTMC). Lastly, this CTMC can again be reparameterized by combining the jump process transition probability matrix $\tilde P$ and rate vector $\lambda$ to yield this CTMC's transition rate matrix (also sometimes called the infinitesimal generator matrix, or simply the generator matrix), which is defined as follows. Let matrix G be the same dimension as $\tilde P$ (and thus P), and let the first $n$ terms on the diagonal of G be the negative of the jump process rates, $-\lambda$ (i.e., $G_{ii} = -\lambda_i$, $i \le n$). Let the off-diagonal entries of the first $n$ rows of G be the jump process transition probabilities $\tilde p_{ij}$ multiplied by the $i$th rate $\lambda_i$ (i.e., $G_{ij} = \lambda_i \tilde p_{ij}$, $j \ne i$). Thus, the first row of G is $[\,-\lambda_1 \;\; \tilde p_{12}\lambda_1 \;\; \cdots \;\; \tilde p_{1n}\lambda_1 \;\; \cdots \;\; \tilde p_{1,n+m}\lambda_1\,]$, and so on. Since the transition rates out of absorbing states (i.e., the last $m$ rows of G) are 0, G has the form
$$G = \begin{bmatrix} G_X & G_{XY} \\ 0 & 0 \end{bmatrix}.$$
Note that $\tilde P$ and $\lambda$ can be recovered from G using the definitions above. This third parameterization of the given CTMC, determined solely by the initial distribution $\rho$ and transition rate matrix G, can now be used to formally describe the phase-type distribution associated with this CTMC, i.e., the distribution of time spent in the transient states X before hitting absorbing states Y. Specifically, the phase-type distribution density function and CDF are
$$f(t) = \rho^T e^{G_X t}(-G_X)\mathbf{1}, \qquad F(t) = 1 - \rho^T e^{G_X t}\mathbf{1},$$
where $\mathbf{1}$ is an $n \times 1$ vector of ones. Importantly, this distribution depends only on the $n \times n$ matrix $G_X$ and the length-$n$ initial distribution vector $\rho$.
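The two reparameterization steps above (thinning out self-loops, then assembling the generator) can be sketched in code. This is our own illustration with hypothetical example matrices, not notation from the paper:

```python
import numpy as np

def to_jump_process(R, P, n):
    """Thin self-loops out of the first n (transient) rows of P.

    R: length-n rate vector; P: (n+m) x (n+m) transition probability matrix
    (rows sum to 1, absorbing rows have p_ii = 1).
    Returns jump rates lam_i = r_i (1 - p_ii) and the self-loop-free
    transition probabilities p~_ij = p_ij / (1 - p_ii)."""
    lam = R * (1 - np.diag(P)[: len(R)])
    Ptil = P.astype(float).copy()
    for i in range(n):
        if P[i, i] < 1:
            Ptil[i, :] = P[i, :] / (1 - P[i, i])
            Ptil[i, i] = 0.0
    return lam, Ptil

def generator(lam, Ptil, n):
    """Assemble the transition rate (generator) matrix:
    G_ii = -lam_i and G_ij = lam_i * p~_ij for transient rows i <= n;
    the absorbing rows stay zero."""
    G = np.zeros_like(Ptil)
    for i in range(n):
        G[i, :] = lam[i] * Ptil[i, :]
        G[i, i] = -lam[i]
    return G

# Hypothetical example: n = 2 transient states, m = 1 absorbing state,
# with a self-loop (p_11 = 1/3) in the first transient state.
R = np.array([3.0, 2.0])
P = np.array([[1/3, 1/3, 1/3],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
lam, Ptil = to_jump_process(R, P, n=2)
G = generator(lam, Ptil, n=2)
```

Note that each transient row of the resulting G sums to zero, as required of a generator matrix, and that $\lambda$ and $\tilde P$ are recoverable from G via $\lambda_i = -G_{ii}$ and $\tilde p_{ij} = G_{ij}/\lambda_i$.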
The GLCT for phase-type distributions can now be stated as follows.
Corollary 2 (GLCT for phase-type distributions) Assume particles enter state X at rate Λ(t) and that the dwell time distribution for a state X follows a continuous phase-type distribution given by the n × 1 initial probability vector ρ and n × n matrix G X .
Let $\mathcal{I}_{X_i}(t) = \rho_i \Lambda(t)$ and $\mathcal{I}_X(t) = [\mathcal{I}_{X_1}, \ldots, \mathcal{I}_{X_n}]^T$. Then Eq. (69) in Theorem 10 becomes
$$\frac{d}{dt}\mathbf{x}(t) = \mathcal{I}_X(t) + G_X^T\,\mathbf{x}(t).$$

Example 9 (Serial LCT and hypoexponential distributions) Assume the dwell time in state X is given by the sum of independent (but not identically distributed) Erlang distributions or, more generally, of Poisson process $k_i$th event time distributions with rates $r_i(t)$, i.e., $T = \sum_{i=1}^{N} T_i$ (note the special case where all $k_i = 1$ and the $r_i(t) = r_i$ are constant, which yields a T that follows a hypoexponential distribution). Let $n = \sum_i k_i$, and further assume particles go to $Y_\ell$ with probability $p_\ell$ upon leaving X, $\ell = 1, \ldots, m$. Using the GLCT framework above, this corresponds to partitioning X into serially arranged sub-states $X_j$, $j = 1, \ldots, n$, where the first $k_1$ elements of $R(t)$ in Eq. (78) are $r_1(t)$, the next $k_2$ are $r_2(t)$, and so on. By the GLCT (Theorem 10), using $r^{(j)}(t)$ to denote the $j$th element of $R(t)$ in Eq. (78), the corresponding mean field equations (Eq. (80)) are
$$\begin{aligned}
\frac{d}{dt}x_1(t) &= \mathcal{I}_X(t) - r^{(1)}(t)\,x_1(t), \\
\frac{d}{dt}x_j(t) &= r^{(j-1)}(t)\,x_{j-1}(t) - r^{(j)}(t)\,x_j(t), \quad j = 2, \ldots, n,
\end{aligned}$$
with inflow rate $p_\ell\,r^{(n)}(t)\,x_n(t)$ into each recipient state $Y_\ell$. Note that the phase-type distribution form of the hypoexponential distribution could be used with Corollary 2 to derive the $x_i$ equations in Eq. (80).
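The phase-type density and CDF are easy to evaluate numerically via the matrix exponential. A sketch (using SciPy's `expm`, our own choice of tooling) that checks the $n = 2$ hypoexponential case of Example 9, with illustrative rates 1 and 2, against its closed-form CDF and density:

```python
import numpy as np
from scipy.linalg import expm

def ph_cdf(rho, GX, t):
    """Phase-type CDF: F(t) = 1 - rho^T exp(GX t) 1."""
    return 1.0 - rho @ expm(GX * t) @ np.ones(len(rho))

def ph_pdf(rho, GX, t):
    """Phase-type density: f(t) = rho^T exp(GX t) (-GX) 1."""
    return rho @ expm(GX * t) @ (-GX @ np.ones(len(rho)))

# Hypoexponential(1, 2): a serial chain of two exponential stages,
# so G_X is bidiagonal and all mass starts in the first stage.
GX = np.array([[-1.0,  1.0],
               [ 0.0, -2.0]])
rho = np.array([1.0, 0.0])
```

For a serial chain like this, the closed-form CDF is $F(t) = 1 - 2e^{-t} + e^{-2t}$, which the matrix-exponential formula reproduces.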
Example 10 (SIR model with phase-type duration of infectiousness) Consider the simple SIR model given by Eq. (2) with mass action transmission $\lambda(t) = \beta I(t)$. Suppose the assumed exponentially distributed dwell time in state I is replaced by a phase-type distribution with initial distribution vector $\alpha$ and matrix A (note that, in some cases, it is possible to match the first three or more moments using only a $2 \times 2$ or $3 \times 3$ matrix A, and that Matlab and Python routines for making such estimates are freely available; Horváth and Telek 2017). Then, letting $\mathbf{x}$ be the vector of sub-states of I and $I(t) = x_1(t) + \cdots + x_n(t)$, by the GLCT for phase-type distributions (Corollary 2) the corresponding mean field ODE model is
$$\begin{aligned}
\frac{d}{dt}S(t) &= -\beta I(t) S(t), \\
\frac{d}{dt}\mathbf{x}(t) &= \alpha\,\beta I(t) S(t) + A^T \mathbf{x}(t), \\
\frac{d}{dt}R(t) &= (-A\mathbf{1})^T \mathbf{x}(t).
\end{aligned}$$
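A minimal numerical sketch of such a model, under our reading of Corollary 2: $\frac{d}{dt}S = -\beta I S$, $\frac{d}{dt}\mathbf{x} = \alpha\,\beta I S + A^T\mathbf{x}$, and recovery outflow $(-A\mathbf{1})^T\mathbf{x}$. The $2 \times 2$ matrix A and vector $\alpha$ below are hypothetical illustrative values, not fitted to anything; the sketch checks that total population is conserved:

```python
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.5
# Hypothetical 2-phase infectious period (illustrative values only)
A = np.array([[-1.0,  0.8],
              [ 0.0, -1.5]])
alpha = np.array([1.0, 0.0])
a_out = -A @ np.ones(2)      # per-phase recovery (absorption) rates

def rhs(t, u):
    S, x1, x2, R = u
    x = np.array([x1, x2])
    I = x.sum()              # total infectious = sum over sub-states of I
    dS = -beta * I * S
    dx = alpha * beta * I * S + A.T @ x
    dR = a_out @ x
    return [dS, dx[0], dx[1], dR]

u0 = [0.99, 0.01, 0.0, 0.0]  # S, x1, x2, R (proportions summing to 1)
sol = solve_ivp(rhs, (0, 40), u0, rtol=1e-8, atol=1e-10)
```

Because the inflow $\alpha\,\beta I S$, internal flows $A^T\mathbf{x}$, and outflow $(-A\mathbf{1})^T\mathbf{x}$ balance exactly, $S + \sum_i x_i + R$ remains constant along solutions.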

Discussion
ODEs are arguably the most popular framework for modeling continuous time dynamical systems in applications. This is in part due to the relative ease of building, simulating, and analyzing ODE models, which makes them very accessible to a broad range of scientists and mathematicians. The above results generalize the Linear Chain Trick (LCT) and detail how to construct mean field ODE models, for a broad range of scenarios found in applications, based on explicit individual-level stochastic model assumptions. Our hope is that these contributions improve the speed and efficiency of constructing mean field ODE models, increase the flexibility to make more appropriate dwell time assumptions, and help clarify (for both modelers and those reading the results of their work) how individual-level stochastic assumptions are reflected in the structure of mean field ODE model equations. We have provided multiple novel theorems that describe how to construct such ODEs directly from underlying stochastic model assumptions, without formally deriving them from an explicit stochastic model or from intermediate integral equations. The Erlang distribution recursion relation (Lemma 1) that drives the LCT has been generalized to include the time-varying analogues of Erlang distributions, i.e., kth event time distributions under nonhomogeneous Poisson processes (Lemma 2), and distributions that reflect "competing Poisson process event times," defined as the minimum of a finite number of independent Poisson process event times (Lemma 3). These new lemmas, together with our generalization of the memorylessness property of the exponential distribution (which we refer to in Sect. 3.1.2 as the weak memorylessness property of nonhomogeneous Poisson process 1st event time distributions), allow more flexible dwell time assumptions to be incorporated into mean field ODE models. We have also introduced a novel Generalized Linear Chain Trick (GLCT; Theorem 10 in Sect. 3.7), which complements previous extensions of the LCT (e.g., Jacquez and Simon 2002; Diekmann et al. 2017) and allows modelers to construct mean field ODE models for a broad range of dwell time assumptions and associated sub-state configurations (e.g., conditional dwell time distributions for intermediate sub-state transitions), including the phase-type family of distributions and their time-varying analogues, as detailed in Sect. 3.7.1. The GLCT also provides a framework for considering other scenarios not specifically addressed by the preceding results, as illustrated by Example 8, in which the dwell time distribution follows the maximum of multiple Erlang distributions. These results not only provide a framework to incorporate more accurate dwell time distributions into ODE models, but also hopefully encourage more comparative studies, such as Feng et al. (2016), that explore the dynamic and application-specific consequences of incorporating non-Erlang dwell time distributions, and conditional dwell time distributions, into ODE models. More study is needed, for example, comparing integral equation models that have non-exponential dwell time distributions to their ODE counterparts based on approximating those distributions with Erlang and phase-type distributions.
The GLCT also permits modelers to incorporate the flexible phase-type distributions (i.e., the hitting time distributions for continuous time Markov chains) into ODE models. This family of probability distributions includes mixtures of Erlang distributions (a.k.a. hyper-Erlang distributions), the minimum or maximum of multiple Erlang distributions, the hypoexponential distributions, generalized Coxian distributions, and others (Reinecke et al. 2012a; Horváth et al. 2016). While phase-type distributions are currently largely unfamiliar to mathematical biologists, they have received some attention in other fields, and modelers can take advantage of existing methods developed to fit phase-type distributions to other distributions on $\mathbb{R}^+$ and to data (see Sect. 3.7.1 for references). These results provide a flexible framework for approximating dwell time distributions, and for incorporating those empirically or analytically derived dwell time distributions into ODE models.
There are some additional considerations, and potential challenges to implementing these results in applications, that are worth addressing. First, the increase in the number of state variables may lead to both computational and analytical challenges; however, we have a growing number of tools at our disposal for tackling higher dimensional systems (e.g., van den Driessche and Watmough 2002; Guo et al. 2008; Li and Shuai 2010; Du and Li 2014; Bindel et al. 2014). We intend to explore the costs associated with this increase in dimensionality in future publications. Second, it is tempting to assume that the sub-states resulting from the above theorems correspond to some sort of sub-state structure in the actual system being modeled. This is not necessarily the case, and we should be cautious about interpreting these sub-states as evidence of, e.g., cryptic population structure. Third, some of the above theorems make the simplifying assumption that, either at time t = 0 or upon entry into X, particles are initially distributed only into the first sub-state. This may not be the appropriate assumption in some applications, but it is fairly straightforward to modify these initial condition assumptions within the context of the GLCT. Fourth, it may be appropriate in some applications to avoid mean field models, and instead analyze the stochastic model dynamics directly [e.g., see Allen (2010, 2017) and references therein]. Lastly, the history of successful attempts to garner scientific insights from mean field ODE models might seem to suggest that such distributional refinements are unnecessary. However, this is clearly not always the case, as evidenced by the many instances in which modelers have abandoned ODEs and instead opted to use integral equations to model systems with non-Erlang dwell time distributions.
We hope these results will facilitate more rigorous comparisons between such models and their simplified and/or ODE counterparts, like Feng et al. (2016) and Piotrowska and Bodnar (2018), to help clarify when the use of these ODE models is warranted.
In closing, these novel extensions of the LCT help clarify how underlying assumptions are reflected in the structure of mean field ODE models, and provide a means for incorporating more flexible dwell time assumptions into mean field ODE models directly from first principles without a need to derive ODEs from stochastic models or intermediate mean field integral equations. The Generalized Linear Chain Trick (GLCT) provides both a conceptual framework for understanding how individual-level stochastic assumptions are reflected in the structure of mean field model equations, and a practical framework for incorporating exact, approximate, or empirically derived dwell time distributions and related assumptions into mean field ODE models.

A.1 Deriving mean field integral equations and ODEs
Here we illustrate one approach to deriving mean field equations starting from an explicit stochastic model, using a simple toy example. To do this, we begin by describing a discrete time approximation (with arbitrarily small time step $\Delta t$) of a stochastic system of particles transitioning among multiple states, then derive the corresponding discrete-time mean field model, which yields the desired mean field model upon taking the limit as $\Delta t \to 0$. Other, more systematic approaches exist; e.g., see Kurtz (1970, 1971). Consider a large number of particles that begin (at time t = 0) in state W and then each transitions to state X after an exponentially distributed amount of time (with rate a). Particles remain in state X according to an Erlang(r, k) distributed delay before entering state Y. They then go to state Z after an exponentially distributed time delay with rate $\mu$. Let w(t), x(t), y(t), and z(t) be the amounts in each of the corresponding states at time $t \ge 0$, with $w(0) = N_0$ and $x(0) = y(0) = z(0) = 0$. Then, as we derive below, the corresponding continuous time mean field model is given by Eq. (A1), where the state variables w, x, y, and z correspond to the amounts in each of the corresponding states, under the initial conditions $w(0) = N_0$ and $x(0) = y(0) = z(0) = 0$.
First, to derive the linear ODEs (A1a) and (A1d), note that the number of particles that transition from state W to state X in a short time interval $(t, t + \Delta t)$ is binomially distributed: if we think of a transition from W to X as a "success," then the number of "trials" is $n = w(t)$ and the probability of success p is given by the exponential CDF value $p = 1 - \exp(-a\Delta t)$. For sufficiently small $\Delta t$, this implies $p = a\Delta t + \mathcal{O}(\Delta t^2)$. Let $\bar w(t+s) = E(w(t+s) \mid w(t))$ for $s \ge 0$. Since the expected value of a binomial random variable is np, it follows that
$$\bar w(t + \Delta t) = \bar w(t) - a\,\bar w(t)\,\Delta t + \mathcal{O}(\Delta t^2). \tag{A2}$$
Dividing both sides of (A2) by $\Delta t$ and then taking the limit as $\Delta t \to 0$ yields $\frac{d}{dt}\bar w(t) = -a\,\bar w(t)$, i.e., Eq. (A1a). Similarly, defining $\bar z(t)$ in terms of z(t), it follows that $\frac{d}{dt}\bar z(t) = \mu\,\bar y(t)$.
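The binomial approximation underlying this step can be checked with a quick Monte Carlo sketch (all parameter values below are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a, w, dt = 0.7, 100_000, 1e-3

# Exact per-particle transition probability over a step of width dt,
# from the exponential CDF: p = 1 - exp(-a dt) = a*dt + O(dt^2)
p = 1 - np.exp(-a * dt)

# Number leaving W in one step is Binomial(w, p); sample it many times
samples = rng.binomial(w, p, size=20_000)
empirical_mean = samples.mean()
expected = w * p                  # = a*w*dt + O(dt^2), as in Eq. (A2)
```

The empirical mean of the sampled transition counts matches $np = w(1 - e^{-a\Delta t})$, which differs from the linearized rate $a\,w\,\Delta t$ only at order $\Delta t^2$, consistent with the derivation of (A1a).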
Next, we derive the integral equations (A1b) and (A1c) by similarly deriving a discrete time mean field model and then taking its limit as bin width Δt → 0 (i.e., as the number of bins M → ∞).
Partition the interval [0, t] into $M \gg 1$ equally wide intervals of width $\Delta t \equiv t/M$ (see Fig. 10). Let $I_i = ((i-1)\Delta t,\, i\Delta t]$ denote the $i$th such time interval ($i = 1, \ldots, M$), and let $t_i = (i-1)\Delta t$ denote the start time of the $i$th interval. We may now account for the number transitioning into and out of state X during time interval $I_i$, and sum these values to compute $\bar x(t)$.
The number in state X at time t, $\bar x(t)$, is the number that entered state X between time 0 and t, less the number that transitioned out of X before time t (for now, assume x(0) = 0). A particle that enters state X at time $s \in (0, t)$ will still be in state X at time t according to a Bernoulli random variable with $p = S_r^k(t - s)$ (the expected proportion remaining under the given Erlang distribution). Therefore, to compute $\bar x(t)$ we can sum over our M intervals, adding up the number that entered state X during each interval $I_i$ and, from each of those M cohorts, counting how many remain in X at time t. Specifically, the number entering X during the $i$th interval $[t_i, t_i + \Delta t]$ is given by $N_i \equiv w(t_i) - w(t_i + \Delta t)$ (see Fig. 10), and thus the number remaining in X at time t is the sum of the number remaining at time t from each such cohort (i.e., the sum over $i = 1$ to M), where the number remaining in X at t from each cohort follows a compound binomial distribution given by the sum of $N_i$ Bernoulli random variables $B_X$, each with probability $p = S_r^k(t - t_i) + \mathcal{O}(\Delta t)$. This defines our stochastic state transition model, which yields a mean field model as follows.
The expected amount entering X during $[t_i, t_i + \Delta t]$ is $E(N_i) = E(w(t_i) - w(t_i + \Delta t)) = a\,\bar w(t_i)\Delta t$, and the expected proportion of the $i$th cohort remaining at time t is $E(B_X) = S_r^k(t - t_i) + \mathcal{O}(\Delta t)$. Thus, the expected number from the $i$th cohort remaining in X at time t is $E(N_i)E(B_X) = a\,\bar w(t_i)\,S_r^k(t - t_i)\,\Delta t + \mathcal{O}(\Delta t^2)$. Summing these expected values over all intervals, and letting $\Delta t \to 0$, yields
$$\bar x(t) = \int_0^t a\,\bar w(s)\,S_r^k(t - s)\,ds,$$
i.e., Eq. (A1b). To calculate $\bar y(t)$, let $K_j$ be the number entering state Y during interval $I_j$ that are still in state Y at time t. As above, $K_j$ can be calculated by summing (over $I_1$ to $I_{j-1}$) the number in each cohort that entered state X during $I_i$, then transitioned to state Y during time interval $I_j$, and are still in state Y at t. Therefore $K_j$ can be written as the sum of $j - 1$ compound distributions given by counting how many of the $N_i$ particles that entered state X during $I_i$ then transitioned to state Y during interval $I_j$ and then persisted until time t without transitioning to Z. To count these, notice that each such particle (entering state X during $I_i$, entering state Y during $I_j$, and persisting in Y at time t) follows a Bernoulli random variable $B_{ij}$ with probability
$$p_{ij} = g_r^k(t_j - t_i)\,\Delta t\; e^{-\mu(t - t_j)} + \mathcal{O}(\Delta t^2),$$
where $g_r^k$ is the Erlang(r, k) density. Therefore, the number of particles that entered Y during $I_j$ and remain in state Y at time t, $K_j$, can be written as the compound random variable $K_j = \sum_{i=1}^{j-1}\sum_{\ell=1}^{N_i} B_{ij}$. Summing over all intervals and letting $\Delta t \to 0$ ($M \to \infty$) gives
$$\bar y(t) = \int_0^t \int_0^\tau a\,\bar w(s)\,g_r^k(\tau - s)\,e^{-\mu(t - \tau)}\,ds\,d\tau,$$
i.e., Eq. (A1c).

Numerical comparisons suggest this mixture is a very good approximation of the target gamma(α, β) distribution for shape β values larger than roughly 3 to 5, depending (to a lesser extent) on α.
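Returning to the derivation above, the equivalence between the integral equation for $\bar x(t)$ and its LCT chain counterpart can be checked numerically. A sketch with illustrative parameter values (our own choices), comparing $\int_0^t a\,\bar w(s)\,S_r^k(t-s)\,ds$ against the sum of the Erlang chain's sub-state ODEs:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.stats import gamma

a, r, k, N0 = 0.3, 2.0, 3, 1.0   # illustrative rates, shape, initial mass

# Mean field integral equation: x(t) = int_0^t a w(s) S_r^k(t - s) ds,
# with w(s) = N0 exp(-a s) and S_r^k the Erlang(r, k) survival function.
def x_integral(t):
    val, _ = quad(lambda s: a * N0 * np.exp(-a * s)
                  * gamma.sf(t - s, k, scale=1 / r), 0, t)
    return val

# Equivalent LCT chain: w' = -a w, x1' = a w - r x1, xj' = r x_{j-1} - r x_j
def rhs(t, u):
    w, x = u[0], u[1:]
    du = np.empty_like(u)
    du[0] = -a * w
    du[1] = a * w - r * x[0]
    for j in range(1, k):
        du[j + 1] = r * x[j - 1] - r * x[j]
    return du

u0 = np.zeros(k + 1); u0[0] = N0
T = 5.0
sol = solve_ivp(rhs, (0, T), u0, rtol=1e-10, atol=1e-12)
x_ode = sol.y[1:, -1].sum()       # x(T) = sum of the k sub-states
```

The chain's total, $x(T) = \sum_j x_j(T)$, agrees with the integral equation's value at $t = T$, which is exactly the content of the LCT.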
Alternatively, to approximate a gamma(α, β) distribution with a mixture of Erlang distributions as described above, one could instead select the mixing probabilities using an alternative metric, such as a distance in probability space (e.g., see Rachev 1991) like the $L^\infty$-norm on their CDFs, or information-theoretic quantities such as the KL or Jensen–Shannon divergence.