
Convergence Properties of Posttranslationally Modified Protein–Protein Switching Networks with Fast Decay Rates


Abstract

A significant conceptual difficulty in the use of switching systems to model regulatory networks is the presence of so-called “black walls,” co-dimension 1 regions of phase space with a vector field pointing inward on both sides of the hyperplane. Black walls result from the existence of direct negative self-regulation in the system. One biologically inspired way of removing black walls is the introduction of intermediate variables that mediate the negative self-regulation. In this paper, we study such a perturbation. We replace a switching system with a higher-dimensional switching system with rapidly decaying intermediate proteins, and compare the dynamics between the two systems. We find that while the individual solutions of the original system can be approximated for a finite time by solutions of a sufficiently close perturbed system, there are always solutions that are not well approximated for any fixed perturbation. We also study a particular example, where global basins of attraction of the perturbed system have a strikingly different form than those of the original system. We perform this analysis using techniques that are adapted to dealing with non-smooth systems.


Appendix

We begin by proving Lemma 4.2, which is an immediate consequence of Lemmas 6.1 and 6.2 below.

Lemma 6.1

Consider a black wall w(x, x) with an exit hyperplane \(y_J=\theta _{x,J}\) and flanking cells \(\kappa \) and \(\bar{\kappa }\) such that \(\kappa \,\cap \, \bar{\kappa }= w(x,x)\). Define \(e(x,J) := \{y_J=\theta _{x,J}\} \,\cap \, \mathop {\mathrm {int}}\nolimits (\kappa \,\cup \, \bar{\kappa })\). Then there exists a nonempty, compact, rectangular region \({\mathcal {R}}\subset \kappa \,\cup \, \bar{\kappa }\) such that \({\mathcal {R}}\,\cap \, w(x,x) \,\cap \, e(x,J) \ne \emptyset \) and for every \(I^0 \in {\mathcal {R}}\), there exists \(T_{I^0}\ge 0\) such that \((x(t),y_i(t)) \in {\mathcal {R}}\) for \(t \in [0,T_{I^0})\) and \((x(T_{I^0}),y_i(T_{I^0})) \in {\mathcal {R}}\,\cap \, e(x,J)\).

Proof

We first observe that both the black wall w(x, x) and the exit hyperplane \(y_J=\theta _{x,J}\) must lie on the boundary of \(\kappa \) and on the boundary of \(\bar{\kappa }\). Therefore, by Definition 3.9, the definition of an exit hyperplane, and Definition 3.4, describing the flow near a black wall, there exists a neighborhood V of e(x, J) such that

  • \(V \cap \mathop {\mathrm {int}}\nolimits \kappa \), \(V\cap \mathop {\mathrm {int}}\nolimits \bar{\kappa }\), and \(V \cap w(x,x)\) are all relatively open and nonempty;

  • for any \(I^0 \in V \cap \kappa \) or \(I^0 \in V \cap \bar{\kappa }\), the solution starting at \(I^0\) exits \(\kappa \) or \(\bar{\kappa }\) either through \(\{ y_J=\theta _{x,J}\} \) or through w(x, x);

  • for any \(I^0 \in V \cap w(x,x)\) the solution starting at \(I^0\) exits w(x, x) through \(w(x,x) \cap e(x,J)\).

Consider three flows: \(\varphi _\kappa \) on \(\kappa \) and \(\varphi _{\bar{\kappa }}\) on \(\bar{\kappa }\), both from (8), and \(\varphi _{w(x,x)}\) on w(x, x), defined in (7). Since the latter flow is defined as a restriction of both \(\varphi _\kappa \) and \(\varphi _{\bar{\kappa }}\), their union defines a continuous flow \(\tilde{\varphi }\) on \(V \cap (\kappa \cup \bar{\kappa }\cup w(x,x))\).

Take \(I^0 \in w(x,x) \cap e(x,J) \) such that the flow \(\varphi _{w(x,x)}\) is transversal to e(x, J). Such a point must exist by the assumption that \(\{ y_J = \theta _{x,J}\}\) is an exit wall for w(x, x). Since the flow on the black wall (7) is a restriction of \(\varphi _\kappa \) and \(\varphi _{\bar{\kappa }}\), there is a neighborhood of \(I^0\)

$$\begin{aligned} \tilde{\mathcal {R}}:= {\mathcal {R}}_x \times {\mathcal {R}}_{y_1} \times \ldots \times {\mathcal {R}}_{y_{J-1}} \times \{\theta _{x,J}\} \times {\mathcal {R}}_{y_{J+1}} \times \ldots \times {\mathcal {R}}_{y_{n-1}} \subset e(x,J) \cap V, \end{aligned}$$

where \({\mathcal {R}}_{v}\) is an interval in the variable v, such that the flow \(\tilde{\varphi }\) is transversal to e(x, J).

By continuity of \(\tilde{\varphi }\) on V and transversality of \(\tilde{\varphi }\) on \(\tilde{\mathcal {R}}\), there is an interval \({\mathcal {R}}_{y_J}\) in variable \(y_J\) such that all solutions in

$$\begin{aligned} {\mathcal {R}}:= {\mathcal {R}}_{y_J} \times \tilde{\mathcal {R}}= {\mathcal {R}}_x \times {\mathcal {R}}_{y_J} \times \prod _{i\ne J} {\mathcal {R}}_{y_i} \end{aligned}$$

exit \({\mathcal {R}}\) directly through \(\tilde{\mathcal {R}}\setminus w(x,x)\), or enter the black wall w(x, x) and exit \({\mathcal {R}}\) through \(w(x,x) \cap e(x,J)\). \(\square \)

When we add the modified protein \(\hat{z}\) to the protein-only system, the \((n-1)\)-dimensional black wall w(x, x) where \(x=\theta _x\) disappears. The region analogous to w(x, x) in the PTM system (9) is the union of two transparent walls

$$\begin{aligned} {\mathcal {W}}:= \{ (\hat{x}, \hat{y}_i, \hat{z}) \;|\; (\hat{x},\hat{y}_i) \in w(x,x), \hat{z} \in [\theta _z - \beta _z, \theta _z + \beta _z] \}. \end{aligned}$$
(22)

Notice that this set is one dimension higher than w(x, x). We now show that there exists a region \(\hat{\mathcal {R}}(\epsilon )\) intersecting \({\mathcal {W}}\), analogous to \({\mathcal {R}}\), where only \(\hat{x}\), \(\hat{z}\), and \(\hat{y}_{J}\) cross thresholds, and where all \(\hat{y}_i\) solutions match their counterpart \(y_i\) solutions within \({\mathcal {R}}\).

Lemma 6.2

In the protein-only system (8), fix a black wall w(x, x) and a region \({\mathcal {R}}= {\mathcal {R}}_x \times {\mathcal {R}}_{y_J} \times \prod _{i\ne J} {\mathcal {R}}_{y_i}\) as in Lemma 6.1. Let \({\mathcal {W}}\) be as in (22). Then, for \(\epsilon \) sufficiently small, there exists a compact rectangular region \( \hat{\mathcal {R}}(\epsilon ) =\hat{\mathcal {R}}_x(\epsilon )\times [\theta _z-\beta _z,\theta _z+\beta _z] \times {\mathcal {R}}_{y_J} \times \prod _{i\ne J} {\mathcal {R}}_{y_i} \) such that \(\hat{\mathcal {R}}(\epsilon ) \cap {\mathcal {W}}\ne \emptyset \), and if \((\hat{x}(0),\hat{y}_i(0),\hat{z}(0)) \in \hat{\mathcal {R}}(\epsilon )\), then

  1. For \(\hat{y}_i(0)=y_i(0)\), \(i=1,\ldots ,n-1\), we have \(\hat{y}_i(t) = y_i(t)\) as long as \(y_i(t) \in {\mathcal {R}}_{y_i}\);

  2. The forward trajectory \((\hat{x}(t), \hat{y}_i(t),\hat{z}(t))\) remains in \({\mathcal {R}}\times [\theta _z-\beta _z,\theta _z+\beta _z]\) until the exit at \(y_J=\hat{y}_J=\theta _{x,J}\).

Proof

By the observation in Remark 4.1, the right-hand sides of the equations for \(\hat{y}_i\) and \(y_i\) in (8) and (9) are identical. Since the projections of the regions \({\mathcal {R}}\) and \(\hat{\mathcal {R}}\) onto the x and \(\hat{x}\) variables, respectively, are identical, the inputs \(X_i\) and \(\hat{X}_i\) into the right-hand sides of the equations for \(\hat{y}_i\) and \(y_i\) are identical. This implies the first point. Therefore, the exit hyperplanes \(y_J=\hat{y}_J =\theta _{x,J}\) match. Moreover, \([\theta _z-\beta _z,\theta _z+\beta _z]\) is an invariant region for \(\hat{z}\). So it only remains to determine the form of \(\hat{\mathcal {R}}_x(\epsilon )\). By examination of the Poincaré map introduced in Sect. 6.3 below, there are oscillations in the \(\hat{x}\)-\(\hat{z}\) feedback system about the point \((\theta _x,\theta _z)\). So the key to finding \(\hat{\mathcal {R}}_x(\epsilon )\) is confining the value of \(\hat{x}\) to the interval \({\mathcal {R}}_x\), which we choose to write as \({\mathcal {R}}_x:=[\theta _x-\mu _1,\theta _x+\mu _2]\).

We claim that an interval of the form \(\hat{\mathcal {R}}_x(\epsilon ) := [\theta _x-\mu _1+\eta _\epsilon ,\theta _x+\mu _2-\nu _\epsilon ]\) is sufficient to prove the lemma, leading to a region

$$\begin{aligned} \hat{\mathcal {R}}(\epsilon ) = [\theta _x-\mu _1+\eta _\epsilon ,\theta _x+\mu _2-\nu _\epsilon ]\times [\theta _z-\beta _z,\theta _z+\beta _z] \times {\mathcal {R}}_{y_J} \times \prod _{i\ne J} {\mathcal {R}}_{y_i} \end{aligned}$$
(23)

with adjustments

$$\begin{aligned} \eta _\epsilon&> \left( 2^{\epsilon \gamma _x} -1 \right) \left( \theta _x - \mu _1 - {\varPhi }_x(\bar{\kappa }) \right) \nonumber \\ \nu _\epsilon&> \left( 2^{\epsilon \gamma _x} -1 \right) \left( {\varPhi }_x(\kappa ) - \theta _x - \mu _2 \right) \end{aligned}$$
(24)

for \({\varPhi }_x(\bar{\kappa }) < \theta _x\) and \({\varPhi }_x(\kappa )>\theta _x\). Notice that these lower bounds shrink to 0 as \(\epsilon \rightarrow 0\), so that \(\hat{\mathcal {R}}_x(\epsilon ) \rightarrow {\mathcal {R}}_x\). We remark that if \({\varPhi }_x(\bar{\kappa })\in [\theta _x-\mu _1,\theta _x)\) and \({\varPhi }_x(\kappa )\in (\theta _x,\theta _x+\mu _2]\), then \(\eta _\epsilon =\nu _\epsilon =0\). However, \({\mathcal {R}}_x\) may always be taken sufficiently small so that \({\varPhi }_x(\bar{\kappa })<\theta _x-\mu _1\) and \({\varPhi }_x(\kappa )>\theta _x+\mu _2\). We observe that \(\hat{\mathcal {R}}(\epsilon ) \cap {\mathcal {W}}\ne \emptyset \) by construction.

To prove the claim, consider taking the left endpoint of \(\hat{\mathcal {R}}_x(\epsilon )\) as an initial condition, \(\hat{x}(0)=\theta _x-\mu _1+\eta _\epsilon \). If \(\hat{z}(0) \in (\theta _z,\theta _z+\beta _z]\), then \(\hat{x}\) is decreasing. We want to ensure that \(\hat{x}\) will not decrease below \(\theta _x-\mu _1\) in the time it takes \(\hat{z}\) to reach \(\theta _z\) and reverse the \(\hat{x}\) flow. The time it takes \(\hat{z}\) to reach \(\theta _z\) from its extremal location \(\theta _z+\beta _z\) is calculated from (11):

$$\begin{aligned} \hat{T}_z=-\epsilon \;\ln \left( \frac{\theta _z-(\theta _z-\beta _z)}{(\theta _z+\beta _z)-(\theta _z-\beta _z)}\right) = \epsilon \ln 2. \end{aligned}$$

Therefore, we require that the time taken for \(\hat{x}\) to reach \(\theta _x-\mu _1\) be at least \(\epsilon \ln 2\). Again making use of (11), denoting \({\varPhi }_x(\bar{\kappa })<\theta _x\) to be the focal point of \(\hat{x}\) when \(\hat{z} > \theta _{z}\), we find the travel time of \(\hat{x}\) from \(\theta _x-\mu _1+\eta _\epsilon \) to \(\theta _x-\mu _1\) to be

$$\begin{aligned} \hat{T}_x = \frac{1}{\gamma _x}\ln \left( 1+ \frac{\eta _\epsilon }{\theta _x-\mu _1-{\varPhi }_x(\bar{\kappa })} \right) . \end{aligned}$$

Solving \(\hat{T}_x > \epsilon \ln 2\) for \(\eta _\epsilon \) gives us the first expression in (24); a similar calculation yields the second expression for \(\nu _\epsilon \).
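
To make the role of the bounds in (24) concrete, the following sketch instantiates them numerically. All parameter values (\(\gamma _x\), \(\epsilon \), \(\theta _x\), \(\mu _1\), \({\varPhi }_x(\bar{\kappa })\)) are hypothetical and not taken from the paper; the check simply confirms that any \(\eta _\epsilon \) above the bound in (24) makes the travel time of \(\hat{x}\) to \(\theta _x-\mu _1\) at least \(\epsilon \ln 2\), the maximal time \(\hat{z}\) needs to reach \(\theta _z\).

```python
import math

# Illustrative check of the bound (24); all parameter values are hypothetical.
gamma_x, eps = 1.0, 0.05            # decay rate of x-hat and the fast time scale
theta_x, mu1 = 2.0, 0.5             # threshold and left half-width of R_x
Phi_x_bar = 1.0                     # focal point Phi_x(kappa-bar), assumed < theta_x - mu1

# Lower bound on eta_eps from (24), and a choice of eta_eps just above it
eta_lower = (2 ** (eps * gamma_x) - 1) * (theta_x - mu1 - Phi_x_bar)
eta = 1.01 * eta_lower

# Travel time of x-hat from theta_x - mu1 + eta down to theta_x - mu1 (toward Phi_x(kappa-bar))
T_x_hat = (1.0 / gamma_x) * math.log(1.0 + eta / (theta_x - mu1 - Phi_x_bar))
T_z_hat = eps * math.log(2.0)       # maximal time for z-hat to reach theta_z and reverse the flow

print(f"T_x_hat = {T_x_hat:.5f}, eps*ln2 = {T_z_hat:.5f}, T_x_hat >= eps*ln2: {T_x_hat >= T_z_hat}")
```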

Point (III) of Theorem 4.3 takes more work to establish. We will first reduce the protein-only (8) and PTM (9) systems into the study of two- and three-dimensional systems, respectively. We will then introduce a Poincaré map in the three-dimensional system to describe and quantify the oscillations in the PTM system as they are projected onto the \(\hat{x}\)-\(\hat{z}\) plane. With the Poincaré map as a tool, we will then prove Points (III)a and (III)b of Theorem 4.3.

1.1 Simplification of the Protein-Only System Near a Black Wall

We consider initial conditions \(I^0 \in {\mathcal {R}}\). For \((x(t;I^0),y_i(t;I^0)) \in {\mathcal {R}}\), the effects of \(y_i\) for \(i\in \{1,\dots ,J-1,J+1,\ldots ,n-1\}\) on the right hand sides of \(\{\dot{x} ,\dot{y}_i\}_{i=1}^{n-1}\) are constant. Similarly, x and \(y_J\) have a constant effect on \(\{\dot{y}_i\}_{i=1}^{n-1}\). In other words, the only non-constant values in (8) on the region \({\mathcal {R}}\) occur in the equation for \(\dot{x}\), so that system (8) may be rewritten as

$$\begin{aligned} \dot{x}&= -\gamma _x x + e_1 X^\pm +e_2 Y^\pm _{x,y_J}+ e_3X^\pm Y^\pm _{x,y_J} +C_x \end{aligned}$$
(25)
$$\begin{aligned} \dot{y}_J&= -\gamma _J y_J+C_J \end{aligned}$$
(26)
$$\begin{aligned} \dot{y}_i&= -\gamma _{i} y_i + C_i, \;\; i\in \{1,\ldots ,J-1,J+1,\ldots ,n-1\}. \end{aligned}$$
(27)

The equation for \(\dot{x}\) is a general multilinear form for a two-dimensional system. The constant \(C_x\) depends on the locations of \(y_i\) with respect to their thresholds. The constants \(\{C_i\}_{i=1}^{n-1}\) depend on the locations \(\{x,y_i\}_{i=1}^{n-1}\) with respect to their thresholds. The signs of \(e_i\) and the superscripts ± must be consistent with negative self-regulation in x.

Remark 6.3

The key observation that simplifies the analysis is that Eqs. (25) and (26) are decoupled from (27) in the region \({\mathcal {R}}\). The solutions of (27) in \({\mathcal {R}}\) are exponentially decaying toward the focal points of the \(y_i\) variables without crossing thresholds. Therefore, it is sufficient to study the local two-dimensional system (25)–(26) near the black wall, even in the case where there are other self-regulating variables \(y_i\).

Combining terms and considering only the relevant thresholds, the system becomes

$$\begin{aligned} \dot{x}&= -\gamma _x x + \left\{ \begin{array}{cc} A, & x> \theta _x, \;\; y> \theta _y \\ B, & x> \theta _x, \;\; y< \theta _y \\ C, & x< \theta _x, \;\; y > \theta _y \\ D, & x< \theta _x, \;\; y < \theta _y \end{array} \right. \nonumber \\ \dot{y}&= -\gamma _y y + E \end{aligned}$$
(28)

where we have written \(y_J\) as y and \(\theta _{x,J}\) as \(\theta _y\) for simpler notation. We will now also write \({\mathcal {R}}^2 = {\mathcal {R}}_x \times {\mathcal {R}}_y\) to denote the region of initial conditions of interest in the two-dimensional system. \({\mathcal {R}}^2\) can be thought of as the two lower cells in Fig. 9 (see also Fig. 12).

The hyperplanes \(x=\theta _x\) and \(y=\theta _y\) divide the two-dimensional phase space into four cells. These cells will be named \(\kappa _A\)-\(\kappa _D\) using the convention that in \(\kappa _U\) the focal point of x is \({\varPhi }_x(U) := U/\gamma _x\) for \(U \in \{ A,B,C,D\}\), see Fig. 9b. The system (28) has a black wall if \({\varPhi }_x(A)< \theta _x < {\varPhi }_x(C)\) or \({\varPhi }_x(B)< \theta _x < {\varPhi }_x(D)\). In the remainder of the work, we assume without loss of generality that \({\varPhi }_x(B)< \theta _x < {\varPhi }_x(D)\) defines the black wall w(x, x), see Fig. 9b. The order of \({\varPhi }_x(A)\), \({\varPhi }_x(C)\), and \(\theta _x\) determines whether the adjoining wall is white or transparent, and whether y is an up-regulator or a down-regulator of x in this region. To ensure an exit from this black wall in Fig. 9, we have chosen \({\varPhi }_x(A), {\varPhi }_x(C) > \theta _x\). Our choices are for illustration; our arguments work for all A, B, C, D that can result from (25) to (27), provided that either \({\varPhi }_x(A)< \theta _x < {\varPhi }_x(C)\) or \({\varPhi }_x(B)< \theta _x < {\varPhi }_x(D)\).
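
The following minimal sketch of (28) uses hypothetical rate constants satisfying \({\varPhi }_x(B)< \theta _x < {\varPhi }_x(D)\) (none of the values come from the paper) and checks, via the explicit exponential solutions, that a sample trajectory in \(\kappa _D\) reaches the black wall before y reaches its threshold, so that it slides along the wall and exits at \((\theta _x,\theta _y)\).

```python
import math

# Minimal numeric sketch of the reduced system (28); all parameter values are hypothetical,
# chosen only so that Phi_x(B) < theta_x < Phi_x(D) (a black wall on x = theta_x for y < theta_y).
gamma_x, gamma_y = 1.0, 1.0
theta_x, theta_y = 1.0, 1.0
A, B, C, D, E = 2.5, 0.5, 3.0, 1.5, 2.0          # production rates; Phi_x(U) = U / gamma_x
Phi_x = {name: val / gamma_x for name, val in dict(A=A, B=B, C=C, D=D).items()}
Phi_y = E / gamma_y

assert Phi_x['B'] < theta_x < Phi_x['D']         # black wall condition

# Initial condition in kappa_D (x < theta_x, y < theta_y): x is attracted to Phi_x(D) > theta_x
# and reaches the wall at time T_x; y grows toward Phi_y > theta_y and exits at time T_y.
x0, y0 = 0.9, 0.4
T_x = -(1 / gamma_x) * math.log((theta_x - Phi_x['D']) / (x0 - Phi_x['D']))
T_y = -(1 / gamma_y) * math.log((theta_y - Phi_y) / (y0 - Phi_y))
print(f"T_x = {T_x:.4f}, T_y = {T_y:.4f}; reaches the black wall before exiting: {T_x < T_y}")
```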

Fig. 9

a The reduced regulatory network for the protein-only system (8) assuming x and y mutually up-regulate and b a schematic of the dynamics near the black wall w(x, x) for the parameter choice \({\varPhi }_x(B)< \theta _x < {\varPhi }_x(D)\) and \({\varPhi }_x(A),\,{\varPhi }_x(C) > \theta _x\)

1.2 Simplification of PTM System Near a Feedback Plane

We write the \((n+1)\)-dimensional PTM system within \(\hat{\mathcal {R}}(\epsilon )\) as:

$$\begin{aligned} \dot{\hat{x}}&=-\gamma _x \hat{x} + \left\{ \begin{array}{cc} A, & \hat{z}> \theta _z, \;\; \hat{y}> \theta _y \\ B, & \hat{z}> \theta _z, \;\; \hat{y}< \theta _y \\ C, & \hat{z}< \theta _z, \;\; \hat{y}> \theta _y \\ D, & \hat{z}< \theta _z, \;\; \hat{y}< \theta _y \end{array} \right. \nonumber \\ \dot{\hat{z}}&=\frac{1}{\epsilon }\left( -\hat{z}+ \left\{ \begin{array}{cc} \theta _z+\beta _z, & \hat{x} > \theta _x \\ \theta _z-\beta _z, & \hat{x} < \theta _x \end{array} \right. \right) \nonumber \\ \dot{\hat{y}}_J&=-\gamma _{J} \hat{y}_J + E\nonumber \\ \dot{\hat{y}}_i&=-\gamma _i \hat{y}_i + C_i, \; i=1,\ldots ,J-1, J+1,\ldots , n-1 \end{aligned}$$
(29)

All of the parameters match between the PTM and protein-only systems in \(\hat{\mathcal {R}}(\epsilon )\) and \({\mathcal {R}}\) respectively. In particular, \(E=C_J\) and \(C_i\) in (29) match \(C_i\), \(i=1,\ldots ,n-1\) in (27) due to Lemma 6.2. The parameters A-D in (28) and (29) are the same due to Remark 4.1 and because \(\hat{z}\) down-regulates \(\hat{x}\).

The equality in parameters means that the focal point components between systems are equal, \(\hat{\varPhi }_x = {\varPhi }_x\) and \(\hat{\varPhi }_y = {\varPhi }_y\), so we will omit the hats on the focal points. As in (28), we analyze the case where \({\varPhi }_x(B)< \theta _x < {\varPhi }_x(D)\). We study the decoupled three-dimensional system

$$\begin{aligned} \dot{\hat{x}}&= -\gamma _x \hat{x} + \left\{ \begin{array}{cc} A, & \hat{z}> \theta _z, \;\; \hat{y}> \theta _y \\ B, & \hat{z}> \theta _z, \;\; \hat{y}< \theta _y \\ C, & \hat{z}< \theta _z, \;\; \hat{y}> \theta _y \\ D, & \hat{z}< \theta _z, \;\; \hat{y}< \theta _y \end{array} \right. \nonumber \\ \dot{\hat{z}}&= \frac{1}{\epsilon }\left( -\hat{z}+ \left\{ \begin{array}{cc} \theta _z+\beta _z, & \hat{x} > \theta _x \\ \theta _z-\beta _z, & \hat{x} < \theta _x \end{array} \right. \right) \nonumber \\ \dot{\hat{y}}&= -\gamma _{y} \hat{y} + E \end{aligned}$$
(30)

pictured in Fig. 10, where we drop the index J in the \(\hat{y}\) equation. Then the \(\hat{x}\) and \(\hat{y}\) variables are directly comparable to x and y in (28). Analogously to \({\mathcal {R}}^2\), we denote by \(\hat{\mathcal {R}}^3(\epsilon )\) the projection of \(\hat{\mathcal {R}}(\epsilon )\) onto the 3D-system. The three-dimensional domain is shown in Fig. 10 together with a schematic of the flow in \(\hat{\mathcal {R}}^3(\epsilon )\).
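
The decoupled system (30) is simple enough to integrate exactly between threshold crossings. The sketch below does this for the \((\hat{x},\hat{z})\) pair while \(\hat{y}<\theta _y\), with hypothetical parameter values (they are not taken from the paper); its output shows \(\hat{x}\) reaching \(\theta _x\) and then the \(\hat{x}\)-\(\hat{z}\) pair oscillating about \((\theta _x,\theta _z)\), which is the behavior analyzed in the next two subsections.

```python
import math

# Minimal event-driven sketch of the (x-hat, z-hat) subsystem of (30) while y-hat stays below
# theta_y, using the exact exponential solution between threshold crossings. All parameter
# values are hypothetical, chosen only so that Phi_x(B) < theta_x < Phi_x(D).
gamma_x, eps = 1.0, 0.02
theta_x, theta_z, beta_z = 1.0, 1.0, 0.5
Phi_B, Phi_D = 0.5, 1.5

def hit_time(v0, target, focal, rate):
    """Time for v(t) = focal + (v0 - focal)e^{-rate*t} to reach target; inf if it never does."""
    if v0 == focal:
        return math.inf
    r = (target - focal) / (v0 - focal)
    return -math.log(r) / rate if 0.0 < r < 1.0 else math.inf

t, x, z = 0.0, 1.3, theta_z + 1e-9          # start just inside kappa_{B,+}, near S_1
print(f"t={t:.4f}  x={x:.4f}  z={z:.4f}")
for _ in range(10):                          # advance through ten threshold events
    fx = Phi_B if z > theta_z else Phi_D     # z-hat above theta_z pushes x-hat down, and vice versa
    fz = theta_z + beta_z if x > theta_x else theta_z - beta_z
    dt = min(hit_time(x, theta_x, fx, gamma_x), hit_time(z, theta_z, fz, 1.0 / eps))
    x = fx + (x - fx) * math.exp(-gamma_x * dt)
    z = fz + (z - fz) * math.exp(-dt / eps)
    t += dt
    print(f"t={t:.4f}  x={x:.4f}  z={z:.4f}")
```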

Fig. 10

a The regulation in (30) using mutual up-regulation for \(\hat{x}\) and \(\hat{y}\), as in the protein-only system in Fig. 9, and b the local dynamics near \(\hat{x}=\theta _x\) assuming \(\hat{y}\) is increasing as in Fig. 9. The cube with \(\hat{y}\) increasing is the projection of \(\hat{\mathcal {R}}(\epsilon )\) onto the 3D-system, \(\hat{\mathcal {R}}^3(\epsilon )\). \({\mathcal {W}}\) from (22) is the plane \(\hat{x}=\theta _x\) slicing the cube down the center (Color figure online)

Fig. 11

A schematic of the flow in the \(\hat{x}\)-\(\hat{z}\) system, showing the location of the Poincaré sections \(S_1\) and \(S_2\) and the quadrants where \(\kappa _{U,\pm }\) means that the focal point in that quadrant is given by \(({\varPhi }_x(U),\theta _z\pm \beta _z)\). These focal points and the fact that \({\varPhi }_x(B)<\theta _x<{\varPhi }_x(D)\) define the vector field

As in (28), the cube \(\hat{\mathcal {R}}^3(\epsilon )\) is divided into cells, \(\kappa _{B,+} \times {\mathcal {R}}_y\), \(\kappa _{B,-} \times {\mathcal {R}}_y\), \(\kappa _{D,+} \times {\mathcal {R}}_y\), and \(\kappa _{D,-} \times {\mathcal {R}}_y\). The projections of these cells onto the \((\hat{x},\hat{z})\) plane are shown in Fig. 11. The notation means the following: \(\kappa _{U,\pm }\times {\mathcal {R}}_y\) has a focal point of \(({\varPhi }_x(U),{\varPhi }_y,\theta _z\pm \beta _z)\).

1.3 Poincaré Map

In this section we will define a Poincaré map for (30). Let

$$\begin{aligned} \hat{S}_1 := \{ (\hat{x}, \hat{y}, \hat{z}) \; |\; \hat{x} \ge \theta _x, \hat{z} = \theta _z\} \quad \text{ and } \quad \hat{S}_2 := \{ (\hat{x}, \hat{y}, \hat{z}) \; |\; \hat{x} \le \theta _x, \hat{z} = \theta _z\} . \end{aligned}$$
(31)

Let \(\Psi (\hat{x},\hat{y},\hat{z},t)\) be the flow generated by (30) in \(\hat{\mathcal {R}}^3(\epsilon ) \setminus \{\hat{y}=\theta _y\}\). Note that the equations for the \(\hat{x}(t)\) and \(\hat{z}(t)\) trajectories are decoupled from the \(\hat{y}\) variable on \(\hat{\mathcal {R}}^3(\epsilon ) \setminus \{\hat{y}=\theta _y\}\). Therefore the projection of \(\Psi (\hat{x},\hat{y},\hat{z},t)\) onto the variables \((\hat{x}, \hat{z})\)

$$\begin{aligned} \varphi \left( \hat{x},\hat{z}, t\right) := \left( \hat{x}(t),\hat{z}(t)\right) \end{aligned}$$

is a well-defined flow and so \(\Psi (\hat{x},\hat{y},\hat{z},t) = (\varphi ( \hat{x},\hat{z}, t), \hat{y}(t)) \) is a product flow.

Therefore it is sufficient to study the flow \(\varphi (\hat{x},\hat{z}, t)\) in the \(\hat{x}\)-\(\hat{z}\) plane. It is clear from the \(\hat{x}\)-\(\hat{z}\) vector field pictured in Fig. 11 that as long as \(\hat{y}\) does not cross \(\theta _y\), the flow \(\varphi (\hat{x},\hat{z}, t)\) will alternately cross \(\hat{x} = \theta _x\) and \(\hat{z} = \theta _z\), thus oscillating in the \((\hat{x}, \hat{z})\) plane. Let

$$\begin{aligned} S_1 := \{(\hat{x}, \hat{z}) \; |\; \hat{x} \ge \theta _x, \hat{z} = \theta _z\} \quad \text{ and } \quad S_2 := \{(\hat{x}, \hat{z}) \; |\; \hat{x} \le \theta _x, \hat{z} = \theta _z\}. \end{aligned}$$

The interiors \(\mathop {\mathrm {int}}\nolimits S_1\) and \(\mathop {\mathrm {int}}\nolimits S_2\) are transverse sections of \(\varphi \). The point \(S_1 \cap S_2 = (\theta _x, \theta _z)\) is an equilibrium of the flow \(\varphi \), which we can observe from the closed form of the Poincaré map that we now introduce. Let \(g_1:S_1 \rightarrow S_2\) and \(g_2: S_2 \rightarrow S_1\) be the maps defined by following the flow \(\varphi \) from one section to the next. The composition \(P=g_2\circ g_1:S_1 \rightarrow S_1\) defines a Poincaré map on \(S_1\).

Let \(\hat{I}^0=(\hat{x}(0), \theta _z)\) be an initial condition on \(S_1 \cap \; \hat{\mathcal {R}}^3(\epsilon )\). We find a closed form solution for \(g_1 : S_1 \rightarrow S_2\) by calculating the time \(\hat{T}_x\) needed to reach the line \(\hat{x}=\theta _x\) from \(\hat{I}^0\), and then the time \(\hat{T}_z\) needed to reach \(S_2\) from \((\theta _x,\hat{z}(\hat{T}_x))\). Then \(g_1(\hat{x}(0)):=\varphi (\hat{x}(0),\theta _z,\hat{T}_x+\hat{T}_z).\) The calculation makes repeated use of (10)–(11):

$$\begin{aligned} \hat{T}_x^1&= -\frac{1}{\gamma _x}\ln \left( \frac{\theta _x - {\varPhi }_x(B)}{\hat{x}(0)-{\varPhi }_x(B)} \right) ; \quad \hat{z}(\hat{T}_x^1) = \theta _z + \beta _z(1-e^{-\hat{T}_x^1/\epsilon }) \end{aligned}$$
(32)
$$\begin{aligned} \hat{T}_z^1&= -\epsilon \ln \left( \frac{\beta _z}{\hat{z}(\hat{T}_x^1)-\theta _z+\beta _z} \right) ; \quad \hat{x}(\hat{T}_z^1) = {\varPhi }_x(B)+(\theta _x-{\varPhi }_x(B))e^{-\gamma _x\hat{T}_z^1} \end{aligned}$$
(33)

for \(\hat{x}(0) \in S_1\), which yields

$$\begin{aligned} g_1(\hat{x}(0))&= \hat{x}(\hat{T}_z^1) = {\varPhi }_x(B)+(\theta _x-{\varPhi }_x(B))\left( 2-\left( \frac{\theta _x-{\varPhi }_x(B)}{\hat{x}(0) - {\varPhi }_x(B)}\right) ^{1/\epsilon \gamma _x} \right) ^{-\epsilon \gamma _x} \end{aligned}$$
(34)

Similarly, the map \(g_2: S_2\rightarrow S_1\) is given by calculating:

$$\begin{aligned} \bar{T}_x^1&= -\frac{1}{\gamma _x}\ln \left( \frac{{\varPhi }_x(D)-\theta _x}{{\varPhi }_x(D)-\hat{x}(\hat{T}_z^1)} \right) ; \quad \hat{z}(\bar{T}_x^1) = \theta _z - \beta _z(1-e^{-\bar{T}_x^1/\epsilon })\end{aligned}$$
(35)
$$\begin{aligned} \bar{T}_z^1&= -\epsilon \ln \left( \frac{\beta _z}{\theta _z+\beta _z-\hat{z}(\bar{T}_x^1)} \right) ; \quad \hat{x}(\bar{T}_z^1) = {\varPhi }_x(D)+(\theta _x-{\varPhi }_x(D))e^{-\gamma _x\bar{T}_z^1} \end{aligned}$$
(36)

for \(\hat{x}(0) \in S_2\), which yields

$$\begin{aligned} g_2(\hat{x}(0))={\varPhi }_x(D)+(\theta _x-{\varPhi }_x(D))\left( 2-\left( \frac{{\varPhi }_x(D)-\theta _x}{{\varPhi }_x(D) -\hat{x}(\hat{T}_z^1)}\right) ^{1/\epsilon \gamma _x} \right) ^{-\epsilon \gamma _x} \end{aligned}$$
(37)
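
The closed forms (34) and (37) can be evaluated directly. The sketch below does so with hypothetical parameter values (not taken from the paper), writing \(g_2\) as a function of its argument in \(S_2\); iterating \(P=g_2\circ g_1\) shows the successive crossings of \(S_1\) approaching \(\theta _x\), which anticipates Lemma 6.5.

```python
import math

# Sketch of the closed-form maps g1 (34) and g2 (37); parameter values are hypothetical,
# with Phi_x(B) < theta_x < Phi_x(D).
gamma_x, eps = 1.0, 0.05
theta_x, Phi_B, Phi_D = 1.0, 0.5, 1.5

def g1(x0):
    """S1 -> S2: x-hat value at the first return to z-hat = theta_z, per Eq. (34)."""
    a = ((theta_x - Phi_B) / (x0 - Phi_B)) ** (1.0 / (eps * gamma_x))
    return Phi_B + (theta_x - Phi_B) * (2.0 - a) ** (-eps * gamma_x)

def g2(x0):
    """S2 -> S1: the mirror-image map on the other side of the wall, per Eq. (37),
    written here as a function of its argument in S2."""
    a = ((Phi_D - theta_x) / (Phi_D - x0)) ** (1.0 / (eps * gamma_x))
    return Phi_D + (theta_x - Phi_D) * (2.0 - a) ** (-eps * gamma_x)

# Iterating the Poincare map P = g2 o g1 shows the oscillation closing in on theta_x.
x = 1.3
for n in range(6):
    x = g2(g1(x))
    print(f"P^{n+1}(1.3) = {x:.6f}")
```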

The first return time of the Poincaré map is then given by

$$\begin{aligned} T_R^1&:= \hat{T}_x^1 + \hat{T}_z^1 + \bar{T}_x^1 + \bar{T}_z^1 \nonumber \\&= -\frac{1}{\gamma _x} \ln \left( \frac{(\theta _x-{\varPhi }_x(B))({\varPhi }_x(D)-\theta _x)}{(\hat{x}(0) - {\varPhi }_x(B))({\varPhi }_x(D)-\hat{x}(\hat{T}_z^1))} \right) \nonumber \\&\quad -\epsilon \ln \left( \frac{\beta _z^2}{(\hat{z}(\hat{T}_x^1) - \theta _z+\beta _z)(\theta _z+\beta _z-\hat{z}(\bar{T}_x^1))} \right) \end{aligned}$$
(38)

We observe that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} T_R^1 = \hat{T}_x^1, \end{aligned}$$
(39)

since \(\hat{x}(\hat{T}_z^1) \rightarrow \theta _x\), \(\hat{z}(\hat{T}_x^1) \rightarrow \theta _z+\beta _z\), and \(\hat{z}(\bar{T}_x^1) \rightarrow \theta _z - \beta _z\).

We now show that for more than one oscillation about the point \((\theta _x,\theta _z)\), the additional return time for each oscillation goes to zero as \(\epsilon \rightarrow 0\). To see this, we first note that the expression (38) can be viewed as a function of four values that characterize four consecutive intersections with the \(\hat{x}=\theta _x\) and \(\hat{z}=\theta _z\) thresholds

$$\begin{aligned} T_{R}^1 = F \left( \hat{x}(0), \hat{z}\left( \hat{T}_x^1\right) , \hat{x}\left( \hat{T}_{z}^1\right) , \hat{z}\left( \bar{T}_{x}^1\right) \right) . \end{aligned}$$

Then the return time for the n-th oscillation \(T^n_R\) for \(n>1\) can be written as

$$\begin{aligned} T_{R}^{n} := \hat{T}_{x}^{n} + \hat{T}_{z}^{n} + \bar{T}_{x}^{n} + \bar{T}_{z}^{n} = F \left( \hat{x}\left( \sum _{i=1}^{n-1} T_{R}^{i}\right) , \hat{z}\left( \hat{T}_{x}^{n}\right) , \hat{x}\left( \hat{T}_{z}^{n}\right) , \hat{z}\left( \bar{T}_{x}^{n}\right) \right) . \end{aligned}$$
(40)

Lemma 6.4

For a fixed n, the n-th return time \(\sum _{i=1}^n T_R^i\) can be written as

$$\begin{aligned} \sum _{i=1}^n T_R^i = \hat{T}_x^1 + \sum _{i=1}^n f(\epsilon ;i )\quad \text{ with } \lim _{\epsilon \rightarrow 0} f(\epsilon ;i) =0 . \end{aligned}$$
(41)

Proof

Given (39), it is sufficient to show that \(T_R^i = \hat{T}_x^i + \hat{T}_z^i + \bar{T}_x^i + \bar{T}_z^i\) approaches zero as \(\epsilon \rightarrow 0\) for all \(2\le i\le n\). We will make use of Eqs. (32)–(33) and (35)–(36) for a general i.

First we show that for any fixed i

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \hat{T}_{z}^{i} + \bar{T}_{x}^{i} + \bar{T}_{z}^{i} = 0. \end{aligned}$$
(42)

To see this, we observe that by (33) and (36) both \(\hat{T}_{z}^{i}\) and \(\bar{T}_{z}^{i}\) are proportional to \(\epsilon \), and so they tend to zero as \(\epsilon \rightarrow 0\). Now consider the middle term \(\bar{T}_{x}^{i}\) in (42) and its formula in  (35). Note that this expression depends on the term \(\hat{x}(\hat{T}_z^i)\), given by (33). By examining (33) and recalling that \(\lim _{\epsilon \rightarrow 0} \hat{T}_z^i =0\), we see that \(\lim _{\epsilon \rightarrow 0}\hat{x}(\hat{T}_z^i) = \theta _x\). Thus \(\lim _{\epsilon \rightarrow 0} \bar{T}_x^i= 0\), by inspection of (35). This finishes the proof of (42).

We now show by induction that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} T_R^i = 0 \quad \text{ for } \text{ all } \quad i=2,\ldots ,n. \end{aligned}$$

For the base case \(i=2\), we notice that by (38) and (40) the value \(\hat{T}_x^2\) depends on \(\hat{x}(T_R^1)\). However, by (39) and the fact that \(\hat{T}_x^1\) is the time needed to reach \(\theta _x\) from \(\hat{x}(0)\)

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \hat{x}(T_R^1) = \hat{x}(\hat{T}_x^1)=\theta _x. \end{aligned}$$

Therefore by (32) with \(\hat{x}(T_R^1)\) replacing \(\hat{x}(0)\) we conclude \(\lim _{\epsilon \rightarrow 0} \hat{T}_x^2 = 0\). Finally, using (42), we have \(\lim _{\epsilon \rightarrow 0} T_R^2 = \hat{T}_x^2 + \hat{T}_z^2+\bar{T}_x^2+\bar{T}_z^2 = 0\), which establishes the base case \(i=2\).

Proceeding to a general n, we make an inductive assumption that \(\lim _{\epsilon \rightarrow 0} T_R^i = 0\) for \(i=2,\dots ,n-1\). We wish to show that \(\lim _{\epsilon \rightarrow 0} T_R^n = 0\). We first note that by the inductive assumption and (39) we have

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \sum _{i=1}^{n-1} T_R^i = \hat{T}_x^1. \end{aligned}$$

Then, since \(\hat{T}_x^1\) is the time needed to reach \(\theta _x\) it follows that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \hat{x}\left( \sum _{i=1}^{n-1} T_R^i \right) = \theta _x. \end{aligned}$$

Using that fact plus (32) and (40), we have

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \hat{T}_x^n = \frac{1}{\gamma _x}\ln \left( \frac{\hat{x}\left( \sum _{i=1}^{n-1} T_R^i \right) -{\varPhi }_x(B)}{\theta _x-{\varPhi }_x(B)} \right) = 0 , \end{aligned}$$

and it follows from  (42) that \(\lim _{\epsilon \rightarrow 0} T_R^n = \hat{T}_x^n + \hat{T}_z^n + \bar{T}_x^n + \bar{T}_z^n = 0\) as desired. \(\square \)
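
A numerical illustration of Lemma 6.4, iterating the crossing formulas (32)–(36) for several oscillations: as \(\epsilon \rightarrow 0\) the total time spent in the first n oscillations approaches the single crossing time \(\hat{T}_x^1\). The parameter values are hypothetical and only for illustration.

```python
import math

# Sketch of Lemma 6.4: iterate the crossing times (32)-(36); hypothetical parameters.
gamma_x = 1.0
theta_x, theta_z, beta_z = 1.0, 1.0, 0.5
Phi_B, Phi_D = 0.5, 1.5
x0, n_osc = 1.3, 5

def total_return_time(eps):
    x, z, total = x0, theta_z, 0.0
    for _ in range(n_osc):
        # kappa_{B,+}: x-hat decreases to theta_x while z-hat rises toward theta_z + beta_z  (32)
        Tx = -(1 / gamma_x) * math.log((theta_x - Phi_B) / (x - Phi_B))
        z = theta_z + beta_z * (1 - math.exp(-Tx / eps))
        # z-hat falls back to theta_z while x-hat overshoots below theta_x  (33)
        Tz = -eps * math.log(beta_z / (z - theta_z + beta_z))
        x = Phi_B + (theta_x - Phi_B) * math.exp(-gamma_x * Tz)
        # mirror-image half oscillation on the other side of the wall  (35)-(36)
        Tx2 = -(1 / gamma_x) * math.log((Phi_D - theta_x) / (Phi_D - x))
        z = theta_z - beta_z * (1 - math.exp(-Tx2 / eps))
        Tz2 = -eps * math.log(beta_z / (theta_z + beta_z - z))
        x = Phi_D + (theta_x - Phi_D) * math.exp(-gamma_x * Tz2)
        total += Tx + Tz + Tx2 + Tz2
    return total

T_x1 = -(1 / gamma_x) * math.log((theta_x - Phi_B) / (x0 - Phi_B))
for eps in (0.1, 0.01, 0.001):
    print(f"eps = {eps:7.3f}   sum of T_R^i = {total_return_time(eps):.6f}   (T_x^1 = {T_x1:.6f})")
```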

Lemma 6.5

The Poincaré map \(P=g_2\circ g_1:S_1 \rightarrow S_1\) is a contraction map.

Proof

By the chain rule, the Poincaré map has derivative \(|P^{\prime }| = |g_2^{\prime }(g_1)|\,|g_1^{\prime }|\). We calculate

$$\begin{aligned} g_i^{\prime }&= \left( \frac{A_i^{1/\epsilon \gamma _x}}{2-A_i^{1/\epsilon \gamma _x}} \right) ^{1+\epsilon \gamma _x}, \; i=1,2 \\ A_1(\hat{x}(0))&= \frac{\theta _x - {\varPhi }_x(B)}{\hat{x}(0) - {\varPhi }_x(B)} \\ A_2(g_1(\hat{x}(0)))&= \frac{\theta _x - {\varPhi }_x(D)}{g_1(\hat{x}(0)) - {\varPhi }_x(D)} \end{aligned}$$

Observe that \(0<A_1,A_2<1\), since \({\varPhi }_x(B)< \theta _x < \hat{x}(0)\) for \((\hat{x}(0),\theta _z) \in S_1\) and \(g_1(\hat{x}(0))< \theta _x < {\varPhi }_x(D)\) with \((g_1(\hat{x}(0)),\theta _z) \in S_2\). Then \(A_i^{1/\epsilon \gamma _x}<1<2-A_i^{1/\epsilon \gamma _x}\), so each quotient in parentheses is less than 1 and \(|P^{\prime }|<1\). We conclude that P is a contraction map and observe that the oscillations in \((\hat{x}(t), \hat{z}(t))\) decay toward the equilibrium \((\theta _x,\theta _z)\) of \(\varphi \). \(\square \)
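
As a sanity check on the contraction estimate, the sketch below evaluates \(|g_1^{\prime }|\) and \(|g_2^{\prime }|\) from the formulas above at a few points of \(S_1\), again with hypothetical parameter values; each product \(|P^{\prime }|\) comes out strictly below 1.

```python
import math

# Spot check of the contraction estimate in Lemma 6.5; hypothetical parameters.
gamma_x, eps = 1.0, 0.05
theta_x, Phi_B, Phi_D = 1.0, 0.5, 1.5
p = 1.0 / (eps * gamma_x)

def g1(x0):
    a = ((theta_x - Phi_B) / (x0 - Phi_B)) ** p
    return Phi_B + (theta_x - Phi_B) * (2.0 - a) ** (-eps * gamma_x)

def deriv(A):
    """Common form of |g_i'| for 0 < A < 1, from the proof above."""
    return (A ** p / (2.0 - A ** p)) ** (1.0 + eps * gamma_x)

for x0 in (1.05, 1.2, 1.4):
    A1 = (theta_x - Phi_B) / (x0 - Phi_B)
    A2 = (theta_x - Phi_D) / (g1(x0) - Phi_D)
    dP = deriv(A1) * deriv(A2)
    print(f"x0 = {x0:.2f}   |P'| = {dP:.3e}   |P'| < 1: {dP < 1}")
```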

The following corollary is immediate.

Corollary 6.6

Let \(\delta >0\). Then for any \(\epsilon >0\) and \(T<T_y\), where \(T_y\) is the exit time from \({\mathcal {R}}^2\times [\theta _z-\beta _z,\theta _z+\beta _z]\),

$$\begin{aligned} (\hat{x}(T),\hat{z}(T)) \in [\theta _x-\delta ,\theta _x+\delta ] \times \{\theta _z\} \subset S_1 \,\cup \,S_2 \end{aligned}$$

implies \(\hat{x}(t) \in [\theta _x-\delta ,\theta _x+\delta ]\) for \(t \in [T,T_y]\).

1.4 Point (1) of Theorem 4.3

For the proof of Point (1), we first describe in detail the correspondence between the dynamics of the x and \(\hat{x}\) variables in (28) and (30). Even when we require that \(x(0) = \hat{x}(0)\), the x and \(\hat{x}\) components of the focal points, \({\varPhi }_x\) and \(\hat{\varPhi }_x\), in the two- and three-dimensional systems may be different, because the focal point will depend on the choice of \(\hat{z}\) in the interval \([\theta _z-\beta _z,\theta _z+\beta _z]\).

Fig. 12

The relationship between initial conditions \({\mathcal {R}}^2\) in the x-y plane in (28) and those in \(\hat{\mathcal {R}}^3(\epsilon )\) projected onto the \(\hat{x}\)-\(\hat{z}\) plane in system (30). The colored regions in a and b correspond to matching directions of motion for x and \(\hat{x}\). In the red regions, x and \(\hat{x}\) are increasing. In the blue regions, they are decreasing. In the unshaded regions the \(\hat{x}\) trajectory is headed away from the value \(\theta _x\) (Color figure online)

In Fig. 12a, we see the rectangular region \({\mathcal {R}}^2\) in the (xy) plane in system (28). In Fig. 12b, depicting system (30), we see the projection of \(\hat{\mathcal {R}}^3(\epsilon )\) onto the \((\hat{x}\),\(\hat{z}\)) plane. By inspection of (28) and (30), we see that x and \(\hat{x}\) share equations of motion in \(\kappa _{B}\) and \(\kappa _{B,+}\) with the x-component of the focal point \({\varPhi }_x(B)\), and likewise in \(\kappa _{D}\) and \(\kappa _{D,-}\) with the x-component of the focal point \({\varPhi }_x(D)\). In the other two quadrants, \(\hat{x}\) is moving away from \(\theta _x\), a phenomenon that never occurs in system (28). We say that \(\kappa _{B,+} \cup \kappa _{D,-}\) is the matching region, and \(\kappa _{B,-} \cup \kappa _{D,+}\) is the non-matching region.

Proof

(Proof of Point (1) of Theorem 4.3)

Recall that we wish to show that for \(\delta >0\) and \(\hat{I}^0 \in \hat{\mathcal {R}}(\epsilon )\), there exists an \(\epsilon (\delta ,\hat{I}^0)>0\) such that for all \(\epsilon \le \epsilon (\delta ,\hat{I}^0)\) we have \(|\hat{x}(T_y) - x(T_y)| \le \delta \). We shall assume for this proof that x reaches the black wall in (28), so that \(x(t) = \theta _x\) for \(t \in [T_x,T_y]\) for some \(T_x < T_y\). A similar proof works for the case when x does not reach the black wall, but we will not show the details here.

Corollary 6.6 states that if for some \(T<T_y\)

$$\begin{aligned} \varphi (\hat{x}(0), \hat{z}(0), T) \in [\theta _x-\delta ,\theta _x+\delta ] \times \{ \theta _z \} \subset S_1 \cup S_2 , \end{aligned}$$

then \(\hat{x}(t) \in [\theta _x-\delta ,\theta _x+\delta ]\) for \(t \in [T,T_y]\). Therefore it is sufficient to find \(\epsilon \) sufficiently small to ensure that \((\hat{x}(T),\hat{z}(T)) \in [\theta _x-\delta ,\theta _x+\delta ] \times \{ \theta _z \}\) for some \(T<T_y\).

We will do this by splitting the trajectory \(\varphi (\hat{x}(0), \hat{z}(0), t)\) into pieces that cross the matching and non-matching regions in an alternating fashion. The time taken to cross the matching regions is controlled by the decay rate \(\gamma _x\) and cannot be altered, but the time taken to cross a non-matching region is bounded above by \(\epsilon \ln 2\). To see this we set \(\hat{z}(0)\) to its maximum and minimum possible values \(\hat{z}(0) = \theta _z\pm \beta _z\) in Eq. (11) for \(\hat{T}_z\), the time taken to travel from \(\hat{z}(0)\) to \(\theta _z\):

$$\begin{aligned} \hat{T}_z=-\epsilon \;\ln \left( \frac{\theta _z-(\theta _z\mp \beta _z)}{\hat{z}(0)-(\theta _z\mp \beta _z)}\right) \le \epsilon \ln 2. \end{aligned}$$
(43)

Therefore our strategy will be to limit the time spent in the non-matching region by taking \(\epsilon \) small.

Fig. 13

Schematic demonstrating the proof of Point (1) of Theorem 4.3. In reality, the straight line segments would be curves. Intermediate labels are given to points on the trajectory, and each intervening segment is labeled by the travel time along the segment. The dotted lines denote the region \([\theta _x-\delta ,\theta _x+\delta ] \times \{\theta _z\}\). a The initial condition is in a matching cell. After time \(T_x\), the trajectory reaches the hyperplane \(\hat{x}=\theta _x\). After a further time \(t_1\), it intersects the hyperplane \(\hat{z} = \theta _z\) within the \(\delta \) region. b The initial condition is in a non-matching cell. At time \(T_x\), the trajectory has not yet reached \(\hat{x} = \theta _x\); it requires a further time \(t_2\) to arrive. Then the time \(t_1\) to reach \(\hat{z} = \theta _z\) is analogous to that in the matching case (a) (Color figure online)

We first consider \((\hat{x}(0),\hat{z}(0))\) in a matching region, shown in Fig. 13a for the example \((\hat{x}(0),\hat{z}(0)) \in \kappa _{B,+}\). A similar picture exists for \((\hat{x}(0),\hat{z}(0)) \in \kappa _{D,-}\), and the following argument holds for both cells. The solutions \(\hat{x}(t)\) and x(t) are identical until they reach threshold \(\theta _x\) after time \(T_x\). At this point the solution \(\varphi (\hat{x}(0), \hat{z}(0), t)\) enters the non-matching region while \(x(t)=\theta _x\). The difference \(|\hat{x} (t)-x(t) |\) will therefore grow for some time \(t_1\) until \(\varphi (\hat{x}(0), \hat{z}(0), t)\) reaches the next threshold \(\hat{z}(T_x+t_1) = \theta _z\). We denote \(\hat{x}_1 :=\hat{x}(T_x+t_1)\). We want to choose \(\epsilon \) small enough so that \(t_1 + T_x < T_y\) and \(|\hat{x}_1 - \theta _x| \le \delta \).

Using (43) we choose \(\epsilon _1>0\) such that \(t_1 \le \epsilon _1 \ln 2 < T_y-T_x\). Then, noticing that

$$\begin{aligned} |\hat{x}_1 - \theta _x|&= | {\varPhi }_x+(\theta _x-{\varPhi }_x)e^{-\gamma _xt_1} - \theta _x |, \end{aligned}$$
(44)

we choose \(\epsilon _2>0\) such that

$$\begin{aligned} t_1&\le \epsilon _2 \ln 2 \le -\frac{1}{\gamma _x}\ln \left( 1-\frac{\delta }{|{\varPhi }_x-\theta _x|}\right) . \end{aligned}$$
(45)

This completes the matching case if we take \(\epsilon \le \min \{\epsilon _1,\epsilon _2\}\).

Now take \((\hat{x}(0),\hat{z}(0))\) in the non-matching region, as shown in Fig. 13b for the example \((\hat{x}(0),\hat{z}(0)) \in \kappa _{D,+}\). As before, the following argument also holds for \(\kappa _{B,-}\). In the non-matching case, x(0) has the x-component of the focal point \({\varPhi }_x\) and \(\hat{x}(0)\) has the x-component of the focal point \({\varPhi }_x^{\prime }\). This means that if \((\hat{x}(0),\hat{z}(0)) \in \kappa _{B,-} \) then \({\varPhi }_x = {\varPhi }_x(B), {\varPhi }_x^{\prime }={\varPhi }_x(D)\) while if \((\hat{x}(0),\hat{z}(0)) \in \kappa _{D,+}\) we have \({\varPhi }_x = {\varPhi }_x(D)\) and \( {\varPhi }_x^{\prime }={\varPhi }_x(B)\). We will need to analyze the crossing of three consecutive cells, while still applying the key estimate (45). We will step through the trajectory cell by cell, counting down a sequence of turning points \(\hat{x}(0) \rightarrow \hat{x}_3 \rightarrow \hat{x}_2 \rightarrow \hat{x}_1\) in order to match the notation in the first case. The matching notation can be seen side-by-side in Fig. 13.

We outline the gist of our argument here, with formulas to follow. Let \(t_3\) be the time until \(\varphi (\hat{x}(0), \hat{z}(0), t)\) reaches the \(\theta _z\) threshold. If the initial condition satisfies \(|\hat{x}(0)-\theta _x| = |x(0)-\theta _x| \ge \delta \), then \(\hat{x}_3 := \hat{x}(t_3) \not \in [\theta _x-\delta ,\theta _x+\delta ]\). Therefore we must wait until the second encounter with the \(\theta _z\) threshold to ensure that \((\hat{x},\hat{z})\) is in the desired region \([\theta _x-\delta ,\theta _x+\delta ] \times \{ \theta _z \}\).

From \((\hat{x}_3, \theta _z)\), the trajectory \(\varphi (\hat{x}(0), \hat{z}(0), t)\) enters a matching region. To compare this trajectory to the trajectory of the two-dimensional system, we note that at time \(T_x\), \(x(T_x) = \theta _x\). However, \(\varphi (\hat{x}(0), \hat{z}(0), T_x)\) has not finished crossing the matching region to reach \(\theta _x\), because \(|\hat{x}_3 - \theta _x|>|\hat{x}(0)-\theta _x|\). Therefore \(\hat{x}_2 := \hat{x}(T_x) \not =\theta _x\). We define \(t_2\) to be the time it takes the trajectory to reach \(\theta _x\); that is

$$\begin{aligned} \varphi (\hat{x}(0), \hat{z}(0), T_x+t_2) = (\theta _x, \hat{z}_2) . \end{aligned}$$

At this point, \(\hat{x}(T_x+t_2)=\theta _x\). Lastly we define a time \(t_1\), which is the travel time from \((\theta _x, \hat{z}_2)\) to a point \((\hat{x}_1, \theta _z)\) where \(\hat{x}_1 := \hat{x}(T_x+t_2+t_1)\). Analogously to the matching case, we must ensure that \(|\hat{x}_1 - \theta _x|\le \delta \) and \(T_x+t_2+t_1 < T_y\).

The value of \(\hat{x}(t) \) on its first visit to the \(\theta _z\) threshold is given by

$$\begin{aligned} \hat{x}_3 := {\varPhi }_x^{\prime }+(\hat{x}(0)-{\varPhi }_x^{\prime })e^{-\gamma _xt_3}, \end{aligned}$$

and the value of \(\hat{x}_2 := \hat{x}(T_x)\) is given by

$$\begin{aligned} \hat{x}_2 := {\varPhi }_x+(\hat{x}_3-{\varPhi }_x)e^{-\gamma _x(T_x-t_3)} \ne \theta _x. \end{aligned}$$

The travel time from \(\hat{x}_2\) to the threshold \(\theta _x\) is given by the time \(t_2\):

$$\begin{aligned} t_2=-\frac{1}{\gamma _x}\;\ln \left( \frac{\theta _x-{\varPhi }_x}{\hat{x}_2-{\varPhi }_x}\right) . \end{aligned}$$

Since by (43) we have \(t_3 \le \epsilon \ln 2\), it is easy to see that \(\hat{x}_2 \rightarrow \theta _x\) as \(\epsilon \rightarrow 0\), which implies \(t_2 \rightarrow 0\) as \(\epsilon \rightarrow 0\).

To complete the argument, we need to estimate \(t_1\). To do this we note that \(\hat{x}(T_x+t_2) = \theta _x\) which puts us in the same situation that has been discussed in the matching region case. Therefore, we can use the formula as in (44) for \(|\hat{x}_1-\theta _x|\) to estimate \(t_1 \le \epsilon \ln 2\) as in (45). Thus we choose \(\epsilon _1>0\) such that \(t_2+t_1 < T_y-T_x\), choose \(\epsilon _2>0\) such that (45) holds, and then take \(\epsilon < \min \{\epsilon _1,\epsilon _2\}\). This completes the proof.
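
A numerical companion to this proof, with hypothetical parameter values: since the protein-only trajectory satisfies \(x(T_y)=\theta _x\) once it has reached the black wall, the sketch below integrates the \((\hat{x},\hat{z})\) subsystem of (30) exactly between threshold events for a matching-cell initial condition and reports \(|\hat{x}(T_y)-\theta _x|\), which should shrink roughly in proportion to \(\epsilon \).

```python
import math

# Sketch supporting Point (1); all parameter values are hypothetical. The 2D solution slides on
# the black wall, so x(T_y) = theta_x, and we only need x-hat(T_y) from the PTM subsystem (30).
gamma_x, gamma_y = 1.0, 1.0
theta_x, theta_z, theta_y, beta_z = 1.0, 1.0, 1.0, 0.5
Phi_B, Phi_D, Phi_y = 0.5, 1.5, 2.0
x0, z0, y0 = 1.3, 1.0 + 1e-9, 0.2                  # (x0, z0) in the matching cell kappa_{B,+}

T_y = -(1 / gamma_y) * math.log((theta_y - Phi_y) / (y0 - Phi_y))   # common exit time

def hit_time(v0, target, focal, rate):
    if v0 == focal:
        return math.inf
    r = (target - focal) / (v0 - focal)
    return -math.log(r) / rate if 0.0 < r < 1.0 else math.inf

def x_hat_at(T, eps):
    """Exact event-driven integration of (x-hat, z-hat) in (30) up to time T."""
    t, x, z = 0.0, x0, z0
    for _ in range(100000):
        if t >= T:
            break
        fx = Phi_B if z > theta_z else Phi_D
        fz = theta_z + beta_z if x > theta_x else theta_z - beta_z
        dt = min(hit_time(x, theta_x, fx, gamma_x), hit_time(z, theta_z, fz, 1.0 / eps), T - t)
        x = fx + (x - fx) * math.exp(-gamma_x * dt)
        z = fz + (z - fz) * math.exp(-dt / eps)
        t += dt
    return x

for eps in (0.1, 0.01, 0.001):
    print(f"eps = {eps:6.3f}   |x_hat(T_y) - theta_x| = {abs(x_hat_at(T_y, eps) - theta_x):.5f}")
```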

1.5 Point (2) of Theorem 4.3

The proof of Point (2) requires some preliminary discussion, definitions, and lemmas. The excluded region \({\mathcal {E}}(\epsilon ,\delta )\) is composed of initial conditions \(\hat{I}^0\) that lead to trajectories in system (30) whose \(\hat{x}\) component is insufficiently close to the corresponding x trajectory in system (28). A symbolic description of \({\mathcal {E}}(\epsilon ,\delta )\) in terms of inequalities and set operations is possible, but tedious. For our purposes, it is sufficient to describe some of the more interesting regions of \(\tilde{\mathcal {E}}(\epsilon ,\delta )\) and prove that they have nonempty interior, where recall that \({\mathcal {E}}(\epsilon ,\delta ) = \tilde{\mathcal {E}}(\epsilon ,\delta ) \times (\prod _{i \not = J} {\mathcal {R}}_{y_i})\), and \(\tilde{\mathcal {E}}(\epsilon ,\delta ) \subset \hat{\mathcal {R}}^3(\epsilon )\).

We shall consider initial conditions \(I^0 \in {\mathcal {R}}^2\) where the time \(T_x\) for the solution to reach the black wall is less than \(T_y\), so that the trajectory starting at \(I^0\) enters the black wall. We know that there is an open set of initial conditions in \({\mathcal {R}}^2\) satisfying this condition, so by the proof of Lemma 6.2, there is a corresponding open set of initial conditions \({\mathcal {I}}\subset \hat{\mathcal {R}}^3(\epsilon )\) intersecting \({\mathcal {W}}\) from (22). Part of the excluded region \(\tilde{\mathcal {E}}(\epsilon ,\delta )\) is the set \(\{ \hat{I}^0 \in {\mathcal {I}}\; \mid \; |\hat{x}(T_y;\hat{I}^0) - \theta _x| > \delta \}\). This is still a complicated set to describe exactly, so we will intersect it with the Poincaré section \(S_1\). The resulting set exhibits interesting “stripes” in the \((\hat{x}, \hat{z})\) phase space.

Definition 6.7

We denote the flow-defined map \( h_{T_y}: \hat{\mathcal {R}}^3(\epsilon ) \rightarrow \{ \hat{y} = \theta _y \}\) via (30) as

$$\begin{aligned} h_{T_y} \,:\, (\hat{x}(0), \hat{y}(0), \hat{z}(0)) \mapsto (\hat{x}(T_y), \theta _y, \hat{z}(T_y)). \end{aligned}$$

This map takes the initial value \(\hat{I}^0\) to its value at the time of exit \(T_y\) from \(\hat{\mathcal {R}}^3(\epsilon )\).

Our basic approach will be to take the preimage of the \(\delta \)-strip

$$\begin{aligned} U(\delta ) := [\theta _x-\delta ,\theta _x+\delta ] \times [\theta _z-\beta _z,\theta _z+\beta _z] \times \{ y=\theta _y\} \end{aligned}$$
(46)

under \(h_{T_y}\). The regions which fall outside of the preimage will not map into the desirable region \(U(\delta )\). To motivate our work, we provide an illustration of the preimage of \(U(\delta )\) projected onto (\(\hat{x}\),\(\hat{z}\)) space in Fig. 3 for increasing exit times \(T_y\). The preimage really exists on some plane \(\{ y(0) \;\mid \; y(T_y) = \theta _y \text{ for a fixed } T_y \}\), so that the unions of the preimages over \(T_y\) are stacked images like those in Fig. 3, with the more twisted regions farther from the hyperplane \(\{y=\theta _y\}\).

Definition 6.8

For \(\delta >0\), let \(P(\delta ,T_y) = h_{T_y}^{-1}(U(\delta ))\) be the preimage of the compact region \(U(\delta )\) from (46). The set \(P(\delta ,T_y) \cap S_1\) is the collection of intervals

$$\begin{aligned} P_i(\delta ,T_y) := [\hat{x}_i^{*}-\alpha _i^-,\hat{x}_i^{*}+\alpha _i^+] \end{aligned}$$

where \(h_{T_y}(\hat{x}_i^{*},\theta _z,y(0)) = (\theta _x,\hat{z}(T_y),\theta _y)\) and \(h_{T_y}(\alpha _i^+,\theta _z,y(0)) = (\theta _x+\delta ,\hat{z}(T_y),\theta _y)\). Due to the intersection with \(S_1\), either \(h_{T_y}(\alpha _i^-,\theta _z,y(0)) = (\theta _x-\delta ,\hat{z}(T_y),\theta _y)\) or \(\alpha _i^- = \hat{x}_i^{*} - \theta _x\) if the preimage value of \(\theta _x-\delta \) is less than \(\theta _x\).

The intervals \(P_i\) are seen in Fig. 3 as the intersection of the shaded regions with the horizontal axis to the right of \(\hat{x} = \theta _x\). The first panel has one such interval, the second and third have two, and the fourth has three.
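
To visualize the intervals \(P_i\) and their complement \(\bar{S_x}(\delta ,T_y)\) without reproducing Fig. 3, the sketch below scans initial \(\hat{x}(0)\) values on \(S_1\), pushes each forward to a fixed exit time with the exact event-driven flow of (30), and reports the values whose exit point misses the \(\delta \)-strip. All parameter values (including the exit time standing in for a choice of y(0) via Lemma 6.9) are hypothetical; for these values the excluded initial conditions typically appear in separate bands, mirroring the striped structure.

```python
import math

# Sketch of the preimage structure behind Definition 6.8; all parameter values are hypothetical.
gamma_x, eps, delta = 1.0, 0.1, 0.015
theta_x, theta_z, beta_z = 1.0, 1.0, 0.5
Phi_B, Phi_D = 0.5, 1.5
T_y = 0.6                                          # fixed exit time, standing in for y(0)

def hit_time(v0, target, focal, rate):
    if v0 == focal:
        return math.inf
    r = (target - focal) / (v0 - focal)
    return -math.log(r) / rate if 0.0 < r < 1.0 else math.inf

def x_hat_at_exit(x_init):
    """x-hat component of h_{T_y} for an initial condition on S_1, via exact event stepping."""
    t, x, z = 0.0, x_init, theta_z + 1e-12
    for _ in range(100000):
        if t >= T_y:
            break
        fx = Phi_B if z > theta_z else Phi_D
        fz = theta_z + beta_z if x > theta_x else theta_z - beta_z
        dt = min(hit_time(x, theta_x, fx, gamma_x), hit_time(z, theta_z, fz, 1.0 / eps), T_y - t)
        x = fx + (x - fx) * math.exp(-gamma_x * dt)
        z = fz + (z - fz) * math.exp(-dt / eps)
        t += dt
    return x

excluded = [round(x0, 3) for x0 in (theta_x + 0.005 * k for k in range(1, 100))
            if abs(x_hat_at_exit(x0) - theta_x) > delta]
print("x-hat(0) values on S_1 whose exit point misses the delta-strip:")
print(excluded)
```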

Observe that the region

$$\begin{aligned} \bar{S_{x}}(\delta ,T_y) = S_1 \setminus \bigcup _i P_i(\delta ,T_y) \end{aligned}$$

contains initial conditions that lead to solutions that will be outside of the \(\delta \)-strip in \(\hat{\mathcal {R}}^3(\epsilon )\) after time \(T_y\). The set \(\bar{S_{x}}(\delta ,T_y) \,\cap \, \{\hat{x}(0) \mid T_x < T_y \}\) consists of initial data \((x(0),\theta _z, y(0))\) such that their projection (x(0), y(0)) will reach the black wall in system (28), and subsequently slide along the black wall to the exit at \((\theta _x, \theta _y)\). Yet, solutions of system (30) starting in \(\bar{S_{x}}(\delta ,T_y) \cap \{\hat{x}(0) \mid T_x < T_y \}\) will exit \(\hat{\mathcal {R}}^3(\epsilon )\) outside of the \(\delta \)-strip surrounding \((\theta _x, \theta _y)\). If we define

$$\begin{aligned} Q(\delta ,T_y) := \left( \bar{S_x}(\delta ,T_y) \cap \{\hat{x}(0) \mid T_x < T_y \} \right) \times \{\hat{z} = \theta _z\} \times \{ y(0) \mid y(T_y) = \theta _y\}, \end{aligned}$$
(47)

then

$$\begin{aligned} \left( \hat{\mathcal {R}}^3(\epsilon ) \,\cap \, \bigcup _{T_y} Q(\delta ,T_y) \right) \subset \tilde{\mathcal {E}}(\epsilon ,\delta ). \end{aligned}$$
(48)

To complete the proof of Theorem 4.3, it remains to show that the set in (48) is nonempty and can be widened to a region with nonempty interior. Observe that if \(T_y^1 \ne T_y^2\), then \(Q(\delta ,T_y^1) \,\cap \, Q(\delta ,T_y^2)=\emptyset \), as can be seen in (47). Therefore (48) is a disjoint union parameterized by \(T_y\) and it is sufficient for nonemptiness to show that \(\hat{\mathcal {R}}^3(\epsilon ) \cap Q(\delta ,T_y) \ne \emptyset \) for one \(T_y\). Several intermediate results will be of use.

Lemma 6.9

Consider an initial condition \((x(0), y(0), \theta _z) \in \hat{S}_1\) and let \(T_y\) be the exit time for the resulting solution of (30) from \(\hat{\mathcal {R}}^3(\epsilon )\). Then \(T_y \rightarrow 0\) if, and only if, \(y(0) \rightarrow \theta _y\).

Proof

By (11),

$$\begin{aligned} T_y = -\frac{1}{\gamma _y}\;\ln \left( \frac{\theta _y-{\varPhi }_y}{y(0)-{\varPhi }_y}\right) . \end{aligned}$$

Notice that y(0) uniquely defines the exit time \(T_y\), and that \(T_y\) approaches 0 if and only if \(y(0) \rightarrow \theta _y\).
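
A quick numerical reading of this formula with hypothetical values of \(\gamma _y\), \(\theta _y\), and \({\varPhi }_y\): as y(0) approaches \(\theta _y\) from below, the exit time \(T_y\) shrinks to zero.

```python
import math

# Check of the exit-time formula in Lemma 6.9; parameter values are hypothetical.
gamma_y, theta_y, Phi_y = 1.0, 1.0, 2.0        # Phi_y > theta_y, so y increases toward theta_y
for y0 in (0.5, 0.9, 0.99, 0.999):
    T_y = -(1 / gamma_y) * math.log((theta_y - Phi_y) / (y0 - Phi_y))
    print(f"y(0) = {y0:5.3f}  ->  T_y = {T_y:.5f}")
```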

Lemma 6.10

Some consequences of Definition 6.8.

  1. For a fixed y(0) and associated \(T_y\), there exists a maximal element \(\hat{x}_0^{*}\) of the set \(\{\hat{x}^{*}_i \}\) as given in Definition 6.8.

  2. For a fixed y(0) and associated \(T_y\), if \(\hat{x}(0) \in [\theta _x,\hat{x}_0^{*})\) then \(\hat{T}_x < T_y\) is satisfied, where \(\hat{T}_x\) is the travel time to \(\theta _x\) from (11)

    $$\begin{aligned} \hat{T}_x = -\frac{1}{\gamma _x}\;\ln \left( \frac{\theta _x-{\varPhi }_x}{\hat{x}(0)-{\varPhi }_x}\right) . \end{aligned}$$

  3. For a fixed y(0) and associated \(T_y\), as \(\delta \rightarrow 0\), the points \(\alpha _i^\pm \) as given in Definition 6.8 satisfy \(\alpha _i^\pm \rightarrow \hat{x}_i^{*} \).

  4. As \(y(0) \rightarrow \theta _y\) and \(T_y \rightarrow 0\), \(\hat{x}_0^{*} \rightarrow \theta _x\).

Proof

Fix y(0) and therefore \(T_y\). Define \((\hat{x}_0^{*},\theta _z,y(0)) \in S_1\) to be the initial condition whose trajectory arrives at the intersection of \(\hat{x} = \theta _x\) and \(y = \theta _y\) exactly at time \(T_y\). The formula for \(\hat{x}_0^{*}\) is

$$\begin{aligned} \hat{x}_0^{*} = {\varPhi }_x(B) + (\theta _x-{\varPhi }_x(B))e^{\gamma _x T_y} \end{aligned}$$
(49)

by (10) and the fact that trajectories starting in \(S_1\) flow through \(\kappa _{B,+}\) as shown in Fig. 11.

By Definition 6.8, each \(\hat{x}_i^{*}\) is a preimage of \(\theta _x\). Consider \(\hat{x}(0) > \hat{x}_0^{*}\). By the monotonicity of each component of the flow in \(\kappa _{B,+}\), the \(\hat{x}\) component of \(h_{T_y}(\hat{x}(0),\theta _z, y(0))\) is greater than the \(\hat{x}\) component of \(h_{T_y}(\hat{x}_0^{*},\theta _z, y(0))\), which is \(\theta _x\). In other words, \(\hat{x}(T_y)>\theta _x\) means \(\hat{x}(0)\) cannot equal any \(\hat{x}_i^{*}\). Therefore \(\hat{x}_0^{*}\) is maximal, as desired in (1).

Now consider \(\hat{x}(0) < \hat{x}_0^{*}\). Again by monotonicity in \(\kappa _{B,+}\), there exists a time \(\hat{T}_x <T_y\) such that \(h_{\hat{T}_x}(\hat{x}(0),\theta _z,y(0)) = (\theta _x,z(\hat{T}_x),y(\hat{T}_x))\). This proves (2).

As \(\delta \rightarrow 0\), by continuity we have that \((\alpha _i^\pm ,\theta _z,y(0)) = h^{-1}_{T_y}(\theta _x\pm \delta ,\hat{z}(T_y),\theta _y) \rightarrow h^{-1}_{T_y}(\theta _x,\hat{z}(T_y),\theta _y) = (\hat{x}_i^{*},\theta _z,y(0))\). This proves (3).

Now let \(y(0) \rightarrow \theta _y\) and \(T_y \rightarrow 0\). By the equation for \(\hat{x}_0^{*}\) in (49), we see that \(\hat{x}_0^{*} \rightarrow \theta _x\), as desired in (4). \(\square \)

Lemma 6.11

For sufficiently small \(T_y\), there exists \(\Delta (\epsilon )>0\) such that for \(0<\delta <\Delta (\epsilon )\), \(Q(\delta ,T_y) \cap \hat{\mathcal {R}}^3(\epsilon ) \ne \emptyset \).

Proof

We can choose \(T_y\) sufficiently small so that \(\hat{x}_0^{*} < \theta _x+\mu _2-\nu _\epsilon \) by Lemma 6.10 (4), and also small enough so that \(y(0) \in {\mathcal {R}}_y\) by Lemma 6.9. Then \((\hat{x}^{*}_0,y(0),\theta _z) \in \hat{\mathcal {R}}^3(\epsilon )\) as in (23). Now let \(\hat{x}_1^{*}\) be the next largest preimage of \(\theta _x\) under \(h_{T_y}\), so that \(\hat{x}_i^{*}< \hat{x}_1^{*} <\hat{x}_0^{*} \) for all \(i>1\) by Lemma 6.10 (1). Then choose \(\Delta (\epsilon )\) sufficiently small so that

  1. \(\hat{x}_1^{*} > \theta _x + \Delta (\epsilon )\), and

  2. \(\alpha _1^+ < \alpha _0^-\).

The first can be accomplished by taking \(\Delta (\epsilon ) < \hat{x}^{*} _1 - \theta _x\), and the second can be satisfied because \(\alpha _1^+ \rightarrow \hat{x}_1^{*} \) and \(\alpha _0^- \rightarrow \hat{x}_0^{*} \) as \(\Delta (\epsilon ) \rightarrow 0\) by Lemma 6.10 (3).

By construction, the interval \((\alpha _1^+,\alpha _0^-) \subset \hat{\mathcal {R}}_x(\epsilon )\) is nonempty, maps to the right of \([\theta _x,\theta _x+\Delta (\epsilon )]\), and satisfies \(\hat{T}_x < T_y\) by Lemma 6.10 (2). Then the region

$$\begin{aligned} Q(\Delta (\epsilon ),T_y) = (\alpha _1^+,\alpha _0^-) \times \{\hat{z} = \theta _z\} \times \{ y(0) \in {\mathcal {R}}_y \mid y(T_y) = \theta _y\} \subset \hat{\mathcal {R}}^3(\epsilon ) \end{aligned}$$

is nonempty.

Now take \(0< \delta <\Delta (\epsilon )\) while holding \(T_y\) constant. We know that the preimages of the \(\delta \)- and \(\Delta (\epsilon )\)-strips satisfy \(P(\delta ,T_y) \subset P(\Delta (\epsilon ),T_y)\), so that

$$\begin{aligned} \Big ( Q(\delta ,T_y) \, \cap \, \hat{\mathcal {R}}^3(\epsilon ) \Big ) \supset \Big ( Q(\Delta (\epsilon ),T_y) \, \cap \, \hat{\mathcal {R}}^3(\epsilon ) \Big ) \end{aligned}$$

is nonempty as desired. \(\square \)

To finish the proof of Point (2) of Theorem 4.3, choose a \(T_y^{*} \) satisfying the requirements of Lemma 6.11 and fix the resulting \(\Delta (\epsilon )\). For \(0<\delta <\Delta (\epsilon )\), the excluded region

$$\begin{aligned} {\tilde{\mathcal {E}}}(\epsilon ,\delta ) \supset \left( \hat{\mathcal {R}}^3(\epsilon ) \,\cap \, \bigcup _{T_y} Q(\delta ,T_y) \right) \supset \left( \hat{\mathcal {R}}^3(\epsilon ) \,\cap \, Q(\delta ,T_y^{*} ) \right) \ne \emptyset . \end{aligned}$$

Therefore \({\mathcal {E}}(\epsilon ,\delta ) = {\tilde{\mathcal {E}}}(\epsilon ,\delta ) \times \prod _{i\ne J} {\mathcal {R}}_{y_i}\) is nonempty. Finally, continuous dependence on initial conditions ensures that there is an open neighborhood around any initial condition in \(Q(\delta ,T_y^{*} )\) that also belongs to \(\tilde{\mathcal {E}}(\epsilon ,\delta )\). In other words, we can thicken the set in \(\hat{y}\) and \(\hat{z}\), leading to a nonempty interior in \(\hat{\mathcal {R}}^3(\epsilon )\). This completes the proof.


Cite this article

Fan, G., Cummins, B. & Gedeon, T. Convergence Properties of Posttranslationally Modified Protein–Protein Switching Networks with Fast Decay Rates. Bull Math Biol 78, 1077–1120 (2016). https://doi.org/10.1007/s11538-016-0175-z
