Tracking Computability of GPAC-Generable Functions

Conference paper

Logical Foundations of Computer Science (LFCS 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11972)

Abstract

Analog computation attempts to capture any type of computation that can be realized by a physical system or process, including but not limited to computation over continuous measurable quantities. A pioneering model is the General Purpose Analog Computer (GPAC), introduced by Shannon in 1941. The GPAC is capable of manipulating real-valued data streams; however, it has been shown to be strictly less powerful than other models of computation on the reals, such as computable analysis.

In previous work, we proposed an extension of the Shannon GPAC, denoted LGPAC, designed to overcome its limitations. Not only is the LGPAC model capable of expressing computation over general data spaces \(\mathcal {X}\), it also directly incorporates approximating computations by means of a limit module. In this paper, we compare the LGPAC with a digital model of computation based on effective representations (tracking computability). We establish general conditions under which LGPAC-generable functions are tracking computable.


Notes

  1.

    More details can be found in [11, 12].

  2.

    Here, assumption (1d), that the original family of pseudonorms on \(\mathcal {X}\) is nondecreasing, is required; alternatively, one could introduce a double-indexing family such as \(\Vert u\Vert _{n,m}=\Vert u(0)\Vert _n+\sup _{0\le t\le m}\Vert u'(t)\Vert _n\).

  3.

    By assumption, addition and scalar multiplication are defined on \(\mathcal {X}\). The integral can be generalized to \(C^1(\mathbb {T},\mathcal {X})\) via Riemann sums: see, for example, [14, p. 89].

  4.

    For ease of notation we write \(\alpha \{T\}(n)\) instead of \(\alpha (\{T\}(n))\).

  5.

    The computability of basic algebraic operations is usually one of the first results to be proved for a model of computation. For example, in the framework of computable analysis, this is proved in [10, Sect. 0.4]; and in the framework of type-2 theory of effectivity, this is proved in [22, Sect. 2.1]. The techniques carry over to the tracking computability framework in this paper.

  6.

    Observe that \(t_j\le 2^{t_j-1}\) for any \(t_j=j+2\ge 2\).

References

  1. Bournez, O., Campagnolo, M.L., Graça, D.S., Hainry, E.: Polynomial differential equations compute all real computable functions on computable compact intervals. J. Complex. 23(3), 317–335 (2007). https://doi.org/10.1016/j.jco.2006.12.005


  2. Bush, V.: The differential analyzer. A new machine for solving differential equations. J. Frankl. Inst. 212(4), 447–488 (1931)


  3. Campagnolo, M.L., Moore, C., Costa, J.F.: Iteration, inequalities, and differentiability in analog computers. J. Complex. 16(4), 642–660 (2000). https://doi.org/10.1006/jcom.2000.0559


  4. Coddington, E.A., Levinson, N.: Theory of Ordinary Differential Equations. Tata McGraw-Hill Education, New York (1955)


  5. Graça, D.S., Campagnolo, M.L., Buescu, J.: Robust simulations of Turing machines with analytic maps and flows. In: Cooper, S.B., Löwe, B., Torenvliet, L. (eds.) CiE 2005. LNCS, vol. 3526, pp. 169–179. Springer, Heidelberg (2005). https://doi.org/10.1007/11494645_21


  6. Hartree, D.R.: Calculating Instruments and Machines. Cambridge University Press, Cambridge (1950)


  7. Kohlenbach, U., Lambov, B.: Bounds on iterations of asymptotically quasi-nonexpansive mappings. BRICS Rep. Ser. 10(51) (2003)


  8. Kohlenbach, U., Leuştean, L.: Asymptotically nonexpansive mappings in uniformly convex hyperbolic spaces. J. Eur. Math. Soc. 12(1), 71–92 (2010). https://doi.org/10.4171/JEMS/190


  9. Mal’cev, A.I.: Constructive algebras I. Russ. Math. Surv. 16, 77–129 (1961)


  10. Pour-El, M.B., Richards, I.: Computability in Analysis and Physics. Springer, Heidelberg (1989)


  11. Poças, D., Zucker, J.: Analog networks on function data streams. Computability 7(4), 301–322 (2018). https://doi.org/10.3233/COM-170077


  12. Poças, D., Zucker, J.: Approximability in the GPAC. Log. Methods Comput. Sci. 15(3) (2019). https://doi.org/10.23638/LMCS-15(3:24)2019

  13. Reed, M., Simon, B.: Methods of Modern Mathematical Physics: Functional Analysis. Academic Press Inc., Cambridge (1980)


  14. Rudin, W.: Principles of Mathematical Analysis. International Series in Pure and Applied Mathematics, 3rd edn. McGraw-Hill, New York (1976)


  15. Shannon, C.: Mathematical theory of the differential analyzer. J. Math. Phys. 20, 337–354 (1941)


  16. Stoltenberg-Hansen, V., Tucker, J.V.: Effective algebras. Handbook of Logic in Computer Science, vol. 4, pp. 357–526. Oxford University Press, Oxford (1995)


  17. Stoltenberg-Hansen, V., Tucker, J.V.: Concrete models of computation for topological algebras. Theoret. Comput. Sci. 219(1–2), 347–378 (1999)


  18. Thomson, W., Tait, P.: Treatise on Natural Philosophy, 2nd edn, pp. 479–508. Cambridge University Press, Cambridge (1880)


  19. Tucker, J.V., Zucker, J.I.: Abstract versus concrete computation on metric partial algebras. ACM Trans. Comput. Logic (TOCL) 5(4), 611–668 (2004). https://doi.org/10.1145/1024922.1024924


  20. Tucker, J.V., Zucker, J.I.: Computable total functions on metric algebras, universal algebraic specifications and dynamical systems. J. Logic Algebraic Program. 62(1), 71–108 (2005). https://doi.org/10.1016/j.jlap.2003.10.001


  21. Tucker, J.V., Zucker, J.I.: Abstract versus concrete computability: the case of countable algebras. In: Stoltenberg-Hansen, V., Väänänen, J. (eds.) Logic Colloquium ’03. Lecture Notes in Logic, pp. 377–408. Cambridge University Press, Cambridge (2006). https://doi.org/10.1017/9781316755785.019

  22. Weihrauch, K.: Computable Analysis: An Introduction. Texts in Theoretical Computer Science. Springer, Heidelberg (2000). https://doi.org/10.1007/978-3-642-56999-9



Acknowledgements

The research of Diogo Poças was supported by the Alexander von Humboldt Foundation with funds from the German Federal Ministry of Education and Research (BMBF). The research of Jeffery Zucker was supported by the Natural Sciences and Engineering Research Council of Canada.

Author information

Correspondence to Diogo Poças.

Appendix: Technical Details in the Proof of Lemma 2

Addition: let \(e_1,e_2,\ell \) be natural numbers, where \(e_1\) and \(e_2\) encode computable \(\mathcal {X}\)-streams \(u_1=\alpha (e_1)\) and \(u_2=\alpha (e_2)\). We need to show how to effectively compute a code \(e_+\) of some element \(u_+=\alpha (e_+)\) that approximates \(u_1+u_2\) to precision \(2^{-\ell -2}\), that is, such that \(\Vert u_+-(u_1+u_2)\Vert _{\ell }<2^{-\ell -2}\).

We know that \(u_1\) and \(u_2\) are given by some data tuples \((x^1_0,y^1_0,\ldots ,y^1_{N_1^2})\) and \((x^2_0,y^2_0,\ldots ,y^2_{N_2^2})\) respectively. First, we build a large common refinement, that is, a large discretization parameter \(\bar{N}\) which is a multiple of both \(N_1\) and \(N_2\), and data tuples \((\tilde{x}^1_0,\tilde{y}^1_0,\ldots ,\tilde{y}^1_{\bar{N}^2})\), \((\tilde{x}^2_0,\tilde{y}^2_0,\ldots ,\tilde{y}^2_{\bar{N}^2})\) that correspond to approximations \(\tilde{u}_1,\tilde{u}_2\) of \(u_1,u_2\) on a finer grid. For example, if \(\bar{N}=k\times N_1\), then \(\tilde{u}_1\) can be obtained by setting \(\tilde{y}^1_{ki+m}\approx \frac{k-m}{k}y^1_i+\frac{m}{k}y^1_{i+1}\) for \(0\le m<k\); each value in the new discretization is a convex combination of two consecutive values in the old discretization. Since addition and scalar multiplication are tracking computable in \(\mathcal {X}\), these convex combinations can be approximated to arbitrarily high precision.

To compute the addition on the common refinement, we can simply add the pointwise values, that is, set \(x^+_0\approx \tilde{x}^1_0+\tilde{x}^2_0\) and \(y^+_j\approx \tilde{y}^1_j+\tilde{y}^2_j\). By computing these sums with sufficiently high precision, we have indeed produced the desired code \(e_+\).
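The refinement-and-add step can be sketched as follows (an illustration only: Python floats stand in for codes of elements of \(\mathcal {X}_c\), the grid is treated as covering a fixed interval, and the sums are computed exactly rather than approximated to the required precision):

```python
from math import lcm

def refine(y, k):
    # Interpolate derivative samples y[0..M] onto a grid k times finer,
    # using the convex combinations ((k - m) * y[i] + m * y[i+1]) / k.
    M = len(y) - 1
    out = [((k - m) * y[i] + m * y[i + 1]) / k
           for i in range(M) for m in range(k)]
    out.append(y[M])  # endpoint of the refined grid
    return out

def add_streams(x1, y1, x2, y2):
    # Pointwise sum of two streams given as (initial value, derivative
    # samples), after moving both onto the common refinement.
    n1, n2 = len(y1) - 1, len(y2) - 1
    nbar = lcm(n1, n2)
    u1 = refine(y1, nbar // n1)
    u2 = refine(y2, nbar // n2)
    return x1 + x2, [a + b for a, b in zip(u1, u2)]
```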

By the previous discussion, we have a procedure \(\mathbf {add}:(e_1,e_2,\ell )\mapsto e_+\) such that, for \(u_1=\alpha (e_1),u_2=\alpha (e_2),u_+=\alpha (e_+)\), we have \(\Vert u_+-(u_1+u_2)\Vert _\ell <2^{-\ell -2}\). Next, assume we have codes \(c_1=\langle T_1,M_1\rangle , c_2=\langle T_2,M_2\rangle \) for computable elements u and v respectively; we wish to find a code \(c_+=\langle T_+,M_+\rangle \) for their sum \(w=u+v\). Let us introduce the notation \(u_i=\alpha \{T_1\}(i)\), \(v_i=\alpha \{T_2\}(i)\), \(w_i=\alpha \{T_+\}(i)\).

We shall set \(\{T_+\}(j)=\mathbf {add}(\{T_1\}(k_1(j)),\{T_2\}(k_2(j)),j)\), where \(k_1(j)=\{M_1\}(2j+2)\) and \(k_2(j)=\{M_2\}(2j+2)\). Intuitively, \(w_j\) is a (sufficiently good) approximation of \(u_{k_1(j)}+v_{k_2(j)}\). Furthermore, we set \(M_+\) as a code for the identity function. To show that \((w_j)\) is \({\text {id}}\)-convergent, fix \(\nu \) and suppose that \(i,j\ge \nu \). Observe that

$$\begin{aligned}\Vert w_i-w_j\Vert _\nu \le \,&\Vert w_i-(u_{k_1(i)}+v_{k_2(i)})\Vert _\nu +\Vert u_{k_1(i)}-u_{k_1(j)}\Vert _\nu \\&+\,\Vert v_{k_2(i)}-v_{k_2(j)}\Vert _\nu +\Vert w_j-(u_{k_1(j)}+v_{k_2(j)})\Vert _\nu .\end{aligned}$$

To bound the first term above, we observe that \(\Vert w_i-(u_{k_1(i)}\,+\,v_{k_2(i)})\Vert _\nu \le \Vert w_i-(u_{k_1(i)}+v_{k_2(i)})\Vert _i<2^{-i-2}\le 2^{-\nu -2}\); a similar argument holds for the fourth term. For the second term, note that by our choice of \(k_1(\nu )\) we have \(d(u_{k_1(i)},u_{k_1(j)})<2^{-2\nu -2}\), and by Proposition 1 this implies \(\Vert u_{k_1(i)}-u_{k_1(j)}\Vert _\nu <2^{-\nu -2}\); similarly for the third term. Putting all this together yields \(\Vert w_i-w_j\Vert _\nu <2^{-\nu }\), which again by Proposition 1 implies \(d(w_i,w_j)<2^{-\nu }\), as desired. A similar reasoning also proves that \(w_i\) converges to \(u+v\). Hence addition is tracking computable.
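The second level of the construction can be mirrored in a small sketch, where a code \(\langle T,M\rangle \) is modelled by a pair of Python functions and the hypothetical add_approx stands in for \(\mathbf {add}\) (here it computes the sum exactly):

```python
def add_codes(T1, M1, T2, M2, add_approx):
    # Code for the sum of two effective Cauchy sequences: the j-th term
    # approximates T1(k1) + T2(k2) with k1 = M1(2j+2), k2 = M2(2j+2),
    # and the new modulus is the identity.
    def T_plus(j):
        return add_approx(T1(M1(2 * j + 2)), T2(M2(2 * j + 2)), j)
    return T_plus, (lambda n: n)
```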

Scalar multiplication: in the same way as for addition, we show how to approximately compute the scalar multiplication at “two levels”. At the “first level”, let \(e_1,e_2,\ell \) be natural numbers encoding a computable \(\mathbb {R}\)-stream \(r=\alpha (e_1)\) and \(\mathcal {X}\)-stream \(u=\alpha (e_2)\) respectively. We need to show that we can compute the scalar multiplication ru to an arbitrary precision. In particular, we will show how to effectively compute a code \(e_\times \) of some element \(u_\times =\alpha (e_\times )\) such that \(\Vert u_\times -ru\Vert _{\ell }<2^{-\ell -2}\).

We know that r and u are given by some data tuples \((p_0,q_0,\ldots ,q_{N_1^2})\) and \((x_0,y_0,\ldots ,y_{N_2^2})\) respectively. First, we effectively find an upper bound \(K_\ell \) on the pseudonorms \(\Vert r\Vert _\ell \) and \(\Vert u\Vert _\ell \) by (approximately) computing the maximum of \(|p_0|\), \(\max _j|q_j|\), \(\Vert x_0\Vert _\ell \) and \(\max _j\Vert y_j\Vert _\ell \) (by assumption, pseudonorm evaluation is tracking computable on \(\mathcal {X}\)).

Next, we construct a large common refinement, say \((\tilde{p}_0,\tilde{q}_0,\ldots ,\tilde{q}_{\bar{N}^2})\) and \((\tilde{x}_0,\tilde{y}_0,\ldots ,\tilde{y}_{\bar{N}^2})\), corresponding to approximations \(\tilde{r},\tilde{u}\) of \(r,u\) on a finer grid, as we did for addition. To compute the multiplication on the common refinement, we recall the product rule for derivatives, \((ru)'(t)=r(t)u'(t)+r'(t)u(t)\). To compute this expression at equispaced values of t, we must first find the values of \(\tilde{r}(j/N),\tilde{u}(j/N)\). Since \(\tilde{r},\tilde{u}\) are piecewise quadratic, these can be recursively obtained by integration using the trapezoid rule,

$$\begin{aligned} \tilde{p}_{j+1}\approx \tilde{p}_j+\frac{1}{2N}(\tilde{q}_{j}+\tilde{q}_{j+1}),\qquad \qquad \tilde{x}_{j+1}\approx \tilde{x}_j+\frac{1}{2N}(\tilde{y}_{j}+\tilde{y}_{j+1}).\end{aligned}$$
(7)

Again, \(\tilde{p}_{j},\tilde{x}_{j}\) can be approximated to arbitrarily high precision. Therefore, \(\tilde{r}\tilde{u}\) can be approximated by the function \(u_\times \) given by \((x^\times _0,y^\times _0,\ldots ,y^\times _{\bar{N}^2})\), where \(x^\times _0\) is (the approximating computation of) \(\tilde{p}_0\tilde{x}_0\); and each \(y^\times _j\) is (the approximating computation of) \(\tilde{p}_j\tilde{y}_j+\tilde{q}_j\tilde{x}_j\).
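The trapezoid recursion (7) and the product-rule step can be sketched as follows (floats stand in for elements of \(\mathcal {X}_c\); the actual procedure only approximates each operation):

```python
def grid_values(a0, d, N):
    # Recover function values at the grid points from the initial value a0
    # and the derivative samples d, via the trapezoid rule (7).
    vals = [a0]
    for j in range(len(d) - 1):
        vals.append(vals[-1] + (d[j] + d[j + 1]) / (2 * N))
    return vals

def mult_streams(p0, q, x0, y, N):
    # Data tuple for r*u on a common grid, via the product rule
    # (ru)' = r u' + r' u.
    p = grid_values(p0, q, N)  # values of r at the grid points
    x = grid_values(x0, y, N)  # values of u at the grid points
    return p0 * x0, [p[j] * y[j] + q[j] * x[j] for j in range(len(y))]
```

For instance, with \(r(t)=u(t)=t\) the sketch returns the derivative samples of \(t^2\), namely \(2t\) at the grid points.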

There is one more error term in our analysis, since \(u_\times \) is piecewise quadratic whereas ru is piecewise quartic (as functions of t). To describe an effective bound on the approximation error \(\Vert u_\times -ru\Vert _\ell \), we need to take into account: the approximation errors for the refinement and the multiplications over \(\mathcal {X}_c\); the upper bound \(K_\ell \) on \(\Vert r\Vert _\ell \) and \(\Vert u\Vert _\ell \); the consecutive differences \(\max _j|\tilde{q}_{j+1}-\tilde{q}_j|,\max _j\Vert \tilde{y}_{j+1}-\tilde{y}_j\Vert _\ell \); and the discretization parameter \(\bar{N}\). Ultimately, we can bound this error effectively by choosing \(\bar{N}\) large enough.

By the previous discussion, we have a procedure \(\mathbf {mult}:(e_1,e_2,\ell )\mapsto e_\times \) such that, for \(r=\alpha (e_1),u=\alpha (e_2),u_\times =\alpha (e_\times )\), we have \(\Vert u_\times -ru\Vert _\ell <2^{-\ell -2}\). At the “second level”, assume we have codes \(c_1=\langle T_1,M_1\rangle , c_2=\langle T_2,M_2\rangle \) for computable elements r and u respectively; we wish to find a code \(c_\times =\langle T_\times ,M_\times \rangle \) for their product \(v=ru\). Let us introduce the notation \(r_i=\alpha \{T_1\}(i)\), \(u_i=\alpha \{T_2\}(i)\), \(v_i=\alpha \{T_\times \}(i)\).

First, for any \(\nu \in \mathbb {N}\), we can effectively find a uniform bound \(K(\nu )\) such that \(\Vert r_i\Vert _\nu , \Vert u_i\Vert _\nu <K(\nu )\) independently of i. This is because, letting \(\mu =\{M_1\}(\nu )\), we know that for \(i>\mu \) one has \(d(r_i,r_\mu )<2^{-\nu }\) and hence \(\Vert r_i-r_\mu \Vert _\nu <1\) by Proposition 1, so that \(\Vert r_i\Vert _\nu <\Vert r_\mu \Vert _\nu +1\). On the other hand, we can approximately compute \(\Vert r_i\Vert _\nu \) for each of the finitely many \(i\le \mu \). A similar analysis holds for \(\Vert u_i\Vert _\nu \). Taking (a sufficiently close approximation of) the maximum of these values gives the desired uniform bound.
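This uniform-bound computation admits a direct sketch (the hypothetical norm function stands in for the approximate pseudonorm evaluation on \(\mathcal {X}_c\)):

```python
def uniform_bound(T, M, norm, nu):
    # Effective bound on ||T(i)||_nu over all i: beyond mu = M(nu) every
    # term is within 1 of ||T(mu)||_nu, so max over i <= mu plus 1 works.
    mu = M(nu)
    return max(norm(T(i), nu) for i in range(mu + 1)) + 1
```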

Next, observe that for any \(r,\tilde{r}\in \mathbb {R}\), \(x,\tilde{x}\in \mathcal {X}\), \(\nu \in \mathbb {N}\), we have \(\Vert rx-\tilde{r}\tilde{x}\Vert _\nu \le |r|\Vert x-\tilde{x}\Vert _\nu +|r-\tilde{r}|\Vert \tilde{x}\Vert _\nu \); together with (3), we can derive the useful bound

$$\begin{aligned} \Vert r_{i_1}u_{j_1}-r_{i_2}u_{j_2}\Vert _\nu \le (\nu +1)K(\nu )\left( \Vert r_{i_1}-r_{i_2}\Vert _\nu +\Vert u_{j_1}-u_{j_2}\Vert _\nu \right) .\end{aligned}$$
(8)

We are now in a position to describe how to compute \(\{T_\times \}(\nu )\) for a given \(\nu \). First, find a uniform bound \(K(\nu )\) as described above. Second, find an integer C such that \(2^C>K(\nu )(\nu +1)\). Third, compute \(k_1(\nu )=\{M_1\}(2\nu +C+2)\) and \(k_2(\nu )=\{M_2\}(2\nu +C+2)\). Finally, return

$$\{T_\times \}(\nu )=\mathbf {mult}(\{T_1\}(k_1(\nu )),\{T_2\}(k_2(\nu )),\nu ).$$

Intuitively, this means that \(v_i\) is a (sufficiently good) approximation of \(r_{k_1(i)}u_{k_2(i)}\). We show that the sequence \(v_i\) constructed in this way is \({\text {id}}\)-convergent. Fix \(\nu \) and suppose that \(i,j\ge \nu \). Observe that

$$\begin{aligned} \Vert v_i-v_j\Vert _\nu \le \Vert v_i-r_{k_1(i)}u_{k_2(i)}\Vert _\nu \,+\,\Vert r_{k_1(i)}u_{k_2(i)}-r_{k_1(j)}u_{k_2(j)}\Vert _\nu \,+\,\Vert r_{k_1(j)}u_{k_2(j)}-v_j\Vert _\nu . \end{aligned}$$

The first term above, by construction, can be bounded as \(\Vert v_i-r_{k_1(i)}u_{k_2(i)}\Vert _\nu \le \Vert v_i-r_{k_1(i)}u_{k_2(i)}\Vert _i<2^{-i-2}\le 2^{-\nu -2}\), and similarly for the third term. In order to bound the second term, note that by our choice of \(k_1(\nu )\) we have that \(d(r_{k_1(i)},r_{k_1(j)})<2^{-2\nu -2-C}\). By Proposition 1, this implies \(\Vert r_{k_1(i)}-r_{k_1(j)}\Vert _\nu<2^{-\nu -2-C}<\frac{2^{-\nu -1}}{2K(\nu )(\nu \,+\,1)}\). A similar bound holds for \(\Vert u_{k_2(i)}-u_{k_2(j)}\Vert _\nu \). Putting these in (8) yields \(\Vert r_{k_1(i)}u_{k_2(i)}-r_{k_1(j)}u_{k_2(j)}\Vert _\nu <2^{-\nu -1}\). Thus we conclude that \(\Vert v_i-v_j\Vert _\nu <2^{-\nu -2}+2^{-\nu -1}+2^{-\nu -2}=2^{-\nu }\), and hence \(d(v_i,v_j)<2^{-\nu }\), i.e. \(v_i\) is \({\text {id}}\)-convergent. A similar reasoning proves that \(v_i\) converges to ru. Hence the above describes a tracking function for scalar multiplication.
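The four-step computation of \(\{T_\times \}(\nu )\) can be sketched in the same style (floats for codes; the hypothetical mult_approx stands in for \(\mathbf {mult}\)):

```python
from math import floor, log2

def mult_codes(T1, M1, T2, M2, mult_approx, K):
    # Code for the product r*u: pick C with 2^C > K(nu)*(nu+1), query the
    # moduli at 2*nu + C + 2, then multiply the resulting approximations.
    def T_times(nu):
        C = floor(log2(K(nu) * (nu + 1))) + 1
        k1 = M1(2 * nu + C + 2)
        k2 = M2(2 * nu + C + 2)
        return mult_approx(T1(k1), T2(k2), nu)
    return T_times, (lambda n: n)  # modulus: the identity
```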

Continuous limit: let \(u\in \mathcal {Z}_c\) be represented by the tuple \((x_0,y_0,\ldots ,y_{N^2})\), where each \(x_0,y_j\in \mathcal {X}_c\). We first observe that, for any natural number \(n\in \mathbb {N}\), the value of u(n) can be approximated as

$$\begin{aligned} u(n)\approx \left\{ \begin{array}{cl}x_{nN}&{}\text { if }n\le N;\\ x_{N^2}+(n-N)y_{N^2}&{}\text { if }n\ge N,\end{array}\right. \end{aligned}$$

where the \(x_j\) are again recursively obtained via the trapezoid rule. Consequently, one can devise a computable procedure \(\mathbf {eval}:(e,n,\ell )\mapsto e_{\mathrm {eval}}\) such that, given a code e of some element \(u=\alpha _\mathcal {Z}(e)\) and natural numbers n, \(\ell \), it produces a code \(e_{\mathrm {eval}}\) of some element \(x=\alpha _\mathcal {X}(e_{\mathrm {eval}})\) with \(d(x,u(n))<2^{-\ell }\); i.e. x approximates u(n) within an error of \(2^{-\ell }\).
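A sketch of \(\mathbf {eval}\) on a single data tuple (floats again stand in for elements of \(\mathcal {X}_c\); past the end of the grid we extrapolate linearly with the last derivative sample as slope):

```python
def eval_at(x0, y, N, n):
    # Approximate u(n) from the tuple (x0, y_0, ..., y_{N^2}): recover the
    # grid values by the trapezoid rule, then read off the value at t = n,
    # extrapolating linearly for n >= N.
    vals = [x0]
    for j in range(len(y) - 1):
        vals.append(vals[-1] + (y[j] + y[j + 1]) / (2 * N))
    if n <= N:
        return vals[n * N]
    return vals[N * N] + (n - N) * y[N * N]
```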

Now let \(c=\langle T,M\rangle \) be a code for an effective Cauchy sequence \(u_j=\alpha _\mathcal {Z}\{T\}(j)\) in \(\mathcal {Z}_c\) converging to a computable element \(u\in C^1(\mathbb {T},\mathcal {X})\). We want to compute a code \(c_\infty =\langle T_\infty ,M_\infty \rangle \) for an effective Cauchy sequence \(x_j=\alpha _\mathcal {X}\{T_\infty \}(j)\) in \(\mathcal {X}_c\) converging to the limit \(x=\mathcal {L}u=\lim _{t\rightarrow \infty }u(t)\in \mathcal {X}\).

The idea is to define \(\{T_\infty \}(j)=\mathbf {eval}(\{T\}(k_j),t_j,\ell _j)\), choosing \(\ell _j=j+3\), \(t_j=j+2\) and \(k_j=\{M\}(3j+5)\). To prove that \((x_j)\) is an \({\text {id}}\)-convergent Cauchy sequence, let \(\nu \in \mathbb {N}\) be given, and suppose that \(i,j\ge \nu \). By the triangle inequality, \(d_\mathcal {X}(x_i,x_j)\) is upper bounded as

$$\begin{aligned}d_\mathcal {X}(x_i,x_j)\le&d_\mathcal {X}(x_i,u_{k_i}(t_i))+d_\mathcal {X}(u_{k_i}(t_i),u(t_i))+d_\mathcal {X}(u(t_i),u(t_j))\\&+\,d_\mathcal {X}(u(t_j),u_{k_j}(t_j))+d_\mathcal {X}(u_{k_j}(t_j),x_j).\end{aligned}$$

By our choice of \(\ell _j=j+3\) we immediately get that \(d_\mathcal {X}(x_i,u_{k_i}(t_i))<2^{-\nu -3}\) and \(d_\mathcal {X}(u_{k_j}(t_j),x_j)<2^{-\nu -3}\). Since u is an \({\text {id}}\)-convergent Cauchy stream, and by our choice of \(t_j=j+2\), we can also bound \(d_\mathcal {X}(u(t_i),u(t_j))<2^{-\nu -2}\). Next we need to handle the terms \(d_\mathcal {X}(u_{k_i}(t_i),u(t_i))\) and \(d_\mathcal {X}(u_{k_j}(t_j),u(t_j))\), which amounts to showing that \(k_j=\{M\}(3j+5)\) is suitably large.

Indeed, observe that \(d_\mathcal {Z}(u_{k_j},u)\le 2^{-3j-5}=2^{-3t_j+1}\). Using Proposition 1 then yields \(\Vert u_{k_j}-u\Vert _{t_j}\le 2^{-2t_j+1}\), and using (3) we have (see Footnote 6)

$$\Vert u_{k_j}(t_j)-u(t_j)\Vert _{t_j}\le \frac{t_j}{2^{t_j-1}}2^{-t_j}\le 2^{-t_j}.$$

Once more by Proposition 1 we get \(d_\mathcal {X}(u_{k_j}(t_j),u(t_j))\le 2^{-t_j}\le 2^{-\nu -2}\). The same reasoning also gives the bound \(d_\mathcal {X}(u_{k_i}(t_i),u(t_i))\le 2^{-\nu -2}\). Combining all these bounds yields \(d_\mathcal {X}(x_i,x_j)<2^{-\nu }\), so that \((x_j)\) is an \({\text {id}}\)-convergent Cauchy sequence. In particular, we can take \(M_\infty \) to be a code for the identity function.

This construction shows that \(c=\langle T,M\rangle \mapsto c_\infty =\langle T_\infty ,M_\infty \rangle \) is an effective procedure. We also proved that, for all \(j\in \mathbb {N}\), \(d_\mathcal {X}(x_j,u(t_j))<2^{-j-3}+2^{-j-2}\), implying that \(\lim _j x_j=\lim _t u(t)\); hence \(c_\infty \) encodes an effective Cauchy sequence converging to \(\mathcal {L}u\) as desired.\(\square \)
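Finally, the limit construction admits the same kind of sketch, with a hypothetical eval_fn playing the role of \(\mathbf {eval}\) and floats standing in for codes:

```python
def limit_code(T, M, eval_fn):
    # Code for the continuous limit: x_j = eval(T(k_j), t_j, l_j) with
    # l_j = j + 3, t_j = j + 2 and k_j = M(3j + 5); the modulus is the
    # identity, as proved above.
    def T_inf(j):
        return eval_fn(T(M(3 * j + 5)), j + 2, j + 3)
    return T_inf, (lambda n: n)
```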


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Poças, D., Zucker, J. (2020). Tracking Computability of GPAC-Generable Functions. In: Artemov, S., Nerode, A. (eds) Logical Foundations of Computer Science. LFCS 2020. Lecture Notes in Computer Science, vol. 11972. Springer, Cham. https://doi.org/10.1007/978-3-030-36755-8_14
