A new framework for polynomial approximation to differential equations

In this paper, we discuss a framework for the polynomial approximation of the solution of initial value problems for differential equations. The framework is based on an expansion of the vector field along an orthonormal basis, and relies on perturbation results for the considered problem. Initially devised for the approximation of ordinary differential equations, it is here further developed and generalized to cope with constant-delay differential equations. Relevant classes of Runge-Kutta methods can be derived within this framework.


Introduction
In this paper, we shall deal with the definition of a framework to discuss polynomial approximations to the solution of initial value problems for ordinary differential equations (ODEs),

ẏ(t) = f(t, y(t)), t ∈ [t_0, T], y(t_0) = y_0, (1)

and delay differential equations (DDEs) in the form

ẏ(t) = f(t, y(t), y(t − τ)), t ∈ [t_0, T], y(t) = φ(t), t ∈ [t_0 − τ, t_0], (2)

where τ > 0 is a constant delay and, usually, y_0 = φ(t_0). In the sequel, we shall always assume that f and φ are suitably regular in their respective arguments. As is well known, the two problems are related in many ways but, at the same time, have quite different features, which reflect on their numerical solution. We refer, e.g., to the comprehensive monograph [27], concerning (1), and to [5] (see also [20]) for (2). In more detail, in this paper we shall fully develop a novel framework for deriving numerical methods for solving (1), which is then extended to cope with (2).
The framework we are interested in relies on a local expansion of the vector field in (1) along an orthonormal basis. In the present case, such a basis will be the Legendre polynomial basis {P_j}_{j≥0}, orthonormal on the interval [0, 1]:

P_j ∈ Π_j, ∫_0^1 P_i(x) P_j(x) dx = δ_{ij}, i, j ≥ 0, (3)

where, as is usual, Π_j is the vector space of polynomials of degree j, and δ_{ij} is the Kronecker symbol. The idea is actually not new: early uses of this approach can be found, for example, in Hulme [29,30], Bottasso [7], and Betsch and Steinmann [6]; it is also at the basis of the energy-conserving class of Runge-Kutta methods named HBVMs [12] (see also the monograph [9] and the review paper [10]). The approach that we shall pursue was initially devised in [14], where the target was problem (1), and its potential has been disclosed by using HBVMs as spectral methods in time for efficiently solving highly oscillatory problems [19] and, subsequently, Hamiltonian PDEs [11]. A corresponding error analysis is given in [3]. Moreover, this allows deriving a formulation of HBVMs as continuous-stage Runge-Kutta methods [1,2].
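To make the basis concrete, the following Python sketch (ours; the paper's experiments are in Matlab) evaluates the orthonormal shifted Legendre polynomials on [0, 1] and checks the orthonormality conditions (3) by Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def shifted_legendre(j, c):
    """Orthonormal shifted Legendre polynomial P_j on [0, 1]."""
    coeffs = np.zeros(j + 1)
    coeffs[j] = 1.0
    # classical Legendre on [-1, 1], shifted to [0, 1] and normalized
    return np.sqrt(2 * j + 1) * legval(2 * np.asarray(c, float) - 1, coeffs)

# check int_0^1 P_i(x) P_j(x) dx = delta_ij with Gauss-Legendre quadrature
x, w = leggauss(20)
c, w = (x + 1) / 2, w / 2          # map nodes/weights from [-1, 1] to [0, 1]
G = np.array([[np.dot(w, shifted_legendre(i, c) * shifted_legendre(j, c))
               for j in range(5)] for i in range(5)])
# G should be (numerically) the 5x5 identity matrix
```

The 20-point quadrature is exact for the polynomial products involved, so the Gram matrix G reproduces the Kronecker symbol up to round-off.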
Starting from this background, in this work we carry out a complete perturbation analysis of problems (1) and (2), and set up a unique and comprehensive framework to deal with the numerical solution of both problems by exploiting the same discretization procedure. In more detail, the truncated Fourier series may be interpreted as a projection of the differential problem onto a finite dimensional vector space, leading to a new, numerically easy-to-handle, differential problem. This latter may be regarded as a perturbation of the original one, so that the perturbation analysis turns out to be crucial to understand how the solutions of the two problems are related. To the best of our knowledge, the perturbation results for problem (2) are new, and provide a powerful general tool of analysis. That the same framework may cover problems of a different nature constitutes, in our opinion, a specific advancement in this field, and reveals its potential to deal with other classes of problems (which will be the subject of future investigations).
With this premise, the paper is organized as follows: Section 2 concerns the results pertaining to the ODE case; Section 3 contains the corresponding results for the DDE case; Section 4 contains some numerical tests for the DDE case, involving methods which are new in this setting; finally, some concluding remarks and possible developments are reported in Section 5.

Preliminary results
We here provide a few preliminary results, which will be needed to derive the main ones in the following subsections. Some of them are taken from [14], but we also report them here for the sake of completeness.
Let a function g : [0, h] → V, with V a vector space, admit a Taylor expansion at 0. Then, for all j ≥ 0:

∫_0^1 P_j(ζ) g(ζh) dζ = O(h^j).

Proof By virtue of (3), one has: As a straightforward consequence, setting G(ζh) := f(z(ζh)), the following result is proved.
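The decay rate of the Fourier coefficients can be checked numerically: for a smooth integrand, halving h reduces the j-th coefficient by a factor close to 2^j. A minimal Python sketch (ours, with G(t) = e^t as a stand-in for the vector field along the solution):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def gamma_j(j, h, G, nquad=30):
    """j-th Fourier coefficient int_0^1 P_j(c) G(c*h) dc,
    with P_j the orthonormal shifted Legendre polynomial."""
    x, w = leggauss(nquad)
    c, w = (x + 1) / 2, w / 2          # quadrature on [0, 1]
    coeffs = np.zeros(j + 1)
    coeffs[j] = 1.0
    Pj = np.sqrt(2 * j + 1) * legval(2 * c - 1, coeffs)
    return np.dot(w, Pj * G(c * h))

j, h = 3, 1e-2
ratio = gamma_j(j, h, np.exp) / gamma_j(j, h / 2, np.exp)
# ratio should be close to 2**j = 8, since gamma_j(h) = O(h^j)
```

The observed ratio approaches 2^j as h → 0, in agreement with the statement above.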

Let us denote by
the solution of the problem (compare with (4)): Hereafter, for the sake of brevity, we may use either one of the two notations in (16), depending on the needs. The following theorem contains standard perturbation results w.r.t. all the arguments (see, e.g., [27, Section I.14]).

Theorem 2
With reference to the solution (16) of problem (17), one has: From this theorem, the following result readily follows, where, hereafter, | · | denotes any convenient vector norm.
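A classical instance of such perturbation results (see [27]) is that the sensitivity ∂y(t)/∂y_0 satisfies the variational equation Φ̇ = f′(y)Φ, Φ(0) = I. The following Python sketch (ours, on a toy pendulum vector field) checks this against central finite differences:

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy pendulum vector field and its Jacobian (our illustrative choice)
f = lambda t, y: np.array([y[1], -np.sin(y[0])])
J = lambda y: np.array([[0.0, 1.0], [-np.cos(y[0]), 0.0]])

def flow(y0, T=1.0):
    """y(T; 0, y0), computed with tight tolerances."""
    return solve_ivp(f, (0.0, T), y0, rtol=1e-12, atol=1e-12).y[:, -1]

def augmented(t, z):
    """Solution coupled with the variational equation Phi' = f'(y) Phi."""
    y, Phi = z[:2], z[2:].reshape(2, 2)
    return np.concatenate([f(t, y), (J(y) @ Phi).ravel()])

y0 = np.array([1.0, 0.0])
z0 = np.concatenate([y0, np.eye(2).ravel()])
Phi = solve_ivp(augmented, (0.0, 1.0), z0,
                rtol=1e-12, atol=1e-12).y[2:, -1].reshape(2, 2)

eps = 1e-4   # central finite differences w.r.t. the initial value
fd = np.array([(flow(y0 + eps * e) - flow(y0 - eps * e)) / (2 * eps)
               for e in np.eye(2)]).T
```

The matrix Phi from the variational equation and the finite-difference matrix fd agree to the accuracy allowed by the integration tolerances.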

Main results (ODE case)
With reference to (5)-(15), we are now in a position to state the results concerning the approximation error at the grid points and, more generally, on each subinterval [t_{n−1}, t_n]. For the first step of the approximation procedure, the following theorem holds true, the proof being similar to that of [14, Theorem 1].

Theorem 3 With reference to (19) and (20), one has:

Proof By virtue of Corollary 1 and Theorem 2, one has: Consequently, the second part of the statement follows for c ∈ (0, 1), whereas, when c = 1, one deduces, by virtue of Theorem 1: For the remaining steps, the following result holds true.

Remark 1
We observe that the two equivalent equations (see (10), (12), and (14)): define a so-called HBVM(∞, s) method on the interval [t_{n−1}, t_n] (equation (21) is named the Master Functional Equation in [12]; see also [9,10]). Consequently, such a method defines an order-2s approximation procedure for all s ≥ 1, which can also be recast as a continuous-stage Runge-Kutta method [1]. In particular, the case s = 1 corresponds to the so-called AVF method [40]; the case s ≥ 1 has also been considered in [26].
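For s = 1 and a quadratic Hamiltonian, the integral defining the AVF method can be computed exactly, and the method is energy-conserving. A minimal sketch (ours) for the harmonic oscillator, H(q, p) = (q² + p²)/2, where ∇H(y) = y and the AVF average reduces to the midpoint value:

```python
import numpy as np

S = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric structure matrix
H = lambda y: 0.5 * np.dot(y, y)          # quadratic Hamiltonian

def avf_step(y, h):
    # AVF step: y' = y + h*S*(y + y')/2, since the exact average of
    # nabla-H along the segment [y, y'] is the midpoint value here
    I = np.eye(2)
    return np.linalg.solve(I - 0.5 * h * S, (I + 0.5 * h * S) @ y)

y = np.array([1.0, 0.0])
H0 = H(y)
for _ in range(1000):
    y = avf_step(y, 0.1)
drift = abs(H(y) - H0)    # stays at round-off level
```

The one-step map is a Cayley transform of the skew-symmetric matrix S, hence orthogonal, so the quadratic Hamiltonian is conserved up to round-off over arbitrarily long integrations.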
An interesting question concerns the difference between the Fourier coefficients of the solution (8)-(10) and those of the polynomial approximation (14) on the interval [t_{n−1}, t_n]. The next result clarifies the issue: with reference to (8), (10), and (14), for all n = 1, . . . , N one has:

Conservative/dissipative problems
An interesting case [9,25,36] is that when problem (4) is in the form

ẏ = S∇H(y), (23)

with S ∈ R^{m×m} either a skew-symmetric matrix, S⊤ = −S, or a negative semidefinite matrix, S ≤ 0, whereas ∇H is the gradient of a scalar function H, usually called the Hamiltonian. As is clear:
• when S⊤ = −S: Ḣ(y(t)) = ∇H(y(t))⊤ S ∇H(y(t)) = 0, so that H is a conserved quantity, and the problem is said to be conservative;
• when S ≤ 0: Ḣ(y(t)) = ∇H(y(t))⊤ S ∇H(y(t)) ≤ 0, and the problem is said to be dissipative.
The next result shows that this behavior is preserved by the approximations (12)-(15), upon observing that in this case (10) can be conveniently rewritten as follows.

Proof In fact, by considering that y_n = σ_n(h), y_{n−1} = σ_n(0), and taking into account (24), one has: Consequently, if S⊤ = −S, then H(y_n) = H(y_{n−1}), whereas H(y_n) ≤ H(y_{n−1}) when S ≤ 0.
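The dissipative case can be illustrated in the same way: for a quadratic H and a negative semidefinite S, the one-step map obtained by averaging ∇H exactly yields a monotonically non-increasing sequence H(y_n). A small sketch (ours, with the illustrative choice S = diag(−1, 0)):

```python
import numpy as np

S = np.diag([-1.0, 0.0])                  # negative semidefinite: S <= 0
H = lambda y: 0.5 * np.dot(y, y)          # quadratic Hamiltonian

def step(y, h):
    # y' = y + h*S*(y + y')/2 : exact-average method, since nabla-H(y) = y
    I = np.eye(2)
    return np.linalg.solve(I - 0.5 * h * S, (I + 0.5 * h * S) @ y)

y = np.array([1.0, 1.0])
vals = [H(y)]
for _ in range(50):
    y = step(y, 0.1)
    vals.append(H(y))
# H(y_n) is non-increasing along the discrete trajectory
monotone = all(v2 <= v1 + 1e-15 for v1, v2 in zip(vals, vals[1:]))
```

The first component of y is damped at every step while the second is untouched, so H decreases monotonically, mirroring the continuous dissipation inequality.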
Remark 2 According to Remark 1, one then obtains that HBVM(∞, s) methods can preserve the conservative/dissipative feature of problem (23).

Discretization and Runge-Kutta formulation
Quoting Dahlquist and Björck [22, p. 521]: "as is well known, even many relatively simple integrals cannot be expressed in finite terms of elementary functions, and thus must be evaluated by numerical methods." In this context, this quite obvious statement means that the approximation procedure defined by (12) and (10) does not yet provide a "true" numerical method. In fact, the integrals defining the Fourier coefficients need to be numerically approximated by using a quadrature formula. Since we are dealing with a polynomial approximation, it is quite natural to do this by using an interpolatory quadrature with abscissae and weights (c_i, b_i), i = 1, . . . , k (we shall always assume k distinct abscissae): where ∆_j(h) denotes the quadrature error. The following straightforward result holds true.

Theorem 7
If the quadrature (c_i, b_i), i = 1, . . . , k, has order q, i.e., it is exact for polynomial integrands of degree q − 1, then ∆_j(h) = O(h^{q−j}).

Remark 3 As is well known, since the quadrature (26) is based on k (distinct) abscissae, q ∈ {k, . . . , 2k}: the lower limit is obtained for a generic choice of the abscissae, whereas the upper one is achieved by placing them at the zeros of P_k(c).
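Both limits of the range q ∈ {k, . . . , 2k} are easy to observe numerically: build the interpolatory weights from the moment conditions, then measure the exactness degree. A Python sketch (ours) with k = 3:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def interpolatory_weights(c):
    """Weights b making sum_i b_i p(c_i) exact for deg(p) <= k-1 on [0, 1]."""
    k = len(c)
    V = np.vander(c, k, increasing=True).T        # row j: (c_1^j, ..., c_k^j)
    m = 1.0 / np.arange(1, k + 1)                 # moments int_0^1 x^j dx
    return np.linalg.solve(V, m)

def order(c, b, qmax=20):
    """Largest q such that all monomials of degree < q are integrated exactly."""
    q = 0
    while q < qmax and abs(np.dot(b, c**q) - 1.0 / (q + 1)) < 1e-10:
        q += 1
    return q

k = 3
c_gen = np.array([0.1, 0.5, 0.7])                 # generic abscissae
x, _ = leggauss(k)
c_gauss = (x + 1) / 2                             # zeros of P_k on [0, 1]
q_gen = order(c_gen, interpolatory_weights(c_gen))      # expected: k = 3
q_gauss = order(c_gauss, interpolatory_weights(c_gauss))  # expected: 2k = 6
```

Generic abscissae attain only the interpolatory order k, while placing them at the Gauss nodes doubles it to 2k.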
When using a quadrature, the Fourier coefficients (25) can clearly no longer be exactly evaluated. This implies that we are actually computing a possibly different piecewise polynomial approximation u(t) such that (compare with (10)-(15)), for all n = 1, . . . , N: with (see (26)) Actually, (29)-(31) define the nth integration step, by using a timestep h, performed with the k-stage Runge-Kutta method having stages: In fact, evaluating (30) at the abscissae c_1, . . . , c_k, and substituting in it the s approximate Fourier coefficients (29), one obtains, after rearranging terms, In other words, we have derived the k-stage Runge-Kutta method with abscissae and weights (c_i, b_i), i = 1, . . . , k, and Butcher matrix A = (a_{ij}) ∈ R^{k×k}. The next theorem puts the Butcher tableau in a more compact form [9].

Theorem 8 The Butcher tableau of the Runge-Kutta method (33)-(34) is given by
It is possible to derive an alternative formulation of the Runge-Kutta method (35). In fact, using the relation (22) between the integrals of the Legendre polynomials and the polynomials themselves, and considering that Consequently, the Butcher tableau (35) can be rewritten as When the quadrature (26) has order q ≥ 2s, it is quite straightforward to prove that where, in general, hereafter, I_r ∈ R^{r×r} is the identity matrix (when the dimension of the identity matrix is not explicitly indicated, it will be easily deducible from the context). Consequently, which can be regarded as a generalization of the W-transformation in [28, Theorem 5.6, p. 79]. In addition to this, when q ≥ 2s also the following results hold true (for the sake of brevity, we do not discuss the case q < 2s, since it has no practical interest).
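In terms of the matrices 𝒫_s = (P_{j−1}(c_i)), 𝓘_s = (∫_0^{c_i} P_{j−1}(x) dx) and Ω = diag(b_1, . . . , b_k) used in the HBVM literature, the Butcher matrix reads A = 𝓘_s 𝒫_s⊤ Ω. The following Python sketch (ours) builds it for Gauss abscissae and, for k = s = 2, recovers the 2-stage Gauss matrix, consistently with Remark 4:

```python
import numpy as np
from numpy.polynomial import Legendre
from numpy.polynomial.legendre import leggauss

def hbvm_tableau(k, s):
    """Butcher matrix A = I_s * P_s^T * Omega of HBVM(k, s), Gauss abscissae."""
    x, w = leggauss(k)
    c, b = (x + 1) / 2, w / 2                  # Gauss nodes/weights on [0, 1]
    Ps = np.zeros((k, s))
    Is = np.zeros((k, s))
    for j in range(s):
        coeffs = np.zeros(j + 1)
        coeffs[j] = np.sqrt(2 * j + 1)
        Pj = Legendre(coeffs, domain=[0, 1])   # orthonormal shifted Legendre
        Ps[:, j] = Pj(c)
        F = Pj.integ()                         # an antiderivative of P_j
        Is[:, j] = F(c) - F(0.0)               # int_0^{c_i} P_j(x) dx
    A = Is @ Ps.T @ np.diag(b)
    return A, b, c

A, b, c = hbvm_tableau(2, 2)
gauss2 = np.array([[1/4, 1/4 - np.sqrt(3)/6],
                   [1/4 + np.sqrt(3)/6, 1/4]])   # 2-stage Gauss matrix
```

For every k ≥ s the row sums of A reproduce the abscissae (A·1 = c), since the quadrature integrates P_0 ≡ 1 exactly and annihilates the higher basis polynomials.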

Theorem 12
With reference to (27)-(31) applied for approximating problem (23), and assuming that the quadrature formula (26) has order q ≥ 2s, for all n = 1, . . . , N one has: • if H is a polynomial of degree not larger than q/s, then the result of Theorem 6 continues to hold; We here provide only the proof of Theorem 9 (see also [14,Theorem 4]), since those of Theorems 10, 11, and 12 can be similarly obtained by slightly adapting the corresponding proofs of Theorems 4, 5, and 6, respectively.
Proof (of Theorem 9) By taking into account the result of Theorem 7, one has: Consequently, the second part of the statement follows by considering that, for c ∈ (0, 1), this quantity is, since q ≥ 2s, whereas, when c = 1, one deduces, by virtue of Theorem 1, and considering that t_1 = t_0 + h: Remark 4 When the k abscissae are placed at the zeros of P_k(c), and k ≥ s, one obtains an HBVM(k, s) method, whose order is 2s [9,10,12]. It is worth mentioning that the HBVM(s, s) method is nothing but the s-stage Gauss-Legendre collocation method. Moreover, the HBVM(k, 1) methods correspond to the second-order Runge-Kutta methods described in [21]. Different choices of the quadrature have also been considered in [15, 31-33].

Solving the discrete problems
Sometimes, the number of stages k of the Runge-Kutta method (35) can be much larger than the degree s of the underlying polynomial approximation (29)-(31). This is the case, for example, of HBVM(k, s) methods when used as energy-conserving methods [9,10,12] (see also Theorem 12 in Section 2.3). In such a case, it is clear that the usual implementation of the Runge-Kutta method leads to the solution of a discrete problem having (block) dimension k. Nevertheless, for the sake of completeness, we now recall how the discrete problem to be solved can actually be recast so as to have (block) dimension s, independently of k [13]. This clearly allows for relatively large values of k, thus making possible the use of an arbitrarily high-order quadrature (26). Let us then consider the first integration step of the method for solving (4) with timestep h (thus, we can skip the index n of the step). Setting 1 = (1, . . . , 1)⊤ ∈ R^k, and Y the stage vector of (block) dimension k, one obtains that the stage equation for (35) is given by: with an obvious meaning of f(Y). However, we observe that [13] i.e., the (block) vector with the s coefficients of the polynomial approximation u_1(ch) (see (29)-(30)). Consequently, (36) can be rewritten as By combining the last two equations one eventually obtains: which is a discrete problem, equivalent to (36), having (block) dimension s, independently of k. Once this equation has been solved, the new approximation is derived, according to (31), as It is also worth mentioning that very effective nonlinear iterations have been devised for solving (37) [8,9,13] (the most effective being that derived from the so-called blended iteration introduced in [17], see also [18]).
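The equivalence between the stage equation (36), of (block) dimension k, and the reduced problem (37), of (block) dimension s, can be verified on the scalar test equation ẏ = λy, where both formulations become linear systems. A Python sketch (ours), with the matrices 𝒫_s = (P_{j−1}(c_i)), 𝓘_s = (∫_0^{c_i} P_{j−1}(x) dx), Ω = diag(b_i) of the HBVM literature and Gauss abscissae:

```python
import numpy as np
from numpy.polynomial import Legendre
from numpy.polynomial.legendre import leggauss

def matrices(k, s):
    """Ps, Is, weights b and abscissae c for HBVM(k, s), Gauss nodes on [0,1]."""
    x, w = leggauss(k)
    c, b = (x + 1) / 2, w / 2
    Ps = np.zeros((k, s))
    Is = np.zeros((k, s))
    for j in range(s):
        coeffs = np.zeros(j + 1)
        coeffs[j] = np.sqrt(2 * j + 1)
        Pj = Legendre(coeffs, domain=[0, 1])
        Ps[:, j] = Pj(c)
        F = Pj.integ()
        Is[:, j] = F(c) - F(0.0)
    return Ps, Is, b, c

lam, h, y0, k, s = -2.0, 0.1, 1.0, 6, 2
Ps, Is, b, c = matrices(k, s)
Om = np.diag(b)
A = Is @ Ps.T @ Om                       # Butcher matrix, dimension k
ones = np.ones(k)

# full k-dimensional stage equation: Y = y0*1 + h*lam*A*Y
Y = np.linalg.solve(np.eye(k) - h * lam * A, y0 * ones)
y1_full = y0 + h * lam * b @ Y

# reduced s-dimensional problem in the Fourier coefficients gamma:
# gamma = Ps^T * Om * lam * (y0*1 + h*Is*gamma)
g = np.linalg.solve(np.eye(s) - h * lam * Ps.T @ Om @ Is,
                    lam * (Ps.T @ Om @ ones) * y0)
y1_red = y0 + h * g[0]                   # y1 = y0 + h*gamma_0, since P_0 = 1
```

Both formulations produce the same new approximation, while the reduced system to be solved has dimension s = 2 instead of k = 6.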

The DDE case
As for the ODE case, also for DDEs we shall consider, without loss of generality, the simpler problem

ẏ(t) = f(y(t), y(t − τ)), t ∈ [t_0, T], y(t) = φ(t), t ∈ [t_0 − τ, t_0], (38)

in place of (2), where, usually, y_0 = φ(t_0). Moreover, we shall suppose that both the timestep h defining the discrete mesh (5) and the width of the integration interval, T − t_0, are commensurable with the delay:

τ = νh, T − t_0 = Kτ, ν, K ∈ N,

so that the discrete mesh is now given by: t_n = t_0 + nh, n = 0, . . . , N := Kν. On one hand, similarly as done in the ODE case, let us denote, for notational purposes, by σ̂(t) ≡ y(t) the solution of (38), and by σ̂_n its restriction to the time interval [t_{n−1}, t_n]. Consequently, whereas, for n = 1, . . . , N, one has (compare with (7)-(10)): so that, and where, in general, for any suitably regular functions z, w: On the other hand, we shall look for a piecewise approximation to σ̂(t), i.e., σ(t), such that (compare with (11)-(15)) σ_n denotes its restriction to the time interval [t_{n−1}, t_n]. Consequently, one has: whereas, for n = 1, . . . , N, σ_n ∈ Π_s satisfies the differential equation so that, and y_n = y_{n−1} + hγ_0(σ_n, σ_{n−ν}) =: σ_n(h), with γ_j(σ_n, σ_{n−ν}) defined according to (46). In the sequel, we shall discuss the accuracy of the approximations: δσ_n(ch) := σ̂_n(ch) − σ_n(ch), c ∈ (0, 1), n = 1, . . . , N.
For this purpose, some preliminary results are given in the next section.
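The piecewise structure induced by the commensurability assumption is the same exploited by Bellman's method of steps: for the scalar DDE ẏ(t) = −y(t − τ) with φ ≡ 1, the solution is a polynomial on each delay interval, so the steps can be carried out exactly. A Python sketch (ours):

```python
from numpy.polynomial import Polynomial as P

def method_of_steps(K, tau=1.0):
    """Solve y'(t) = -y(t - tau), with y(t) = 1 for t in [-tau, 0],
    over K delay intervals. On each interval the memory is a known
    polynomial in the local variable s = t - t_left, so every step is
    an exact polynomial integration."""
    mem = P([1.0])                 # history phi(t) = 1 on the first interval
    y_left = 1.0
    grid_vals = [y_left]           # y(0), y(tau), y(2*tau), ...
    for _ in range(K):
        seg = (-mem).integ() + y_left   # y'(s) = -mem(s), y(s=0) = y_left
        y_left = seg(tau)
        grid_vals.append(y_left)
        mem = seg                  # the new segment becomes the memory
    return grid_vals

vals = method_of_steps(3)   # classical values: y(1) = 0, y(2) = -1/2, y(3) = -1/6
```

On each interval the degree of the solution increases by one, which is precisely the behavior a piecewise polynomial approximation of degree s must track across the mesh points t_n = nh.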

Preliminary results
We start with the generalization of Corollary 1 to the present setting.
We also need perturbation results corresponding to those of Theorem 2 for ODEs. For this purpose, it is sufficient to discuss them for a local problem defined on two contiguous time subintervals of width τ : the former containing the memory, the latter containing the solution to be computed. Without loss of generality, we shall then fix the reference interval [t 0 − τ, t 0 + τ ], where we consider the following problem, defined for a generic ξ ∈ [t 0 , t 0 + τ ]: Problem (53) defines a generalization of the localized one associated to (38) (obtained for ξ = t 0 and η = y 0 ), and we shall denote its solution by in order to emphasize its dependence on the first four parameters, whereas the last one refers to the time subinterval. We shall also use the following notation: Remark 5 It is clear that the function φ in (53) represents the memory term of the equation, and it is a known function. The same will happen in the subsequent reference interval, [t 0 , t 0 + 2τ ], obtained by shifting to the right the previous one by τ , once the solution of (53) has been computed, and so forth.
To begin with, let us state the following straightforward result, whose proof is omitted for brevity.
By taking into account the results of the previous points a) and b), one derives: The statement c) then follows, by taking into account that t * is generic.
One main difference from the ODE case stems from the fact that now (54) also depends on the memory term φ, which is a functional parameter. Consequently, we now look for a Fréchet derivative such that, for any perturbation δφ ∈ C([t_0 − τ, t_0]) and t ∈ [t_0 − τ, t_0 + τ]: where is the functional derivative of (54) (see, e.g., [24, Appendix A]), with y_i and φ_j the respective entries of y and φ. For later use, we recall that, for a given t̄ ∈ [t_0 − τ, t_0) and i, j = 1, . . . , m, with e_j ∈ R^m the jth unit vector and, hereafter, δ_t̄(t) the Dirac delta function centered at t̄. The following result holds true.

Lemma 1 With reference to (58) and (59), for any fixed
, setting as usual δ ζ the Dirac delta centered at ζ , one has: In fact, by virtue of (53), the solution (54) is independent of the values of φ outside the interval [ξ − τ, t − τ ]. Consequently, by taking into account (60), it follows that: Taking into account Lemma 1, the following result provides a more practical characterization of the functional derivative (58)-(59). Figure 1 displays the location of the most relevant points and subintervals involved in Theorem 15.

Main results (DDE case)
We are now in a position to discuss the accuracy of the approximations (52). To begin with, the following result holds true.
This result allows us to state the following one, which generalizes that of Theorem 5 to the present case.

Proof The proof is by generalized induction. For n = 1, . . . , ν, the statement follows from Theorems 5 and 16 since, in this case, the memory term is the known function φ, so that we are dealing with an ODE. Assume now that the statement is true up to n − 1; we prove it for n. By hypothesis, and from (45) and (51), we know that y(t_n) − y_n = y(t_{n−1}) − y_{n−1} + O(h^{2s+1}) ≡ y(t_{n−1}) − y_{n−1} + hδγ_0^n, so that δγ_0^n = O(h^{2s}) follows. Then, by taking into account (55), it follows that: O(h^{2s}) = δγ_0^n = γ_0(σ̂_n, σ̂_{n−ν}) − γ_0(σ_n, σ_{n−ν})

From Theorem 16, it follows that
By considering the above relations, the first part of the statement also follows.
Formulae (69) and (46) define a subclass of the so-called natural continuous Runge-Kutta methods for DDEs (see [5, Sec. 6.2]). As a consequence, their convergence properties could as well be derived by more classical approaches, such as Bellman's method of steps, which is an analytic procedure specific to DDEs. In the present context, however, the main goal is to show how the framework based on the perturbation theory applied to the truncated Fourier expansion is easily adapted to cope with DDEs; therefore, we will pursue this route of investigation. A further strength of this approach is the possibility of analyzing the convergence properties of the truncated Fourier approximations when these are used as spectral methods in time. In this regard, the analysis for the ODE case has been addressed in [3], while a spectral implementation of the methods for DDEs has been considered in [16].
By using standard arguments (which we omit, as done in the ODE case), we can derive the following results, representing the corresponding counterparts of Theorems 17 and 18, respectively.

Theorem 20
With reference to (44), (46), and (69)-(71), and assuming that the quadrature formula (72) has order q ≥ 2s, for n = 1, . . . , N ≡ Kν one has:

Remark 6 It is worth mentioning that the result of Theorem 20 states that the superconvergence order 2s at the mesh points t_n is obtained even though possibly different Runge-Kutta methods are used at each integration step, provided that they define a polynomial approximation of degree s. This, in turn, represents a generalization of the results in [4] for collocation methods.
We conclude this section by recalling that the considerations in Remark 4 continue to hold in the DDE case, and by observing that, concerning the implementation of the resulting Runge-Kutta methods used for solving problem (38), the arguments in Section 2.5 apply as well, mutatis mutandis.

Numerical tests
In this section we report a few numerical tests for the DDE case. In fact, in the ODE case, HBVMs have been extensively used as energy-conserving methods for Hamiltonian systems (see, e.g., [2,9-11,19]). We show that, under some circumstances, their use can be advantageous also in the DDE case. Hereafter, we consider a class of DDEs defined by a Hamiltonian function (73) through the equations (74), with α a real parameter, τ > 0 the delay, and H_q and H_p the partial derivatives of H w.r.t. q and p, respectively. The problem is completed by the initial conditions (75). The introduction of such a kind of delay Hamiltonian system is partly inspired by the problem of looking for periodic orbits of DDEs, which has been attacked by many authors in the past (see, e.g., [23,34,35,37-39,41]). In this respect, the first two examples below show an attracting periodic orbit with integer period, lying on a level set of the Hamiltonian function (73) which is, therefore, a constant of motion once the periodic orbit has been approached. In the third example, we are instead interested in simulating the correct qualitative behavior of a dissipative Hamiltonian delay problem in the phase space, when the dynamics takes place in a neighborhood of a separatrix. Leaving aside a theoretical discussion of problem (73)-(74), which would go beyond the scope of the present work, we infer its properties for the three considered instances by preliminarily applying a high-order integrator with a very small stepsize, in order to get a very accurate numerical solution, which will be taken as a reference trajectory in the phase space. For all three problems, we show that a very accurate approximation of the Hamiltonian function allows us to reproduce the correct geometric features of the solution in the discrete setting. To the best of our knowledge, this is the first instance of the use of HBVMs in the context of DDEs displaying geometric properties.
For comparison purposes, we also solve the problems with the classical Gauss collocation integrator of the same order. The numerical tests have been implemented in Matlab (R2020b) on a 3 GHz Intel Xeon W 10-core computer with 64GB of memory.
Problem 1
Both methods are fourth-order, according to Theorem 10, with HBVM(4,2) energy-conserving once a periodic orbit of integer period is eventually reached (see Theorem 12). Problem (76) possesses an attracting periodic orbit with period T = 2τ = 2, which suggests using a stepsize h equal to a submultiple of τ, in order to mimic a corresponding discrete periodic solution. As we are going to see, unlike the 2-stage Gauss collocation method, the conservation property of HBVM(4,2) allows precisely accomplishing this task. We solve the problem on the interval [0, 2·10^3] by using a timestep h = τ/5 = 0.2. Figure 2 summarizes the obtained results.
• The upper row of the figure shows the plots of the numerical Hamiltonian, H(q_n, p_n), from which one deduces that both methods quite soon reach a stationary behavior.
• To better discern the asymptotic behavior of the two numerical solutions, the central pictures show the plots of |H(q_n, p_n) − H(q_{n−1}, p_{n−1})| for the two methods. From these plots one infers that, while the stationary value of the Hamiltonian is constant for the HBVM(4,2) method, it is only approximately constant for the HBVM(2,2) method, with oscillations having amplitude of order 10^-2.
• The bottom row contains the plots of the numerical trajectory in the phase plane for both methods, relative to the interval [2·10^2, 2·10^3] (i.e., after the transient phase). For both methods, the solution seems to repeat every 10 points (i.e., with period T = 2τ = 2). However, the points obtained by the HBVM(2,2) method are not actually periodic, whereas they are (to within machine precision, independently of the used stepsize h) for the HBVM(4,2) method. To confirm this, in Table 1 we list the last 20 points of the trajectories computed after each period T = 10h and lying inside the two small circles highlighted in the plots. As one may see, only the first 4 digits of the points of the trajectory computed by the HBVM(2,2) method are retained, whereas the points computed by the HBVM(4,2) method differ at most in the last digit.

Problem 2
The second example is similar in nature to the previous one, but considers a non-polynomial Hamiltonian function with two degrees of freedom. With reference to (73)-(75), it is defined by: Again, we have numerically observed the existence of a periodic orbit with period T = 2τ = 2.
Both methods are fourth-order, the latter being practically energy-conserving, for the given timestep, in the event that a periodic orbit is reached.
Also in this case, the conservation property of HBVM(10,2) turns out to be crucial in reproducing a discrete orbit with period precisely equal to 2, while a small phase drift affects the solution yielded by the 2-stage Gauss collocation method. Figure 3, which is similar to Fig. 2, summarizes the obtained results.
• The upper row of the figure shows the plots of the numerical Hamiltonian, namely H(q_n, p_n): for both methods it seems to reach a stationary behavior.
• The second row shows the plots of |H(q_n, p_n) − H(q_{n−1}, p_{n−1})| for the two methods. From these plots one infers that, while the stationary value of the Hamiltonian is constant (up to round-off) for the HBVM(10,2) method, it is only approximately constant for the HBVM(2,2) method, with oscillations having amplitude of order 10^-3.
• The bottom row contains the plots of the numerical trajectory in the (q_1, q_2) plane for both methods, relative to the interval [10^2, 10^3] (i.e., after the transient phase). For both methods, the solution seems to repeat every 20 points (i.e., with period T = 2τ = 2). However, only the points obtained by the HBVM(10,2) method are actually periodic. To confirm this, in Table 2 we list the last 20 points of the trajectories computed after each period T = 20h and lying inside the two small circles displayed in the plots. As one may see, the Gauss collocation method retains only the first 5 digits after each period, whereas the points computed by the HBVM(10,2) method differ at most in the last digit.

Problem 3
For the last problem, we are no longer interested in periodic trajectories. Instead, we consider a delay Hamiltonian problem with dissipation. This can be achieved by choosing a negative value of the parameter α in (74). With reference to (73)-(75), the selected parameters are: m = 1, H(q, p) = p^2/2 − cos q, α = −10^-5, τ = 1. This problem is a dissipative delay-variant of the nonlinear pendulum, with the initial condition chosen close to the separatrix (the level set H(q, p) = 1) between the two different regimes of the pendulum: librations around the straight-down stationary position, and rotations. For the given initial conditions, the pendulum should undergo damped oscillations with a decreasing trend of the Hamiltonian function H(q_n, p_n).
Consequently, when using relatively large stepsizes, it is fundamental to reproduce the correct dissipation of the Hamiltonian along the numerical trajectory.
We solve this problem on the interval [0, 500], with a timestep h = τ/2 = 0.5, by using the following methods:
• HBVM(2,2) (i.e., the 2-stage Gauss method);
• HBVM(10,2).
• The upper pictures show the plots of the numerical Hamiltonian, H(q_n, p_n), from which one deduces that both methods exhibit a dissipation trend of the energy H. Nevertheless, for HBVM(2,2) the values of the Hamiltonian become appreciably larger than 1 in the initial part of the trajectory and undergo spurious oscillations, which cause the numerical solution to escape the correct region of the phase space where the dynamics should take place, as we are going to see. This is not the case for the HBVM(10,2) method, whose numerical Hamiltonian decreases in the correct way, thus always remaining smaller than 1.
• The central pictures show the numerical solution in the phase space. As one may see, the numerical solution provided by HBVM(2,2) "jumps" twice, before being trapped into an invariant region. This means that the pendulum undergoes two complete rotations until it loses enough energy and begins oscillating around the rest position. On the contrary, the numerical solution obtained by using HBVM(10,2) always remains in the correct region.
• The bottom row contains the plots of the numerical solution w.r.t. time, confirming that the numerical solution provided by the HBVM(2,2) method "jumps" twice, whereas that obtained by the HBVM(10,2) method does not.

Conclusions
In this paper we have fully developed a thorough approach for obtaining polynomial approximations to the solution of initial value ODE and DDE problems. It allows us to derive a wide class of Runge-Kutta methods, whose properties, as well as their actual implementation, are easily discussed within the framework. Some numerical tests, concerning the numerical simulation of solutions of certain DDE problems of Hamiltonian type, confirm this. The present approach leaves room for generalizations along several directions: in particular, to different kinds of problems, besides the ones considered here. Another relevant direction of investigation consists in looking for approximations belonging to functional subspaces different from polynomials, that is, by considering orthonormal functional bases different from (3). Both directions will be the subject of future investigations.
Funding Open access funding provided by Università degli Studi di Bari Aldo Moro within the CRUI-CARE Agreement. The authors received financial support from the mrSIR crowdfunding https://www.mrsir.it/en/about-us/.

Conflict of interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.