1 Introduction

Health Management is defined as the process of making optimal, specific decisions and recommendations about operation, mission and maintenance actions, based on the health evaluation data gathered by Health Monitoring Systems (HMS). As part of the Health Management procedure, the technology of Structural Health Monitoring (SHM) [1] is based on the process of implementing a damage detection and characterization strategy, with applications in the aerospace, civil and mechanical engineering infrastructure domain [1,2,3,4,5,6]. Each mode can be represented as an SDOF (single degree of freedom) system and characterized by its modal parameters: the natural (modal or resonant) frequency, the modal damping, and a mode shape; a change of the structural properties causes their variation (the correlation between damage severity and modal parameters has been proved experimentally). The experimental modal parameters are obtained from a set of frequency response function data (e.g., by processing the SDOF or MDOF model). In the literature a large number of analytical and experimental studies have been conducted to establish correlations between damage severity and modal parameters. Wavelet analysis makes it possible to evaluate the health of structures, see [1,2,3,4,5,6]. For example, in his research work Newland [7,8,9,10] applied harmonic wavelet analysis to the study of the vibration of structures caused by underground trains and road traffic. The signal is analyzed with a flexible time-frequency window that narrows for high frequencies and widens for low frequencies, i.e. the signal is broken up into shifted and scaled versions of a “mother wavelet” (basis wavelet function) or “father wavelet” (scaling function).

In the time domain, the mathematical model of SDOF is a 2nd-order differential equation of the form

$$\begin{aligned} m\ddot{x}(t)+c\dot{x}(t)+kx(t)=f(t), \end{aligned}$$
(1)

where f(t) represents the external excitation, m is the system mass, k the stiffness, c the viscous damping coefficient and x(t) the displacement response. In general, the displacement response \(x=x(t)\) is the solution of the Cauchy problem

$$\begin{aligned} m\ddot{x}+c\dot{x}+kx=b ~,~x(0)=x_{0},\,\dot{x}(0)=v_{0}, \end{aligned}$$
(2)

where \(m,c\in {\mathbb{R}}_{+}^{*}\), \(k,b\in C_{I}^{0}\), \(I\subseteq {\mathbb{R}}_{+}\), \(k=k(t)\) is the variable displacement coefficient (in Sects. 2.1.1, 3.1.1) and \(b=b(t)\) represents the disturbance. In what follows (except Sects. 2.1.1, 3.1.1) the displacement coefficient k is constant, i.e. \(k\in {\mathbb{R}}\), and Eq. (2) scaled with m becomes

$$\begin{aligned} \ddot{x}+c_{m}\dot{x}+k_{m}x=b_{m}, \end{aligned}$$
(3)

where \(c_{m}=\dfrac{c}{m}\in {\mathbb{R}}_{+}^{*}\), \(k_{m}=\dfrac{k}{m}\in {\mathbb{R}}\), \(b_{m}=\dfrac{b}{m}\in C_{I}^{0}\) (\(k_{m}\in {\mathbb{R}}_{+}^{*}\) implies \(b_{m}\) is positive definite); the scaling affects all the system parameters. Also, for the unforced damped SDOF system, the value of the damping ratio \(\zeta =\dfrac{c_{m}}{2\sqrt{k_{m}}}\), \(k_{m}\in {\mathbb{R}}^{*}_{+}\), determines the way the system oscillations go to zero, i.e., the underdamped (\(0<\zeta <1\)), overdamped (\(\zeta >1\)) and critically damped (\(\zeta =1\)) cases. On the other hand, the constant case allows “a calibration of dyadic wavelets” (see Sect. 4).
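A minimal numerical sketch of the constant-coefficient case may help fix ideas: the damping ratio \(\zeta \) is classified and Eq. (3) is integrated in the first-order form of Sect. 2.4. All numerical values (\(c_m\), \(k_m\), \(b_m\), \(x_0\), \(v_0\)) are illustrative assumptions, not data from the paper.

```python
# Illustrative sketch (assumed values only): classify the damping regime and
# integrate the scaled SDOF equation (3), x'' + c_m x' + k_m x = b_m(t).
import numpy as np
from scipy.integrate import solve_ivp

c_m, k_m = 0.4, 4.0                # assumed scaled damping and stiffness
x0, v0 = 1.0, 0.0                  # assumed initial displacement and velocity
b_m = lambda t: np.sin(3.0 * t)    # assumed harmonic (hence quasi-polynomial) disturbance

zeta = c_m / (2.0 * np.sqrt(k_m))
regime = ("underdamped" if zeta < 1 else
          "critically damped" if zeta == 1 else "overdamped")
print(f"damping ratio zeta = {zeta:.3f} -> {regime}")

# First-order form z = (x, x'), z' = (x', b_m - k_m x - c_m x'), as in Sect. 2.4.
def rhs(t, z):
    x, y = z
    return [y, b_m(t) - k_m * x - c_m * y]

sol = solve_ivp(rhs, (0.0, 20.0), [x0, v0], rtol=1e-9, atol=1e-12)
print("x(20) ≈", sol.y[0, -1])
```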

2 An Inventory of Methods

Sections 2.1–2.4 (except Sect. 2.1.1) present the methods of classical mathematics (see [11]), which are developed in Sect. 3 (the first two methods by comparison), while Sects. 2.5–2.7 present the methods of non-classical mathematics, of which the first is developed in Sect. 4. On the other hand, Sect. 2.1.1 contains some transformations of (non)homogeneous equations into the scaled forms, which are used in Sect. 3.1.1.

2.1 Second Order Linear Nonhomogeneous Differential Equations with Constant Coefficients

The solution of the Cauchy problem (2), (3) is obtained as a particular solution of a 2nd-order linear nonhomogeneous differential equation with constant coefficients [12, 13], according to the initial conditions, i.e.

$$\begin{aligned} x_{no}^{c}=x_{o}^{c}+x_{p}^{c}, \end{aligned}$$
(4)

and

$$\begin{aligned}&x_{no}=x_{o}+x_{p}\nonumber \\&x_{o}=c_{1}x_{1}+c_2{}x_{2} \nonumber \\&x_{no}(0)=x_{o}(0)+x_{p}(0)=x_{0} \nonumber \\&\dot{x}_{no}(0)=\dot{x}_{o}(0)+\dot{x}_{p}(0)=v_{0}, \end{aligned}$$
(5)

where \(x_{no}\), \(x_{o}\) are the general solutions of the nonhomogeneous, respectively homogeneous, equation, \(x_{1},x_{2}\) are particular solutions of the homogeneous equation and \(x_{p}\) is a particular solution of the nonhomogeneous equation; note that \(x_{o}^{c}\) depends on \(b_{m}\). Because \(x_{1},x_{2}\) are quasi-polynomials, according to the base (see Sect. 3.1)

$$\begin{aligned} B=\left\{ t^{k}, t^{k}\mathrm {e}^{\alpha t}\mathrm {sc}(\beta t)|\mathrm {sc}=\sin ,\cos , k\in \mathbb N\right\} \end{aligned}$$
(6)

it is natural for \(b_{m}\) to be a quasi-polynomial

$$\begin{aligned} b_{m}=\displaystyle \sum _{i=1}^{l}\gamma _{i}\,t^{n_{i}}\,\mathrm {e}^{\alpha _{i}t}\,\mathrm {sc}(\beta _{i}t)\,; \end{aligned}$$
(7)

consequently \(x_{p}\) is a quasi-polynomial, with the association

$$\begin{aligned} \gamma _{i}\,t^{n_{i}}\,\mathrm {e}^{\alpha _{i}t}\,\mathrm {sc}(\beta _{i}t)&\vdash \mathrm {e}^{\alpha _{i}t}\, \left( \sin (\beta _{i}t)\displaystyle \sum _{j=0}^{n_{i}-1}\sigma _{i}^{j}\,t^{j} \right. \\&+\left. \cos (\beta _{i}t)\displaystyle \sum _{j=0}^{n_{i}-1}\gamma _{i}^{j}\,t^{j} \right), \end{aligned}$$
(8)

according to the method of undetermined coefficients. For some developments see Sect. 3.1.
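As an illustrative cross-check of the method of undetermined coefficients, the sketch below lets SymPy's dsolve recover the quasi-polynomial particular solution for an assumed quasi-polynomial disturbance; the values of \(c_m\), \(k_m\), \(b_m\), \(x_0\), \(v_0\) are assumptions.

```python
# Sketch (assumed example): SymPy's dsolve recovering the quasi-polynomial
# particular solution for the quasi-polynomial disturbance b_m = t*exp(-t),
# with assumed c_m = 2, k_m = 5, x0 = 1, v0 = 0.
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
c_m, k_m = 2, 5
b_m = t * sp.exp(-t)                     # gamma * t^n * e^{alpha t}, a quasi-polynomial

ode = sp.Eq(x(t).diff(t, 2) + c_m * x(t).diff(t) + k_m * x(t), b_m)
sol = sp.dsolve(ode, x(t), ics={x(0): 1, x(t).diff(t).subs(t, 0): 0})
print(sp.simplify(sol.rhs))              # x_o^c + x_p, with x_p = (t/4)*exp(-t)
```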

2.1.1 Variable Coefficients

Consider a 2nd-order nonhomogeneous linear equation in canonical form, with temporal notation and variable coefficients,

$$\begin{aligned} a_{0}\ddot{x}+a_{1}\dot{x}+a_{2}x=b, \end{aligned}$$
(9)

where \(x\in \mathcal {C}_{{I}}^{2}\), \(a_{0}\), \(a_{1}\), \(a_{2}\in \mathcal {C}_{{I}}^{0}\) (usually \(I\cap \mathcal {Z}_{{a_{0}}}=\emptyset \)), J=codom(x), \( I,J\subseteq {\mathbb{R}}_{{[+]}}\) are intervals (\({\mathbb{R}}_{{[+]}}\) stands for \({\mathbb{R}}\) or, optionally, \({\mathbb{R}}_{{+}} \)), \(b\in \mathcal {C}_{{I}}^{0}\) is the disturbance, and the associated homogeneous equation is

$$\begin{aligned} a_{0}\ddot{x}+a_{1}\dot{x}+a_{2}x=0. \end{aligned}$$
(10)

Observation 2.1.1.1 (solutions) The general solutions of (9), (10) are respectively

$$\begin{aligned} x_{o}=c_{1}x_{1}+c_{2}x_{2}~,~x_{no}=x_{o}+x_{p}, \end{aligned}$$
(11)

where \(x_{1}, x_{2}\), respectively \(x_{p}\), are particular solutions of (10), respectively of (9) (and the associated Wronskian of \(x_{1}\), \(x_{2}\), \(W(x_{1}, x_{2})\), is non-zero). In addition, the solution of the Cauchy problem is determined by the initial conditions (see Sect. 2.1).

Definition 2.1.1.1 i) (scaled forms) The scaled form (implicitly relative to \(a_{0}\)) of (9) is

$$\begin{aligned} \ddot{x}+{}_{[s]}a_{1}\dot{x}+{}_{[s]}a_{2}x={}_{[s]}b, \end{aligned}$$
(12)

where \({}_{[s]}a_{1}=\dfrac{a_{1}}{a_{0}}\), \({}_{[s]}a_{2}=\dfrac{a_{2}}{a_{0}}\), \({}_{[s]}b=\dfrac{b}{a_{0}}\) (s is implicit unless there are ambiguities in the context); the scaled form is native if \(a_{0}=1\). The scaled form of (10) is

$$\begin{aligned} \ddot{x}+{}_{[s]}a_{1}\dot{x}+{}_{[s]}a_{2}x=0. \end{aligned}$$
(13)

The absolute scaled form (implicitly relative to \(a_{0}\)) of (9) is

$$\begin{aligned} { sgn}(a_{0})\ddot{x}+{}_{|s|}a_{1}\dot{x}+{}_{|s|}a_{2}x={}_{|s|}b, \end{aligned}$$
(14)

where \({}_{|s|}a_{1}=\dfrac{a_{1}}{|a_{0}|}\), \({}_{|s|}a_{2}=\dfrac{a_{2}}{|a_{0}|}\), \({}_{|s|}b=\dfrac{b}{|a_{0}|}\); the absolute scaled form of (10) is

$$\begin{aligned} { sgn}( a_{0})\ddot{x}+{}_{|s|}a_{1}\dot{x}+{}_{|s|}a_{2}x=0. \end{aligned}$$
(15)

ii) (dual scaled forms) Particularly, the dual scaled form (implicitly relative to \(a_{2}\ne 0\)) of (9) is

$$\begin{aligned} {}_{s'}a_{0}\ddot{x}+{}_{s'}a_{1}\dot{x}+x={}_{s'}b, \end{aligned}$$
(16)

where \({}_{s'}a_{0}=\dfrac{a_{0}}{a_{2}}\), \({}_{s'}a_{1}=\dfrac{a_{1}}{a_{2}}\), \({}_{s'}b=\dfrac{b}{a_{2}}\) ; the dual scaled form of (10) is

$$\begin{aligned} {}_{s'}a_{0}\ddot{x}+{}_{s'}a_{1}\dot{x}+x=0. \end{aligned}$$
(17)

The absolute dual scaled form of (9) is

$$\begin{aligned} {}_{|s'|}a_{0}\ddot{x}+{}_{|s'|}a_{1}\dot{x}+{ sgn}( a_{2})x={}_{|s'|}b, \end{aligned}$$
(18)

where \({}_{|s'|}a_{0}=\dfrac{a_{0}}{|a_{2}|}\), \({}_{|s'|}a_{1}=\dfrac{a_{1}}{|a_{2}|}\), \({}_{|s'|}b=\dfrac{b}{|a_{2}|}\); the absolute dual scaled form of (10) is

$$\begin{aligned} {}_{|s'|}a_{0}\ddot{x}+{}_{|s'|}a_{1}\dot{x}+{ sgn}(a_{2})x=0. \end{aligned}$$
(19)

Observation 2.1.1.2 (Wronskian, see [14]) The definition expression of the Wronskian associated with \(x_{1}, x_{2}\) is

$$\begin{aligned} W=W(x_{1},x_{2})=\begin{vmatrix} x_{1}&x_{2}\\ \dot{x}_{1}&\dot{x}_{2} \end{vmatrix}=x_{1}\dot{x}_{2}- \dot{x}_{1}x_{2}, \end{aligned}$$
(20)

and the Liouville’s formula is

$$\begin{aligned} W=e^{-F}~,~F=\int \frac{a_{1}}{a_{0}} dt. \end{aligned}$$
(21)

Notice that relative to the scaled form the Liouville’s formula becomes

$$\begin{aligned} W=e^{-F}~,~F=\int a_{1} dt. \end{aligned}$$
(22)

An inventory of methods for obtaining the exact solutions is presented below.

Observation 2.1.1.3 (relative to a known solution) Relative to a known particular solution \(x_{1}\), the particular solutions \(x_{2}\), respectively \(x_{p}\), are given (according to Observation 2.1.1.2) by

$$\begin{aligned} &x_{2}=x_{1}\displaystyle \int \dfrac{W}{x_{1}^{2}} dt, \\&x_{p}=x_{2}\displaystyle \int x_{1}\dfrac{b}{a_{o}W} dt - x_{1}\displaystyle \int x_{2}\dfrac{b}{a_{o}W}dt, \end{aligned}$$
(23)

or, in the scaled form,

$$\begin{aligned} x_{p}=x_{2}\displaystyle \int x_{1}\dfrac{b}{W}dt - x_{1}\displaystyle \int x_{2}\dfrac{b}{W}dt. \end{aligned}$$
(24)
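The formulas of Observation 2.1.1.3 can be checked symbolically; the sketch below does so for the assumed scaled equation \(\ddot{x}+\dfrac{1}{t}\dot{x}-\dfrac{1}{t^{2}}x=t\) with the known particular solution \(x_{1}=t\), using Liouville's formula (22) and then (23), (24). The example is an illustrative assumption, not one from the paper.

```python
# Sketch: symbolic check of (22)-(24) on an assumed scaled equation
# x'' + (1/t) x' - (1/t^2) x = t, with the known particular solution x1 = t.
import sympy as sp

t = sp.symbols('t', positive=True)
a1, a2, b = 1/t, -1/t**2, t
x1 = t

W = sp.simplify(sp.exp(-sp.integrate(a1, t)))     # Liouville's formula (22), scaled form
x2 = x1 * sp.integrate(W / x1**2, t)              # first relation of (23)
xp = x2 * sp.integrate(x1 * b / W, t) - x1 * sp.integrate(x2 * b / W, t)   # (24)

residual = lambda y: sp.simplify(y.diff(t, 2) + a1 * y.diff(t) + a2 * y)
print(sp.simplify(x2), sp.simplify(xp))           # -> -1/(2*t), t**3/8
print(residual(x2), sp.simplify(residual(xp) - b))   # -> 0, 0
```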

Some transformations of the homogeneous equation in the scaled form, by change of variable, respectively by change of function, are presented below.

Theorem 2.1.1.1 (transformation) i) (change of variable) For \(i\in {\{1,2\}}\) by the change of variable

$$\begin{aligned} v_{i}=v_{i}(t)= \displaystyle \int w_{i}(t)dt, \end{aligned}$$
(25)

where \(w_{i}=\varphi \circ (\alpha a_{i} ),\, \varphi \in C_{{\mathbb{R}}_{[+]}}^{1},\, \alpha \in \mathbb {R}^{*}\), the equation (13) (in the native scaled form) is transformed into

$$\begin{aligned} x''_{v_{i}^{2}} + {}_{i}a_{1}x'_{v_{i}^{}}+ {}_{i}a_{2}x=0, \end{aligned}$$
(26)

where \({}_{i}a_{1}=\dfrac{\dot{w_{i}}}{w_{i}^{2}}+\dfrac{{a_{1}}}{w_{i}^{}} ,\,{}_{i}a_{2}=\dfrac{{a_{2}}}{w_{i}^{2}}\) and \( \dot{w_{i}}=\alpha \varphi '(\alpha a_{i})\dot{a_{i}} \);

ii) (change of function) For \(i \in {\{1,2\}}\) by the change of function

$$\begin{aligned} x=\varepsilon _{i}u, \end{aligned}$$
(27)

where \(u=u(t),\, \varepsilon _{i}=\varepsilon _{i}(t),\, \varepsilon _{i}=\varphi \circ (\alpha b_{i})\), \( b_{i}(t)=\displaystyle \int a_{i}dt\), \(\varphi \in C_{{\mathbb{R}}_{[+]}}^{2}\), the equation (13) is transformed into

$$\begin{aligned} \ddot{u}+{}_{i}b_{1}\dot{u} + {}_{i}b_{2}u=0, \end{aligned}$$
(28)

where \({}_{i}b_{1} = 2 \alpha \dfrac{\varphi ' }{\varphi }(\alpha b_{i})a_{i}+a_{1}\), \({}_{i}b_{2} = \alpha ^{2}\dfrac{\varphi ''}{\varphi }(\alpha b_{i})a_{i}^{2}+\alpha \dfrac{\varphi '}{\varphi } (\alpha b_{i})a_{1}a_{i} + \alpha \dfrac{\varphi '}{\varphi }(\alpha b_{i})\dot{a_{i}}+a_{2}\).

Proof

i) In each case \(i\in {\{1,2\}}\) according to the relationship of functional dependence \(x=x(t)=x(v_{i}(t))\) it is obtained successively \(\dot{x}=\dot{v_{i}}x'_{v_{i}}=w_{i}x'_{v_{i}}\), \(\ddot{x}=\dot{w}_{i}x'_{v_{i}}+w^{2}_{i}{x}''_{v_{i}^{2}}\), \(\ddot{x}+a_{1}\dot{x}+a_{2}x=w_{i}^{2}x''_{v_{i}^{2}}+(\dot{w}_{i}+ w_{i}a_{1})x'_{v_{i}}+a_{2}x=w_{i}^{2}( x''_{v_{i}^{2}}+ {}_{i}a_{1}x'_{v_{i}}+{}_{i}a_{2}x)=0\), \(x''_{v_{i}^{2}}+{}_{i}a_{1}x'_{v_{i}}+{}_{i}a_{2}x=0\) and \(\dot{w_{i}}=\alpha \varphi '(\alpha a_{i})\dot{a_{i}}\).

ii) In each case \(i\in {\{1,2\}}\) one obtains successively \(\dot{x}=\alpha \varphi '(\alpha b_{i})a_{i}u + \varepsilon _{i}\dot{u}\), \(\ddot{x}=\alpha ^{2} \varphi ''(\alpha b_{i})a_{i}^{2}u+\alpha ^{} \varphi '(\alpha b_{i})\dot{a}_{i}u+\alpha ^{} \varphi '(\alpha b_{i})a_{i}\dot{u}+\alpha ^{} \varphi '(\alpha b_{i})a_{i}\dot{u}+\varepsilon _{i}\ddot{u}= \varphi (\alpha b_{i})\ddot{u} + 2\alpha \varphi ' (\alpha b_{i})a_{i}\dot{u} + \left( \alpha ^{2}\varphi ''(\alpha b_{i})a_{i}^{2}+\alpha \varphi ^{'}(\alpha b_{i})\dot{a_{i}}\right) u\). Also, \(\ddot{x} + a_{1}\dot{x}+a_{2}x=\varphi (\alpha b_{i})\ddot{u}+\left( 2\alpha \varphi ^{'}(\alpha b_{i})a_{i} +\varphi ( \alpha b_{i})a_{1}\right) \dot{u} + (\alpha ^{2}\varphi ''(\alpha b_{i})a_{i}^{2} + \alpha \varphi ^{'} (\alpha b_{i})a_{1}a_{i} + \alpha \varphi '(\alpha b_{i})\dot{a_{i}} +\varphi (\alpha b_{i})a_{2})u\) \(=\varphi (\alpha b_{i})(\ddot{u}+{}_{i}b_{1}\dot{u}+{}_{i}b_{2}u)=0\), hence \(\ddot{u}+ {}_{i}b_{1}\dot{u}+{}_{i}b_{2}u=0\). \(\square \)

Observation 2.1.1.4 i) (remarkable particular cases) Relative to point i) of the previous theorem, in the particular case \(w_{2}^{2}=|a_{2}|\), i.e. \(i=2,\,\alpha =1,\, w_{2}=\varphi (a_{2})=\sqrt{|a_{2}|}\), one obtains the scaled and absolute dual scaled form

$$\begin{aligned} \ddot{x} + {}_{|s'|}a_{1}\dot{x} + { sgn}(a_{2})x=0, \end{aligned}$$
(29)

where \({}_{|s'|}a_{1} = \dfrac{\dot{w_{2}}}{w_{2}^{2}} + \dfrac{a_{1}}{w_{2}} = \dfrac{1}{|a_{2}|}\dot{w_{2}}+\dfrac{a_{1}}{\sqrt{|a_{2}|}}\), \(\dot{w_{2}}=\dfrac{\sqrt{|a_{2}|}}{2}\cdot \dfrac{\dot{a_{2}}}{a_{2}}\). This form is formally mentioned as an equation with constant coefficients in [14]. Relative to point ii), in the particular case \(\varepsilon _{1}=\mathrm {e}^{-\frac{1}{2}b_{1}(t)}\), i.e. \(i=1,\,\alpha =-\dfrac{1}{2},\,\varphi =\exp \), one obtains the “reduced” canonical form

$$\begin{aligned} \ddot{u} + au=0, \end{aligned}$$
(30)

where \(a = a_{2} - \dfrac{1}{4}a_{1}^{2}-\dfrac{1}{2}\dot{a_{1}}\). This form is mentioned as a canonical form relative to the equation (10) in [14]. Notice that for this form the following hold (see Observations 2.1.1.2, 2.1.1.3)

$$\begin{aligned} W = 1,\quad x_{2}=x_{1}\displaystyle \int \dfrac{dt}{x_{1}^{2}}. \end{aligned}$$
(31)

ii) (reduction to the Riccati equation) By the change of function

$$\begin{aligned} u=\dfrac{\dot{x}}{x},\quad u=u(t), \end{aligned}$$
(32)

one obtains the Riccati equation

$$\begin{aligned} \dot{u}=\alpha u^{2}+\beta u+\gamma , \end{aligned}$$
(33)

where \(\alpha =-1, \,\beta =-a_{1},\, \gamma = -a_{2}.\) Indeed, one obtains successively \(\dot{x}=xu\), \(\ddot{x}=\dot{x}u+x\dot{u}\), \( \ddot{x}+a_{1}\dot{x}+a_{2}x=\dot{x}u+x\dot{u}+a_{1}xu+a_{2}x= x(u^{2}+\dot{u}+a_{1}u+a_{2})=0\), hence \( \dot{u}=\alpha u^{2}+\beta u+\gamma \) with \(\alpha =-1,\, \beta =-a_{1}, \,\gamma = -a_{2}.\) This is mentioned in another form in [14]. Notice that

$$\begin{aligned} x=c\mathrm {e}^{v}, \end{aligned}$$
(34)

where \(v=\displaystyle \int u dt\), \(c \in {\mathbb{R}}^{*}\). For the forms (30), respectively (29), the “reduced” Riccati equations are respectively

$$\begin{aligned} \dot{u}=-(u^{2}+a)~, ~\dot{u}=-(u^{2}+au)-{ sgn}(a_{2}), \end{aligned}$$
(35)

where \(a=a_{2}-\dfrac{1}{4}a_{1}^{2}-\dfrac{1}{2}\dot{a}_{1}\), respectively \(a=\dfrac{1}{\left| a_{2} \right| }\dot{w_{2}}+\dfrac{a_{1}}{\sqrt{\left| a_{2} \right| }},\dot{w_{2}}=\dfrac{\sqrt{\left| a_{2} \right| }}{2}\cdot \dfrac{\dot{a_{2}}}{a_{2}}\) (see point i)).

Example 2.1.1.1 (the case of the Euler and Bessel equations) For the 2nd-order Euler and Bessel equations the absolute dual scaled form does not have constant coefficients.

For the determination of the equations that are reducible to equations with constant coefficients of type (29) (see Observation 2.1.1.4, i)), an extension of the Bernoulli equation is required, namely the “absolute” Bernoulli equation.

Definition 2.1.1.2 (“absolute” Bernoulli equation) The “absolute” Bernoulli equation (with temporal notation) is the equation

$$\begin{aligned} \dot{x} = ax + b|x|^{r}, \end{aligned}$$
(36)

where \(x \in C_{I}^{1},\,0 \notin J =codom(x),\,I,J\subseteq {\mathbb{R}}\) are intervals, \(a,b \in C_{I}^{0},\, r \in {\mathbb{R}}^{*} \backslash \{1\}\).

Theorem 2.1.1.2 (transformation of the “absolute” Bernoulli equation) By the change of function

$$\begin{aligned} u=x\left| x \right| ^{-r}, \end{aligned}$$
(37)

the “absolute” Bernoulli equation (36) is transformed into a first-order nonhomogeneous linear equation.

Proof

It is obtained \(\dot{u}=\dot{x}\left| x \right| ^{-r}-rx\left| x \right| ^{-r-1}\cdot \dfrac{\left| x \right| }{x}\dot{x}=(1-r)\left| x \right| ^{-r}\dot{x}\); the equation becomes

$$\begin{aligned} \dot{u}=(1-r)au+(1-r)b, \end{aligned}$$
(38)

or

$$\begin{aligned} \dot{u}=(1-r)(au+b). \end{aligned}$$
(39)

\(\square \)

Theorem 2.1.1.3 (conditional reduction) The “absolute” Bernoulli equation

$$\begin{aligned} \dot{x}=-2a_{1}x+2\alpha \left| x \right| ^{\frac{3}{2}}, \end{aligned}$$
(40)

where \(\alpha \in \mathbb {R}^{*}\) has the general integral

$$\begin{aligned} x\left| x \right| ^{-\frac{3}{2}}=\mathrm {e}^{b_1} \left( c-\alpha \displaystyle \int e^{-b_1}dt \right) , \end{aligned}$$
(41)

where \(b_{1}=\displaystyle \int a_{1}dt,\,c \in \mathbb {R}^{*}\).

Proof

By the change of function (see Theorem 2.1.1.2)

$$\begin{aligned} u=x\left| x \right| ^{-\frac{3}{2}},\quad u=u(t), \end{aligned}$$
(42)

it is obtained \(\dot{u}=-\dfrac{1}{2}\left| x \right| ^{-\frac{3}{2}}\dot{x}\) and the equation becomes

$$\begin{aligned} \dot{u}=a_{1}u-\alpha \end{aligned}$$
(43)

(first-order nonhomogeneous linear equation); the associated homogeneous equation has the general solution \(u_{o}=cu_{1},\,u_{1}=\mathrm {e}^ {b_{1}},\,b_{1}=\displaystyle \int a_{1}dt\) and a particular solution for (43) is obtained from \(\dot{c}u_{1}=-\alpha ,\, \dot{c}=-\alpha \mathrm {e}^ {-b_{1}},\, c=-\alpha \displaystyle \int e^ {-b_{1}} dt\), \(u_{p}=-\alpha \mathrm {e}^ {b_{1}} \displaystyle \int \mathrm {e}^ {-b_{1}} dt\). Finally, the general integral is given by (41). \(\square \)

Observation 2.1.1.5 (i) (positivity) For \(J=codom(x) \subseteq {\mathbb{R}}_{+}^{*}\) one obtains the general solution (a numerical check of this case is sketched at the end of this observation)

$$\begin{aligned} x=\mathrm {e}^{-2b_{1}}\left( c-\alpha \displaystyle \int \mathrm {e}^{-b_{1}}dt\right) ^{-2}, \end{aligned}$$
(44)

where \(b_{1}=\displaystyle \int a_{1}dt,\,c\in \mathbb {R}^{*}\);

(ii) (the linear image of a pseudo-linear extension) In [RN] (see footnote 1) Notingher models are presented as first and higher order pseudo-linear differential equations. In addition, the linear image of a pseudo-linear equation, respectively a pseudo-linear extension of a linear equation, is defined there; a pseudo-linear extension is convenient relative to the approximation of the Cauchy problem solution with the contraction principle.
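The general solution (44) of Observation 2.1.1.5 i) can be checked numerically for an assumed constant coefficient; the sketch below takes \(a_1=1\), \(\alpha =1\), \(c=1\), so that \(b_1=t\) and \(x(t)=\mathrm {e}^{-2t}(c+\mathrm {e}^{-t})^{-2}>0\), and evaluates the residual of (40). The values are illustrative assumptions.

```python
# Sketch (assumed constant coefficient): a_1 = 1, alpha = 1, c = 1, so b_1 = t and,
# by (44), x(t) = exp(-2t)*(c + exp(-t))**(-2) > 0 should satisfy equation (40).
import numpy as np

a1, alpha, c = 1.0, 1.0, 1.0
t = np.linspace(0.0, 3.0, 3001)
x = np.exp(-2.0 * t) * (c + np.exp(-t))**(-2)

dx = np.gradient(x, t)                                     # numerical derivative of x
residual = dx - (-2.0 * a1 * x + 2.0 * alpha * np.abs(x)**1.5)   # equation (40)
print("max |residual| ≈", np.abs(residual).max())          # small; limited by np.gradient
```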

2.2 Laplace Operational Method

According to the derivation theorem, the linearity of the Laplace operator \(\mathcal L\) and Sect. 2.1, the operational equation associated with the Cauchy problem is (see [11, 15])

$$\begin{aligned} p(u)X(u)=A(u)+B_{m}(u), \end{aligned}$$
(45)

where

$$\begin{aligned} &X=\mathcal L(x),\quad B_{m}=\mathcal L(b_{m})~,\\&A(u)=x_{0}\,p_{-1}(u)+v_{0}\,p_{-2}(u)\\&p(u)=u^{2}+c_{m}u+k_{m} \\&p_{-1}(u)=\dfrac{1}{u}\left( p(u)-k_{m}\right) =u+c_{m}~,\\&p_{-2}(u)=\dfrac{1}{u}\left( p_{-1}(u)-c_{m}\right) =1. \end{aligned}$$
(46)

The solution of (45) is

$$\begin{aligned} X(u)=\dfrac{A(u)+B_{m}(u)}{p(u)}=X_{o}(u)+X_{p}(u), \end{aligned}$$
(47)

where \(X_{o}(u)=\dfrac{q_{0}(u)}{p(u)}\) is independent of \(b_{m}\) (see Sect. 2.1), \(q_{0}(u)=x_{0}(u+c_{m})+v_{0}\), \(X_{p}(u)=\dfrac{B_{m}(u)}{p(u)}\). If \(b_{m}\) is a quasi-polynomial, so that \(X_{p}\) is a rational function, then \(\mathcal L^{-1}(X_{p})\) is a quasi-polynomial (according to Sect. 3.1). For some developments see Sect. 3.2.
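The operational scheme (45)–(47) can be reproduced with a computer algebra system; the sketch below uses SymPy's Laplace transform pair on an assumed example (the values of \(c_m\), \(k_m\), \(b_m\), \(x_0\), \(v_0\) are illustrative only).

```python
# Sketch (illustrative assumptions only): the operational solution (47) recovered
# with SymPy's Laplace transform pair for c_m = 2, k_m = 5, b_m(t) = exp(-t),
# x0 = 1, v0 = 0.
import sympy as sp

t, u = sp.symbols('t u', positive=True)
c_m, k_m, x0, v0 = 2, 5, 1, 0
b_m = sp.exp(-t)

B_m = sp.laplace_transform(b_m, t, u, noconds=True)   # B_m(u) = 1/(u + 1)
p = u**2 + c_m * u + k_m                               # p(u) of (46)
A = x0 * (u + c_m) + v0                                # A(u) of (46)
X = sp.apart((A + B_m) / p, u)                         # X(u) of (47), in partial fractions
x = sp.inverse_laplace_transform(X, u, t)
print(sp.simplify(x))                                  # closed-form x(t) (with Heaviside(t))
```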

2.3 Analytical Conditional Form

If the disturbance \(b_{m}\) has an analytical form, i.e. \(b_{m}= \sum _{n\in \mathbb N}\mu _{n}\,t^{n}\), then the displacement x may have the form \(x(t)= \sum _{n\in \mathbb N}\lambda _{n}\,t^{n}\), on \(I_{b}=[0,r_{b}]\), respectively \(I_{x}=[0,r_{x}]\subset {\mathbb{R}}_{+}\) (relative to the positive parts of the convergence intervals [14]). The initial conditions give \(\lambda _{0}=x_{0},\lambda _{1}=v_{0}\); in addition, recurrence relations between the coefficients are obtained (according to Eq. 3), and finally it results that x may have no analytical form on the interval \(I_{x}\). For some developments see Sect. 3.3.

2.4 Approximations with Error Evaluation

The Cauchy problem (3) becomes

$$\begin{aligned} z'=h(t,z),\quad z(0)=z_{0},\quad z_{0}=(x_{0},v_{0}), \end{aligned}$$
(48)

where \(z=z(x,y)\), \(h=(f,g)\), \(f(t,x,y)=y\), \(g(t,x,y)=b_{m}-k_{m}x-c_{m}y\) (a vectorial differential equation with positive definite functions). In terms of the existence and uniqueness theorem (see [11, 15]) the approximation chain (in a Banach subspace \(\mathcal F_{0}\) with respect to the norm \(\Vert \cdot \Vert _{u}\)) is given by

$$\begin{aligned} z_{n+1}(t)=z_{0}+\displaystyle \int _{0}^{t}h\left( u,z_{n}(u)\right) {\mathrm {d}}u,\quad z_{0}=(x_{0},v_{0}), \end{aligned}$$
(49)

or respectively analytically by

$$\begin{aligned} &x_{n+1}(t)=x_{0}+\displaystyle \int _{0}^{t}f\left( u,x_{n}(u)y_{n}(u)\right) \,{\mathrm {d}}u\\&\qquad ~\quad = x_{0}+\displaystyle \int _{0}^{t}y_{n}(u)\,{\mathrm {d}}u,\\&y_{n+1}(t)=v_{0}+\displaystyle \int _{0}^{t}g_{n}(u) \,{\mathrm {d}}u= v_{0}+\varDelta G_{n}(t), \end{aligned}$$
(50)

where \(g_{n}(t)=b_{m}(t)-\left( k_{m}x_{n}(t)+c_{m}y_{n}(t)\right) \), \(\varDelta G_{n}(t)=G_{n}(t)-G_{n}(0)\) and \(G_{n}(t)\) is a primitive of \(g_{n}\). The approximation \(z_{N}=(x_{N},y_{N})\) with \(p\in \mathbb N^{*}\) crisp decimal places results from

$$\begin{aligned} \widetilde{c}^{n}<\dfrac{1-\widetilde{c}}{\delta }\,\varepsilon,\quad \varepsilon =10^{-p}, \end{aligned}$$
(51)

according to Sect. 3.4; on the other hand in a particular case N is comparable to p (see Sect. 3.4 again).
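A direct numerical realisation of the chain (49), (50) is sketched below, with the integrals replaced by a cumulative trapezoidal rule on a uniform grid; the grid, the number of iterations and the data \(c_m\), \(k_m\), \(b_m\), \(x_0\), \(v_0\), r are assumptions for illustration.

```python
# Sketch: the approximation chain (49)-(50) realised numerically on a uniform grid.
import numpy as np

c_m, k_m = 2.0, 5.0
x0, v0 = 1.0, 0.0
r = 0.2                                    # the interval I_r^+ = [0, r] (assumed)
t = np.linspace(0.0, r, 401)
b_m = np.sin(3.0 * t)                      # assumed disturbance sampled on the grid

def cumtrapz0(f):
    """Cumulative trapezoidal integral of f from 0 to t over the grid."""
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
    return out

x_n = np.full_like(t, x0)                  # z_0 = (x_0, v_0), constant starting functions
y_n = np.full_like(t, v0)
for n in range(8):                         # a few Picard iterations
    g_n = b_m - (k_m * x_n + c_m * y_n)
    x_n, y_n = x0 + cumtrapz0(y_n), v0 + cumtrapz0(g_n)

print("x(r) ≈", x_n[-1], "  y(r) ≈", y_n[-1])
```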

2.5 Dyadic Wavelet Method

It will be assumed that the SDOF system (1) is dynamically forced with a harmonic function (i.e., under harmonic loading [3, 16]). Formally, the general wavelet solution of the Cauchy problem for the governing equation of motion of the system (1) is

$$\begin{aligned} X_{w}(t)=x_{w_{o}}(t)+x_{w_{p}}(t), \end{aligned}$$
(52)

where \(x_{w_{p}}(t)\) is the particular wavelet solution of the problem and the eigenfunction \(x_{w_{o}}(t)\) represents the solution of the homogeneous equation associated to Eq. (3); the two coefficients \(c_1\), \(c_2\) (see (5)), denoted by \(K_{1}\), \(K_2\), are determined by applying the initial conditions \(X_{w}(0)=x_{0}\) and \(\dot{X}_{w}(0)=v_{0}\) to the solution (52), see Sect. 4.

2.6 Approximation (Transform) Fuzzy Method

In general, when a realistic problem is transformed into a deterministic initial value problem for ordinary differential equations (e.g., (2)), the following dilemma appears, namely whether that model is optimal. For example, the initial value may not be known exactly and the function \(F\left[ t,x,\dot{x}(t),f(t)\right] \) from the SDOF model may contain uncertain parameters. Thus, if these parameters are estimated through certain measurements, then errors inevitably occur. The analysis of the effect of these errors leads to the study of the qualitative behavior of the solutions of Eq. (2), and in this case it is natural to use fuzzy initial value problems (FIVPs), see [17]. In particular, FIVPs are an important topic both from the theoretical point of view and for their applications, for example in the civil engineering domain [18]. In general, FIVPs do not always have solutions which can be obtained using analytical methods. In fact, many real physical phenomena are almost impossible to solve by this technique. For this reason, numerical methods for approximating the solutions of FIVPs have been proposed in the literature: the predictor-corrector method (PCM), the continuous genetic algorithm (CGA), the artificial neural network approach (ANNA) (see [19]), etc. The homotopy analysis method (HAM) proposed by Liao [20,21,22] is effectively and easily used to solve some classes of linear and nonlinear problems without linearization, perturbation, or discretization. The HAM is based on a basic concept in topology, more precisely the homotopy. An auxiliary parameter is introduced to construct the so-called “zero-order deformation equation”, and the HAM provides a family of solution expressions which depend on this auxiliary parameter; the convergence region and rate of the solution series can be controlled by choosing a proper value of the auxiliary parameter. Thus, using the HAM for the linear and nonlinear cases, the results obtained are very effective.

2.7 Grammatical Evolution Method

An EA (evolutionary algorithm), in particular the canonical GA (genetic algorithm), can be characterised by

$$\begin{aligned} x(t+1)=r(v(s(x(t)))), \end{aligned}$$
(53)

where x(t) is the population of encodings at iteration t and r, v, s are the “evolutionary” operators of replacement, variation (crossover and mutation) and selection, respectively (see [23]); the solution of the Cauchy problem of Sect. 1 is sought within such a “population”. In this regard, a first variant is an adaptation of the hybrid algorithm (for solving linear and partial differential equations) proposed by He et al. in [24]; the hybridization consists of combining techniques of EA with some classical numerical methods, see [25] and [26] for a literature survey on EA in China.

The second and third variants are adaptations of the GP (genetic programming) algorithms proposed by Burgess [27], by Cao et al. [28] and by Iba and Sakamoto [29], which induce the underlying differential equation from experimental data. GP, as a tree representation of computer programs, was developed in Koza’s four books (the first is [30]) and was popularized in [31]; in addition, see [32] for new developments in key applications of GP (and its variants), such as specialized applications, hybridized systems (which “marry” GP to other technologies), and a detailed look at some recent GP software releases.

The fourth variant is an adaptation of the GE (grammatical evolution) algorithm proposed by Tsoulos and Lagaris [33]; GE, as a grammar-based form of GP, was developed in [34, 35] (financial modelling) and [36] (dynamic environments). GE uses a genotype–phenotype mapping where the linear genome selects production rules from a BNF (Backus–Naur Form) grammar to map down to a syntactically correct program; as translation model, the modulo rule defines a degenerate mapping from an l-bit codon value to a choice ch of a production rule

$$\begin{aligned} ch(s)=\texttt {codon}\,\%\, n_{s}, \end{aligned}$$
(54)

where s is the current (active) non-terminal, \(\left( r_{s1}, \ldots , r_{sn_{s}}\right) \) are the production rules for s and the choice ch(s) selects the rule \(r_{s,j}\) with \(j=ch(s)+1\in \left\{ 1,\ldots ,n_{s}\right\} \) (in addition, many codon values map to the same choice of rule, and usually \(l=8\)).

The process of replacing non-terminals stops if (a minimal mapping sketch follows the list below):

  1. i.

    a phenotype (a complete computer program) is generated (all the non-terminals in the expression being mapped are transformed into terminals);

  2. ii.

    the end of the genome is reached, in which case the wrapping operator is invoked (the reading frame returns to the left hand side of the genome once again, until the maximum number of wrapping events has occurred); an incompletely mapped individual is assigned the lowest possible fitness value.
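A minimal genotype-to-phenotype mapper in this GE style, using the modulo rule (54) and the stopping conditions above, is sketched below; the toy grammar only mimics the structure of \(\mathcal {G}_{{C}}\), and the grammar, the codon values and the wrapping limit are illustrative assumptions rather than the grammar of Sect. 2.7.1.

```python
# Sketch: a minimal genotype-to-phenotype mapping in the GE style, with the modulo
# rule and the wrapping operator; grammar, codons and limits are assumptions.
GRAMMAR = {                                    # non-terminal -> list of productions
    "<e>": [["<e>", "<o>", "<e>"], ["(", "<e>", ")"], ["<f>", "(", "<e>", ")"], ["<c>"], ["t"]],
    "<o>": [["+"], ["-"], ["*"], ["/"]],
    "<f>": [["sin"], ["cos"], ["exp"], ["log"]],
    "<c>": [[str(d)] for d in range(10)] + [["alpha"], ["beta"]],
}

def map_genotype(codons, start="<e>", max_wraps=2):
    expr, i, wraps = [start], 0, 0
    while any(s in GRAMMAR for s in expr):
        if i == len(codons):                   # end of genome reached
            if wraps == max_wraps:
                return None                    # incompletely mapped individual
            i, wraps = 0, wraps + 1            # wrapping operator
        j = next(k for k, s in enumerate(expr) if s in GRAMMAR)
        rules = GRAMMAR[expr[j]]
        choice = codons[i] % len(rules)        # the modulo rule (54)
        expr[j:j + 1] = rules[choice]
        i += 1
    return " ".join(expr)

print(map_genotype([2, 2, 3, 10]))             # -> "exp ( alpha )"
```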

In [37] the integer-based genotypic representation and the binary-based genotypic representation, with the associated mutation operators, are analyzed, with a statistically significant advantage for the integer-based genotypic representation. In [38] a low locality of the GE representation is reported, i.e. a non-correspondence between neighboring genotypes and neighboring phenotypes. In this regard (for further research), relative to the partially adapted grammar \(\mathcal {G}_{{C}}\), an admissible chromosome is sought for the displacement \(x(t)=\mathrm {e}^{\alpha t}\sin (\beta t)\) (see Sect. 1 and Sect. 2.7.1). GP, and particularly GE, have structural difficulty (they are unable to search effectively for some solutions); more specifically, GE outperforms standard GP if optimal solutions are composed of very narrow and deep structures, whereas GP outperforms GE where optimal solutions require denser trees (see [39]).

According to [33], the “general form” of the Cauchy problem (2), (3) is considered

$$\begin{aligned}&f\left( t,x(t), \dot{x}(t), \ddot{x}(t)\right) = 0, \\&f\left( t,x,\dot{x},\ddot{x}\right) = \ddot{x}+c_{m}\dot{x}+k_{m}x-b_{m} ~,\quad t\in [0,r] \end{aligned}$$
(55)

and the initial conditions are (have the form)

$$\begin{aligned}&g\left( 0,x(0),\dot{x}(0)\right) = 0,\quad g=\left( g_{1},g_{2}\right) , \\&g_{1}\left( 0,x(0),\dot{x}(0)\right) =x(0)-x_{0}~,\\&g_{2}\left( 0,x(0),\dot{x}(0)\right) =\dot{x}(0)-v_{0}. \end{aligned}$$
(56)

In addition, a division of [0, 1] into equidistant points \(\left( t_{0},\ldots ,t_{n}\right) \) is considered and, for every chromosome c, the corresponding model \(M_{c}(t)\) expressed in the grammar \(\mathcal {G}_{{C}}\) (see Sect. 2.7.1). The fitness value of c is

$$\begin{aligned} v_c&=E\left( M_c\right) +P\left( M_c\right) , \\ E\left( M_c\right)&=\displaystyle \sum _{i=0}^{n} f\left( t,M_{c}(t_{i}),\dot{M}_{c}(t_{i}),\ddot{M}_{c}(t_{i})\right) \\&=\displaystyle \sum _{i=0}^{n} \ddot{M}_{c}(t_{i}) + c_{m}\displaystyle \sum _{i=0}^{n}\dot{M}_{c}(t_{i})\\&\quad + k_{m}\displaystyle \sum _{i=0}^{n}M_{c}(t_{i})- \displaystyle \sum _{i=0}^{n} b_{m}(t_{i}),\\ P(M_{c})&=\lambda \displaystyle \sum _{i=1}^{2} g_{i}^{2}\left( 0,M_{c}(0),\dot{M}_{c}(0)\right) \\&=\lambda \left[ \left( M_{c}(0)-x_{0}\right) ^{2}+\left( \dot{M}_{c}(0)-v_{0}\right) ^{2}\right], \end{aligned}$$
(57)

where \(P(M_{c})\) is the penalty function and \(\lambda \in {\mathbb{R}}_{+}^{*}\) is the scale parameter. Solutions are expressed in an analytical closed form; in particular the exact solution is recovered if \(b_{m}\) is a quasi-polynomial. For the derivatives three different stacks are used, respectively for \(M_c\), \(\dot{M}_{c}\), \(\ddot{M}_{c}\), according to the differentiation rules adopted by the various automatic differentiation methods (see [30]); specifically, constant creation is performed using a meta-grammar (a grammar of grammars, see [36]). As an update of the fourth variant, the control sequences of Sect. 1 (developed in Sect. 3.1) can be obtained with control genes and with the behaviour switching BNF grammar definition (BNF-BS, see [40]).

2.7.1 Admissible Chromosome—Relative to \(\mathcal {G}_{{C}}\)

The partially adapted grammar \(\mathcal {G}_{{C}}=\left\{ s_{0},N,T,R\right\} \) (which creates the constants of the set \(C=\{\alpha ,\beta \}\subset T\), see [37]) is:

$$\begin{aligned}&N=\{s_{0}, s_{1}=\mathrm {e}, s_{2}=o, s_{3}=f, s_{4}=c\}, \\&T=T_{1}\cup T_{2}\cup T_{3}\cup T_{4}, T_{1}=\{t_{10}=t\}, \\&T_{2}=\left\{ t_{20}=+,t_{21}=-,t_{22}=*,t_{23}=/\right\} ,\\&T_{3}=\left\{ t_{30}=\sin ,t_{31}=\cos ,t_{32}=\exp , t_{33}=\ln \right\} , \\&T_{4}=\left\{ t_{4k}=k,k=\overline{0,9}, t_{4,10}=\alpha , t_{4,11}=\beta \right\} ,\\&R=R_{0}\cup R_{1}\cup R_{2}\cup R_{3}\cup R_{4},\\&R_{0}=\left\{ r_{00}\simeq s_{0}::=<\mathrm {e}>\right\} ,\\&R_{1}=\left\{ r_{10},r_{11},r_{12},r_{13},r_{14}\right\} \\&\simeq \left\{<\mathrm {e}>::=<\mathrm {e}> <o> <\mathrm {e}>|(<\mathrm {e}>)|<f>\right. \\&\left. (<\mathrm {e}>)|<c>|t\right\} ,\\&R_{2}=\left\{ r_{20},r_{21},r_{22},r_{23}\right\} \simeq \left\{<o>::=+|-|*|/\right\} ,\\&R_{3}=\left\{ r_{30},r_{31},r_{32},r_{33}\right\} \simeq \left\{<f>::=\sin |\cos |\exp |\ln \right\} ,\\&R_{4}=\left\{ r_{40},\ldots ,r_{49},r_{4,10},r_{4,11}\right\} \\& \simeq \left\{ <c>::=0|\ldots |9|\alpha |\beta \right\} .\\ \end{aligned}$$

An admissible chromosome for \(x(t)=\mathrm {e}^{\alpha t}\sin (\beta t)\) (according to point 1) is obtained for \(l=\overline{1,16}\) by \(r_{ij}^{l}\vdash m_{l}=\left( m_{l1},m_{l2}\right) \), \(m_{l1}=\left| R_{i}^{l}\right| \), \(m_{l2}=j\), \(i=\overline{1,4}\), \(j=\overline{1,11}\); consequently relative to the chromosome \(c=\left( c_{1}\right. ,\) \(\left. \ldots ,c_{16}\right) \) the codon \(c_{l}\) is \(c_{l}=m_{l1}k_{l}+m_{l2}\in [0,255]\), \(l=\overline{1,16}\) where \(_{-1}c=\left\{ c_{2},\ldots ,c_{16}\right\} \), \(_{-2-}c=\left\{ c_{3},\dots ,c_{16}\right\} \), ..., \(_{-14-}c=\left\{ c_{15},c_{16}\right\} \), \(_{-15-}c\simeq c_{16}\), see Table 1. The head of the table is: line l, \(l=\overline{1,17}\)—for short line, string, rule \(r_{ij}^{l}\), \(l=\overline{1,16}\), \(i=\overline{0,4}\), \(j=\overline{0,11}\)—for short rule, map \(m_{l}=\left( m_{l1},m_{l2}\right) \), \(l=\overline{1,16}\)—for short map, chromosome \(c=\left( c_{1},\dots ,c_{16}\right) \)—for short chromosome, codon \(c_{l}\), \(l=\overline{1,16}\)—for short codon.

The first wrapping is on 13 (\(c_{1}=c_{14}=35\), \(c_{2}=c_{15}=22\), \(c_{3}=c_{16}=14\)); no wrapping respectively on 10 (\(c_{2}\ne c_{12}\) in [0, 255]), 11 (\(c_{2}\ne c_{13}\), \(c_{3}\ne c_{14}\), \(c_{5}\ne c_{16}\)), 12 (\(c_{1}\ne c_{13}\), \(c_{4}\ne c_{16}\)).

Table 1 Admissible chromosome

3 Some Developments of Classical Mathematics

In this section the methods presented in Sects. 2.1–2.4 are developed. On the other hand, Sect. 3.1.1 uses results from Sect. 2.1.1.

3.1 Second-Order Linear Nonhomogeneous Differential Equation with Constant Coefficients

In the case “\(b_{m}\) non-quasi-polynomial”, \(x_{p}(t)\) is obtained by the constant variation method CVM (see [12, 13]), i.e., \(c_{1}=c_{1}(t)\), \(c_{2}=c_{2}(t)\) in \(x_{p}=c_{1}x_{1}+c_{2}x_{2}\), where \(\dot{c}_{1}\), \(\dot{c}_{2}\) are the solutions of the determined compatible system

$$\begin{aligned} \dot{c}_{1}x_{1}+\dot{c}_{2}x_{2}=0,\quad \dot{c}_{1}\dot{x}_{1}+\dot{c}_{2}\dot{x}_{2}=b_{m}, \end{aligned}$$
(58)

where

$$\begin{aligned} &\varDelta _{w}={\begin{vmatrix} x_{1}&x_{2} \\ \dot{x}_{1}&\dot{x}_{2} \\ \end{vmatrix}}=x_{1}\dot{x}_{2}-\dot{x}_{1}x_{2} ~,\\\\&\varDelta _{1}={\begin{vmatrix} 0&x_{2} \\ b_{m}&\dot{x}_{2} \\ \end{vmatrix} }=-x_{2}b_{m} ,\quad \varDelta _{2}={\begin{vmatrix} x_{1}&0 \\ \dot{x}_{1}&b_{m}\\ \end{vmatrix} }=x_{1}b_{m}\\&\dot{c}_{1}=\dfrac{\varDelta _{1}}{\varDelta _{w}}=-\dfrac{x_{2}b_{m}}{\varDelta {w}} , \quad \dot{c}_{2}=\dfrac{\varDelta _{2}}{\varDelta _{w}}=\dfrac{x_{1}b_{m}}{\varDelta {w}}. \end{aligned}$$
(59)

Relative to the particular solutions \(x_{1}\), \(x_{2}\) there are the following three cases (a symbolic check of case iii) is sketched after the list), according to the nature of the solutions of the characteristic equation (see [12, 13])

$$\begin{aligned} &p(r)=0,\quad p(r)=r^{2}+c_{m}r+k_{m}, \end{aligned}$$
(60)

\(\varDelta =c_{m}^2-4k_{m}\), \( \zeta =\dfrac{c_{m}}{2\sqrt{k_{m}}}\):

  1. i)

    \(\varDelta >0\), \(k_{m}<\dfrac{c_{m}^2}{4}\) (\(\zeta >1\), see Sect. 1), \(r_{1}=\rho _{1}=\dfrac{-c_{m}+\sqrt{\varDelta }}{2}\), \(r_{2}=\rho _2=\dfrac{-c_{m}-\sqrt{\varDelta }}{2}<0\), \(x_{1}=\mathrm {e}^{\rho _{1}t}\), \(x_{2}=\mathrm {e}^{\rho _{2}t}\). If \(\alpha _{i}=\rho _{1}\) or \(\alpha _{i}=\rho _{2}\) and \(\beta _{i}=0\), then the associated sum from Sect. 2.1 is \( \sum _{j=0}^{n_{i}}\). For CVM: \(\varDelta _{w}=\mathrm {e}^{(\rho _{1}+\rho _{2})t}(\rho _{2}-\rho _{1})=-\mathrm {e}^{-c_{m}t}\sqrt{\varDelta }\), \(\varDelta _{1}=-\mathrm {e}^{\rho _{2}t}b_{m}\), \(\varDelta _{2}=\mathrm {e}^{\rho _{1}t}b_{m}\), \(\dot{c}_{1}=\dfrac{\mathrm {e}^{-\rho _{1}t}}{\sqrt{\varDelta }}b_{m}\), \(\dot{c}_{2}=-\dfrac{\mathrm {e}^{-\rho _{2}t}}{\sqrt{\varDelta }}b_{m}\) according to (59).

  2. ii)

    \(\varDelta =0\), \(k_{m}=\dfrac{c_{m}^2}{4}>0\) (\(\zeta =1\), see Sect. 1), \(r_{1,2}=\sigma =-\dfrac{c_{m}}{2}=-\sqrt{k_{m}}<0\), \(x_{1}=\mathrm {e}^{\sigma t}\), \(x_{2}=t\mathrm {e}^{\sigma t}\). If \(\alpha _{i}=\sigma \) and \(\beta _{i}=0\), then the associated sum from Sect. 2.1 is \( \sum _{j=2}^{n_{i}+1}\). For CVM: \(\varDelta _{w}=\mathrm {e}^{2\sigma t}\), \(\varDelta _{1}=-t\mathrm {e}^{\sigma t}b_{m}\), \(\varDelta _{2}=\mathrm {e}^{\sigma t}b_{m}\), \(\dot{c}_{1}=-t\mathrm {e}^{-\sigma t}b_{m}\), \(\dot{c}_{2}=\mathrm {e}^{-\sigma t}b_{m}\).

  3. iii)

    \(\varDelta <0\), \(k_{m}>\dfrac{c_{m}^2}{4}>0\) (\(\zeta <1\), see Sect. 1), \(r_{1}=z\), \(r_{2}=\bar{z}\), \(z=\alpha +i\beta =\dfrac{-c_{m}+i\sqrt{-\varDelta }}{2}\), \(x_{1}=\mathrm {e}^{\alpha t}\cos (\beta t)\), \(x_{2}=\mathrm {e}^{\alpha t}\sin (\beta t)\). If \(\alpha _{i}=\alpha \), \(\beta _{i}=\beta \), then the associated sum from Sect. 2.1 is \( \sum _{j=1}^{n_{i}}\). For CVM: \(\varDelta _{w}=\beta \mathrm {e}^{2\alpha t}\), \(\varDelta _{1}=-\mathrm {e}^{\alpha t}{\sin (\beta t)}b_{m}\), \(\varDelta _{2}=\mathrm {e}^{\alpha t}{\cos (\beta t)}b_{m}\), \(\dot{c}_{1}=-\dfrac{\mathrm {e}^{-\alpha t}}{\beta }{\sin (\beta t)}b_{m}\), \(\dot{c}_{2}=\dfrac{\mathrm {e}^{-\alpha t}}{\beta }{\cos (\beta t)}b_{m}\).
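The CVM expressions of case iii) can be verified symbolically; the sketch below does so for the assumed values \(c_m=2\), \(k_m=5\) (so \(\zeta <1\), \(\alpha =-1\), \(\beta =2\)) and the assumed disturbance \(b_m=\mathrm {e}^{-t}\).

```python
# Sketch: symbolic check of the CVM formulas of case iii) for the assumed values
# c_m = 2, k_m = 5 (zeta < 1, alpha = -1, beta = 2) and b_m = exp(-t).
import sympy as sp

t = sp.symbols('t')
c_m, k_m = 2, 5
alpha = -sp.Rational(c_m, 2)
beta = sp.sqrt(k_m - sp.Rational(c_m**2, 4))
b_m = sp.exp(-t)

x1 = sp.exp(alpha*t) * sp.cos(beta*t)
x2 = sp.exp(alpha*t) * sp.sin(beta*t)
c1 = sp.integrate(-sp.exp(-alpha*t) * sp.sin(beta*t) * b_m / beta, t)   # integral of dot c_1
c2 = sp.integrate( sp.exp(-alpha*t) * sp.cos(beta*t) * b_m / beta, t)   # integral of dot c_2
xp = sp.simplify(c1*x1 + c2*x2)

residual = sp.simplify(xp.diff(t, 2) + c_m*xp.diff(t) + k_m*xp - b_m)
print(xp, residual)                                                     # residual -> 0
```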

3.1.1 Some Developments for Variable Coefficients

Free vibration of single-degree-of-freedom (SDOF) systems with periodic time-dependent coefficients (mass and stiffness) has been extensively investigated. Therefore equations with non-periodically time-varying coefficients are considered here in particular.

  1. 1.

    (scaled “reduced” canonical form)

    1. i)

      Consider the equation of Konofin type [41], respectively the scaled form (see Sect. 1),

      $$\begin{aligned} m\ddot{x} + kx = b, \quad \ddot{x} + k_m x = b_m, \end{aligned}$$
      (61)

      where \(m \in \mathbb {R}^*_+\), and the associated homogeneous equation, respectively a particular case with \(k_m (t)= \frac{2}{t} - 1\), are

      $$\begin{aligned} \ddot{x} + k_m x = 0, \quad \ddot{x} + \left( \frac{2}{t}- 1\right) x = 0. \end{aligned}$$
      (62)

      The particular form has the particular solution

      $$\begin{aligned} x_1 = t e^{-t}. \end{aligned}$$
      (63)

      Another particular solution \(x_2\) is given (according to Observations 2.1.1.2, 2.1.1.4. i)) by

      $$\begin{aligned} x_2 = x_1 \int \frac{dt}{x^2_1} = te^{-t} \int \frac{e^{2t}}{t^2} dt \end{aligned}$$
      (64)

      (which is calculated according to the analytical expression of the exponential).

    2. ii)

      The equation of Kolenef type [42], respectively its scaled form, is (61), where \(m = m(t)\), \(k = k(t)\) are positive definite; in particular \(m(t) = m_1 t^{\alpha _1}\), \(k(t) = k_1 t^{\alpha _2}\), \(\alpha _1, \alpha _2 \in \mathbb {R}\) (at the first step time interval). The associated homogeneous equation of the scaled particular form

      $$\begin{aligned} \ddot{x} + \frac{k_1}{m_1} t^{\alpha _2 - \alpha _1} x = 0, \end{aligned}$$
      (65)

      where \(\alpha _1 - \alpha _2 = 2\), \(m_1 > 4 k_1\) has the particular solution

      $$\begin{aligned} x = t^r, \end{aligned}$$
      (66)

      where \(r\in \{r_1, r_2\}\), \(r_1 = \frac{1-\sqrt{\varDelta }}{2}\), \(r_2 = \frac{1+\sqrt{\varDelta }}{2} (\varDelta = 1 - 4 \frac{k_1}{m_1} > 0)\) are the solutions of

      $$\begin{aligned} r^2 - r + \frac{k_1}{m_1} = 0. \end{aligned}$$
      (67)

      For \(r \in \{r_1, r_2\}\) another particular solution \(x_2\) is given by

      $$\begin{aligned} x_2 = x_1 \int \frac{dt}{x^2_1} = t^r \int \frac{dt}{t^{2r}} = \frac{1}{1-2r} t^{1-r}. \end{aligned}$$
      (68)
  2. 2.

    (standard form) Relative to (3) (see Sect. 1) the standard form is

    $$\begin{aligned} \ddot{x} + 2 \zeta \omega _0 \dot{x} + \omega ^2_0 x = b_m, \end{aligned}$$
    (69)

    with the associated homogeneous equation

    $$\begin{aligned} \ddot{x} + 2 \zeta \omega _0 \dot{x} + \omega ^2_0 x = 0, \end{aligned}$$
    (70)

    where \(\omega ^2_0 = k_m \in \mathbb {R}^*_+\), \(\omega _0\) being the natural frequency, \(2\zeta \omega _0 = c_m \in \mathbb {R}^*_+\), and \(\omega _1 = \omega _0 \sqrt{1-\zeta ^2}\) is the damped natural frequency, with \(\varDelta = - 4\omega ^2_1\) (see Sect. 3.1, [43]). Notice that the scaled and absolute dual scaled form is

    $$\begin{aligned} \ddot{x} + 2 \zeta \dot{x} + x = 0. \end{aligned}$$
    (71)
  3. 3.

    (graphical solutions) The case of the variable coefficient \(k_m = k_m (t)\) is reduced to the case of constant coefficients according to the graphical solutions of the intersection

    $$\begin{aligned} G_{k_m} \cap G, \end{aligned}$$
    (72)

    where G is a line parallel to the Ot axis which intersects the \(O_{k_m}\) axis at \( \dfrac{c^2_m}{4} \cdot \dfrac{1}{\zeta ^2} \), i.e. relative to the function variation in \(\zeta \); for the solutions, \(G_{k_m}\) has an exponential aspect for \(\zeta < 1\), a “linearized” exponential aspect for \(\zeta =1\) and a periodic aspect for \(\zeta > 1\).

  4. 4.

    (general case) “Any of the mass, damping or stiffness coefficients of a mechanical oscillator may depend on time” (see [44]). Consequently, in the equation (3) (see Sect. 1) \(c_m, k_m \in R^*_+\) become \(c_m = c_m (t)\), \(k_m = k_m (t)\), positive definite, with \(c_m, k_m \in C^0_I\). In addition, in the standard form (69), where \(k_m = \omega ^2_0\), \(c_m = 2\zeta \omega _0\), \(\omega _0, \omega _1 \in \mathbb {R}^*\) become \(\omega _0 = \omega _0 (t)\), \(\omega _1 = \omega _1 (t)\), \(\omega _0, \omega _1 \in C^0_I\). Notice that the scaled and absolute dual scaled form of the associated homogeneous equation is

    $$\begin{aligned} \ddot{x} + {}_{|s'|} a_1 \dot{x} + x = 0, \end{aligned}$$
    (73)

    where \({}_{|s'|} a_1 = \alpha + 2\zeta \), \(\alpha = \dfrac{4\dot{\omega }_0+1}{2\omega _0}\), and \(\alpha \) (respectively \({}_{|s'|} a_1\)) is constant if \( \omega _0 = c \mathrm {e}^{ \frac{1}{2} \alpha t} + \dfrac{1}{2\alpha }\), \(c\in \mathbb {R}^*\). In [45] the function describing the variation of the mass of an SDOF system with time is an arbitrary positive definite one, and the variation of the stiffness is expressed as a functional relation with the mass function, i.e. \(c_m\) of (3) (see Sect. 1) becomes \(c_m = \dfrac{\dot{m}}{m} + \dfrac{c}{m}\), where c is that of Li [45]. Using an appropriate change of variable, the governing differential equation is reduced to a Bessel equation or to another analytically solvable equation (see [45]). Relative to the “reduced” canonical form (30) (see Sect. 2.1.1) of the homogeneous equation associated with the equation of Li [45], the variable coefficient a is

    $$\begin{aligned} a&= k_m - \frac{1}{4} \left( \frac{\dot{m}}{m} + c_m\right) - \frac{1}{2} \left( \frac{\ddot{m}}{m} - \left( \frac{\dot{m}}{m} \right) ^2 + \dot{c}_m \right) \\&=k_m - \frac{1}{4} c_m - \frac{1}{2} \dot{c}_m - \frac{1}{2} \frac{\ddot{m}}{m} + \frac{1}{2} \left( \frac{\dot{m}}{m}\right) ^2 - \frac{1}{4} \frac{\dot{m}}{m}. \end{aligned}$$
    (74)

    This “reduced” canonical form can be reduced to the Riccati equation (see (32), (33), Sect. 2.1.1), and then the Cauchy problem solution can be approximated with error evaluation (see point 6).

  5. 5.

    (updates to the approximations with error evaluation, Sects. 2.4, 3.4) Relative to the equation (3) with variable coefficients \(c_m,k_m\) (see Sect. 1 and point 4), the approximations with error evaluation (see Sects. 2.4, 3.4) are maintained if the following updates are included

    1. (2.4)

      i) \(g_n (t) = b_m (t) - (k_m (t) x_n (t) + c_m (t) y_n (t))\);

    2. (3.4)

      i) \(c_m, k_m \in C^1_{D_1}\);

      ii) \(c_g = \max \{\displaystyle \sup _I |c_m|, \displaystyle \sup _I |k_m|\} = \max \{\bar{c}_m, \bar{k}_m\}\) \( = \bar{c}_m\) (the case \(c_m\));

      iii) \(\displaystyle \sup _H |g| = \bar{b}_m - \bar{k}_m (x_0 - R') - \bar{c}_m (v_0 - R') = (\bar{b}_m - (\bar{k}_m x_0 + \bar{c}_m v_0)) + R' (\bar{k}_m + \bar{c}_m)\), \(||h||_u = (\bar{b}_m - (\bar{k}_m x_0 + \bar{c}_m v_0)) + v_0 + R' (1 + \bar{k}_m + \bar{c}_m) > 0\), in addition \(\bar{c}_m = \displaystyle \sup _I c_m\), \(\bar{k}_m = \displaystyle \sup _I k_m\);

      iv) \(c_m \in [1,9]\) which involves \(t \in [t_1, t_9]\). For the standard form (see point 4) the case \(c_m\) of the update ii) of (3.4) becomes \(\bar{\omega }^2_0 \le 2\zeta \bar{\omega }_0\) iff \(\omega ^2_0 \le 2\zeta \omega _0\) iff \(\omega _0 \le 2\zeta \) iff \(\bar{\omega }_0 \le 2\zeta \), \(\omega _0 > 0\); in addition for the update iv) it is true \(c_m \in [1,9]\) iff \(\omega _0 \in \left[ \dfrac{1}{2\zeta }, \dfrac{9}{2\zeta }\right] \).

  6. 6.

    (approximations for the Riccati equation) Relative to the “reduced” Riccati equation (35) for the form (30) (see Sect. 2.1.1 and point 4) the Cauchy problem (3) (see Sect. 1) becomes

    $$\begin{aligned} \dot{u} = f (t,u), \quad u_0 = \frac{v_0}{x_0}, \end{aligned}$$
    (75)

    where \(f(t,u) = - (u^2 + a)\). In terms of existence and uniqueness theorem the scalar approximation chain (in a corresponding Banach subspace \(\mathcal {F}_0\) in relation to the corresponding norm) is given by

    $$\begin{aligned} u_{n+1} (t)&= u_0 + \int ^t_0 f(v, u_n (v)) dv \\ =&u_0 - \int ^t_0 f_n (v) dv = u_0 - \varDelta F_n (t),\end{aligned}$$
    (76)

    where \(f_n (t) = u^2_n (t) + a(t)\) and \(F_n (t)\) is a primitive of \(f_{n}\). The approximation \(u_N\) with \(p \in \mathbb {N}^*\) crisp decimal places results from (51) (see Sect. 2.4). The function \(f: D_f \rightarrow \mathbb {R}\), \(D_f \subseteq \mathbb {R}^2\) verifies the hypothesis of the existence and uniqueness theorem, locally, relative to the rectangle \(H = I \times V \subseteq D \subseteq D_f\), \(I = B[0,R] = [-R,R]\), \(I^+ = [0,R]\), \(V = B [u_0, R'] \subset \mathbb {R}\), H centered in \(P(0, u_0)\); \(a \in C^1_{D_1}\), \(D_1 \subseteq \mathbb {R}\) associated to D involves \(f \in C^1_D\) and consequently \(f \in C^0_D\), and f is a Lipschitz continuous function with respect to u on H, with coefficient \(c_f = \displaystyle \sup _H \left| \frac{\partial f}{\partial u}\right| = \displaystyle \sup _V |2u| = 2R'\); approximations of the Cauchy problem solution are obtained on the interval \(I_0\) (symmetric, centered in 0, in fact on the mid-positive \(I^+_0\), see Sect. 3.4), but with \(M = \bar{f} = \displaystyle \sup _H |u^2+a| = \displaystyle \sup _V u^2 + \displaystyle \sup _I |a| = (R')^2 + \bar{a}\), \(q = q_0 = \dfrac{R'}{(R')^2 + \bar{a}} < R\). The approximation \(u_1\) in \(\mathcal {F}_0\) is given by \(u_1 (t) = u_0 - \displaystyle \int ^t_0 f_0 (v) dv = u_0 - A(t)\), \(A(t) = \varDelta F_0 (t) = F_0 (t) - F_0 (0)\), \(A(t) \ge 0\) iff \(\dot{A} (t) \ge 0\), the case in which \(\delta = ||u_1 - u_0||_u = r(u^2_0 + \bar{a})\). For \(\tilde{c} = 2r R' < 1\), \(N = [\varphi (p, \tilde{c}, \delta )] + 1\) can be compared to p (see Sect. 3.4). Notice that the approximations of the corresponding Cauchy problem solution of the homogeneous equation associated with the equation of Li [45] have an exponential aspect (see (34) of Sect. 2.1.1).

  7. 7.

    (remarkable particular pseudo-linear extensions) The governing differential equations

    $$\begin{aligned} m \ddot{x} + v' = 0, \end{aligned}$$
    (77)

    where \(v = v(x)\) is the potential energy of a non-linear system and in particular it is \(v(x) = \dfrac{k}{2} x^2 - x^4\) with corresponding particular differential equation (see [46])

    $$\begin{aligned} m\ddot{x} + kx - 4x^3 = 0, \end{aligned}$$
    (78)

    respectively

    $$\begin{aligned} m\ddot{y} + ky = \lambda (t) f(\dot{y}) + r(y), \end{aligned}$$
    (79)

    where \(\lambda (t) f(\dot{y})\) is a “separable” function with \(\lambda (t)\) a coefficient (see [47]) are 2nd-order pseudo-linear differential equations (see 2.1.1); these may be convenient relative to the approximation of the Cauchy problem solution with the contraction principle relative to 2nd-order linear differential equations.

3.2 Laplace Operational Method

Due to the specifics of the problems, \(x,\dot{x},\ddot{x},b_{m}\in \mathcal {O}_{f}\) (the vectorial space of original functions), where \(x,\dot{x},\ddot{x}\), \(b_{m}=0\) on \({\mathbb{R}}_{-}^{*}\); in particular \(x\in C_{I}^{3}\), \(b_{m}\in C_{I}^{1}\), with \(I\subset {\mathbb{R}}\) a bounded interval, and \(x,\dot{x},\ddot{x},b_{m}\ne 0\) on I imply \(x,\dot{x},\ddot{x}\), \(b_{m}\in \mathcal {O}_{f}\). For \(X_o\) the cases i)–iii) (Sect. 3.1) are recovered:

  1. i)

    \(p(u)=(u-\rho _{1})(u-\rho _{2})\), \(X_{o}(u)=\dfrac{q(\rho _{1})}{u-\rho _{1}}-\dfrac{q(\rho _{2})}{u-\rho _{2}}\), \(q(\rho )=\dfrac{q_{0}(\rho )}{\rho _{1}-\rho _{2}}=\dfrac{x_{0}(\rho +c_{m})+v_{0}}{\sqrt{\varDelta }}\), \(x_{o}^{c}=\mathcal {L}^{-1}(X_{o})\), \(x_{o}^{c}(t)=q(\rho _{1})x_{1}(t)-q(\rho _{2})x_{2}(t)\), \(x_{1}=\mathrm {e}^{\rho _{1}t}\), \(x_{2}=\mathrm {e}^{\rho _{2}t}\).

  2. ii)

    \(p(u)=(u-\sigma )^{2}\), \(X_{o}(u)=\dfrac{x_{0}}{u-\sigma }+\dfrac{q_{0}(\sigma )}{(u-\sigma )^2}\), \(x_{o}^{c}(t)=\mathrm {e}^{\sigma t}(x_{0}+q_{0}(\sigma )t)\).

  3. iii)

    \(p(u)=(u-z)(u-\bar{z})\), \(z=\alpha +i\beta \) or \(p(u)=(u-\alpha )^2+\beta ^2\), \(X_{o}(u)=\dfrac{x_{0}(u-\alpha )+q_{0}(\alpha )}{(u-\alpha )^2+\beta ^2}\), \(x_{o}^{c}=\mathcal {L}^{-1}(X_{o})\), \(x_{o}^{c}=\mathrm {e}^{\alpha t}\left( x_{0}\cos (\beta t)+\dfrac{q_{0}(\alpha )}{\beta }\sin (\beta t)\right) \), according to the displacement theorem (see [11, 15]).

3.3 Analytical Conditional Form

For \(\tilde{I}_{x}=[0,R_{x}]\), \(\tilde{I}_{b}=[0,R_{b}]\), the positive parts of the convergence intervals of the associated MacLaurin series, there is (absolute) pointwise convergence if \(I_{x}=\tilde{I}_{x}\), respectively \(I_{b}=\tilde{I}_{b}\), and uniform (normal) convergence if \(I_{x}\subset \tilde{I}_{x}\), respectively \(I_{b}\subset \tilde{I}_{b}\). A sufficient condition for the analytical form (associated with the necessary condition \(x\in C_{I}^{\infty }\), \(\lambda _{n}=\dfrac{x^{(n)}(0)}{n!}\), \(\mu _{n}=\dfrac{b_{m}^{(n)}(0)}{n!}\)) is that the derivatives be uniformly bounded on I, with uniform (normal) convergence for a bounded interval I (see [14]). The equation of (3) becomes

$$\begin{aligned} &\displaystyle \sum _{n\ge 2} n(n-1)\lambda _{n}t^{n-2}+c_{m}\displaystyle \sum _{n\ge 1}n\lambda _{n}t^{n-1} \\&\quad +k_{m}\displaystyle \sum _{n\ge 0}\lambda _{n}t^{n}=\displaystyle \sum _{n\ge 0}\mu _{n}t^{n}.\end{aligned}$$
(80)

By identifying coefficients one obtains

$$\begin{aligned} &2\lambda _{2}+c_{m}\lambda _{1}+k_{m}\lambda _{0}=\mu _0,\\&6\lambda _{3}+2c_{m}\lambda _{2}+k_{m}\lambda _{1}=\mu _{1}, \end{aligned}$$
(81)

and in general

$$\begin{aligned} n(n-1)\lambda _{n}+c_{m}(n-1)\lambda _{n-1}+k_{m}\lambda _{n-2}=\mu _{n-2},\quad n\ge 2. \end{aligned}$$
(82)

The series of the analytical forms does not verify the hypothesis of the comparison limit criterion; more exactly,

$$\begin{aligned} \dfrac{\mu _{n-1}}{\lambda _{n}}\rightarrow \infty \end{aligned}$$
(83)

or equivalently

$$\begin{aligned} &\forall M\in {\mathbb{R}}_{+}^{*},\quad \exists N=N(M)\in \mathbb N,\quad \forall n\in \mathbb N,n\ge N \\&\Rightarrow \lambda _{n}<\dfrac{\mu _{n-1}}{M}, \end{aligned}$$
(84)

i.e., it is possible that x has no analytical form on the interval \(I_{x}\).

Indeed, because \(\lambda _{n+1}=-\dfrac{c_{m}}{n+1}\lambda _{n}-\dfrac{k_{m}}{n(n+1)}\lambda _{n-1}+\dfrac{1}{n(n+1)}\mu _{n-1}\) and \(\dfrac{\lambda _{n+1}}{\lambda _{n}}=-\dfrac{c_{m}}{n+1}-\dfrac{k_{m}}{n(n+1)}\dfrac{\lambda _{n-1}}{\lambda _{n}} +\dfrac{1}{n(n+1)}\dfrac{\mu _{n-1}}{\lambda _{n}}\), the relation \(\dfrac{\lambda _{n+1}}{\lambda _{n}}\rightarrow \dfrac{1}{r_{x}}\) implies (83), according to the indeterminacy \(\left( \dfrac{\infty }{\infty }\right) \).
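For a disturbance that is analytic with known MacLaurin coefficients, the recurrence (82) generates the coefficients \(\lambda _n\) directly; a small sketch, with assumed \(c_m\), \(k_m\), \(x_0\), \(v_0\) and the assumed disturbance \(b_m=\mathrm {e}^{t}\), is given below.

```python
# Sketch: coefficients lambda_n generated from the recurrence (82) for the assumed
# analytic disturbance b_m = exp(t) (mu_n = 1/n!) and assumed c_m, k_m, x_0, v_0.
from math import factorial

c_m, k_m = 2.0, 5.0
x0, v0 = 1.0, 0.0
mu = lambda n: 1.0 / factorial(n)          # MacLaurin coefficients of b_m = exp(t)

lam = [x0, v0]                             # lambda_0 = x_0, lambda_1 = v_0
for n in range(2, 12):
    lam.append((mu(n - 2) - c_m*(n - 1)*lam[n - 1] - k_m*lam[n - 2]) / (n*(n - 1)))

x = lambda t: sum(l * t**k for k, l in enumerate(lam))   # truncated series
print(lam[:6], x(0.1))
```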

3.4 Approximations with Error Evaluation

The function \(h:D_{h}\rightarrow {\mathbb{R}}^2\), \(D_{h}\subseteq {\mathbb{R}}^3\), verifies the hypothesis of the existence and uniqueness theorem (see [11, 15])—locally, relative to the parallelepiped \(H=I\times V\subset D\subseteq D_{h}\), \(I=B[0,R]=[-R,R]\subset {\mathbb{R}}\), \(I^{+}=[0,R]\), \(V=B'\left[ (x_{0},v_{0}),R'\right] \subset {\mathbb{R}}^2\), \(R'\le \min \left\{ x_{0},v_{0}\right\} \), centered in \(P(0,x_{0},v_{0})\); indeed \(b_{m}\in C_{D_{1}}^{1}\), \(D_{1}\subseteq {\mathbb{R}}\) associated to D, involves \(h\in C_{D}^{1}\) and consequently \(h\in C_{D}^{0}\), and h is a Lipschitz continuous function with respect to z on H, with coefficient \(c_{h}=c_{f}+c_{g}\), \(c_{f}=\max \left\{ \displaystyle \sup _{H}\left| {\frac{\partial f}{\partial x}}\right| , \displaystyle \sup _{H}\left| \frac{\partial f}{\partial y}\right| \right\} =\max \{0,1\}=1\), \(c_{g}=\max \left\{ \displaystyle \sup _{H}\left| \frac{\partial g}{\partial x}\right| , \displaystyle \sup _{H}\left| \frac{\partial g}{\partial y}\right| \right\} =\) \(\max \{k_{m},c_{m}\}=c_{m}\) (\(\zeta >\frac{1}{2}\), conditioned by \(k_{m}\in {\mathbb{R}}^{*}_{+}\), see Sect. 1; analogously the case \(k_{m}\)), \(c_{h}=c_{m}+1\).

Approximations of Cauchy problem solution are obtained on interval \(I_{0}\)—symmetric, centered in 0 (in fact on the mid-positive \(I_{0}^{+}\)), where \(I_{0}=I_{r}\cap I_{q}\) (\(I_{0}^{+}=I_{r}^{+}\cap I_{q}^{+}\)), \(I_{0}=I_{q}\) (\(I_{0}^{+}=I_{q}^{+}\)) if \(q<\dfrac{1}{c_{h}}\), respectively \(I_{0}=I_{r}\) (\(I_{0}^{+}=I_{r}^{+}\)) if \(q\ge \dfrac{1}{c_{h}}\) and \(I_{r}=B[0,r]=[-r,r]\) (\(I_{r}^{+}=[0,r]\)), \(r\le R\), respectively \(I_{q}=B(0,q)=(-q,q)\) (\(I_{q}^{+}=[0,q)\)), \(q=\min \left\{ a^{*}, \dfrac{b^{*}}{M}\right\} \), \(a^{*}=R,\,b^{*}=R'\), \(M=\bar{h}=\Vert h\Vert _{u}=\displaystyle \sup _{H}\Vert h\Vert _{1}= \displaystyle \sup _{H}|f|+\displaystyle \sup _{H}|g|\), \(\displaystyle \sup _{H}|f|=v_{0}+R'\), \(\displaystyle \sup _{H}|g|=\bar{b}_{m}-k_{m}(x_{0}-R')-c_{m}(v_{0}-R')\) \(=\left( \bar{b}_{m}-\left( k_{m}x_{0}+c_{m}v_{0}\right) \right) +R'(k_{m}+c_{m})\), \(\Vert h\Vert _{u}=\left( \bar{b}_{m}-\left( k_{m}x_{0}+c_{m}v_{0}\right) \right) +v_{0}+R'(1+k_{m}+c_{m})>0\), g is partial linear according to \(x,\,y,\,b_{m}\), partial decreasing according to \(x,\,y\), respectively partial increasing according to \(b_{m},\,\bar{b}_{m}=\displaystyle \sup _{I}b_{m}\), \(q=R\) (analogously the case \(q=q_{0}=\dfrac{b^{*}}{M}\)).

The approximation \(z_{1}=(x_{1},y_{1})\) in \(\mathcal {F}_{0}=\mathcal {F}_{t_{0}}^{0}=\mathcal {F}_{t_{0}}^{0}(I_{r},V)= \left\{ z:I_{r}\rightarrow V\right. \) \(\left. |z\in C_{I_{r}}^{0}, z(t_{0})=z_{0}\right\} \) \(\subset _{s}C_{I_{r}}^{0}\) (see 2.4): \(x_{1}(t)=x_{0}+\displaystyle \int _{0}^{t}y_{0}(u){\mathrm {d}}u=x_{0}+v_{0}t\ge 0\), \(y_{1}(t)=v_{0}+\displaystyle \int _{0}^{t}g_{0}(u)\,{\mathrm {d}}u =A(t)\), \(A(t)=\varDelta G_{0}(t)=G_{0}(t)-G_{0}(0)\), \(A(t)\ge 0\) iff \(\dot{A}(t)=g_{0}(t)\ge 0\), so \(y_{1}(t)\ge 0\), \(t\in I_{r}^{+}\).

In addition (see 2.4) \(\delta =\Vert z_{1}-z_{0}\Vert _{u}=\displaystyle \sup _{I_{r}^{+}}\Vert z_{1}-z_{0}\Vert _{1}\) \(=\displaystyle \sup _{t\in I_{r}^{+}}|v_{0}t|+\displaystyle \sup _{t\in I_{r}^{+}}|A(t)|= v_{0}r+A(r)\), \(\tilde{c}=c_{h}r\) and \(N=[\varphi (p,\tilde{c},\delta )]+1\), \(\varphi (p,\tilde{c},\delta )=\dfrac{p+\lg {\delta }-\lg (1-\tilde{c})}{-\lg {\tilde{c}}}\) is partial increasing according to \(\tilde{c}\)—restricted by \(\delta \ge 1-\tilde{c}\). In the particular case \(c_{m}\in [1,9]\), \(c_{h}=\dfrac{1}{c_{m}+1}\in \left[ \dfrac{1}{10},\dfrac{1}{2}\right] \), \(\tilde{c}=c_{h}r\in \left[ \dfrac{r}{10},\dfrac{r}{2}\right] \) N is comparable to p (see 2.4); indeed for \(1-\tilde{c}\le \delta \le \delta _{1}=(5r)^{-p}(1-\tilde{c})\) with \(r<\min \left\{ 2,\dfrac{1}{5}\right\} =\dfrac{1}{5}\) results \(\varphi (p,\tilde{c},\delta )\le \varphi \left( p,\dfrac{r}{2},\delta _{1}\right) =p\), \(N\le p+1\), respectively for \(\delta \ge \delta _{2}=r^{-p}(1-\tilde{c})>1-\tilde{c}\) with \(r<\min \{2,1\}=1\) results \(\varphi (p,\tilde{c},\delta )\ge \varphi \left( p,\dfrac{r}{10},\delta _{2}\right) =p\), \(N\ge p+1\).
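The count N of iterations needed for p exact decimals follows directly from (51) and the expression of \(\varphi (p,\tilde{c},\delta )\) above; the sketch below evaluates it for assumed values of \(\tilde{c}\) and \(\delta \).

```python
# Sketch: the number N of Picard iterations needed for p exact decimals, from the
# bound (51) via N = [phi(p, c~, delta)] + 1; the numbers below are assumptions.
from math import log10, floor

def iterations_needed(p, c_tilde, delta):
    """phi(p, c~, delta) = (p + lg delta - lg(1 - c~)) / (-lg c~), then N = [phi] + 1."""
    phi = (p + log10(delta) - log10(1.0 - c_tilde)) / (-log10(c_tilde))
    return floor(phi) + 1

# assumed data: c_m = 3 so c_h = c_m + 1 = 4, r = 0.1, hence c~ = c_h * r = 0.4
print(iterations_needed(p=6, c_tilde=0.4, delta=1.0))
```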

4 Dyadic Wavelet Method

In some differential problems, such as those corresponding to certain physical processes, two main directions have been pursued with regard to the applicability of wavelets [1,2,3,4,5,6,7,8,9,10, 16, 48,49,50,51,52,53,54,55,56,57,58]: determining the solution using wavelets with compact support, and fulfilling the conditions of the problem (initial or boundary) with some restrictions, e.g. [48]. A dilemma appeared, namely which wavelet family out of the existing variety is the best choice to satisfy the axioms of multiresolution analysis [54, 55], a major disadvantage of the wavelet theory being the arbitrary character of this choice. For example, the Daubechies [53] wavelets generate a large number of calculations, even for simpler differential problems. In this respect, the dyadic wavelet package (analytic, infinitely differentiable and band-limited functions) is the most appropriate tool for studying processes localized in a Fourier domain. With this choice, solving such problems becomes trivial, the wavelet solution of the problem being readily accessible. Therefore, based on the properties of wavelets such as good localization and resolution control, the concept of an orthonormal wavelet series [7,8,9,10] and the representation of a function \(f\in L^2({\mathbb{R}})\) by such an expression were introduced. To illustrate this, dyadic wavelet approximation represents a numerical method used to obtain numerical solutions of some physical problems.

An orthonormal dyadic wavelet is a function \(\psi \in L^2({\mathbb{R}})\) such that the family \(\left( D^n T_s[\psi ]\right) _{n,\, s\in \mathbb Z}\) is an orthonormal basis for \(L^2({\mathbb{R}})\), where D and T represent the dilation (expansion) and translation unitary operators on \(L^2({\mathbb{R}})\), expressed as

$$\begin{aligned} D^n T_s[\psi (t)] = 2^{n/2}\,\psi (2^n t-s), \end{aligned}$$
(85)

where \(n,\,s\in \mathbb Z\) (dyadic values). Based on (85), the harmonic scaling function \(\varphi ^{{HW}}(t)\) and the harmonic wavelet function \(\psi ^{{HW}}(t)\) are complex, band-limited (bounded-spectrum) functions defined by the following relations

$$\begin{aligned} &\varphi ^{{HW}},\,\psi ^{{HW}}:{\mathbb{R}}\rightarrow \mathbb C,\quad \varphi ^{{HW}}(t)=\dfrac{\mathrm {e}^{2\pi it}-1}{2\pi it}\\&\qquad \psi ^{{HW}}(t)=\mathrm {e}^{2\pi it}\varphi ^{{HW}}(t). \end{aligned}$$
(86)

According to (85), one obtains the family of dilated and translated instances of the scaling and wavelet functions

$$\begin{aligned} &\varphi _{{n\,s}}^{{\texttt {HW}}}(t) = 2^{n/2}\, \varphi ^{{\texttt {HW}}}\left( 2^{n}t-s\right) \\&\psi _{{n\,s}}^{{\texttt {HW}}}(t) = 2^{n/2}\,\psi ^{{\texttt {HW}}}(2^{n}t-s)= \mathrm {e}^{2\pi i\left( 2^{n}t-s\right) }\varphi _{{n\,s}}^{{\texttt {HW}}}(t), \end{aligned}$$
(87)

for all \(n,\,s\in \mathbb Z\); for \(n=s=0\) we recover \(\varphi _{{0\,0}}^{{\texttt {HW}}}=\varphi ^{{HW}}\), respectively \(\psi _{{0\,0}}^{{\texttt {HW}}}=\psi ^{{HW}}\). The Fourier transforms of the functions (86) and of their conjugates are

$$\begin{aligned} &\widehat{\varphi }^{{HW}}(\omega )=\aleph \left( \omega +2\pi \right) , \quad \widehat{\psi }^{{HW}}(\omega )=\aleph (\omega ) \\&\widehat{\bar{\varphi }^{{HW}}}(\omega )=\aleph (\omega +4\pi ) ,\quad \widehat{\bar{\psi }^{{HW}}}(\omega )=\aleph (\omega +6\pi ). \end{aligned}$$
(88)

where the characteristic function \(\aleph (\omega )\) is defined as

$$\begin{aligned} \aleph \left( \omega \right) =\left\{ \begin{array}{ll} 1, &{} \hbox {{ if}}\,\, 2\pi \le \omega \le 4\pi \\ 0, &{} \hbox {{otherwise.}} \end{array} \right. \end{aligned}$$
(89)

The harmonic wavelet fulfills the admissibility condition

$$\begin{aligned} 0<\mathbf {C}_{\psi ^{{HW}}}=\displaystyle \int _{{\mathbb{R}}} \dfrac{\left| \widehat{\psi }^{{HW}}(\omega )\right| ^2}{|\omega |}{\mathrm {d}}\omega =\ln (2)<\infty , \end{aligned}$$
(90)

its energy is \(\mathbf {E}_{\psi ^{{HW}}}=1\), and \(\widehat{\psi }^{{HW}}(0)=0\). Also, in accordance with multiresolution analysis, the following hold: \(\displaystyle \int _{{\mathbb{R}}}\psi ^{{HW}}(t){\mathrm {d}}t=0\) and \(\displaystyle \int _{{\mathbb{R}}}\varphi ^{{HW}}(t){\mathrm {d}}t=1\).
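As an illustration only (not part of the original derivation; all helper names are ad hoc), the following Python sketch implements the functions (86) via the identity \(\varphi ^{{HW}}(t)=\mathrm {e}^{i\pi t}\,\mathrm {sinc}(t)\), generates a dilated/translated instance as in (87), and checks the admissibility constant (90) by numerical quadrature.

```python
# Minimal sketch of the harmonic scaling/wavelet functions (86)-(87) and a
# numerical check of the admissibility constant (90): C_psi = ln 2.
import numpy as np

def phi_hw(t):
    """phi^HW(t) = (e^{2 pi i t} - 1)/(2 pi i t) = e^{i pi t} * sinc(t)."""
    t = np.asarray(t, dtype=float)
    return np.exp(1j * np.pi * t) * np.sinc(t)

def psi_hw(t):
    """psi^HW(t) = e^{2 pi i t} * phi^HW(t)."""
    return np.exp(2j * np.pi * np.asarray(t, dtype=float)) * phi_hw(t)

def psi_hw_ns(t, n, s):
    """Dilated/translated instance psi_{n s}^HW(t) = 2^{n/2} psi^HW(2^n t - s), cf. (87)."""
    return 2.0 ** (n / 2) * psi_hw(2.0 ** n * np.asarray(t, dtype=float) - s)

# Admissibility (90): psi_hat is the indicator of [2 pi, 4 pi], hence
# C_psi = int_{2 pi}^{4 pi} d omega / omega = ln 2.
omega = np.linspace(2 * np.pi, 4 * np.pi, 100001)
print(np.trapz(1.0 / omega, omega), np.log(2))   # both ~0.6931

print(psi_hw_ns(0.25, 2, 1))                     # a sample value of psi_{2 1}^HW
```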

Taking into account the Plancherel-Parseval theorem for \(f,g\in L^{2}({\mathbb{R}})\) (a Hilbert space)

$$\begin{aligned} \left\langle f,\,g\right\rangle =\displaystyle \int _{{\mathbb{R}}}f(t)\cdot \overline{g(t)}{\mathrm {d}}t =\dfrac{1}{2\pi }\left\langle \widehat{f},\,\widehat{g}\right\rangle , \end{aligned}$$
(91)

the following relations result

$$\begin{aligned} \left\{ \small { \begin{array}{ll} \left\langle \varphi ^{{\texttt {HW}}},\,\varphi ^{{\texttt {HW}}}\right\rangle = \left\langle \bar{\varphi }^{{\texttt {HW}}},\,\bar{\varphi }^{{\texttt {HW}}}\right\rangle = \left\langle \psi ^{{\texttt {HW}}},\,\psi ^{{\texttt {HW}}}\right\rangle = \left\langle \bar{\psi }^{{\texttt {HW}}},\,\bar{\psi }^{{\texttt {HW}}}\right\rangle =1 \\ \left\langle \varphi ^{{\texttt {HW}}},\,\bar{\varphi }^{{\texttt {HW}}}\right\rangle = \left\langle \bar{\varphi }^{{\texttt {HW}}},\,\varphi ^{{\texttt {HW}}}\right\rangle = \left\langle \psi ^{{\texttt {HW}}},\,\bar{\psi }^{{\texttt {HW}}}\right\rangle = \left\langle \bar{\psi }^{{\texttt {HW}}},\,\psi ^{{\texttt {HW}}}\right\rangle =0\\ \left\langle \varphi ^{{\texttt {HW}}},\,\psi ^{{\texttt {HW}}}\right\rangle = \left\langle \psi ^{{\texttt {HW}}},\,\varphi ^{{\texttt {HW}}}\right\rangle = \left\langle \bar{\varphi }^{{\texttt {HW}}},\,\bar{\psi }^{{\texttt {HW}}}\right\rangle = \left\langle \bar{\psi }^{{\texttt {HW}}},\,\bar{\varphi }^{{\texttt {HW}}}\right\rangle =0\\ \left\langle \varphi ^{{\texttt {HW}}},\,\bar{\psi }^{{\texttt {HW}}}\right\rangle = \left\langle \bar{\psi }^{{\texttt {HW}}},\,\varphi ^{{\texttt {HW}}}\right\rangle = \left\langle \bar{\varphi }^{{\texttt {HW}}},\,\psi ^{{\texttt {HW}}}\right\rangle = \left\langle \psi ^{{\texttt {HW}}},\,\bar{\varphi }^{{\texttt {HW}}}\right\rangle =0. \end{array} } \right. \end{aligned}$$
(92)

In the following, the method of obtaining the solution of the Cauchy problem (1) by using the Frobenius method (the power series form of the solution) and connection coefficients is presented. Thus, the differential problem turns into a (finite-dimensional) algebraic system that can be solved by fixing a finite approximation scale. It is also assumed that the function \(f_{m}(t)=\dfrac{1}{m}\,f(t)\) admits a harmonic wavelet representation of the form

$$\begin{aligned} f_{{\texttt {HW}}}(t)&= \displaystyle \sum _{h=0}^{M} \left( a_{{0\,h}}^{{\texttt {HW}}} \varphi _{{0\,h}}^{{\texttt {HW}}}(t)+\tilde{a}_{{0\,h}}^{{\texttt {HW}}} \bar{\varphi }_{{0\,h}}^{{\texttt {HW}}}(t)\right) \\&+ \displaystyle \sum _{n=0}^{N}\displaystyle \sum _{k=-M}^{M} \left[ b_{{n\,k}}^{{\texttt {HW}}}\psi _{{n\,k}}^{{\texttt {HW}}}(t)+ \tilde{b}_{{n\,k}}^{{\texttt {HW}}}\bar{\psi }_{{n\,k}}^{{\texttt {HW}}}(t)\right] , \end{aligned}$$
(93)

where \(0\le N,\,M<\infty \) and the coefficients of the wavelet expansion are computed by the formulas

$$\begin{aligned} a_{{0\,h}}^{{\texttt {HW}}}= & {} \left\langle f_{{\texttt {HW}}}(t),\,\varphi _{{0\,h}}^{{\texttt {HW}}}(t)\right\rangle =\dfrac{1}{2 \pi }\displaystyle \int _{0}^{2\pi }\widehat{f}_{{\texttt {HW}}}(\omega )\,\mathrm {e}^{i\omega h}{\mathrm {d}}{\omega } \nonumber \\ \tilde{a}_{{0\,h}}^{{\texttt {HW}}}= & {} \left\langle f_{{\texttt {HW}}}(t),\,\bar{\varphi }_{{0\,h}}^{{\texttt {HW}}}(t)\right\rangle =\overline{a_{{0\,h}}^{{\texttt {HW}}}} \nonumber \\ b_{{n\,k}}^{{\texttt {HW}}}= & {} \left\langle f_{{\texttt {HW}}}(t),\,\psi _{{n\,k}}^{{\texttt {HW}}}(t)\right\rangle =\dfrac{2^{-\frac{n}{2}}}{2 \pi }\displaystyle \int _{2^{n+1}\pi }^{2^{n+2}\pi }\widehat{f}_{{\texttt {HW}}}(\omega )\,\mathrm {e}^{i\frac{\omega k}{2^n}}{\mathrm {d}}{\omega } \nonumber \\ \tilde{b}_{{n\,k}}^{{\texttt {HW}}}= & {} \left\langle f_{{\texttt {HW}}}(t),\,\bar{\psi }_{{n\,k}}^{{\texttt {HW}}}(t)\right\rangle =\overline{b_{{n\,k}}^{{\texttt {HW}}}}. \end{aligned}$$
(94)
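For illustration (hypothetical helper names; the test signal is chosen ad hoc), the coefficient formulas (94) can be evaluated by quadrature in the frequency domain. Taking \(f=\psi ^{{HW}}\), whose transform is the boxcar on \([2\pi ,4\pi ]\), the sketch should return \(b_{{0\,0}}^{{\texttt {HW}}}\approx 1\) and all other coefficients \(\approx 0\).

```python
# Minimal sketch of the coefficient formulas (94), evaluated by numerical quadrature.
import numpy as np

def f_hat(omega):
    """Assumed spectrum of the test signal: indicator of [2 pi, 4 pi] (i.e. psi_hat)."""
    return ((omega >= 2 * np.pi) & (omega <= 4 * np.pi)).astype(complex)

def a_coeff(h, n_pts=20001):
    """a_{0 h} = (1/2 pi) * int_0^{2 pi} f_hat(omega) e^{i omega h} d omega."""
    omega = np.linspace(0.0, 2 * np.pi, n_pts)
    return np.trapz(f_hat(omega) * np.exp(1j * omega * h), omega) / (2 * np.pi)

def b_coeff(n, k, n_pts=20001):
    """b_{n k} = 2^{-n/2}/(2 pi) * int_{2^{n+1} pi}^{2^{n+2} pi} f_hat(omega) e^{i omega k / 2^n} d omega."""
    omega = np.linspace(2 ** (n + 1) * np.pi, 2 ** (n + 2) * np.pi, n_pts)
    integrand = f_hat(omega) * np.exp(1j * omega * k / 2 ** n)
    return 2.0 ** (-n / 2) / (2 * np.pi) * np.trapz(integrand, omega)

print(abs(a_coeff(0)), abs(b_coeff(0, 0)), abs(b_coeff(1, 0)))   # ~0, ~1, ~0
```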

Further, to illustrate the calculation algorithm for determining the solution (52), let us consider the particular HW-solution of problem (1)

$$\begin{aligned} x_{{\texttt {HW}}}(t)&=\displaystyle \sum _{h=0}^{M} \left( \alpha _{{0\,h}}^{{\texttt {HW}}}\varphi _{{0\,h}}^{{\texttt {HW}}}(t)+ \tilde{\alpha }_{{0\,h}}^{{\texttt {HW}}}\bar{\varphi }_{{0\,h}}^{{\texttt {HW}}}(t)\right) \\&+ \displaystyle \sum _{n=0}^{N}\displaystyle \sum _{k=-M}^{M} \left( \beta _{{n\,k}}^{{\texttt {HW}}}\psi _{{n\,k}}^{{\texttt {HW}}}(t)+ \tilde{\beta }_{{n\,k}}^{{\texttt {HW}}}\bar{\psi }_{{n\,k}}^{{\texttt {HW}}}(t)\right), \end{aligned}$$
(95)

and in this respect, the equation

$$\begin{aligned} \ddot{x}(t)+\zeta _{c}\dot{x}(t)+\zeta _{k}x(t)= f_{{\texttt {HW}}}(t), \end{aligned}$$
(96)

where \(\zeta _{c}=\dfrac{c}{m}\) and \(\zeta _{k}=\dfrac{k}{m}\) are constants, becomes

$$\begin{aligned}&\displaystyle \sum _{h=0}^{M} \alpha _{{0\,h}}^{{\texttt {HW}}}\left( \ddot{\varphi }_{{0\,h}}^{{\texttt {HW}}}(t)+\zeta _{c}\dot{\varphi }_{{0\,h}}^{{\texttt {HW}}}(t)+ \zeta _{k}\varphi _{{0\,h}}^{{\texttt {HW}}}(t) \right) \\&\quad +\displaystyle \sum _{h=0}^{M}\tilde{\alpha }_{{0\,h}}^{{\texttt {HW}}}\left( \ddot{\bar{\varphi }}_{{0\,h}}^{{\texttt {HW}}}(t)+ \zeta _{c}\dot{\bar{\varphi }}_{{0\,h}}^{{\texttt {HW}}}(t)+ \zeta _{k}\bar{\varphi }_{{0\,h}}^{{\texttt {HW}}}(t) \right) \\&\quad +\displaystyle \sum _{n=0}^{N}\sum _{k=-M}^{M}\beta _{{n\,k}}^{{\texttt {HW}}}\left( \ddot{\psi }_{{n\,k}}^{{\texttt {HW}}}(t)+\zeta _{c}\dot{\psi }_{{n\,k}}^{{\texttt {HW}}}(t)+ \zeta _{k}\psi _{{n\,k}}^{{\texttt {HW}}}(t) \right) \\&\quad +\displaystyle \sum _{n=0}^{N}\sum _{k=-M}^{M} \tilde{\beta }_{{n\,k}}^{{\texttt {HW}}}\left( \ddot{\bar{\psi }}_{{n\,k}}^{{\texttt {HW}}}(t)+ \zeta _{c}\dot{\bar{\psi }}_{{n\,k}}^{{\texttt {HW}}}(t)+ \zeta _{k}\bar{\psi }_{{n\,k}}^{{\texttt {HW}}}(t) \right) \\&\quad \quad = f_{{\texttt {HW}}}(t), \end{aligned}$$
(97)

which is equivalent to [16, 50, 51]

$$\begin{aligned} &\displaystyle \sum _{h=0}^{M} \alpha _{{0\,h}}^{{\texttt {HW}}}\left[ \displaystyle \sum _{q=0}^{M} \lambda _{{{0\,h}|{0\,q}}}^{{\texttt {HW}}[2]} \varphi _{{0\,q}}^{{\texttt {HW}}}(t)+\zeta _{c} \displaystyle \sum _{q=0}^{M} \lambda _{{{0\,h}|{0\,q}}}^{{\texttt {HW}}[1]} \varphi _{{0\,q}}^{{\texttt {HW}}}(t) \right. \\&\quad \left. +\zeta _{k}\varphi _{{0\,h}}^{{\texttt {HW}}}(t) \right] + \displaystyle \sum _{h=0}^{M}\tilde{\alpha }_{{0\,h}}^{{\texttt {HW}}} \left[ \displaystyle \sum _{q=0}^{M} \bar{\lambda }_{{{0\,h}|{0\,q}}}^{{\texttt {HW}}[2]} \bar{\varphi }_{{0\,q}}^{{\texttt {HW}}}(t)\right. \\&\quad \left. + \zeta _{c}\displaystyle \sum _{q=0}^{M} \bar{\lambda }_{{{0\,h}|{0\,q}}}^{{\texttt {HW}}[1]} \bar{\varphi }_{{0\,q}}^{{\texttt {HW}}}(t)+ \zeta _{k}\bar{\varphi }_{{0\,h}}^{{\texttt {HW}}}(t) \right] \\&\quad +\displaystyle \sum _{n=0}^{N}\sum _{k=-M}^{M} \beta _{{n\,k}}^{{\texttt {HW}}} \left[ \displaystyle \sum _{i=0}^{N}\sum _{j=-M}^{M} \gamma _{{{n\,k}|{i\,j}}}^{{\texttt {HW}}[2]} \psi _{{i\,j}}^{{\texttt {HW}}}(t)\right. \\&\quad \left. +\zeta _{c} \displaystyle \sum _{i=0}^{N}\sum _{j=-M}^{M} \gamma _{{{n\,k}|{i\,j}}}^{{\texttt {HW}}[1]} \psi _{{i\,j}}^{{\texttt {HW}}}(t)+ \zeta _{k}\psi _{{n\,k}}^{{\texttt {HW}}}(t) \right] \\&\quad +\displaystyle \sum _{n=0}^{N}\sum _{k=-M}^{M} \tilde{\beta }_{{n\,k}}^{{\texttt {HW}}} \left[ \displaystyle \sum _{i=0}^{N}\sum _{j=-M}^{M} \bar{\gamma }_{{{n\,k}|{i\,j}}}^{{\texttt {HW}}[2]} \bar{\psi }_{{i\,j}}^{{\texttt {HW}}}(t)\right. \\&\quad \left. +\zeta _{c} \displaystyle \sum _{i=0}^{N}\sum _{j=-M}^{M} \bar{\gamma }_{{{n\,k}|{i\,j}}}^{{\texttt {HW}}[1]} \bar{\psi }_{{i\,j}}^{{\texttt {HW}}}(t)+ \zeta _{k}\bar{\psi }_{{n\,k}}^{{\texttt {HW}}}(t) \right] = f_{{\texttt {HW}}}(t). \end{aligned}$$
(98)

Taking the inner product with the harmonic scaling functions \( \varphi _{{0\,l}}^{{\texttt {HW}}}(t),\, \bar{\varphi }_{{0\,l}}^{{\texttt {HW}}}(t)\) and using (85)–(92), the harmonic scaling coefficients \(\alpha _{{0\,h}}^{{\texttt {HW}}},\, \tilde{\alpha }_{{0\,h}}^{{\texttt {HW}}}\) result from solving the equations

$$\begin{aligned} \left( {\Lambda _{{{0\,h}|{0\,l}}}^{{\texttt {HW}}[2]}} +\zeta _{c}{\Lambda _{{{0\,h}|{0\,l}}}^{{\texttt {HW}}[1]}} +\zeta _{k}\mathbf {I}_{M+1}\right) \left( \begin{array}{c} \alpha _{{0\,0}}^{{\texttt {HW}}} \\ \cdots \\ \alpha _{{0\,l}}^{{\texttt {HW}}}\\ \cdots \\ \alpha _{{0\,M}}^{{\texttt {HW}}} \end{array} \right)&= \left( \begin{array}{c} a_{{0\,0}}^{{\texttt {HW}}}\\ \cdots \\ a_{{0\,l}}^{{\texttt {HW}}} \\ \cdots \\ a_{{0\,M}}^{{\texttt {HW}}} \end{array} \right), \end{aligned}$$
(99)
$$\begin{aligned} \left( {\overline{\Lambda }_{{{0\,h}|{0\,l}}}^{{\texttt {HW}}[2]}} +\zeta _{c}{\overline{\Lambda }_{{{0\,h}|{0\,l}}}^{{\texttt {HW}}[1]}} +\zeta _{k}\mathbf {I}_{M+1}\right) \left( \begin{array}{c} \tilde{\alpha }_{{0\,0}}^{{\texttt {HW}}} \\ \cdots \\ \tilde{\alpha }_{{0\,l}}^{{\texttt {HW}}}\\ \cdots \\ \tilde{\alpha }_{{0\,M}}^{{\texttt {HW}}} \end{array} \right)&= \left( \begin{array}{c} \tilde{a}_{{0\,0}}^{{\texttt {HW}}}\\ \cdots \\ \tilde{a}_{{0\,l}}^{{\texttt {HW}}} \\ \cdots \\ \tilde{a}_{{0\,M}}^{{\texttt {HW}}} \end{array} \right), \end{aligned}$$
(100)

where \({\Lambda _{{{0\,h}|{0\,l}}}^{{\texttt {HW}}[s]}}=\left( \lambda _{{{0\,h}|{0\,l}}}^{{\texttt {HW}}[s]}\right) _{h,l=\overline{0,M}}\), \({\overline{\Lambda }_{{{0\,h}|{0\,l}}}^{{\texttt {HW}}[s]}=(-1)^{s}\Lambda _{{{0\,l}|{0\,h}}}^{{\texttt {HW}}[s]}}\), and \(s=\overline{1,2}\) represents the order of derivation; the connection coefficients \(\lambda _{{{0\,h}|{0\,l}}}^{{\texttt {HW}}[s]}\) of the harmonic scaling functions \( \varphi _{{0\,h}}^{{\texttt {HW}}}(t)\) are given by

$$\begin{aligned} \lambda _{{{0\,h}|{0\,l}}}^{{\texttt {HW}}[2]}=&\left\{ \begin{array}{ll} \dfrac{-4\pi ^2}{3}, &{} \hbox {h}=\hbox {l} \\ \\ \dfrac{-2}{(l-h)^2}+\dfrac{2\pi i}{l-h}, &{} \hbox {h}\ne \hbox {l}, \end{array} \right. \end{aligned}$$
(101)

respectively

$$\begin{aligned} \lambda _{{{0\,h}|{0\,l}}}^{{\texttt {HW}}[1]}=&\left\{ \begin{array}{ll} \pi i, &{} \hbox {h}=\hbox {l} \\ \\ \dfrac{1}{(l-h)^2}+\dfrac{2\pi i}{l-h}, &{} \hbox {h}\ne \hbox {l} . \end{array} \right. \end{aligned}$$
(102)
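As a sketch of how the scaling-level system (99) is assembled and solved (illustrative values of \(M,\,\zeta _{c},\,\zeta _{k}\) and a stand-in right-hand side, not the paper's numerical example), the matrices \(\Lambda ^{{\texttt {HW}}[1]}\), \(\Lambda ^{{\texttt {HW}}[2]}\) are filled directly from (101)–(102):

```python
# Minimal sketch: connection-coefficient matrices of the harmonic scaling functions,
# per (101)-(102), and the linear solve (99) for the coefficients alpha_{0 h}.
import numpy as np

def lambda_matrix(M, order):
    """Lambda^[order]_{h l}, h, l = 0..M; order=2 per (101), order=1 per (102)."""
    L = np.zeros((M + 1, M + 1), dtype=complex)
    for h in range(M + 1):
        for l in range(M + 1):
            if h == l:
                L[h, l] = 1j * np.pi if order == 1 else -4 * np.pi ** 2 / 3
            else:
                d = l - h
                L[h, l] = (1.0 / d ** 2 + 2j * np.pi / d) if order == 1 \
                          else (-2.0 / d ** 2 + 2j * np.pi / d)
    return L

M, zeta_c, zeta_k = 4, 0.4, 25.0          # illustrative (hypothetical) values
a = np.random.default_rng(0).standard_normal(M + 1) + 0j   # stand-in right-hand side a_{0 h}
A = lambda_matrix(M, 2) + zeta_c * lambda_matrix(M, 1) + zeta_k * np.eye(M + 1)
alpha = np.linalg.solve(A, a)             # scaling coefficients alpha_{0 h} of (95)
print(alpha)
```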

Analogously, taking the inner product with the harmonic wavelet functions \(\psi _{{m\,r}}^{{\texttt {HW}}}(t)\), \(\bar{\psi }_{{m\,r}}^{{\texttt {HW}}}(t)\) and using (85)–(92), the harmonic wavelet coefficients \( \beta _{{n\,k}}^{{\texttt {HW}}}\), \( \tilde{\beta }_{{n\,k}}^{{\texttt {HW}}}\) are obtained from the following equations

$$\begin{aligned} \left( {\Gamma _{{{n\,k}|{n\,r}}}^{{\texttt {HW}}[2]}} +\zeta _{c}{\Gamma _{{{n\,k}|{n\,r}}}^{{\texttt {HW}}[1]}} +\zeta _{k}\mathbf {I}_{2M+1}\right) \left( \begin{array}{c} \beta _{{n\,-M}}^{{\texttt {HW}}} \\ \cdots \\ \beta _{{n\,r}}^{{\texttt {HW}}}\\ \cdots \\ \beta _{{n\,M}}^{{\texttt {HW}}} \end{array} \right)&= \left( \begin{array}{c} b_{{n\,-M}}^{{\texttt {HW}}} \\ \cdots \\ b_{{n\,r}}^{{\texttt {HW}}}\\ \cdots \\ b_{{n\,M}}^{{\texttt {HW}}} \end{array} \right), \end{aligned}$$
(103)
$$\begin{aligned} \left( {\overline{\Gamma }_{{{n\,k}|{n\,r}}}^{{\texttt {HW}}[2]}} +\zeta _{c}{\overline{\Gamma }_{{{n\,k}|{n\,r}}}^{{\texttt {HW}}[1]}} +\zeta _{k}\mathbf {I}_{2M+1}\right) \left( \begin{array}{c} \tilde{\beta }_{{n\,-M}}^{{\texttt {HW}}} \\ \cdots \\ \tilde{\beta }_{{n\,r}}^{{\texttt {HW}}}\\ \cdots \\ \tilde{\beta }_{{n\,M}}^{{\texttt {HW}}} \end{array} \right)&= \left( \begin{array}{c} \tilde{b}_{{n\,-M}}^{{\texttt {HW}}} \\ \cdots \\ \tilde{b}_{{n\,r}}^{{\texttt {HW}}}\\ \cdots \\ \tilde{b}_{{n\,M}}^{{\texttt {HW}}} \end{array} \right), \end{aligned}$$
(104)

where \({\Gamma _{{{n\,k}|{n\,r}}}^{{\texttt {HW}}[s]}}=\left( \gamma _{{{n\,k}|{n\,r}}}^{{\texttt {HW}}[s]}\right) _{n=\overline{0,N},\, k,r=\overline{-M,M}}\), \({\overline{\Gamma }_{{{n\,k}|{n\,r}}}^{{\texttt {HW}}[s]}=(-1)^{s}\Gamma _{{{n\,r}|{n\,k}}}^{{\texttt {HW}}[s]}}\), \(s = \overline{1,2}\). Likewise, the connection coefficients \(\gamma _{{{n\,k}|{n\,r}}}^{{\texttt {HW}}[s]}\) of the harmonic wavelet functions are given by

$$\begin{aligned} \gamma _{{{n\,k}|{n\,r}}}^{{\texttt {HW}}[2]}=&\left\{ \begin{array}{ll} \dfrac{-7\pi ^2}{3}\cdot 4^{n+1}, &{} \hbox {k}=\hbox {r}\\ 4^{n}\left( -\dfrac{2}{(r-k)^2}+\dfrac{6\pi i}{r-k}\right) , &{} \hbox {k}\ne \hbox {r}, \end{array} \right. \end{aligned}$$
(105)

respectively

$$\begin{aligned} \gamma _{{{n\,k}|{n\,r}}}^{{\texttt {HW}}[1]}=&\left\{ \begin{array}{ll} 2^n\cdot 3\pi i, &{} \hbox {k}=\hbox {r}\\ \dfrac{2^n}{(r-k)^2}, &{} \hbox {k}\ne \hbox {r}, \end{array} \right. \end{aligned}$$
(106)

for \(s=\overline{1,2}\), \(n=\overline{0,N}\), \(k,\,r=\overline{-M,M}\). The harmonic scaling and wavelet coefficients and their conjugates are obtained by solving the systems (99), (103), respectively (100), (104), and finally the HW-approximation of the solution (52) is found. So-called harmonic connection coefficients and their conjugates (for any derivation order \(s\in \mathbb N\)) are also used; they are provided by the derivatives of the harmonic wavelet basis, and for more technical details of their computation see [16, 49, 52]. The coefficients \(K_1\), \(K_2\) of the homogeneous wavelet solution are determined by applying the initial conditions to (52).
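Analogously to the scaling-level sketch above, the wavelet-level system (103) for a fixed level n can be assembled from (105)–(106) and solved by a standard linear solver. The Python sketch below uses illustrative values and a stand-in right-hand side; it is a sketch of the assembly step only, not the paper's numerical example.

```python
# Minimal sketch: connection-coefficient matrices of the harmonic wavelet functions,
# per (105)-(106), and the linear solve (103) for the coefficients beta_{n k} at level n.
import numpy as np

def gamma_matrix(n, M, order):
    """Gamma^[order]_{n k | n r}, k, r = -M..M; order=2 per (105), order=1 per (106)."""
    size = 2 * M + 1
    G = np.zeros((size, size), dtype=complex)
    ks = range(-M, M + 1)
    for i, k in enumerate(ks):
        for j, r in enumerate(ks):
            if k == r:
                G[i, j] = 2 ** n * 3j * np.pi if order == 1 \
                          else -7 * np.pi ** 2 / 3 * 4 ** (n + 1)
            else:
                d = r - k
                G[i, j] = 2 ** n / d ** 2 if order == 1 \
                          else 4 ** n * (-2.0 / d ** 2 + 6j * np.pi / d)
    return G

n, M, zeta_c, zeta_k = 1, 3, 0.4, 25.0     # illustrative level and truncation
b_n = np.random.default_rng(1).standard_normal(2 * M + 1) + 0j   # stand-in b_{n k}
A = gamma_matrix(n, M, 2) + zeta_c * gamma_matrix(n, M, 1) + zeta_k * np.eye(2 * M + 1)
beta_n = np.linalg.solve(A, b_n)           # wavelet coefficients beta_{n k} of (95)
print(beta_n)
```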

In conclusion, a few wavelet-based methods for damage detection in a structure are summarized below. Variation of wavelet coefficients is a commonly used monitoring process whose main purpose is to identify the existence and severity of damage in a machine or civil structure; it has been established that damage in a structure results in a variation of the wavelet coefficients, usually caused by the change of the modal properties of the structure after it experiences damage (see [1]). Another method is the local perturbation of wavelet coefficients for damage localization in structures: it determines not only the existence but also the location of damage, because the wavelet coefficients tend to show irregularity near the crack. Finally, the reflected-wave method based on local damage is used to localize damage, and measuring the severity of the damage is possible by studying the wave velocity.

5 Conclusions

Equation (3), where the displacement coefficient is constant, is a particular case of the original Eq. (2), where the displacement coefficient is variable; the original purpose of the paper was “a calibration of dyadic wavelets” (of a method of non-classical mathematics for obtaining the Cauchy problem solution, CPS, of (2)) on the particular case of (3), see Sects. 2.5, 4. In this “calibration” approach, the first useful tool (as a method of classical mathematics) was the 2nd-order linear nonhomogeneous differential equations “method”, by which the structural form of the CPS was established (see Sects. 2.1, 3.1); the Laplace operational method is “an interesting update” in which \(x_{o}^{c}(t)\) (the “homogeneous” part of the CPS) is independent of the disturbance (see Sects. 2.2, 3.2). Further, according to the analytical conditional form “method”, it is possible that the displacement x(t) has no analytical form on the interval (see Sects. 2.3, 3.3), i.e., the product theorems for the series are not pointwise operational (according to Sect. 3.1). Finally, the approximations with error evaluation “method” (as “a string method” of classical mathematics) gives an approximation \(z_{N}\) where N is comparable to p, the number of crisp (exact) decimal places (in a particular case, see Sects. 2.4, 3.4). On the other hand, the grammatical evolution (GE) method is operational both in the particular case and in the general case (see Sect. 2.7); this corresponds to the development of GE from static environments to dynamic environments (see [35, 36]). In the general case the classical method (Sect. 2.1) is replaced by methods adapted to particular expressions (see [14]), and the methods of Sects. 2.2–2.7 remain applicable with some adaptations. In fact, Sect. 2.1.1 contains some transformations of (non)homogeneous equations into the scaled forms; these transformations and an extension of the Bernoulli equation as the “absolute” Bernoulli equation are used to obtain the “reduced” canonical form. Notice that the scaled forms are essential in defining the pseudo-linear differential equations (see the footnote [RN] of Sect. 2.1.1). Section 3.1.1 uses results from Sect. 2.1.1 to obtain exact solutions, respectively approximations with error evaluation (according to the contraction principle), for remarkable equations in the literature (including the corresponding standard form).

The main wavelet-based methods for damage detection were classified into three categories (see [1]): (i) variation of wavelet coefficients, which is caused by the change of the modal properties of the structure; (ii) local perturbation of wavelet coefficients in a space domain, which localizes the damage in the structure and involves detecting the irregularity of wavelet coefficients observed in the proximity of the crack location; (iii) reflected wave caused by local damage, which is used to measure the severity as well as the location of damage and analyzes the wave reflected by local damage in the structure. It should be noted that in Sect. 4 the HW-approximation of the solution (52) was obtained for constant coefficients \(\zeta _{c}\), \(\zeta _{k}\). If these coefficients are variable, then the condition \(\zeta _{c},\,\zeta _{k}\in L^1({\mathbb{R}})\) is required and, in this case, the HW-approximation of the solution is obtained by using the frequency convolution method. A discussion of the approximation (transform) fuzzy method, namely the homotopy analysis method (HAM), has been added in Sect. 2.6.

This paper is an expansion of a short previous work, i.e., it extends Sects. 1, 2 (including the new Sect. 2.1.1) and 5, and introduces the new Sects. 3 (including Sect. 3.1.1) and 4.