1 Introduction

Because the atmosphere is a chaotic dynamical system, small errors in weather prediction grow exponentially. If the initial error is sufficiently small, the governing equations can be linearized, and the linearized dynamics produce exponential error growth. Whether an error is small enough for this exponential regime to apply generally depends on the specific meteorological conditions and/or the model under study. The growth of small errors in the model studied in this paper was also addressed in [15]. For a more comprehensive introduction to the problem of weather predictability, we refer readers to the book by Palmer and Hagedorn [6].

If the system governing the evolution of the error were linear, the exponential growth would continue unabated. The Earth's atmosphere, however, is a nonlinear system: as the error becomes larger, its growth rate decreases. Eventually, all systematic growth stops and the size of the error settles at the average distance between two randomly chosen states.

Predictability and average initial error growth in numerical weather prediction models (NWPM) remain an active research topic, e.g., [7, 8]. The latter was first analyzed by Lorenz[9] in 1969. He introduced the quadratic hypothesis, based on the assumption that if the principal nonlinear terms in the atmospheric equations are quadratic, then the nonlinear terms in the equations governing the field of errors will also be quadratic. He could not verify this hypothesis, however, because the method that has to be adopted (for an NWPM we know only the model state, not the real state) yields a limited number of initial error sizes and limited data for a valid approximation. This problem persists today. It is therefore a logical step to use a less complex experimental model in which both the "real" and the model states can be chosen freely, along with the amount of data used for the approximation. The present study examines this hypothesis using the low-dimensional atmospheric model introduced by Lorenz in 1996[10].

Even in systems with exponential error growth, the growth does not start right at the initial time; transient behavior is observed first. After the transient dies out, exponential growth follows, and the duration of this exponential phase is governed by the size of the small initial error and by the model parameters. Predictability, defined as the time interval over which the model error keeps growing, is also expected to depend on these errors and parameters. We investigate this phenomenon as well.

Some results on the topic were published by us in a more compact form in [1].

2 Model

Lorenz[10] introduced a model of nonlinear behavior with N variables \(X_1, \cdots, X_N\) connected by the governing equations:

$${{{\rm{d}}{X_n}} \over {{\rm{d}}t}} = - {X_{n - 2}}{X_{n - 1}} + {X_{n + 1}}{X_{n - 1}} - {X_n} + F$$
(1)

where \(X_{n-2}\), \(X_{n-1}\), \(X_n\), \(X_{n+1}\) are unspecified (i.e., unrelated to actual physical variables) scalar meteorological quantities, F is a constant representing external forcing, and t is time. The index is cyclic, so that \(X_{n-N} = X_{n+N} = X_n\), and the variables can be viewed as existing around a circle. The nonlinear terms of (1) simulate advection, and the linear terms represent mechanical and thermal dissipation. The model can describe the weather system quantitatively only to a certain extent; unlike the well-known Lorenz model of atmospheric convection[11], (1) cannot be derived from any atmospheric dynamic equations. The motivation was to formulate the simplest possible set of dissipative, chaotically behaving differential equations that share some properties with the "real" atmosphere. The reasoning behind the usability of such a model is discussed in more detail in [2, 3].

Details of the numerical integration of (1) are given in the Appendix. For our computation we choose N = 36, so that each sector covers 10 degrees of longitude. The parameter F was set to 8, 9 and 10, successively. We first choose arbitrary values of the variables and, using a fourth-order Runge-Kutta method with a time step Δt = 0.05, corresponding to 6 hours, integrate forward for 14 400 steps, or 10 years. We then use the final values, which should be essentially free of transient effects. For each parameter F, we estimate the global largest Lyapunov exponent \(\lambda_{\max}\) by the method of numerical calculation presented in [12]. We gradually obtain

$$F = \left( {8;9;10} \right) \to {\lambda _{\max }} = \left( {0.33;0.39;0.46} \right).$$

By the definition in [9], a bounded dynamical system with a positive Lyapunov exponent is chaotic. Because all values of the largest Lyapunov exponent of the model are positive and the system is bounded[24], its chaoticity is established for all three values of F. Strictly speaking, we would also need to exclude asymptotically periodic behavior, but such a task cannot be accomplished by numerical simulation. The parameters F and the time unit of 5 days are chosen so as to obtain the same values of the largest Lyapunov exponent as in state-of-the-art models of the complete global atmospheric circulation.
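The integration described above can be sketched in a few lines of Python. This is our own illustration, not the authors' code: the array layout and the use of `np.roll` for the cyclic index are our choices, and the size of the initial perturbation is arbitrary.

```python
import numpy as np

def lorenz96_rhs(x, F):
    # dX_n/dt = (X_{n+1} - X_{n-2}) X_{n-1} - X_n + F, cyclic index via np.roll
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, F):
    # One classical fourth-order Runge-Kutta step
    k1 = lorenz96_rhs(x, F)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
    k4 = lorenz96_rhs(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

N, F, dt = 36, 8.0, 0.05          # dt = 0.05 time units = 6 hours (1 unit = 5 days)
x = F * np.ones(N)
x[0] += 0.01                      # nudge off the unstable steady state X_n = F
for _ in range(14400):            # 10 years of spin-up to shed transients
    x = rk4_step(x, dt, F)
```

After the spin-up loop, `x` should be a state on the attractor, suitable as the "true" initial condition for the experiments that follow.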

3 Ensemble prediction method

The ensemble prediction method employed here is similar to the one in [10] and is used to calculate the average growth of the initial error. We make an initial "run" by choosing an error \(e_{n0}\) and letting \(X_{n0}^\prime = {X_{n0}} + {e_{n0}}\) be the "observed" initial values of the N variables. We then integrate forward from the true and the observed initial states for 50 days (K = 200 steps), obtaining the N sequences \(X_{n0}, \cdots, X_{nK}\) and \(X_{n0}^\prime, \cdots, X_{nK}^\prime \). After that, we let \({e_{nk}} = X_{nk}^\prime - {X_{nk}}\) for all values of k and n. To obtain more representative values, we make a total of M = 250 runs in the same manner, letting the new values of \(X_{n0}\) be the old values of \(X_{nK}\) in each run. Finally, we let

$${e^2}\left( \tau \right) = {1 \over N}\left( {e_{1k}^2 + \cdots + e_{Nk}^2} \right)$$

be the average of the N values, where τ = kΔt is the predictable range and

$$\log {E^2}\left( \tau \right) = {1 \over M}\left( {\log {e^2}{{\left( \tau \right)}_1} + \cdots + \log {e^2}{{\left( \tau \right)}_M}} \right)$$

is the average of the M values. The logarithmic average is chosen because of its suitability for comparison with growth governed by the largest Lyapunov exponent. For further information, see [13–15].
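A compact Python sketch of this averaging scheme follows. It is an illustration under our own naming; in particular, the direction of the initial error is drawn at random here, which the scheme above does not specify.

```python
import numpy as np

def rhs(x, F):                      # Lorenz-96 right-hand side of (1)
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step(x, dt, F):                 # one fourth-order Runge-Kutta step
    k1 = rhs(x, F); k2 = rhs(x + 0.5 * dt * k1, F)
    k3 = rhs(x + 0.5 * dt * k2, F); k4 = rhs(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def log_E2(x0, e0_norm, F, dt=0.05, K=200, M=250, seed=0):
    """log E^2(tau), tau = k*dt: the M-run logarithmic average of e^2(tau)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    acc = np.zeros(K + 1)
    for _ in range(M):
        e0 = rng.standard_normal(x.size)
        e0 *= e0_norm / np.linalg.norm(e0)   # scale the error to the chosen norm
        xp = x + e0                          # "observed" initial state
        for k in range(K + 1):
            acc[k] += np.log(np.mean((xp - x) ** 2))
            if k < K:
                x, xp = step(x, dt, F), step(xp, dt, F)
        # the final true state X_nK of this run seeds the next run
    return acc / M
```

Calling `log_E2` with a post-transient state returns the curve whose exponential of one half is the average error E(τ) analyzed below.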

4 Quadratic hypothesis

According to Lorenz[10], the exponential growth eventually ceases due to processes represented by the nonlinear terms in the equations governing the weather. The most important are the quadratic terms, which represent the advection of the temperature and velocity fields. Under the assumption that the principal nonlinear terms in the atmospheric equations are quadratic, the nonlinear terms in the equations governing the field of errors will also be quadratic. To describe this, Lorenz[10] defined

$${{{\rm{d}}E(t)} \over {{\rm{d}}t}} = aE\left( t \right) - bE{\left( t \right)^2}$$
(2)

where E(t) is the distance at time t between two initially nearby trajectories, and a, b are constants. The quadratic hypothesis is also used to describe the behavior of initial error growth, for example, in [16, 17].

4.1 Method

To study the behavior of (2), we form the differences \({y_k} = {{\left( {E\left( {\tau + \Delta t} \right) - E\left( \tau \right)} \right)} \over {\Delta t}}\) at the points \({x_k} = {{\left( {E\left( \tau \right) + E\left( {\tau + \Delta t} \right)} \right)} \over 2}\), where E is the average initial error growth calculated by the ensemble prediction method (Section 3).

Next we interpolate the data (x k , y k ). The interpolation equations are

$$y(t) = {{{\rm{d}}E(t)} \over {{\rm{d}}t}} = aE\left( t \right) - bE{\left( t \right)^2}$$
(2)
$$y(t) = {{{\rm{d}}E(t)} \over {{\rm{d}}t}} = aE\left( t \right) - bE{\left( t \right)^3}$$
(3)
$$y(t) = {{{\rm{d}}E(t)} \over {{\rm{d}}t}} = aE\left( t \right) - bE{\left( t \right)^4}$$
(4)
$$y(t) = {{{\rm{d}}E(t)} \over {{\rm{d}}t}} = - aE(t)\ln (bE(t)).$$
(5)

Equation (2) represents the quadratic hypothesis under examination. The alternative forms (3) and (4) are added because Lorenz[9] noticed that cubic and quartic equations would also fit his data. Equation (5) is chosen because if \(Q\left( t \right) = \ln \left( {\overline {E\left( t \right)} } \right)\), with Ē being the normalized E, then \({{{\rm{d}}Q\left( t \right)} \over {{\rm{d}}t}} = a\left( {1 - {{\rm{e}}^{Q\left( t \right)}}} \right)\) represents the quadratic hypothesis; in [18], the linear fit \({{{\rm{d}}Q\left( t \right)} \over {{\rm{d}}t}} = - aQ\left( t \right)\) was argued to be superior to the quadratic hypothesis. The parameters a and b in (2)–(5) are examined and discussed in the next section.
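For the quadratic form (2), the fit is linear in the parameters a and b, so ordinary least squares suffices. The sketch below is our own (the function name is hypothetical, and in the test a synthetic logistic curve stands in for the ensemble-derived E):

```python
import numpy as np

def fit_quadratic_law(E, dt):
    """Least-squares estimates of a, b in dE/dt = a E - b E^2 from a sampled
    curve E(tau), using the midpoints x_k and differences y_k of Section 4.1."""
    y = np.diff(E) / dt                 # y_k ~ dE/dt
    x = (E[:-1] + E[1:]) / 2.0          # x_k, midpoint values of E
    A = np.column_stack([x, -x * x])    # y = a*x - b*x^2 is linear in (a, b)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b
```

When the data are generated by the logistic solution of (2) itself, the fit recovers a and b up to the discretization error of the finite differences.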

4.2 Results

Different initial errors \(e_0\) exhibit different error growth behaviors. To study this, we selected six magnitudes of \(\Vert e_0 \Vert\) for each \(F\): \(\Vert e_{0,1} \Vert = 0.0001,\, \Vert e_{0,2} \Vert = 0.001,\, \Vert e_{0,3} \Vert = 0.01,\, \Vert e_{0,4} \Vert = 0.1,\, \Vert e_{0,5} \Vert = 0.6,\, \Vert e_{0,6} \Vert = 1\), where \(\Vert \cdot \Vert\) denotes the Euclidean norm. The interpolation equations were tested for all three values of the parameter F and all initial errors \(e_0\). Table 1 shows the RMS error between the values obtained from the ensemble prediction and from the interpolation equations; the error is divided by the average value of \({{\left( {E\left( {\tau + \Delta t} \right) - E\left( \tau \right)} \right)} \over {\Delta t}}\). Fig. 1 displays the error growth rate \({{{\rm{d}}E} \over {{\rm{d}}t}}\) versus E for all parameters F and for \(e_{0,2}\), \(e_{0,4}\) and \(e_{0,5}\). Each interpolation equation gives specific values of a and b for a particular F and \(e_0\). Our aim is to express a and b in terms of well-known parameters of the system. The early growth rate should be close to \({{{\rm{d}}E} \over {{\rm{d}}t}} = {\lambda _{\max }}E\), which means \(a = \lambda_{\max}\) for all interpolation equations. For (2)–(4), the results are as follows. The constant a measures the growth rate of small errors; the quadratic (cubic, quartic) term must be negative if a is positive, since it is the only factor that can stop the growth. If E is normalized so that the value it approaches as t → ∞ is unity, then b = a. For unnormalized E, \(b = {{{\lambda _{\max }}} \over {{E^ * }}}\) for (2), \(b = {{{\lambda _{\max }}} \over {{E^ * }^2}}\) for (3) and \(b = {{{\lambda _{\max }}} \over {{E^ * }^3}}\) for (4), where E* denotes the saturation value of E. For (5), \(b = {1 \over {{E^ * }}}\).

Fig. 1

The error growth rate \({{{\rm{d}}E} \over {{\rm{d}}t}}\) versus E for all parameters of F and for \(e_{0,2}\), \(e_{0,4}\) and \(e_{0,5}\). The thick line represents the ensemble prediction, the thin line corresponds to (2), the dashed line to (3), the thin dashed line to (4) and the large dashed line to (5)

Table 1 RMS error between the values obtained from the ensemble prediction and from the interpolation equations. The error is divided by the average value of \({{\left( {E\left( {\tau + \Delta t} \right) - E\left( \tau \right)} \right)} \over {\Delta t}}\) and displayed in percent. Gray cells mark the best results

From Table 1 and Fig. 1, it is obvious that the most accurate, and therefore usable, hypotheses are the quadratic (2) and logarithmic (5) ones. Using the theoretical values of the parameters a and b would make the inaccuracy of the cubic and quartic hypotheses even greater. From now on, we therefore work only with the quadratic and logarithmic hypotheses. Table 2 shows the RMS error between the values obtained from the ensemble prediction and from (2) and (5), where the parameters a and b take their expected theoretical values. The percent error increases more for (5) than for (2). The difference between Table 2 and Table 1 is displayed in Table 3.

Table 2 The RMS error between the values obtained from the ensemble prediction and from (2) and (5), where the parameters a and b take their expected theoretical values. The error is divided by the average value of \({{\left( {E\left( {\tau + \Delta t} \right) - E\left( \tau \right)} \right)} \over {\Delta t}}\) and displayed in percent
Table 3 The absolute values of the differences between Table 2 and Table 1

5 Exponential growth and predictability

The usability of the exponential model of initial error growth depends on the size of the initial error as well as on the model parameter F. In the introduction, we mentioned that the exponential growth \({E_{\exp }}\left( t \right) = {e_0}{{\rm{e}}^{\left( {{\lambda _{\max }}t} \right)}}\) governed by the largest Lyapunov exponent \(\lambda_{\max}\) occurs for a sufficiently small initial error \(e_0\). In this section, we determine the sizes of initial error for which this holds and therefore the range of usability of the exponential model.

Predictability \(t_p\) is the time interval over which the initial error grows systematically while remaining smaller than the average distance between two randomly chosen states. This section also focuses on the dependence of \(t_p\) on the size of the initial error and the parameter F, and on a possible connection between predictability and exponential growth.

5.1 Method

If we plot the time evolution of the average prediction error E obtained from the Lorenz model together with the exponential growth (Fig. 2), we can hardly decide whether exponential growth occurs. We could also make rough guesses of the predictable time from the graphs of the time evolution of E (Fig. 2), but we would rather relate it to a better specified value than the saturated value E*. In both cases we want more precise values and better accuracy, so we introduce a more refined method. It is based on the observation that if exponential growth is present, the ratio of the average errors E calculated by the ensemble prediction method (Section 3) at two consecutive time steps Δt is

$${{E\left( \tau \right)} \over {E\left( {\tau - \Delta t} \right)}} = {{{E_{\exp }}\left( \tau \right)} \over {{E_{\exp }}\left( {\tau - \Delta t} \right)}} = {{\rm{e}}^{\left( {{\lambda _{\max }}\Delta t} \right)}}$$

and hence it follows that

$$G\left( \tau \right) = {{\ln \left( {{{E\left( \tau \right)} \over {E\left( {\tau - \Delta t} \right)}}} \right)} \over {({\lambda _{\max }}\Delta t)}} = 1.$$

Boundaries of predictability occur when

$$E\left( \tau \right) = E\left( {\tau - \Delta t} \right) = {E^ * }$$

which means that G (τ) = 0. Through the use of G, we analyze exponential growth and predictability.

Fig. 2

Time variations of the average prediction error E obtained from the Lorenz model (the thick line) for F = 8 and for \(e_{0,2}\), \(e_{0,4}\), \(e_{0,5}\), and the exponential growth governed by the largest Lyapunov exponent (the thin line)

5.2 Results

The function G is calculated for a variety of initial errors \(e_0\) and parameters F. We again choose six magnitudes of \(\Vert e_0 \Vert\) for each parameter F: \(\Vert e_{0,1} \Vert = 0.0001,\, \Vert e_{0,2} \Vert = 0.001,\, \Vert e_{0,3} \Vert = 0.01,\, \Vert e_{0,4} \Vert = 0.1,\, \Vert e_{0,5} \Vert = 0.2,\, \Vert e_{0,6} \Vert = 1\), where \(\Vert \cdot \Vert\) denotes the Euclidean norm. Table 4 displays, for each initial error \(e_0\) and each parameter F, the time interval \(t_e\) during which the results of the ensemble prediction method stay close to the theoretical exponential growth, together with the predictability \(t_p\) (the length of the time interval over which E is growing). Fig. 3 shows the time evolution of G for all F and for \(e_{0,2}\), \(e_{0,4}\), \(e_{0,5}\). The experimental data oscillate around the theoretically expected values, and therefore the intervals \(t_e\) and \(t_p\) are measured as the lengths over which a similar oscillation around the expected values persists, rather than as exact matches. Table 4 and Fig. 3 also show that exponential growth is present for \(e_0\) smaller than \(e_{0,4} = 0.1\) and never starts at the very beginning of the growth. It is always preceded by a wave (also visible in Table 3) whose length is similar throughout the spectrum of F and \(e_0\). To analyze this behavior further, we focus on the dependence of the least upper bound (supremum) \(t_{e,u}\) of the time interval \(t_e\) on the natural logarithm of the initial errors from \(e_{0,1}\) to \(e_{0,4}\) for all parameters F (Fig. 4), and of the predictability \(t_p\) on the natural logarithm of the initial errors from \(e_{0,1}\) to \(e_{0,6}\) (Fig. 5).

Fig. 3

Time evolution of function \(G\left( \tau \right) = {{\ln \left( {{{E\left( \tau \right)} \over {E\left( {\tau - \Delta t} \right)}}} \right)} \over {{\lambda _{\max }}\Delta t}}\) for all parameters of F and for e 0,2, e 0,4 and e 0,5

Fig. 4

Time length t e versus natural logarithm of initial errors from e 0,1 to e 0,4 for all parameters of F. The dashed line is for F = 8, the thin dashed line is for F = 9, the large dashed line is for F = 10 and solid lines represent linear interpolations

Fig. 5

Time length t p versus natural logarithm of initial errors from e 0,1 to e 0,6 for all parameters of F. The dashed line is for F = 8, the thin dashed line is for F = 9, the large dashed line is for F = 10 and solid lines represent linear interpolations

Table 4 Time interval t e , where results from the ensemble prediction method are close to theoretical exponential growth (N means negative result) and predictability t p (time intervals, where E is growing) for each initial error e 0 and each parameter F

The results in Figs. 4 and 5 indicate a linear dependence of both \(t_{e,u}\) and \(t_p\) on ln(\(e_0\)). To quantify it, we linearly interpolated the experimental data. The interpolation equations \(t_{e,u}(e_0)\) and \(t_p(e_0)\) for all parameters F are

$$t_{e,u}=c+d {\rm ln}(e_{0})$$
(6)
$$\matrix{{{t_p} = f + h\ln ({e_0})} \cr {({c_{F = 8}};{c_{F = 9}};{c_{F = 10}}) = (0.2; - 0.2; - 0.2)} \cr {({d_{F = 8}};{d_{F = 9}};{d_{F = 10}}) = ( - 3; - 2.6; - 2.2)} \cr {({f_{F = 8}};{f_{F = 9}};{f_{F = 10}}) = (20;15;12)} \cr {({h_{F = 8}};{h_{F = 9}};{h_{F = 10}}) = ( - 3.1; - 0.7; - 2.2).} \cr }$$
(7)

Equations (6) and (7) show that as the initial error increases, the windows of exponential growth and of predictability shrink in proportion to the natural logarithm of the initial error. The coefficient c is close to 0 for all F. The coefficient f coincides with the \(t_p\) of \(e_{0,6}\) for all F. The coefficients d and h are similar to each other. Theoretically possible values of the coefficients follow from the definition of exponential growth

$${E_{\exp }}\left( t \right) = {e_0}{{\rm{e}}^{\left( {{\lambda _{\max }}t} \right)}}$$

thus

$$t\left( {{e_0}} \right) = - {1 \over {{\lambda _{\max }}}}\ln \left( {{{{e_0}} \over {{E_{\exp }}}}} \right).$$

The values of \({1 \over {{\lambda _{\max }}}}\) are

$$\left( {{1 \over {{\lambda _{\max, F = 8}}}};{1 \over {{\lambda _{\max, F = 9}}}};{1 \over {{\lambda _{\max, F = 10}}}}} \right) = \left( {3.03;2.56;2.17} \right).$$

Comparing these with d and h, we see that the results are similar.

From (6) and (7), it follows that the function \(t_p(t_{e,u})\) (Fig. 6) is linear:

$$\matrix{{{t_p} = o + p{t_{e,u}}} \cr {({o_{F = 8}};{o_{F = 9}};{o_{F = 10}}) = (20;15;12)} \cr {({p_{F = 8}};{p_{F = 9}};{p_{F = 10}}) = (1;1;1).} \cr }$$
(8)
Fig. 6

Predictability t p versus time length t e,u . The dashed line is for F = 8, the thin dashed line is for F = 9, the large dashed line is for F = 10 and solid lines are linear interpolations

If exponential growth is present, \(t_p\) equals \(t_{e,u}\) plus the predictability for \(e_{0,6}\), for all parameters of F.

6 Discussion

Lorenz's results[9] fit the cubic relation (3) fairly well, although he used only a limited amount of data in his study. We showed that neither (3) nor (4) fits our data properly compared to the other alternatives. The two usable hypotheses approximating the error growth rate are

$${{{\rm{d}}E(t)} \over {{\rm{d}}t}} = {\lambda _{\max }}E\left( t \right) - {{{\lambda _{\max }}} \over {{E^ * }}}E{\left( t \right)^2}$$
(9)
$${{{\rm{d}}E(t)} \over {{\rm{d}}t}} = - {\lambda _{\max }}E\left( t \right)\ln \left( {{{E\left( t \right)} \over {{E^ * }}}} \right)$$
(10)

where λ max is the largest Lyapunov exponent and E* is the saturated value of E.

If we look for the best approximation of the model values of the error growth rate Ė, the quadratic law (2) fits best for \(e_0\) up to about 0.1; for larger values, the logarithmic law (5) is better. On the other hand, if we want to estimate the parameters of the model or use (9) and (10) directly, then, according to Fig. 2, Table 2 and Table 3, it is better to use (9) for \(e_0 \in \langle 0;1\rangle\) and (10) for \(e_0 \in \langle 1;2\rangle\). The reason for the difference can be found in Fig. 7. We can estimate that (2) performs best for \(e_0\) between 0.001 and 0.1, while (5) performs best for \(e_0\) between 1 and 1.5. The difference between the theoretical and experimental values of the parameter a is much higher for (5) than for (2), and the parameter b in (5) appears inside the logarithmic function, so any difference in it is amplified.

Fig. 7

Comparison of theoretical and experimental parameters a, b for (2, 5). The thin dashed lines represent F = 8, the dashed lines are for F = 9 and the solid lines are for F = 10. The thin lines are theoretical values and the thick lines are experimental values

Here, we also need to mention that the variables \(X_1, \cdots, X_N\) vary between approximately −6 and +12. Fig. 8 shows the time variation of X during a period of 180 days. The average value \(\overline{X}\) of \(X_n\) is 2, which means that for \(e_0 \leqslant \overline{X}/2\) it is better to use (9), and for \(e_0 \geqslant \overline{X}/2\) it is better to use the logarithmic law (10). This is in good agreement with [18], where the same was suggested. The solutions of (9) and (10) are

$$E(t) = {{E*} \over {\left( {{{E*} \over {{e_0}}} - 1} \right){{\rm{e}}^{({\rm{ - }}{\lambda _{{\rm{max}}}}t)}}{\rm{ + 1}}}}$$
(11)
$$E(t) = E*{\left( {{{{e_0}} \over {E*}}} \right)^{{{\rm{e}}^{({\rm{ - }}{\lambda _{{\rm{max}}}}t)}}}}.$$
(12)
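Both closed forms are easy to verify numerically. The sketch below is ours (the function names are hypothetical); evaluating each solution and differentiating it recovers the corresponding growth law.

```python
import numpy as np

def E_quad(t, e0, lam, E_star):
    # Solution (11) of the quadratic law dE/dt = lam*E - (lam/E*)*E^2
    return E_star / ((E_star / e0 - 1.0) * np.exp(-lam * t) + 1.0)

def E_log(t, e0, lam, E_star):
    # Solution (12) of the logarithmic law dE/dt = -lam * E * ln(E/E*)
    return E_star * (e0 / E_star) ** np.exp(-lam * t)
```

Both functions satisfy E(0) = \(e_0\) and E(t) → E* as t → ∞, the two limits the discussion above relies on.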
Fig. 8

Time variations of X 1 during a period of 180 days

The maximum sufficiently small initial error exhibiting exponential growth of E(t) is \(e_0 = 0.1\) (according to Table 4 and Fig. 3). Looking at the validity of the quadratic hypothesis with experimental parameters, we find the same maximal value. We may speculate that this is not a coincidence: a sufficiently small error is directly connected with the quadratic hypothesis, and if we use a sufficiently small initial error, the growth will be governed by (9).

The greatest lower bound (infimum) \(t_{e,l}\) of the time interval \(t_e\) has a similar value across the spectrum of \(e_0\) (Table 4), and we did not find any interpolation equation for it. The behavior of E(t) from the ensemble prediction approach before the exponential growth sets in has a similar form for all F and \(e_0\) (Table 4, Figs. 2 and 3). The error measured by the ensemble prediction approach decreases during roughly the first 0.3 days, and after about 0.5 days it is again approximately as large as \(e_0\). The exponential growth overestimates the error for approximately the first 1.3 days, while it underestimates the error afterwards (until \(t_{e,l}\)). The maximum of this wave occurs at approximately 2.3 days. The same behavior was also observed for \(e_0 \geqslant 0.1\). To explain it, we recall that the Lyapunov exponent is a long-term average characteristic. As stated in [19, 20], the error growth differs from the one derived from the Lyapunov exponent during the first few days; this is true not only for low-dimensional models, but also for more complex global atmospheric circulation models.

The least upper bound \(t_{e,u}\) of the time interval \(t_e\) follows (6), which can generally be written as

$${t_{e,u,{\rm{theoretical}}}} = - \left( {{1 \over {{\lambda _{\max }}}}} \right)\ln \left( {{e_0}} \right).$$
(13)

If we substitute (13) into the equation for the exponential growth \({E_{\exp, u}}\left( t \right) = {e_0}{{\rm{e}}^{\left( {{\lambda _{\max }}t} \right)}}\), we get

$${E_{e,u,{\rm{theoretical}}}} = 1.$$
(14)

\(E_{e,u,{\rm theoretical}}\) is invariant with respect to \(e_0\) and F. Fig. 9 shows that \(E_{e,u}\) from the ensemble prediction approach is very similar.

Fig. 9

Time variations of the average prediction error E around E = 1 obtained from the Lorenz’s model (the thick line) for F = 9, for e 0,1, e 0,3, e 0,4 and exponential growth governed by the largest Lyapunov exponent (the thin line)

Here we remind the reader that it is better to use (9) for \(e_0 \in \langle 0;1\rangle\) and (10) for \(e_0 \in \langle 1;2\rangle\), and again we dare say that it is not a coincidence that the value 1 plays a role in both. Equation (7), approximating the predictability \(t_p\) (the length of the time interval over which E is growing) with the theoretical parameters f and h, can be written as

$${t_p} = {t_{p,{e_0} = 1}} - \left( {{1 \over {{\lambda _{\max }}}}} \right)\ln \left( {{e_0}} \right)$$
(15)

where \({t_{p,{e_0} = 1}}\) is the predictability for \(e_0 = 1\). We did not find any general expression for \({t_{p,{e_0} = 1}}\). The second term of the expression represents \(t_{e,u,{\rm theoretical}}\), and for \(e_0 \in (0; 0.1)\), (15) can be rewritten as

$${t_p} = {t_{p,{e_0} = 1}} + {t_{e,u,{\rm{theoretical}}}}$$
(16)

which is the same equation as (8). This means that for \(e_0 \in (0; 0.1)\), the increase in predictability time \(t_p\) is due to the increase in the duration \(t_{e,u}\) of the exponential growth. The maximum predictability, governed by (11), is 49 days for F = 8 and \(e_{0,1}\). The lowest predictability, governed by (12), is 12 days for F = 10 and \(e_{0,6}\) (Table 4).
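Relation (15) is a one-line formula; as an illustration (taking \(t_{p,e_0=1}\) from the coefficient f for F = 8 and \(\lambda_{\max} = 0.33\), both values from the text):

```python
import numpy as np

def predictability(e0, lam_max, t_p_at_one):
    # Eq. (15): t_p = t_p(e0 = 1) - ln(e0) / lam_max, valid for small e0
    return t_p_at_one - np.log(e0) / lam_max

# F = 8: lam_max = 0.33, t_p(e0 = 1) = f = 20 days
tp = predictability(1e-4, 0.33, 20.0)   # close to the reported maximum of 49 days
```

The formula reproduces the order of magnitude of the tabulated predictability; an exact match is not expected, since the fitted coefficients carry experimental scatter.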

7 Conclusion

This article analyzes the average error growth in a low-dimensional atmospheric model[10]. Theoretical hypotheses with experimental and theoretical coefficients, as well as the exponential model, are compared with the ensemble prediction method for different initial errors and model parameters. The dependence of predictability and of the validity of exponential growth on these errors and parameters is also studied.

The important resulting values are 0.1 and 1. The value 0.1 is the border for the hypotheses with experimental coefficients, and it is also the maximum sufficiently small initial error with exponential growth. If the initial error is smaller than 0.1, the quadratic hypothesis is preferable; if it is bigger, the logarithmic hypothesis becomes superior. The value 1 is the border for the hypotheses with theoretical coefficients, and it is also the size of the error at the least upper bound (supremum) of the duration of exponential growth for all sufficiently small initial errors and model parameters. If the initial error is smaller than 1, the quadratic hypothesis is preferable; if it is bigger, the logarithmic hypothesis becomes superior. Predictability, as the time interval over which the model error grows, is, for a small initial error, the sum of the least upper bound of the interval of exponential growth and the predictability for an initial error of size 1, as shown in (16). The least upper bound of the interval of exponential growth is inversely proportional to the largest Lyapunov exponent and directly proportional to the natural logarithm of the small initial error, as shown in (15). Exponential growth does not start at the beginning of the error growth, and the greatest lower bound of the interval of exponential growth has similar values across the spectrum of sufficiently small initial errors.

These results agree relatively well with the skill and predictability of current meteorological models, e.g., [7, 8] (the predictability is approximately two weeks).

This number (two weeks) cannot itself be transferred to dynamical systems of a different nature (nonlinear oscillators, mechanical systems, etc.). When studying the predictability of other systems, however, the methodology of this article can be applied, for example, the combination of the Lyapunov-exponent approach with ensemble prediction, or the estimation of error growth depending on its stage.