Robust neural dynamics with adaptive coefficient applied to solve the dynamic matrix square root

Zeroing neural networks (ZNNs) have shown state-of-the-art performance on dynamic problems. However, ZNNs are vulnerable to perturbations, which raises reliability concerns owing to the potentially severe consequences. Although some models have been reported to possess enhanced robustness, they pay for it with slower convergence. In order to address these problems, a robust neural dynamics with an adaptive coefficient (RNDAC) model is proposed, aided by a novel adaptive activation function and a robust evolution formula to boost convergence speed and preserve robustness and accuracy. To validate and analyze the performance of the RNDAC model, it is applied to solving the dynamic matrix square root (DMSR) problem. Experiment results show that the RNDAC model reliably solves the DMSR problem perturbed by various noises. Using the RNDAC model, we are able to reduce the residual error from the order of 10^1 to 10^{-4} under noise perturbation while reaching a satisfying and competitive convergence speed, converging within 3 s.


Introduction
The dynamic matrix square root (DMSR) problem is broadly applied in engineering and research, such as power system interconnection [1], automatic control [2], signal processing [3], neural networks [4], etc. Many practical engineering applications or problems can be transformed into an underlying DMSR problem, which has attracted an increasing number of researchers to these topics in place of time-independent problems. Among many works, the zeroing neural network (ZNN) is one of the most comprehensively investigated approaches to dynamic problems [5][6][7][8][9], owing to its performance in both parallel computing and convergence speed [10].
¹ School of Electronic and Information Engineering, Guangdong Ocean University, Zhanjiang 524088, China
ZNN models were proposed to overcome the disadvantages of earlier static numerical algorithms and neural networks; they solve dynamic problems in real time, in a predictable manner, by exploiting time-derivative information [11][12][13]. Noise perturbation serves as an essential surrogate to evaluate the robustness of a dynamic solution model before deployment [14][15][16]. However, it has been reported that the original ZNNs are easily interfered with by measurement noises introduced during model implementation, thereby severely limiting their applications in complicated environments [17,18]. In order to address this problem, several solution-model enhancement techniques have been proposed and investigated [19,20]. ZNN robustness enhancement is the task of improving the solving accuracy of ZNN models under various measurement-noise-perturbed environments [21,22]. As a representative approach, integration-enhanced ZNN models are naturally immune to noise perturbation, which is a milestone in this field [23,24]. However, these methods adopt a fixed scale parameter and have limitations that lead to slower convergence or require redundant manual adjustment [25]. After that, some improved ZNN models were proposed, boosted by modified activation functions [26][27][28]. An activation function is usually realized by a monotonically increasing odd function for accelerating convergence or transforming the pose of the ZNN model [29].
Nevertheless, existing activation-function-based ZNN models do not exploit the residual error information generated during the dynamic problem solution procedure, thereby degrading their convergence [30]. Besides, note that the residual error information is a critical component when realizing a ZNN model, since it is the foundation on which the error function is constructed [31]. Therefore, developing better activation function design approaches that exploit the residual error information is valuable for further improving solution performance (Table 1).
This gap motivates us to propose a more efficient activation function that improves the performance of ZNN models with the aid of residual error information while maintaining robustness even when perturbed by noises. Therefore, the adaptive activation function is proposed, which activates each component of the solution system separately in a decoupled manner and leads to better convergence and accuracy. Combining it with the robust evolution formula, the RNDAC model is proposed. The proposed RNDAC model is then applied to the DMSR problem for the first time to demonstrate its effectiveness and potential. The most distinctive characteristic of the RNDAC model is that it adopts the component information of the residual error to optimize the real-time solution effect of the model for the first time. Further, comprehensive mathematical derivations and experiments demonstrate the validity and superiority of the RNDAC model theoretically and empirically, respectively. To give an overview of the general design and implementation process, the graphical flowchart of the RNDAC model for solving the DMSR problem is shown in Fig. 1. First, the error function is established based on the definition of the DMSR problem for monitoring the solution state. Second, the evolution formula of the RNDAC model is proposed to improve the convergence speed and enhance the robustness of existing models. Third, by combining the error function with the evolution formula, the RNDAC model for solving the DMSR problem is implemented. Finally, comprehensive comparison experiments are designed and conducted to verify the performance of the RNDAC model. The main contributions of this paper are listed as follows:
1. Different from previous work, the RNDAC model designs a new evolution formula and employs a new nonlinear adaptive activation function, which accelerates convergence to the theoretical solution of the DMSR problem and gains predominant robustness performance.
2. Four theorems are given to verify the performance of the RNDAC model and analyze its robustness while solving the DMSR problem under three kinds of noises: constant, time-varying linear, and bounded random noises. The proof processes are given in detail.
3. Quantitative and visual simulation experiments are carried out on a specific example to demonstrate the superior convergence and robustness of the RNDAC model under noise interference.
The rest of the paper is organized into the following six sections. "Preliminaries and related scheme formulation" presents the formulation and preliminary preparation of the DMSR problem to be solved. Next, the construction and evolution formula of the RNDAC model are given, and the corresponding analyses and proofs of the convergence of the RNDAC model are carried out in "RNDAC model construction and convergence analysis". In "Robustness of the RNDAC model", rigorous theoretical analysis and corresponding proof steps are given, completing the proof of the robustness of the RNDAC model. "Simulations" provides quantitative and visual results and analyses, which further illustrate the superiority of the RNDAC model through comparative experiments. Furthermore, the highlights and limitations of this work are discussed in "Discussion". Finally, the paper is summarized in "Conclusions".

Preliminaries and related scheme formulation
The mathematical formula of the DMSR problem is expressed as follows:

X²(t) = N(t), t ∈ [0, +∞),  (1)

where the matrix N(t) ∈ R^{n×n} is a known smooth dynamic matrix, X(t) ∈ R^{n×n} is the unknown dynamic matrix to be solved, and t denotes time. The superscript 2 stands for the matrix square operator. The actual meaning of making Eq. (1) hold is to find a suitable solution X(t). To this end, the error function is furnished as

E(t) = X²(t) − N(t).  (2)

Note that Eq. (1) holds if and only if all components of the error function E(t) are zero. To address this problem, the original zeroing neural network (OZNN) model is adopted [2]. In light of the OZNN evolution formula Ė(t) = −γE(t) and the error function (2), the following OZNN model for solving the DMSR problem (1) is presented:

Ẋ(t)X(t) + X(t)Ẋ(t) = Ṅ(t) − γ(X²(t) − N(t)),  (3)

where γ > 0 is a scale factor, and Ẋ(t) and Ṅ(t) represent the time derivatives of X(t) and N(t), respectively. Noteworthy, the convergence speed and robustness of the OZNN model (3) are unsatisfactory, given that the disturbance of various noises severely weakens the accuracy of the solution system or even makes it break down. A residual-based adaptive coefficient ZNN (RACZNN) model is presented in [21]. The RACZNN model effectively avoids the inflexibility of manually setting the scale factor and converges faster. The RACZNN model is formulated as

Ė(t) = −ε(E(t)) E(t) − σ(E(t)) ∫₀ᵗ E(δ) dδ,  (4)

where the adaptive scale coefficient ε(·) > 0 : R^{n×n} → R and the feedback coefficient σ(·) > 0 : R^{n×n} → R. The adaptive scale coefficient ε(·) is constructed in [21] as an increasing function of the residual norm ‖E(t)‖_F with a parameter e > 1, where ‖·‖_F represents the Frobenius norm; similarly, the adaptive feedback coefficient σ(·) is constructed as a function of ‖E(t)‖_F with a parameter g > 0.
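As a numerical sketch (not from the paper; the instance matrix, initial state, and step sizes below are illustrative assumptions), the OZNN model (3) can be integrated by noting that its left-hand side is linear in Ẋ(t): using the identity vec(AXB) = (Bᵀ ⊗ A)vec(X), the equation is vectorized and solved for Ẋ(t) at each step.

```python
import numpy as np

def oznn_step(X, N, Ndot, gamma):
    """Solve the OZNN model (3) for Xdot:
    Xdot @ X + X @ Xdot = Ndot - gamma * (X @ X - N).
    The left side is linear in Xdot, so it is vectorized with
    Kronecker products (column-major vec) and solved directly."""
    n = X.shape[0]
    I = np.eye(n)
    # vec(Xdot @ X) = (X.T kron I) vec(Xdot); vec(X @ Xdot) = (I kron X) vec(Xdot)
    A = np.kron(X.T, I) + np.kron(I, X)
    rhs = (Ndot - gamma * (X @ X - N)).flatten(order="F")
    return np.linalg.solve(A, rhs).reshape((n, n), order="F")

# Toy constant instance: N = diag(4, 9), whose principal square root is diag(2, 3).
N = np.diag([4.0, 9.0])
Ndot = np.zeros((2, 2))
X = np.eye(2) * 2.5                      # arbitrary initial state
dt, gamma = 1e-3, 10.0
for _ in range(5000):                    # forward-Euler integration over 5 s
    X = X + dt * oznn_step(X, N, Ndot, gamma)
residual = np.linalg.norm(X @ X - N)     # Frobenius norm of E(t)
```

For this constant instance the error dynamics reduce to exponential decay at rate γ, so the residual is driven toward zero.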

RNDAC model construction and convergence analysis
To solve the DMSR problem (1) robustly under various noise interference environments and accelerate the convergence, the RNDAC model is proposed in this section.

Model formulation
Different from previous work, a new evolution formula is developed, which makes the RNDAC model perform better on the two crucial performance indicators of solution accuracy and convergence speed. The evolution formula of the RNDAC model is designed as

Ė(t) = −η κ(E(t)) − ζ ϕ( E(t) + η ∫₀ᵗ κ(E(δ)) dδ ),  (7)

where the scale coefficient η > 0 and the feedback coefficient ζ > 0. The functions κ(·) : R^{n×n} → R^{n×n} and ϕ(·) : R^{n×n} → R^{n×n} represent the adaptive control and the adaptive feedback activation function, respectively. Both κ(·) and ϕ(·) can be constructed from an elementwise function f(·) of either of the following forms:
• Power bounded adaptive function (8);
• Exponential bounded adaptive function (9);
where i, j = 1, 2, ..., n, the parameter ς > 0 is adapted to limit the residual error of the solving system and enhance robustness, and the hyperparameters  > 1 and ι > 1 are utilized to control the convergence speed.
Therefore, through the combination of Eqs. (2) and (7), the RNDAC model for solving the DMSR problem (1) is formulated as

Ẋ(t)X(t) + X(t)Ẋ(t) = Ṅ(t) − η κ(E(t)) − ζ ϕ( E(t) + η ∫₀ᵗ κ(E(δ)) dδ ).  (10)

In addition, the RNDAC model (10) is inevitably affected by different noises in practical application environments. Hence, the form of the RNDAC model (10) for solving the DMSR problem (1) under noise interference is given as

Ẋ(t)X(t) + X(t)Ẋ(t) = Ṅ(t) − η κ(E(t)) − ζ ϕ( E(t) + η ∫₀ᵗ κ(E(δ)) dδ ) + ν(t),  (11)

where ν(t) ∈ R^{n×n} is the noise perturbation item.

Remark 1
Since the RNDAC model (10) is based on the continuous-time ZNN model, the "ode45" solver in MATLAB is exploited to transfer the RNDAC model (10) into an ordinary differential equation for simulation in an approximately continuous-time form and compare it with different existing ZNN models. Specifically, the RNDAC model (10) is transformed into an initial-value ordinary differential equation with a mass matrix, and the target solution X(t) is solved in real time by the Runge–Kutta method. The target solution X(t) obtained over time is substituted into the error function E(t), and the residual error is monitored until the accuracy reaches the requirement and the iteration ends.
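This implementation pipeline can be sketched in Python with SciPy's RK45 integrator (analogous to MATLAB's ode45). Note the activations below use tanh as a hypothetical stand-in for the bounded adaptive functions (8)–(9), and the dynamic instance N(t), gains, and initial state are illustrative assumptions, not the paper's example.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_rndac_rhs(n, N_fun, Ndot_fun, eta, zeta):
    """State y = [vec(X), vec(W)], where W(t) accumulates the integral
    of kappa(E) so that evolution formula (7) becomes an ordinary ODE."""
    kappa = np.tanh   # stand-in bounded odd activation (assumption)
    phi = np.tanh     # stand-in bounded odd activation (assumption)
    I = np.eye(n)

    def rhs(t, y):
        X = y[:n * n].reshape((n, n), order="F")
        W = y[n * n:].reshape((n, n), order="F")
        E = X @ X - N_fun(t)                               # error function (2)
        Edot = -eta * kappa(E) - zeta * phi(E + eta * W)   # evolution formula (7)
        A = np.kron(X.T, I) + np.kron(I, X)                # "mass matrix" acting on vec(Xdot)
        Xdot = np.linalg.solve(A, (Ndot_fun(t) + Edot).flatten(order="F"))
        return np.concatenate([Xdot, kappa(E).flatten(order="F")])

    return rhs

# Illustrative dynamic instance: N(t) = diag(5 + sin t, 5 + cos t).
n = 2
N_fun = lambda t: np.diag([5.0 + np.sin(t), 5.0 + np.cos(t)])
Ndot_fun = lambda t: np.diag([np.cos(t), -np.sin(t)])
y0 = np.concatenate([(2.0 * np.eye(n)).flatten(order="F"), np.zeros(n * n)])
sol = solve_ivp(make_rndac_rhs(n, N_fun, Ndot_fun, eta=10.0, zeta=10.0),
                (0.0, 5.0), y0, rtol=1e-6, atol=1e-8)
X_end = sol.y[:n * n, -1].reshape((n, n), order="F")
residual = np.linalg.norm(X_end @ X_end - N_fun(sol.t[-1]))
```

Because Ṅ(t) is compensated inside the right-hand side, the error E(t) follows exactly the designed dynamics (7) and decays toward zero despite the time-varying target.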

Remark 2
There are some recent works on adaptive control. First, based on an adaptive event-triggered mechanism, sliding mode control of a class of stochastic switching systems is realized and applied to a boost converter circuit model [33]. Second, based on the adaptive backstepping design framework incorporating the universal online approximation capability of neural networks, Su et al. [34] realize probability-based asymptotic tracking control. Kong et al. [35] implement an adaptive strategy that introduces a neural network to identify unknown nonlinear functions, use an effective hypothesis to deal with unknown system coefficients, and apply it to a class of uncertain switched MIMO non-strict-feedback adaptive output-feedback neural tracking control problems. Compared with the three works above, the novelty of the adaptive strategy in this paper is that it is the first time the adaptive strategy is applied to ZNN models and to solving the DMSR problem (1). Unlike these works, we use the residual and integral information to guide the convergence of the model.

Convergence analysis
We provide a theorem and the corresponding proof to comprehensively analyze the convergence of the RNDAC model (10).

Theorem 1 Starting from a random initial state, the RNDAC model (10) globally converges to the theoretical solution of the DMSR problem (1).
Proof Based on the evolution formula (7), the ij-th element of the RNDAC model (10) is depicted as

Ė_ij(t) = −η κ(E_ij(t)) − ζ ϕ(ϑ_ij(t)).  (12)

Introducing the Lyapunov theory [17,36], the intermediate variable ϑ(t) is defined elementwise as

ϑ_ij(t) = E_ij(t) + η ∫₀ᵗ κ(E_ij(δ)) dδ,  (13)

whose derivative form is ϑ̇_ij(t) = Ė_ij(t) + η κ(E_ij(t)) = −ζ ϕ(ϑ_ij(t)). Eq. (12) is rewritten in this form to analyze the stability of the proposed RNDAC model (10) based on the Lyapunov theory mentioned in [17,36]. The following Lyapunov candidate function Y₁(t) is offered:

Y₁(t) = ϑ²_ij(t)/2 ≥ 0.  (14)

Its time derivative is Ẏ₁(t) = ϑ_ij(t) ϑ̇_ij(t) = −ζ ϑ_ij(t) ϕ(ϑ_ij(t)). In light of the definition of the activation functions (8) and (9), which are odd and monotonically increasing, the inequality ϑ_ij(t) ϕ(ϑ_ij(t)) ≥ 0 is established, and hence Ẏ₁(t) ≤ 0. Obviously, the necessary and sufficient condition to satisfy Y₁(t) = 0 and Ẏ₁(t) = 0 is ϑ(t) = 0. Therefore, ϑ(t) globally converges to zero by the Lyapunov stability theory and the LaSalle invariance principle [37]. Rearranging the Lyapunov candidate function as Y₂(t) = Eᵀ(t)E(t) is convenient for the further proof steps, and the time derivative of Y₂(t) can be demonstrated as Ẏ₂(t) = −Eᵀ(t)κ(E(t)). Next, by referring to the preceding proof steps for Y₁(t), the conclusion Ẏ₂(t) ≤ 0 can also be acquired; the elaborate proof steps for Y₂(t) are omitted here. Therefore, the RNDAC model (10) globally converges to the theoretical solution.
The proof is thus completed.
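The core of the argument can be checked numerically on the decoupled dynamics ϑ̇ = −ζϕ(ϑ): along any trajectory, Y₁ = ϑ²/2 should be non-increasing. Here tanh is a hypothetical stand-in for the bounded, monotonically increasing odd activation, and all values are illustrative.

```python
import numpy as np

def lyapunov_trace(zeta=3.0, theta0=2.0, dt=1e-3, steps=4000):
    """Forward-Euler trace of Y1(t) = theta^2/2 along thetadot = -zeta*phi(theta),
    with phi = tanh standing in for a bounded odd activation.
    Y1 should decrease monotonically toward zero."""
    theta = theta0
    ys = []
    for _ in range(steps):
        ys.append(0.5 * theta ** 2)
        theta += dt * (-zeta * np.tanh(theta))
    return np.array(ys)

trace = lyapunov_trace()
```

The trace decays to (numerically) zero, consistent with Ẏ₁(t) = −ζϑ(t)ϕ(ϑ(t)) ≤ 0.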

Computational complexity analysis
The feasibility of the RNDAC model (10) is further illustrated from the implementation and computational points of view. First, the dimensions of the variables in Eq. (10) are reviewed: E(t) ∈ R^{n×n}, κ(·) ∈ R^{n×n} and ϕ(·) ∈ R^{n×n}. Meanwhile, each addition, subtraction, multiplication, or division of two floating point numbers is counted as one floating point operation when analyzing the computational complexity of the RNDAC model (10). Based on the above preconditions, the calculation of the integral term ∫₀ᵗ κ(E(δ)) dδ requires n³ + 8n² + n + 1 floating point operations, and the calculation of ‖·‖²_F requires 2n² − 1 floating point operations. The RNDAC model (10) costs n³ + 8n² + 2n + 3 floating point operations at each time instant. It can be easily obtained that the OZNN model (3) costs n³ + 2n² − n floating point operations and the RACZNN model (4) costs n³ + 6n² + 4n floating point operations.
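The counts above share the same O(n³) leading term, so the RNDAC model's extra cost over the OZNN model lives entirely in the quadratic terms and vanishes relatively as n grows. A small illustrative helper (not from the paper) makes this concrete:

```python
def flops_per_instant(n):
    """Floating point operation counts per time instant, as reported
    in the text for each model."""
    return {
        "OZNN":   n**3 + 2 * n**2 - n,
        "RACZNN": n**3 + 6 * n**2 + 4 * n,
        "RNDAC":  n**3 + 8 * n**2 + 2 * n + 3,
    }

counts = flops_per_instant(100)
overhead = counts["RNDAC"] / counts["OZNN"]   # relative cost of robustness
```

For n = 100 the RNDAC model costs roughly 6% more operations per instant than the OZNN model.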

Robustness of the RNDAC model
In this section, three theorems and corresponding proofs are provided to investigate the robustness of the RNDAC model (10) in noise-injected environments.

Theorem 2
The residual error (t) F of the RNDAC model (10) for solving the DMSR problem (1) globally converges to zero under the disturbance of constant noises ν(t) = ν ∈ R n×n when the bounds of the activation function satisfy −ζ ϕ(ϑ(t)) + ν ≤ 0.
Proof First, the intermediate variable ϑ(t) is introduced as in (13). Referring to this definition, the time derivative of ϑ(t) under the constant noises ν is denoted as ϑ̇(t) = −ζ ϕ(ϑ(t)) + ν, and its i-th subelement is expressed as

ϑ̇_i(t) = −ζ ϕ(ϑ_i(t)) + ν_i.  (18)

To further investigate the anti-interference performance of the RNDAC model (10), we construct the following Lyapunov candidate function L_i(t) = ϑ²_i(t)/2, with time derivative

L̇_i(t) = ϑ_i(t) ϑ̇_i(t) = ϑ_i(t)(−ζ ϕ(ϑ_i(t)) + ν_i).  (19)

It can be concluded from Eq. (19) that the sign of L̇_i(t) depends on the sign of ϑ_i(t). Therefore, the three different cases of the sign of ϑ_i(t) are discussed independently.

If ϑ_i(t) < 0
In accordance with the definition of the exponential bounded adaptive activation function, we have ϕ(ϑ_i(t)) < 0. Next, to determine the sign of L̇_i(t), three different cases are discussed below.
• Under the condition −ζ ϕ(ϑ_i(t)) + ν_i > 0 and ϑ_i(t) < 0, the Lyapunov candidate function satisfies L̇_i(t) < 0. Hence, we can conclude that the system (18) converges to the theoretical solution of the DMSR problem (1) within finite time based on the Lyapunov theory [38].
• Under the condition −ζ ϕ(ϑ_i(t)) + ν_i = 0, converting this equation gives ϑ_i(t) = ϕ⁻¹(ν_i/ζ). From Eq. (19) it can also be acquired that the Lyapunov candidate function satisfies L̇_i(t) = 0. At this time, the system (18) converges to zero.
• Under the condition −ζ ϕ(ϑ_i(t)) + ν_i < 0, since ϑ_i(t) and −ζ ϕ(ϑ_i(t)) + ν_i are both less than zero, it is concluded that L̇_i(t) > 0, and the system (18) diverges. In addition, the absolute value of ϕ(ϑ_i(t)) grows with the absolute value of ϑ_i(t). The robustness of the system (18) in this case depends on the upper bound ϕ⁺ and lower bound ϕ⁻ of the exponential bounded adaptive function, so we divide the relationship between ζϕ⁻ and ν_i into the following two subcases for further analysis. When ζϕ⁻ > ν_i, the system (18) diverges as time goes on. Conversely, when ζϕ⁻ < ν_i, there always exists a time t at which −ζ ϕ(ϑ_i(t)) + ν_i = 0, and the system (18) tends to be stable. Ultimately, to avoid system divergence, the scale parameter ζ in (7) and (22) needs to be adjusted appropriately.
If ϑ_i(t) = 0
Distinctly, ϕ(ϑ_i(t)) = 0 and ϑ̇_i(t) = ν_i can be deduced from Eq. (18) under this precondition. The constant noise then drives ϑ_i(t) to become positive when ν_i > 0 or negative when ν_i < 0. Accordingly, ϑ_i(t) = 0 exists only as a transient state, and the system (18) does not stay at this point when ν_i ≠ 0, which indicates that the situation reverts to the case of ϑ_i(t) < 0 or ϑ_i(t) > 0.
If ϑ_i(t) > 0
This part can be regarded as the opposite of the situation ϑ_i(t) < 0 and can be analyzed along the same proof steps; the detailed steps are omitted here. After that, the convergence of the system (18) is analyzed as t → ∞. Through mathematical derivation, we can get lim_{t→∞} ϑ_i(t) = ϕ⁻¹(ν_i/ζ) and lim_{t→∞} ϑ̇(t) = 0. Taking t → ∞ in the time derivative of Eq. (13), ϑ̇(t) = Ė(t) + η κ(E(t)) = 0 is reformulated as Ė(t) = −η κ(E(t)). In other words, this situation corresponds to the noise-free case studied in Theorem 1. All in all, the RNDAC model (10) activated by the exponential bounded adaptive function can converge to the theoretical solution of the DMSR problem (1) under constant noises ν, and the residual error ‖E(t)‖_F of the RNDAC model (10) also globally converges to zero.
The proof is thus completed.

Theorem 3 In the presence of time-varying linear noises ν(t) ∈ R^{n×n}, the RNDAC model (10) globally converges to the theoretical solution X*(t) of the DMSR problem (1), and the bound of the residual error of the RNDAC model (10) satisfies lim_{t→∞} ‖E(t)‖_F → 0 as ζη → ∞.
Proof According to the definition of the Laplace transform [39], the ij-th subsystem of the RNDAC model (10) polluted by time-varying linear noises ν(t) is rewritten in the frequency domain, where i, j = 1, 2, ..., n, and can be further derived accordingly. According to the final value theorem, the limit of the subsystem (18) is formulated as Eq. (23). It can be observed from Eq. (23) that the values of ζ and η determine the value of lim_{t→∞} ‖E_ij(t)‖_F. When the product ζη in the denominator of Eq. (23) approaches positive infinity, the final value can be further described as lim_{t→∞} ‖E(t)‖_F = 0. The proof is thus completed.
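Assuming linear activations κ(E) = E and ϕ(ϑ) = ϑ (a simplifying assumption for illustration only; the paper's bounded activations differ), the final-value argument can be sketched for one scalar subsystem as follows:

```latex
% ij-th subsystem of (7) under linear activations and linear noise
\dot e(t) = -\eta\, e(t) - \zeta\Big( e(t) + \eta \int_0^t e(\delta)\,\mathrm{d}\delta \Big) + \nu(t),
\qquad \nu(t) = a t + b .
% Differentiating once removes the integral:
\ddot e(t) + (\eta + \zeta)\,\dot e(t) + \zeta\eta\, e(t) = \dot\nu(t) = a .
% Laplace transform (initial-condition terms contribute nothing to the limit):
E(s) = \frac{a/s + (\text{initial-condition terms})}{s^2 + (\eta + \zeta)\,s + \zeta\eta} .
% Final value theorem:
\lim_{t\to\infty} e(t) = \lim_{s\to 0} s\,E(s) = \frac{a}{\zeta\eta} .
```

The steady residual therefore scales as a/(ζη) and vanishes as ζη → ∞, matching the statement about the denominator of Eq. (23).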

Theorem 4 Under the interference of bounded random noises ν(t) = φ(t) ∈ R n×n , the RNDAC model (10) starts from a randomly generated initial state and eventually converges to a certain bounded range.
Proof For the following proof, the activation functions κ(·) and ϕ(·) are both specified as linear activation functions. Then Eq. (7) can be denoted as a linear system (24), which can be adjusted into the form (26) with suitable auxiliary matrices. Solving Eq. (26) and applying the triangle inequality [40], a bounding relational expression on the residual error can be acquired. Next, a detailed discussion is made according to three different situations determined by the roots λ₁ and λ₂ of the characteristic equation:
• Under the condition (η + ζ)² < 4ηζ, the auxiliary parameters λ₁ and λ₂ are complex conjugates; bounding the resulting exponential terms and taking the limit on the residual error ‖E(t)‖_F yields a finite upper bound.
• Under the condition (η + ζ)² > 4ηζ with λ₁ > λ₂, two inequalities on the exponential terms hold, and with a suitable parameter G a finite limit on ‖E(t)‖_F is likewise obtained; the situation λ₁ < λ₂ can be treated in the same way, and the tedious proof steps are omitted here.
• Under the condition (η + ζ)² = 4ηζ, the repeated root is λ = −(η + ζ)/2, and since t·exp(λt) is bounded for λ < 0, the same conclusion is derived.
To sum up, the residual error ‖E(t)‖_F of the RNDAC model (10) is bounded when disturbed by bounded random noises φ(t), and the upper bound lim_{t→∞} ‖E(t)‖_F can be made arbitrarily small if the design parameters η and ζ are large enough.
The proof is thus completed.
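The bounded-noise behavior can be illustrated on a scalar version of the dynamics with linear activations (an assumption mirroring the proof; the noise amplitude, gains, and horizon below are illustrative):

```python
import numpy as np

def noisy_steady_error(eta, zeta, amp=0.5, dt=1e-3, T=10.0, seed=0):
    """Scalar sketch of Theorem 4 under linear activations:
    edot = -eta*e - zeta*(e + eta*w) + nu(t), wdot = e,
    with bounded random noise |nu(t)| <= amp. Returns the largest
    |e| seen over the second half of the run (the steady state)."""
    rng = np.random.default_rng(seed)
    e, w = 1.0, 0.0
    n_steps = int(T / dt)
    tail_max = 0.0
    for k in range(n_steps):
        nu = rng.uniform(-amp, amp)
        e, w = e + dt * (-eta * e - zeta * (e + eta * w) + nu), w + dt * e
        if k > n_steps // 2:
            tail_max = max(tail_max, abs(e))
    return tail_max

small_gain = noisy_steady_error(2.0, 2.0)
large_gain = noisy_steady_error(10.0, 10.0)   # larger eta, zeta shrink the bound
```

Consistent with the theorem, the steady residual stays bounded, and enlarging η and ζ makes the bound smaller.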

Experiments setup
All simulation experiments are conducted via MATLAB R2021b on a computer with an Intel Core i7-12700F @ 2.10 GHz CPU, 64 GB memory, an NVIDIA GeForce GTX 3080Ti GPU, and the Windows 11 operating system. In order to realize the RNDAC model (10) and the other comparison models under fair conditions, the versatile ordinary differential equation solver "ode45" provided in MATLAB is exploited. Note that the "ode45" function is the most commonly used implementation approach for continuous-time ZNN models [22,41,42]; more detail about this function is introduced in Remark 1. Finally, our simulation code is available on GitHub: https://github.com/cfko836/RNDAC.

Time-dependent numerical experiment
In this section, experiments are conducted with the dynamic matrix instance N(t) in the DMSR problem (1). Besides the exponential bounded adaptive function used for the RNDAC model, the comparison models are constructed as follows:
• OZNN model [32], where γ > 0 is a scaling factor;
• Modified ZNN (MZNN) model [24], with γ > 0 and υ > 0;
• Improved zeroing neural dynamics (IZND) model [30];
• RACZNN model (4) for solving the DMSR problem (1).
(In Table 2, NF, CN, TVN, and RN signify the noise-free, constant noise, time-varying linear noise, and random noise cases, respectively, and bold font denotes the best result in the corresponding item.)

RNDAC model in noise-free environment
The comparison results of the OZNN (28), MZNN (29), IZND (30), RACZNN (31) and RNDAC (10) models for solving the DMSR problem (5.2) in the ideal environment are illustrated in Fig. 2. As can be seen in Fig. 2a, the RNDAC model (10) is able to converge to the highest accuracy, on the order of 10⁻⁷, within three seconds, while the comparison models only converge up to 10⁻⁶ at the cost of prolonged convergence time. Noteworthy, although the IZND model (30) possesses the fastest convergence speed, it only converges to the order of 10⁻⁴, which is much lower than the accuracy of the RNDAC model (10). Furthermore, Fig. 2a reveals that the RNDAC model is able to accurately converge to the theoretical solution from any initial state. The quantitative results are shown in Table 2, which records in detail the average steady-state residual error (ASSRE) and maximal steady-state residual error (MSSRE) when different models solve the DMSR problem (1). MSSRE is defined as lim_{t→∞} sup ‖E(t_m)‖_F, t_m ∈ [t_s, t_max], and ASSRE is defined as the average of ‖E(t_m)‖_F over t_m ∈ [t_s, t_max], where t(s) represents the length of time in seconds. As exhibited in Table 2, boosted by the adaptive activation function, the RNDAC model achieves the highest accuracy compared with the other advanced models, which infers that one of the biggest advantages of the RNDAC model is its high precision. In general, in the noise-free situation, the RNDAC model (10) solves the DMSR problem (1) well, showing competitive performance.
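The two metrics can be computed from a sampled residual trajectory as follows (a small illustrative helper, not from the paper; `t_s` marks the assumed start of the steady-state window):

```python
import numpy as np

def steady_state_metrics(t, res, t_s):
    """MSSRE and ASSRE of a sampled residual trajectory ||E(t_m)||_F:
    the maximum and the mean over the steady-state window [t_s, t_max]."""
    steady = res[t >= t_s]
    return steady.max(), steady.mean()   # (MSSRE, ASSRE)

# Synthetic trajectory: exponential transient on top of a 0.01 noise floor.
t = np.linspace(0.0, 10.0, 1001)
res = np.exp(-t) + 0.01
mssre, assre = steady_state_metrics(t, res, t_s=5.0)
```

By construction MSSRE is never smaller than ASSRE, which is why Table 2 reports both: one captures worst-case spikes, the other average steady behavior.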

RNDAC model in noises interference environment
In this subsection, simulations are executed to investigate the robust performance. Figure 3 demonstrates the performance of the RNDAC model (10) and the compared models under three noise-perturbed environments.

In the case of constant noises
First, the constant noises in the experiments are set to ν = [5]_{2×2} and injected into the model as described in (11). As shown in Fig. 3d, the residual error norm of the RNDAC model (10) proposed in this paper converges to the order of 10⁻⁴, which is closer to the theoretical solution of the DMSR problem (5.2) than the OZNN (28), MZNN (29), IZND (30) and RACZNN (31) models. At the same time, it is also observed from Fig. 3a that the RNDAC model (10) performs well in terms of convergence speed. Furthermore, the numerical results in this case are also presented in Table 2: even under constant noise perturbation, the RNDAC model reaches the order of 10⁻⁴. That is to say, the performance of the RNDAC model in terms of ASSRE and MSSRE is the best in this case.

In the case of time-varying linear noises
Under the disturbance of the time-varying linear noises ν_tv(t) = [2]_{2×2}, the solving performance of the RNDAC model compared with the other models is shown in Fig. 3b, e. Obviously, when solving the time-varying linear noise-perturbed DMSR problem (1), the RNDAC model (10) stably and quickly converges to the order of 10⁻² and preserves this accuracy consistently. As for the OZNN (28) and IZND (30) models, their residual errors increase over time, causing failure of the solution process. Further, although the MZNN (29) and RACZNN (31) models can eliminate part of the influence of the time-varying linear noise, the cost is a longer convergence time, and their accuracy is lower than that of the RNDAC model (10). The results provided in Table 2 further confirm this phenomenon.

In the case of bounded random noises
It is challenging to solve the DMSR problem (1) stably under the interference of bounded random noises. In this environment, the IZND model (30) shows the best convergence performance and reaches the order of 10⁻³, which is close to the accuracy generated by the RNDAC model (10). Nonetheless, the robustness performance of the RNDAC model (10) remains competitive. Specifically, except for the IZND model (30), the RNDAC model (10) is the fastest to converge. Besides, the RNDAC model (10) is the most accurate model under the perturbation of bounded random noises. Similar conclusions can be drawn from Table 2.
Fig. 4 The effect of parameter ζ on the RNDAC model (10)

Different values of hyperparameter
In this subsection, we vary the hyperparameters η and ζ to analyze their relationship with the robustness of the RNDAC model (10). First, it can be seen from Fig. 4a that when the parameter ζ is kept fixed, changes in η lead to changes in the convergence time. Specifically, the RNDAC model converges faster as η increases, but at the cost of more computational consumption, which is limited by the practical implementation environment. Second, when the parameter η is fixed, the convergence speed of the RNDAC model (10) is positively correlated with the value of ζ, while the accuracy decreases with larger ζ. In addition, according to the four theorems presented in this paper, the accuracy and convergence performance of the RNDAC model (10) are related to the hyperparameters. Specifically, as the parameter η increases and the parameter ζ decreases, the upper and lower bounds of the residual error become tighter under the influence of noise.
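The gain/speed trade-off can be reproduced on a scalar toy version of the evolution formula (7) with linear activations (illustrative values only; in this linear case the characteristic roots are −η and −ζ, so larger gains mean faster decay):

```python
def first_hit_time(eta, zeta, e0=1.0, tol=1e-2, dt=1e-3, tmax=20.0):
    """First time |e| drops below tol for the scalar linear version of (7):
    edot = -eta*e - zeta*(e + eta*w), wdot = e (w is the running integral of e)."""
    e, w, t = e0, 0.0, 0.0
    while t < tmax:
        e, w, t = e + dt * (-eta * e - zeta * (e + eta * w)), w + dt * e, t + dt
        if abs(e) < tol:
            return t
    return tmax

fast = first_hit_time(5.0, 5.0)
slow = first_hit_time(1.0, 1.0)
```

Raising both gains from 1 to 5 shortens the time to reach the tolerance, mirroring the behavior of η in Fig. 4a.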

Summary
In general, the evolution formula and the activation function are the main differences between our method and the previous models adopted for solving dynamic problems. The experiment results reveal that the RNDAC model maintains high accuracy and fast convergence in the noise-free environment, since it employs the proposed adaptive activation function. On the aspect of robustness, by adopting the noise-suppressing evolution formula, the RNDAC model demonstrates enhanced robustness compared with other models and shows reliable solution performance.
In order to manifest the difference between this paper and previous works and clarify the motivation, the highlights of this paper are listed as follows:
1. The adaptive activation function design framework is proposed for the first time, which effectively improves the solution system's convergence speed and accuracy.
2. Based on the adaptive activation function and the novel robust evolution formula, the RNDAC model is proposed to solve the noise-perturbed DMSR problem.
3. Four theorems and proof processes are provided to comprehensively analyze the performance of the RNDAC model.

Limitations
The limitations of the proposed model are twofold. First, the improvement of the RNDAC model relies on the adaptive activation function, which requires additional hyperparameter adjustment and hinders the flexibility of model deployment. Second, the way of adjusting the hyperparameters and the activation function to improve the RNDAC model's performance has only been validated empirically in the simulation part.

Conclusions
This paper proposes a novel adaptive activation function design framework for improving the performance of the solution model. Based on this framework and the robust evolution formula, the robust neural dynamics with adaptive coefficient (RNDAC) model is presented and applied to solving the dynamic matrix square root (DMSR) problem. Unlike existing activation functions, which are usually realized by a monotonically increasing odd function, our method relaxes this limitation and integrates residual information for each component to accelerate convergence. Theoretically, four theorems and proof processes are presented, which show that the RNDAC model globally converges in the noise-free case and preserves reliable solution performance in noise-injected environments. Then, experiments with three metrics (convergence time, MSSRE, and ASSRE) are conducted to validate the effectiveness of the proposed model and investigate why it works. In a noise-free environment, our proposed model achieves higher accuracy with faster convergence speed than previous methods. On the other hand, the RNDAC model can robustly solve the DMSR problem even when perturbed by various noises, which addresses reliability concerns about the real-time solving system. Considering the limitations of this paper, our future work will be devoted to simplifying the redundant adjustment procedure of the activation functions by exploiting the dynamic bounds method. After that, we will conduct a comprehensive theoretical analysis to investigate better implementation schemes of the proposed adaptive activation function for guiding the design and realization of the RNDAC model. Besides, we will extend the RNDAC model to the complex-valued domain and look for practical applications such as localization systems, filter design, and algorithmic platforms for UAVs.

Declarations
Conflict of interest We wish to draw the attention of the Editor to the following facts, which may be considered potential conflicts of interest, and to significant financial contributions to this work. We confirm that the manuscript has been read and approved by all named authors and that there are no other persons who satisfied the criteria for authorship but are not listed. We further confirm that the order of authors listed in the manuscript has been approved by all of us and that the second author prepared the revision information letter and addressed most of the comments. We confirm that we have given due consideration to the protection of intellectual property associated with this work and that there are no impediments to publication, including the timing of publication, with respect to intellectual property. In so doing we confirm that we have followed the regulations of our institutions concerning intellectual property. We understand that the Corresponding Author is the sole contact for the Editorial process (including Editorial Manager and direct communications with the office). He is responsible for communicating with the other authors about progress, submission of revisions and final approval of proofs. We confirm that we have provided a current email address which is accessible by the Corresponding Author and which has been configured to accept email from the Complex and Intelligent Systems Editorial Office.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A Definition of notation

N(t)  Dynamic coefficient matrix, N(t) ∈ R^{n×n}
X(t)  Real-time solution matrix, X(t) ∈ R^{n×n}
E(t)  Error function
ν(t)  Noise perturbation item
γ     Scale factor, γ > 0
ζ     Scale factor, ζ > 0
η     Feedback factor, η > 0
κ(·)  Adaptive control activation function
ϕ(·)  Adaptive feedback activation function
      Parameter in the activation function for controlling the convergence speed, > 1
ι     Parameter in the activation function for controlling the convergence speed, ι > 1
ς⁺    Upper bound of the adaptive activation function
ς⁻    Lower bound of the adaptive activation function