Introduction

The control of mobile robots (MRs) is an attractive research topic because of their wide range of applications. Wheeled MRs are among the most popular and widely used designs because they offer many benefits, including a simple structure, high energy efficiency and speed, and low manufacturing cost. Wheeled MRs move quickly on the ground using wheels driven by motors. They are much easier to design than legged robots, and they are also easier to build, program, and control. Their main disadvantage is that obstacles such as uneven terrain, steep drops, or low-friction areas cannot be traversed well [1, 2].

Controlling the movement of wheeled MRs in practical applications is a significant research challenge that has received much attention in recent decades [3, 4]. Various new control methods have been proposed to guide such robots [5]. For example, in [6] a cooperative controller is developed, and a kinematic control scheme that reaches the desired robot speed is designed. In [7, 8], a sliding mode controller (SMC) is introduced for MRs, and disturbances such as wheel slipping are handled by designing an observer. In [9], a warehouse MR is considered, a feedback controller is designed based on the kinematics, and the tracking of desired velocities is investigated. In [10], a simple PID controller is designed for wheeled MRs considering the kinematics, and its stability is analyzed. A fault-tolerant controller for autonomous MRs is studied in [11], and actuator faults are diagnosed. In [12], a fractional SMC is developed, and by deriving some conditions, the stability of MRs in challenging situations is examined. An optimal controller for MRs with interchangeable models is studied in [13], and the effect of time-varying velocity constraints is investigated. In [14], an SMC is designed by iteratively solving quadratic programs, and the optimality of the MR velocity is analyzed. In [15], an event-triggered controller is designed for MRs, and the angular and linear velocities are estimated by designing an observer. In [16], a PID controller tuned by the Grey Wolf algorithm is suggested for MRs, and the stability of the MR is studied under forced displacements and noise. Because of the internal restrictions and speed limits of such robots, model predictive control (MPC) is attractive to researchers, since it provides a systematic way to account for all constraints in the design, such as limits on the output and control signals [11, 17].

MPC is a model-based optimal method that uses a prediction of the system output to obtain the control law. Its advantages include simplicity, easy tuning of the controller parameters through the definition of the cost function, optimality, straightforward generalization to multi-input multi-output systems, applicability to non-minimum-phase and unstable processes, direct use of nonlinear models for prediction and delay compensation, and the possibility of imposing input–output constraints such as operating restrictions. MPC research on mobile robots mainly focuses on stability, optimization, or system modeling. Stability has been investigated in many robot-related studies, but the analysis methods differ depending on the constraints on the system and the type of robot used. In [18], a formation controller based on MPC is designed, and its efficacy is verified in real-life scenarios. In [19], the effect of friction is considered, an MPC is formulated for three-wheeled MRs, and the impact of input-torque constraints is investigated. In [20], the formation control of MRs is studied, and an MPC is designed considering actuator limitations. MPC predicts the robot’s behavior several steps ahead and acts optimally to control and direct the robot. The cost function is defined over a sequence of control inputs and system states, and the predictive controller minimizes it by optimizing this sequence. The optimization can be carried out in different ways, some of which are too time-consuming and inefficient, because a solution must be available in the shortest possible time to continue the control process. In addition, some optimization methods cannot provide solutions under certain conditions. Therefore, much research has been devoted to achieving an optimal real-time response in MPC. Modeling is another challenging problem in MPC and has a substantial impact on its performance: the accuracy of MPC depends on the accuracy of the plant model, because the model is used for prediction and the predicted signals are used to construct the controller. Any model error therefore directly degrades the prediction and, subsequently, the controller. One of the well-known practical modeling approaches is the fuzzy logic system (FLS) [21,22,23,24].

Mobile robots are modeled in two domains: dynamic and kinematic. The dynamic equations relate the spatial configuration and its time evolution to the forces and torques required to create such motion, whereas the kinematics considers only the positions and velocities of the robot [24, 25]. Some neuro-fuzzy controllers have been developed to handle the uncertainties and estimate the dynamics. Using the kinematic model may be acceptable for low-speed robots; however, in high-speed applications the kinematic model is not an accurate description of the system. Using the dynamic model increases the accuracy of the results and brings the simulation closer to practice. In the approaches considered here, the problem is formulated so that, instead of solving two optimization problems (one on the kinematic model and one on the dynamic model) at each instant, only one optimization problem is needed, which takes less time and yields the optimal response in real time. These approaches also account for obstacles in the robot's path, so the proposed algorithm can be evaluated in an environment with a predetermined course. The robot moves according to the information about the surrounding environment that it receives at each sampling instant from the camera installed on it. Therefore, at each instant, the positions of the robot and the obstacles are determined from the camera information, and the next positions of the robot are determined using the prediction model. In [26], nonholonomic MRs are studied, and an MPC is created utilizing the estimation capability of FLSs. In [27], considering historical data, a T2-FLS based MPC is designed, and its stability is analyzed using the Stone-Weierstrass approach. In [28], a T2-FLS based controller is developed for MRs, and it is shown that the metaheuristic efficiency is improved using type-2 FLSs. In [29], a backstepping controller is formulated for MRs, and its accuracy is enhanced using T2-FLSs. In [30], an event-triggered controller is developed for MRs using T2-FLSs, and the \(H_\infty \) stability is analyzed. Another challenging problem in fuzzy control is the learning scheme. In [31], a learning method is devised for nonlinear systems subject to constraints and perturbation rejection. Specifically, to achieve the targeted output performance and accommodate time-varying state constraints, the learning algorithm utilizes a predetermined performance function and barrier Lyapunov functions; it also integrates neural-network adaptive control with extended state observers to anticipate and compensate for internal uncertainties and external disturbances. In [32], two continuous control algorithms are developed for a class of uncertain nonlinear systems to handle matched and mismatched disturbances simultaneously and to provide asymptotic tracking performance with feedback control.

Fig. 1 Diagram of control scheme

Ardashir et al. have recently proposed type-3 FLS (T3-FLS) based controllers with higher accuracy and robustness for nonlinear problems, and T3-FLSs have been used in various applications. For instance, in [33], a controller is suggested for MRs using T3-FLSs, and a Bee colony algorithm is developed for optimization. In [34], an MPC controller is designed for MRs using T3-FLSs, and the dynamics of the MRs are estimated by T3-FLSs. In [35], a T3-FLS based observer and controller are designed for nonlinear systems, and the superiority of T3-FLS based controllers is examined on several benchmark systems. In [36], chaotic financial systems are analyzed using T3-FLSs, and their better accuracy is verified. The accurate modeling and forecasting capability of T3-FLSs is demonstrated on time series in [37]. In [38], the optimization of T3-FLSs through evolutionary algorithms is investigated.

The above literature review shows that most MPCs for MRs use conventional models. In many MR applications there are also various uncertainties; however, the new, more powerful T3-FLSs have not yet been developed for MRs. Analyzing the stability of the MR is another topic in this field that needs further study. In view of the above discussion, a T3-FLS based controller is presented in this paper. The primary objective is to design a robust path-following scheme in the presence of uncertainties. The main contributions of this study include:

  • A deep-learning-based MPC is designed to enhance the tracking accuracy of MRs under chaotic references.

  • A T3-FLS based controller is introduced to improve the robustness of MRs under hard practical conditions.

  • The stability and robustness are ensured by new adaptation laws.

  • The feasibility of the designed type-3 fuzzy-based controller is studied, and a practical examination is presented.

Problem description

A general view

A general view of the suggested controller is depicted in Fig. 1. The dynamics of the MR are estimated by NT3FSs, and a primary controller is designed. Then, using a Boltzmann machine, a nonlinear MPC (NMPC) is designed to improve the accuracy. Finally, by designing a compensator and adjusting the rules of the NT3FSs toward stability, closed-loop stability is ensured. The NMPC is optimized by the Boltzmann machine, and the NT3FSs are learned by the Lyapunov approach.
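
To make the structure of Fig. 1 concrete, a minimal Python sketch of one control cycle is given below. The three callables are hypothetical placeholders for the NT3FS estimator, the Boltzmann-machine NMPC, and the rule adaptation described in the following sections; the composition of the control signal follows (66)-(67) and (81)-(82).

```python
import numpy as np

def control_cycle(chi, chi_ref, estimate_fn, mpc_fn, adapt_fn,
                  K, omega_bar, vp_bar):
    """One cycle of the scheme in Fig. 1; the three callables are placeholders
    for the NT3FS estimator, the Boltzmann-machine NMPC, and the adaptation law."""
    delta = chi - chi_ref                         # tracking error
    g_hat = estimate_fn(chi)                      # NT3FS estimate of the unknown dynamics
    v_p = mpc_fn(delta)                           # predictive correction, Eq. (63)
    v_c = -np.tanh(delta) * (omega_bar + vp_bar)  # compensator, Eqs. (81)-(82)
    u = -g_hat - K * delta + v_c + v_p            # control law, Eqs. (66)-(67)
    adapt_fn(delta)                               # Lyapunov-based adaptation, Eqs. (77)-(78)
    return u
```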

Kinematic model

In this paper, a two-wheeled mobile robot is considered as a case study. The general diagram of the two-wheeled model is depicted in Fig. 2. The velocity equations are written as [39]:

$$\begin{aligned} \begin{array}{*{20}{l}} {v = \mathcal{R}\left( {{{{\dot{\theta }} }_\mathcal{R}} + {{{\dot{\theta }} }_\iota }} \right) /2}\\ {\omega = \mathcal{R}\left( {{{{\dot{\theta }} }_\mathcal{R}} - {{{\dot{\theta }} }_\iota }} \right) /2\iota } \end{array} \end{aligned}$$
(1)

where v and \(\omega \) are the linear and angular velocities, respectively, \({\mathcal {R}}\) denotes the wheel radius, and \(\iota \) denotes half the distance between the wheels. The rotation equations are [40]:

$$\begin{aligned} \begin{array}{*{20}{l}} {\dot{x} = v\cos \left( P \right) }\\ {\dot{y} = v\sin \left( P \right) }\\ {\dot{P} = \omega } \end{array} \end{aligned}$$
(2)
Fig. 2 Diagram of wheeled MR

where P denotes the rotation (heading) angle. Substituting (1) into (2), one can write:

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {\dot{x}}\\ {\dot{y}}\\ {\dot{P} } \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\frac{{\mathcal {R}}}{2}\cos \left( P \right) }&{}\quad {\frac{{\mathcal {R}}}{2}\cos \left( P \right) }\\ {\frac{{\mathcal {R}}}{2}\sin \left( P \right) }&{}\quad {\frac{{\mathcal {R}}}{2}\sin \left( P \right) }\\ {\frac{{\mathcal {R}}}{{2\iota }}}&{}\quad { - \frac{{\mathcal {R}}}{{2\iota }}} \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {{{{\dot{\theta }} }_{\mathcal {R}}}}\\ {{{{\dot{\theta }} }_\iota }} \end{array}} \right] \end{aligned}$$
(3)
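
As a quick numerical illustration of (1)-(3), the following Python sketch propagates the pose of a differential-drive robot from its wheel rates; the wheel radius, half wheel separation, and wheel speeds are arbitrary example values, not those of Table 1.

```python
import numpy as np

def pose_rate(P, dtheta_R, dtheta_L, R=0.035, l=0.10):
    """Pose rates [x_dot, y_dot, P_dot] from wheel rates, Eqs. (1)-(3).
    R: wheel radius [m]; l: half of the wheel separation [m] (example values)."""
    v = R * (dtheta_R + dtheta_L) / 2.0          # linear velocity, Eq. (1)
    w = R * (dtheta_R - dtheta_L) / (2.0 * l)    # angular velocity, Eq. (1)
    return np.array([v * np.cos(P), v * np.sin(P), w])   # Eqs. (2)-(3)

# Example: one Euler step of the pose with dt = 10 ms and assumed wheel speeds
pose = np.array([0.0, 0.0, 0.0])            # [x, y, P]
pose = pose + 0.01 * pose_rate(pose[2], 5.0, 4.0)
```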

Dynamics

Using the Lagrange method, the dynamics are given as [41]:

$$\begin{aligned} \begin{array}{l} \theta \left( q \right) \ddot{q} + \eta \left( {q,\dot{q}} \right) \dot{q} + \Psi \left( q \right) + {\tau _\vartheta } = W\left( q \right) \tau - {Q^T}\gamma \end{array} \end{aligned}$$
(4)

where \(\theta \left( q \right) \), \(W\left( q\right) \), \({\tau _\vartheta }\), \(\Psi \), and \(\eta \) are the inertia matrix, input matrix, disturbance vector, gravitational vector, and Coriolis matrix, respectively, while \(\tau \) and q denote the input vector and the position. Since the robot moves on a horizontal plane, the gravitational term vanishes (\(\Psi =0\)), and we have:

$$\begin{aligned} \theta \left( q \right)= & {} \left[ {\begin{array}{*{20}{c}} m&{}\quad 0&{}\quad { - md\sin \left( P \right) }&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad {md\cos \left( P \right) }&{}\quad 0&{}\quad 0\\ { - md\sin \left( P \right) }&{}\quad {md\cos \left( P \right) }&{}\quad I&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad {{I_\omega }}&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad {{I_\omega }} \end{array}} \right] \nonumber \\ \end{aligned}$$
(5)
$$\begin{aligned} \eta \left( {q,\dot{q} } \right)= & {} \left[ {\begin{array}{*{20}{c}} 0&{}\quad { - md\cos \left( P \right) }&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad { - md\sin \left( P \right) }&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0 \end{array}} \right] \end{aligned}$$
(6)
$$\begin{aligned} {Q^T}\gamma= & {} \left[ {\begin{array}{*{20}{c}} { - \sin \left( P \right) }&{}\quad {\cos \left( P \right) }&{}\quad {\cos \left( P \right) }\\ {\cos \left( P \right) }&{}\quad {\sin \left( P \right) }&{}\quad {\sin \left( P \right) }\\ { - \vartheta }&{}\quad \iota &{}\quad { - \iota }\\ 0&{}\quad { - {\mathcal {R}}}&{}\quad 0\\ 0&{}\quad 0&{}\quad { - {\mathcal {R}}} \end{array}} \right] \end{aligned}$$
(7)
Table 1 Parameter values

where the values of all parameters are given in Table 1. Considering the transformation matrix \(\varsigma \) as the null space of Q, i.e., \({\varsigma ^T}{Q^T} = 0\), such that \({\dot{q}} = \varsigma \chi \) and \(\ddot{q}= {\dot{\varsigma }} \chi + \varsigma {\dot{\chi }}\) with \(\chi = {\left[ {\begin{array}{*{20}{c}} {{{{\dot{\theta }} }_{\mathcal {R}}}}&{{{{\dot{\theta }} }_\iota }} \end{array}} \right] ^T}\), we can write:

$$\begin{aligned}{} & {} \left[ {\begin{array}{*{20}{c}} {\dot{x}}\\ {\dot{y}}\\ {{\dot{\varphi }} }\\ {{{{\dot{\theta }} }_{\mathcal {R}}}}\\ {{{{\dot{\theta }} }_\iota }} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\frac{{\mathcal {R}}}{2}\cos \left( P \right) }&{}\quad {\frac{{\mathcal {R}}}{2}\cos \left( P \right) }\\ {\frac{{\mathcal {R}}}{2}\sin \left( P \right) }&{}\quad {\frac{{\mathcal {R}}}{2}\sin \left( P \right) }\\ {\frac{{\mathcal {R}}}{{2\iota }}}&{}\quad { - \frac{{\mathcal {R}}}{{2\iota }}}\\ 1&{}\quad 0\\ 0&{}\quad 1 \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {{{{\dot{\theta }} }_{\mathcal {R}}}}\\ {{{{\dot{\theta }} }_\iota }} \end{array}} \right] \end{aligned}$$
(8)
$$\begin{aligned}{} & {} {\theta \left( q \right) \left( {{\dot{\varsigma }} \chi + \varsigma {\dot{\chi }} } \right) + \eta \left( {q,\dot{q}} \right) \varsigma \chi = W\left( q \right) \tau - {Q^T}\gamma } \end{aligned}$$
(9)
Fig. 3 Diagram of NT3FS

From (9), by multiplying both sides by \({\varsigma ^T}\) and defining \({\bar{\theta }} \left( q \right) = {\varsigma ^T}\theta \left( q \right) \varsigma \), \({\bar{\eta }} \left( {q,\dot{q}} \right) = {\varsigma ^T}\theta \left( q \right) {\dot{\varsigma }} + {\varsigma ^T}\eta \left( {q,\dot{q}} \right) \varsigma \), and \({\bar{W}}\left( q \right) = {\varsigma ^T}W\left( q \right) \), we have:

$$\begin{aligned} {\bar{\theta }} \left( q \right) {\dot{\chi }} + {\bar{\eta }} \left( {q,\dot{q}} \right) \chi = {\bar{W}}\left( q \right) \tau \end{aligned}$$
(10)

where,

$$\begin{aligned}{} & {} \begin{array}{l} {\bar{\theta }} \left( q \right) \\ \quad =\left[ {\begin{array}{*{20}{c}} {{I_\omega } + \frac{{{{\mathcal {R}}^2}}}{{4{\iota ^2}}}\left( {m{\iota ^2} + I} \right) }&{} \quad {\frac{{{{\mathcal {R}}^2}}}{{4{\iota ^2}}}\left( {m{\iota ^2} - I} \right) }\\ {\frac{{{{\mathcal {R}}^2}}}{{4{\iota ^2}}}\left( {m{\iota ^2} - I} \right) }&{}\quad {{I_\omega } + \frac{{{{\mathcal {R}}^2}}}{{4{\iota ^2}}}\left( {m{\iota ^2} + I} \right) } \end{array}} \right] \end{array} \end{aligned}$$
(11)
$$\begin{aligned}{} & {} \begin{array}{l} {\bar{\eta }} \left( q \right) \\ \quad =\left[ {\begin{array}{*{20}{c}} 0&{}\quad {\frac{{{{\mathcal {R}}^2}}}{{2{\iota ^2}}}\left( {{m_c}\vartheta \dot{P} } \right) }\\ { - \frac{{{{\mathcal {R}}^2}}}{{2{\iota ^2}}}\left( {{m_c}\vartheta \dot{P} } \right) }&{} \quad 0 \end{array}} \right] \end{array} \end{aligned}$$
(12)
$$\begin{aligned}{} & {} {\bar{W}}\left( q \right) = \left[ {\begin{array}{*{20}{c}} 1&{}\quad 0\\ 0&{}\quad 1 \end{array}} \right] \end{aligned}$$
(13)
$$\begin{aligned}{} & {} \begin{array}{l} m = {m_c} + 2{m_\omega }\\ I = {I_c} + {m_c}{\vartheta ^2} + 2{m_\omega }{\iota ^2} + 2{I_m} \end{array} \end{aligned}$$
(14)

From (4) and (10), we have:

$$\begin{aligned} \begin{array}{l} \left[ {\begin{array}{*{20}{c}} 0&{}\quad {\frac{{{{\mathcal {R}}^2}}}{{2{\iota ^2}}}\left( {{m_c}\vartheta \dot{P} } \right) }\\ { - \frac{{{{\mathcal {R}}^2}}}{{2{\iota ^2}}}\left( {{m_c}\vartheta \dot{P} } \right) }&{}\quad 0 \end{array}} \right] \left( {\begin{array}{*{20}{c}} {\dot{v}}\\ {{\dot{\omega }} } \end{array}} \right) \\ + \left[ {\begin{array}{*{20}{c}} 0&{}\quad { - {m_c}dw}\\ {{m_c}dw}&{}\quad 0 \end{array}} \right] \left( {\begin{array}{*{20}{c}} v\\ \omega \end{array}} \right) = \left[ {\begin{array}{*{20}{c}} {\frac{1}{{\mathcal {R}}}}&{}\quad 0\\ 0&{}\quad {\frac{1}{{\mathcal {R}}}} \end{array}} \right] \left( {\begin{array}{*{20}{c}} {{u_1}}\\ {{u_2}} \end{array}} \right) \end{array} \end{aligned}$$
(15)
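
For illustration, a minimal simulation sketch of the reduced dynamics (10)-(14) is given below; the numerical parameter values are placeholders rather than the values of Table 1, and the center-of-mass offset \(\vartheta \) appearing in (12) and (14) is represented by the variable d.

```python
import numpy as np

# Placeholder physical parameters (the actual values are listed in Table 1).
m_c, m_w = 4.0, 0.3                 # body and wheel masses [kg]
I_c, I_m, I_w = 0.1, 1e-3, 2e-3     # body, motor, and wheel inertias [kg m^2]
R, l, d = 0.035, 0.10, 0.05         # wheel radius, half wheel separation, CoM offset [m]
m = m_c + 2.0 * m_w                                   # Eq. (14)
I = I_c + m_c * d**2 + 2.0 * m_w * l**2 + 2.0 * I_m   # Eq. (14)

def wheel_accel(chi, P_dot, tau):
    """chi = [theta_R_dot, theta_L_dot]; returns chi_dot from Eq. (10)."""
    a = I_w + R**2 / (4.0 * l**2) * (m * l**2 + I)
    b = R**2 / (4.0 * l**2) * (m * l**2 - I)
    theta_bar = np.array([[a, b], [b, a]])            # Eq. (11)
    c = R**2 / (2.0 * l**2) * m_c * d * P_dot
    eta_bar = np.array([[0.0, c], [-c, 0.0]])         # Eq. (12)
    return np.linalg.solve(theta_bar, tau - eta_bar @ chi)   # W_bar = I, Eq. (13)
```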

The following assumptions are considered in the developed theorems:

Assumption 1

The dynamics of MR are unknown, and are perturbed by some disturbances.

Assumption 2

The perturbations are bounded.

Assumption 3

The changes of the reference trajectory are not too sharp, so it can physically be followed by the MR.

Non-singleton T3-FS

The dynamics of the MR and the disturbances are considered to be unknown in this study. NT3FSs are used as estimators of these uncertainties. The general scheme is given in Fig. 3. The computation of the NT3FS is explained in detail below; a simplified numerical sketch follows the listed steps.

Fig. 4 Suggested type-3 fuzzy set

  1. The inputs are \(U_1={\chi _1}\) and \(U_2={\chi _2}\).

  2. By the non-singleton fuzzification, the input U is converted to \({\bar{U}}\) and \({\underline{U}}\), as follows:

    $$\begin{aligned} {{{\bar{U}} }_{i,{{{\bar{\varsigma }} }_j}}}\left( t \right)= & {} \frac{{{U _i}\left( t \right) {\bar{\beta }} _{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}^2 + {o_{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}}{\bar{\beta }} _U ^2}}{{{\bar{\beta }} _{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}^2 + {\bar{\beta }} _U ^2}}, \end{aligned}$$
    (16)
    $$\begin{aligned} {{{\bar{U}} }_{i,{{{\underline{\varsigma }} }_j}}}\left( t \right)= & {} \frac{{{U _i}\left( t \right) {\bar{\beta }} _{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}^2 + {o_{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}}{\bar{\beta }} _U ^2}}{{{\bar{\beta }} _{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}^2 + {\bar{\beta }} _U ^2}}, \end{aligned}$$
    (17)
    $$\begin{aligned} {{\underline{U}} _{i,{{{\bar{\varsigma }} }_j}}}\left( t \right)= & {} \frac{{{U _i}\left( t \right) {\underline{\beta }} _{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}^2 + {o_{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}}{\bar{\beta }} _U ^2}}{{{\underline{\beta }} _{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}^2 + {\bar{\beta }} _U ^2}}, \end{aligned}$$
    (18)
    $$\begin{aligned} {{\underline{U}} _{i,{{{\underline{\varsigma }} }_j}}}\left( t \right)= & {} \frac{{{U _i}\left( t \right) {\underline{\beta }} _{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}^2 + {o_{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}}{\bar{\beta }} _U ^2}}{{{\underline{\beta }} _{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}^2 + {\bar{\beta }} _U ^2}}, \end{aligned}$$
    (19)

    where \({{U _i}\left( t \right) }\) denotes the i-th input of the NT3FS, and \({\Lambda _i^h}\) is the h-th membership function (MF) for \({{U _i}\left( t \right) }\). The centers of the MFs are represented by \({{o_{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}}}\), and the terms \({{\underline{\beta }} _{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}}\), \({{\underline{\beta }} _{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}}\), \({{\bar{\beta }} _{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}}\), \({{\bar{\beta }} _{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}}\) denote the standard deviations of the Gaussian MFs. \({{\bar{\beta }} _U }\) denotes the fuzzification level.

  3. As shown in Fig. 4, the membership values are computed as:

    $$\begin{aligned} {{{\bar{\zeta }} }_{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}}\left( {{U _i}\left( t \right) } \right)= & {} \exp \left( { - \frac{{{{\left( {{{{\bar{U}} }_{i,{{{\bar{\varsigma }} }_j}}}\left( t \right) - {o_{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}}} \right) }^2}}}{{{\bar{\beta }} _{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}^2}}} \right) , \end{aligned}$$
    (20)
    $$\begin{aligned} {{{\bar{\zeta }} }_{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}}\left( {{U _i}\left( t \right) } \right)= & {} \exp \left( { - \frac{{{{\left( {{{{\bar{U}} }_{i,{{{\underline{\varsigma }} }_j}}}\left( t \right) - {o_{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}}} \right) }^2}}}{{{\bar{\beta }} _{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}^2}}} \right) , \end{aligned}$$
    (21)
    $$\begin{aligned} {{\underline{\zeta }} _{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}}\left( {{U _i}\left( t \right) } \right)= & {} \exp \left( { - \frac{{{{\left( {{{{\underline{U}} }_{i,{{{\bar{\varsigma }} }_j}}}\left( t \right) - {o_{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}}} \right) }^2}}}{{{\underline{\beta }} _{\Lambda _i^h,{{{\bar{\varsigma }} }_j}}^2}}} \right) , \end{aligned}$$
    (22)
    $$\begin{aligned} {{\underline{\zeta }} _{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}}\left( {{U _i}\left( t \right) } \right)= & {} \exp \left( { - \frac{{{{\left( {{{{\underline{U}} }_{i,{{{\underline{\varsigma }} }_j}}}\left( t \right) - {o_{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}}} \right) }^2}}}{{{\underline{\beta }} _{\Lambda _i^h,{{{\underline{\varsigma }} }_j}}^2}}} \right) . \end{aligned}$$
    (23)
  4. For the rule firing, we have:

    $$\begin{aligned} {\bar{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l= & {} {{{\bar{\zeta }} }_{\Lambda _1^{{h_1}},{{{\bar{\varsigma }} }_j}}} \cdot {{{\bar{\zeta }} }_{\Lambda _2^{{h_2}},{{{\bar{\varsigma }} }_j}}} \cdots {{{\bar{\zeta }} }_{\Lambda _n^{{h_n}},{{{\bar{\varsigma }} }_j}}}, \end{aligned}$$
    (24)
    $$\begin{aligned} {\bar{\Omega }} _{{{{\underline{\varsigma }} }_j}}^l= & {} {{{\bar{\zeta }} }_{\Lambda _1^{{h_1}},{{{\underline{\varsigma }} }_j}}} \cdot {{{\bar{\zeta }} }_{\Lambda _2^{{h_2}},{{{\underline{\varsigma }} }_j}}} \cdots {{{\bar{\zeta }} }_{\Lambda _n^{{h_n}},{{{\underline{\varsigma }} }_j}}}, \end{aligned}$$
    (25)
    $$\begin{aligned} {\underline{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l= & {} {{\underline{\zeta }} _{\Lambda _1^{{h_1}},{{{\bar{\varsigma }} }_j}}} \cdot {{\underline{\zeta }} _{\Lambda _2^{{h_2}},{{{\bar{\varsigma }} }_j}}} \cdots {{\underline{\zeta }} _{\Lambda _n^{{h_n}},{{{\bar{\varsigma }} }_j}}}, \end{aligned}$$
    (26)
    $$\begin{aligned} {\underline{\Omega }} _{{{{\underline{\varsigma }} }_j}}^l= & {} {{\underline{\zeta }} _{\Lambda _1^{{h_1}},{{{\underline{\varsigma }} }_j}}} \cdot {{\underline{\zeta }} _{\Lambda _2^{{h_2}},{{{\underline{\varsigma }} }_j}}} \cdots {{\underline{\zeta }} _{\Lambda _n^{{h_n}},{{{\underline{\varsigma }} }_j}}} \end{aligned}$$
    (27)

    The form of the \(l\)-th rule is given as:

    $$\begin{aligned} \begin{array}{*{20}{l}} {\mathrm{{if}}\,\,{\chi _1}\,\mathrm{{is}}\,\Lambda _{1,{\varsigma _j}}^{{h_l}}\,\, \mathrm{{and}}\,\,{\chi _2}\,\mathrm{{is}}\,\Lambda _{2,{\varsigma _j}}^{{h_l}}\,\, \mathrm{{and}}\,\,}\\ {\,\,\,\,\,\,\,{\chi _n}\,\mathrm{{is}}\,\Lambda _{n,{\varsigma _j}}^{{h_l}}\, \, \mathrm{{Then}}\,\,{y_l}\,\in \,\left[ {{{{\underline{\alpha }} }_{l,j}},{{{\bar{\alpha }} }_{l,j}}} \right] ,} \end{array} \end{aligned}$$
    (28)

    where \({{{\underline{\alpha }} }_{l,j}}\) and \({{{\bar{\alpha }} }_{l,j}}\) represent the rule coefficients, and \({{y_l}}\) denotes the output of the fuzzy system for the l-th rule.

  5. The output of the NT3FS is written as [42]:

    $$\begin{aligned} y = \frac{1}{{\sum \nolimits _{j = 1}^\lambda {{{{\bar{\varsigma }} }_j}} }}\sum \limits _{j = 1}^\lambda {\left( {\frac{{{{{\bar{\varsigma }} }_j}\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l + {\underline{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l} \right) {{{\bar{\alpha }} }_{l,j}}/2} }}{{\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l + {\underline{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l} \right) } }} + \frac{{{{{\underline{\varsigma }} }_j}\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\underline{\varsigma }} }_j}}^l + {\underline{\Omega }} _{{{{\underline{\varsigma }} }_j}}^l} \right) {{{\underline{\alpha }} }_{l,j}}/2} }}{{\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\underline{\varsigma }} }_j}}^l + {\underline{\Omega }} _{{{{\underline{\varsigma }} }_j}}^l} \right) } }}} \right) } , \end{aligned}$$
    (29)

    where \({{{{\bar{\varsigma }} }_j}}\) and \({{{{\underline{\varsigma }} }_j}}\) are the upper and lower slice levels, \(\lambda \) is the number of slices, and r is the number of rules. The dynamics of the MR are written as:

    $$\begin{aligned} {{{\dot{\chi }}}} = {g}\left( {U|{\alpha }} \right) + {u} \end{aligned}$$
    (30)

    where U is the input vector, u is the control signal, and \({g}\left( {U|{\alpha }} \right) \) denotes a fuzzy system; from (29), \({g}\left( {U|{\alpha }} \right) \) is written as:

    $$\begin{aligned} {g}\left( {U|{\alpha }} \right) = \alpha ^T{\Phi }, \end{aligned}$$
    (31)

    where,

    $$\begin{aligned}{} & {} \begin{array}{l} \alpha ^T = \left[ {{{{\underline{\alpha }} }_{1,1}},\ldots ,{{{\underline{\alpha }} }_{1,\lambda }},\ldots ,{{{\underline{\alpha }} }_{r,1}},\ldots ,{{{\underline{\alpha }} }_{r,\lambda }}} \right. ,\\ \left. {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{{{\bar{\alpha }} }_{1,1}},\ldots ,{{{\bar{\alpha }} }_{1,\lambda }},\ldots ,{{{\bar{\alpha }} }_{r,1}},\ldots ,{{{\bar{\alpha }} }_{r,\lambda }}} \right] , \end{array} \end{aligned}$$
    (32)
    $$\begin{aligned}{} & {} \begin{array}{l} \Phi ^T= \frac{{0.5}}{{\sum \nolimits _{j = 1}^\lambda {{{{\bar{\varsigma }} }_j}} }}\left[ {\frac{{{{{\underline{\varsigma }} }_1}\left( {{\bar{\Omega }} _{{{{\underline{\varsigma }} }_1}}^1 + {\underline{\Omega }} _{{{{\underline{\varsigma }} }_1}}^1} \right) }}{{\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\underline{\varsigma }} }_j}}^1 + {\underline{\Omega }} _{{{{\underline{\varsigma }} }_j}}^1} \right) } }},\ldots ,\frac{{{{{\underline{\varsigma }} }_\lambda }\left( {{\bar{\Omega }} _{{{{\underline{\varsigma }} }_\lambda }}^1 + {\underline{\Omega }} _{{{{\underline{\varsigma }} }_\lambda }}^1} \right) }}{{\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\underline{\varsigma }} }_j}}^1 + {\underline{\Omega }} _{{{{\underline{\varsigma }} }_j}}^1} \right) } }}} \right. ,\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \frac{{{{{\underline{\varsigma }} }_1}\left( {{\bar{\Omega }} _{{{{\underline{\varsigma }} }_1}}^r + {\underline{\Omega }} _{{{{\underline{\varsigma }} }_1}}^r} \right) }}{{\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\underline{\varsigma }} }_j}}^1 + {\underline{\Omega }} _{{{{\underline{\varsigma }} }_j}}^1} \right) } }},\ldots ,\frac{{{{{\underline{\varsigma }} }_\lambda }\left( {{\bar{\Omega }} _{{{{\underline{\varsigma }} }_\lambda }}^r + {\underline{\Omega }} _{{{{\underline{\varsigma }} }_\lambda }}^r} \right) }}{{\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\underline{\varsigma }} }_j}}^1 + {\underline{\Omega }} _{{{{\underline{\varsigma }} }_j}}^1} \right) } }},\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{{{\bar{\varsigma }} }_1}\frac{{\left( {{\bar{\Omega }} _{{{{\bar{\varsigma }} }_1}}^1 + {\underline{\Omega }} _{{{{\bar{\varsigma }} }_1}}^1} \right) }}{{\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l + {\underline{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l} \right) } }},\ldots ,{{{\bar{\varsigma }} }_\lambda }\frac{{\left( {{\bar{\Omega }} _{{{{\bar{\varsigma }} }_\lambda }}^1 + {\underline{\Omega }} _{{{{\bar{\varsigma }} }_\lambda }}^1} \right) }}{{\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l + {\underline{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l} \right) } }},\\ \left. {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{{{\bar{\varsigma }} }_1}\frac{{\left( {{\bar{\Omega }} _{{{{\bar{\varsigma }} }_1}}^r + {\underline{\Omega }} _{{{{\bar{\varsigma }} }_1}}^r} \right) }}{{\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l + {\underline{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l} \right) } }},\ldots ,{{{\bar{\varsigma }} }_\lambda }\frac{{\left( {{\bar{\Omega }} _{{{{\bar{\varsigma }} }_\lambda }}^r + {\underline{\Omega }} _{{{{\bar{\varsigma }} }_\lambda }}^r} \right) }}{{\sum \nolimits _{l = 1}^r {\left( {{\bar{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l + {\underline{\Omega }} _{{{{\bar{\varsigma }} }_j}}^l} \right) } }}} \right] , \end{array} \end{aligned}$$
    (33)

Nonlinear MPC

Predictive control is one of the most common methods used in various industries. The ability to consider different system constraints is one of its advantages. At each sampling time, the future behavior of the system is estimated with the assumed model over the chosen horizon. The MPC optimizer is responsible for determining the optimal control input over this horizon, based on the defined control strategy and the outputs predicted by the prediction block. Optimization has always been one of the most important steps in implementing different control methods, and in the context of MPC it is essential that the calculations yield an optimal response in real time. The MPC problem is posed as minimizing the following cost function [43, 44]:

$$\begin{aligned} \begin{array}{*{20}{l}} {\mathop {\min }\limits _{{v _{{p_z}}}(j),\ldots ,{v _{{p_z}}}(j + {n_C})} J = \sum \limits _{j = t}^{t + {n_P}} {\mu \cdot {\hat{\delta }} _z^2(j) + \pi \cdot \Delta v _{{p_z}}^2(j)} }\\ {subject\,\,to}\\ {{\hat{\delta }} _z^{}(t) = {y_{\mathrm{{BM}}}}\left( {{{\underline{{\hat{\delta }} } }_z}|{X _z}} \right) }, \end{array} \end{aligned}$$
(34)

where \(z=1,2\), \(\pi \) and \(\mu \) denote constant weighting factors, \({v_{{p_z}}^{}}\) is the z-th predictive control signal, \({{n_P}}\) is the prediction horizon, \({{n_C}}\) is the control horizon, \({\Delta v_{{p_z}}^{}}\) denotes the increment of \({v_{{p_z}}^{}}\), and \({\underline{{\hat{\delta }} } _z}\) is written as:

$$\begin{aligned} {\underline{{\hat{\delta }} } _z} = {\left[ {{\hat{\delta }} _z^{}\left( {t - 1} \right) ,\ldots ,{\hat{\delta }} _z^{}\left( {t - \tau } \right) ,{v_{{p_z}}}\left( {t - 1} \right) } \right] ^T}, \end{aligned}$$
(35)

\({{y_{\mathrm{{BM}}}}\left( {{{\underline{{\hat{\delta }} } }_z}|{X _z}} \right) }\) is the Boltzmann machine (BM) output (see Fig. 5), and \({X _z}\) collects the output-layer and hidden-layer weights (\(X _{{z_0}}\) and \({X _v}\), respectively). \({{y_{\mathrm{{BM}}}}\left( {{{\underline{{\hat{\delta }} } }_z}|{X _z}} \right) }\) is written as:

Fig. 5 BM block diagram

$$\begin{aligned} {y_{\mathrm{{BM}}}}\left( {{{\underline{{\hat{\delta }} } }_z}|{X _z}} \right) = X _{{z_0}}^T{\mu _1}, \end{aligned}$$
(36)

where,

$$\begin{aligned} {\mu _1} = \Xi \left( {X _v^T{\eta _1}} \right) , \end{aligned}$$
(37)

In (37), \(\eta _1\) is obtained as:

$$\begin{aligned} {\eta _1} = \Xi \left( {X _v^{}{\mu _0}} \right) , \end{aligned}$$
(38)

In (38), \(\mu _0\) is written as:

$$\begin{aligned} {\mu _0} = \Xi \left( {X _v^T{\eta _0}} \right) , \end{aligned}$$
(39)

where \({{\eta _0}}\) denotes the input vector of the BM, and

$$\begin{aligned} \Xi \left( y \right) = \left[ {1 - \exp ( - y)} \right] /\left[ {1 + \exp ( - y)} \right] \end{aligned}$$
(40)

\({X _v^{}}\) is optimized by the contrastive divergence (CD) training approach [45, 46], such that the cost function (41) is minimized:

$$\begin{aligned} J'\left( {\eta ,\mu } \right) = {\delta ^{\mathrm{{E}}\left( {\eta ,\mu } \right) }}/\mathrm{{E}}\left( {\eta ,\mu } \right) , \end{aligned}$$
(41)

where,

$$\begin{aligned} \mathrm{{E}}\left( {\eta ,\mu } \right) = - \mu _{}^TX _v^{}{\eta _{}}, \end{aligned}$$
(42)

Then \(X _v^{}\) is adjusted as:

$$\begin{aligned} X _v^{}\left( {t + 1} \right) = X _v^{}\left( t \right) + \gamma \left( {{\eta _0}\mu _0^T - {\eta _1}\mu _1^T} \right) , \end{aligned}$$
(43)

where \(0 < \gamma \le 1\), and \(X _{{z_0}}\) is adjusted as:

$$\begin{aligned} {X _{{z_o}}}\left( {t + 1} \right) = {X _{{z_o}}}\left( t \right) + \gamma \left( {{\delta _z} - {{{\hat{\delta }} }_z}} \right) {\mu _1}, \end{aligned}$$
(44)
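
For illustration, the Boltzmann-machine computations (36)-(40) and the weight updates (43)-(44) can be sketched as below; the layer sizes and the learning rate are assumptions, and the visible-hidden-visible-hidden pass mirrors the one-step CD reconstruction described above.

```python
import numpy as np

def xi(y):
    """Bipolar activation, Eq. (40)."""
    return (1.0 - np.exp(-y)) / (1.0 + np.exp(-y))

def bm_forward(eta0, Xv, Xz0):
    """BM output for an input vector eta0, Eqs. (36)-(39)."""
    mu0 = xi(Xv.T @ eta0)      # Eq. (39)
    eta1 = xi(Xv @ mu0)        # Eq. (38), reconstructed visible layer
    mu1 = xi(Xv.T @ eta1)      # Eq. (37)
    y_bm = Xz0.T @ mu1         # Eq. (36)
    return y_bm, mu0, eta1, mu1

def bm_update(eta0, Xv, Xz0, delta, gamma=0.1):
    """One CD-style update of the hidden and output weights, Eqs. (43)-(44)."""
    delta_hat, mu0, eta1, mu1 = bm_forward(eta0, Xv, Xz0)
    Xv = Xv + gamma * (np.outer(eta0, mu0) - np.outer(eta1, mu1))   # Eq. (43)
    Xz0 = Xz0 + gamma * (delta - delta_hat) * mu1                   # Eq. (44)
    return Xv, Xz0

# Example sizes: 5 visible units (regressor (35)) and 8 hidden units (assumed)
rng = np.random.default_rng(0)
Xv, Xz0 = 0.1 * rng.standard_normal((5, 8)), 0.1 * rng.standard_normal(8)
Xv, Xz0 = bm_update(rng.standard_normal(5), Xv, Xz0, delta=0.3)
```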

The BM output j steps ahead is:

$$\begin{aligned} {{{\hat{\delta }} }_z}(t + j) = {{{\hat{\delta }} }_{z,forced}}(t + j) + {{{\hat{\delta }} }_{z,free}}(t + j), \end{aligned}$$
(45)

where,

$$\begin{aligned} {{{\hat{\delta }} }_{z,free}}(t + j)= & {} {y_{\mathrm{{BM}}}}\left( {{{\underline{{\hat{\delta }} } }_z}(t + j)|{X _z}} \right) , \end{aligned}$$
(46)
$$\begin{aligned} {{{\hat{\delta }} }_{z,forced}}(t + j)= & {} \sum \limits _{\iota = 0}^{j - 1} {{\ell _\iota }\, \Delta {v_{{p_z}}}(t + j - \iota + 1)}, \end{aligned}$$
(47)

where \({\ell _\iota },\,\iota = 0,\ldots ,j - 1\) denote the step-response coefficients, and the estimated step response \(s(t - 1) \) is written as:

$$\begin{aligned} s(t - 1) = \left[ {{{{\hat{\delta }} }_{z,step}}(t + j) - {{{\hat{\delta }} }_{z,free}}(t + j)} \right] /d{v_{{p_z}}}(t),\nonumber \\ \end{aligned}$$
(48)

where, \(d{v_{{p_z}}}(t)\) denotes the step size, and:

$$\begin{aligned}{} & {} {{{\hat{\delta }} }_{z,step}}(t + j) = {y_{\mathrm{{BM}}}}\left( {{{\underline{{\hat{\delta }} } }_z}(t + j)|{X _z}} \right) , \end{aligned}$$
(49)
$$\begin{aligned}{} & {} {{\underline{{\hat{\delta }} } }_z} {=} {{\left[ {{{{\hat{\delta }} }_z}(t {+} j {-} 1),{\ldots },{\hat{\delta }} _z^{}\left( {t {+} j {-} \tau } \right) ,{v _{{p_z}}}\left( {t {+} j {-} 1} \right) } \right] }^T}, \end{aligned}$$
(50)
$$\begin{aligned}{} & {} {v _{{p_z}}}\left( t \right) = \left\{ {\begin{array}{*{20}{l}} {{v _{{p_z}}}\left( {t - 1} \right) + d{v _{{p_z}}}(t)\,\,\,\qquad if\,\,j > t - 1}\\ {{v _{{p_z}}}\left( {t - 1} \right) \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\qquad else} \end{array}} \right. . \end{aligned}$$
(51)

The above-mentioned problem in a matrix form can be written as [45, 46]:

$$\begin{aligned} \Psi = S\Delta {v_{{p_z}}} + {\Psi _{free}}, \end{aligned}$$
(52)

where,

$$\begin{aligned}{} & {} \Psi = \left[ {\begin{array}{*{20}{c}} {{{{\hat{\delta }} }_z}(t + 1)}\\ {{{{\hat{\delta }} }_z}(t + 2)}\\ \vdots \\ {{{{\hat{\delta }} }_z}(t + {n_p})} \end{array}} \right] , \end{aligned}$$
(53)
$$\begin{aligned}{} & {} S = \left[ {\begin{array}{*{20}{c}} {{s_0}}&{}\quad 0&{}\quad \cdots &{}\quad 0\\ {{s_1}}&{}\quad {{s_0}}&{}\quad \cdots &{}\quad 0\\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ {{s_{{n_p} - 1}}}&{}\quad {{s_{{n_p} - 2}}}&{}\quad \cdots &{}\quad {{s_0}} \end{array}} \right] , \end{aligned}$$
(54)
$$\begin{aligned}{} & {} \Delta {v_{{p_z}}} = \left[ {\begin{array}{*{20}{c}} {\Delta {v_{{p_z}}}(t)}\\ {\Delta {v_{{p_z}}}(t + 1)}\\ \vdots \\ {\Delta {v_{{p_z}}}(t + {n_p} - 1)} \end{array}} \right] ,\end{aligned}$$
(55)
$$\begin{aligned}{} & {} {\Psi _{free}} = \left[ {\begin{array}{*{20}{c}} {{{{\hat{\delta }} }_{z,free}}(t + 1)}\\ {{{{\hat{\delta }} }_{z,free}}(t + 2)}\\ \vdots \\ {{{{\hat{\delta }} }_{z,free}}(t + {n_p})} \end{array}} \right] , \end{aligned}$$
(56)
$$\begin{aligned}{} & {} {{{\hat{\delta }} }_{z,free}}(t + j) = {y_{\mathrm{{BM}}}}\left( {{{\underline{{\hat{\delta }} } }_z}(t + j)|{X _z}} \right) , \end{aligned}$$
(57)
$$\begin{aligned}{} & {} {{\underline{{\hat{\delta }} } }_z}(t + j) = \left[ {{{\hat{\delta }} }_z}(t + j - 1),\ldots ,{\hat{\delta }} _z^{}\left( {t + j - \tau } \right) ,\right. \nonumber \\ {}{} & {} \left. {v _{{p_z}}}\left( {t + j - 1} \right) \right] ^T, \end{aligned}$$
(58)
$$\begin{aligned}{} & {} {v _{{p_z}}}\left( t \right) = \left\{ {\begin{array}{*{20}{l}} {{v _{{p_z}}}\left( {t - 1} \right) \,\,\,\,\,\,\qquad if\,\,j > t - 1}\\ {{v _{{p_z}}}\left( t \right) \,\,\,\,\,\,\,\,\,\,\qquad else} \end{array}} \right. . \end{aligned}$$
(59)

From (34) and (52), we have:

$$\begin{aligned} \begin{array}{*{20}{l}} {J = \mu {\Psi ^T}\Psi + \pi \Delta v_{{p_z}}^T\Delta {v_{{p_z}}}}\\ \begin{array}{l} \,\,\,\, = \mu {\left( {S\Delta {v_{{p_z}}} + {\Psi _{free}}} \right) ^T}\left( {S\Delta {v_{{p_z}}} + {\Psi _{free}}} \right) \\ \,\,\,\,\,\,\,\, + \pi \Delta v_{{p_z}}^T\Delta {v_{{p_z}}}, \end{array} \end{array} \end{aligned}$$
(60)

From (60), \(\partial J/\partial \Delta {v_{{p_z}}}\) is written as:

$$\begin{aligned} \begin{array}{l} \partial J/\partial \Delta {v_{{p_z}}} = 0\\ \Rightarrow \mu {S^T}{\Psi _{free}} + \Delta {v_p}\left( {\mu {S^T}S + \pi } \right) = 0. \end{array} \end{aligned}$$
(61)

From (61), \(\Delta {v_{{p_z}}}\) is written as:

$$\begin{aligned} \Delta {v_{{p_z}}} = - \mu {\left( {\mu {S^T}S + \pi } \right) ^{ - 1}}{S^T}{\Psi _{free}}. \end{aligned}$$
(62)

The controller \({v_{{p_z}}}(t)\) is:

$$\begin{aligned} {v_{{p_z}}}(t) = {v_{{p_z}}}(t - 1) + \Delta {v_{{p_z}}}(t), \end{aligned}$$
(63)

where \(\Delta {v_{{p_z}}}(t)\) is the first element of \(\Delta {v_{{p_z}}}\) in (62).
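
Putting (52)-(63) together, a minimal sketch of the horizon solution and the receding-horizon update is given below. The weights, the horizon, the step-response samples, and the interpretation of \(\pi \) as \(\pi I\) inside the matrix inverse are assumptions consistent with (61)-(62).

```python
import numpy as np

def mpc_increment(S, psi_free, mu=1.0, pi=0.1):
    """Increment sequence minimizing Eq. (60); first-order condition (61)-(62)."""
    n = S.shape[1]
    return -mu * np.linalg.solve(mu * S.T @ S + pi * np.eye(n), S.T @ psi_free)

def mpc_step(v_prev, S, psi_free, mu=1.0, pi=0.1):
    """Receding-horizon update, Eq. (63): only the first increment is applied."""
    dv = mpc_increment(S, psi_free, mu, pi)
    return v_prev + dv[0]

# Example: step-response matrix (54) built from assumed coefficients s_0..s_{np-1}
s = np.array([0.2, 0.35, 0.45, 0.5])             # placeholder step-response samples
n_p = len(s)
S = np.array([[s[i - j] if i >= j else 0.0 for j in range(n_p)] for i in range(n_p)])
psi_free = np.array([0.4, 0.3, 0.25, 0.2])       # predicted free-response errors, Eq. (56)
v = mpc_step(0.0, S, psi_free)
```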

Stability analysis

By taking into account the optimal NT3FSs, the robot dynamics are written as:

$$\begin{aligned} {{\dot{\chi }} _1}= & {} g_1^*\left( {U|\alpha _1^*} \right) + {u_1} + {\omega _1}, \end{aligned}$$
(64)
$$\begin{aligned} {{\dot{\chi }} _2}= & {} g_2^*\left( {U|\alpha _2^*} \right) + {u_2} + {\omega _2}, \end{aligned}$$
(65)

where, \({\omega _i},\,i=1,2\) denote estimation errors. The controllers \(u_1\) and \(u_2\) have three parts: the primary error feedback, predictive signals \(v_p\) (see (63)) and compensators \(v_c\) (see (81) and (82)). As shown in Fig. 1, the controllers are:

$$\begin{aligned}{} & {} \begin{array}{*{20}{l}} {{u_1} = - {g_1}\left( {\theta |{\alpha _1}} \right) - {K_1}{\delta _1}}{ + {v_{{c_1}}} + {v_{{p_1}}}} \end{array} \end{aligned}$$
(66)
$$\begin{aligned}{} & {} \begin{array}{*{20}{l}} {{u_2} = - {g_2}\left( {\theta |{\alpha _2}} \right) - {K_2}{\delta _2}}{ + {v_{{c_2}}} + {v_{{p_2}}}} \end{array} \end{aligned}$$
(67)

where \(K_1\) and \(K_2\) are positive constants, and \(\delta _1\) and \(\delta _2\) are the tracking errors. Then \({{\dot{\delta }} }_1\) and \({{\dot{\delta }} }_2\) are written as:

$$\begin{aligned}{} & {} \begin{array}{*{20}{l}} {{{{\dot{\delta }} }_1} = g_1^*\left( {\theta |\alpha _1^*} \right) - {g_1}\left( {\theta |{\alpha _1}} \right) - {K_1}{\delta _1}}\\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + {\omega _1} + {v _{{c_1}}} + {v _{{p_1}}}}, \end{array} \end{aligned}$$
(68)
$$\begin{aligned}{} & {} \begin{array}{*{20}{l}} {{{{\dot{\delta }} }_2} = g_2^*\left( {\theta |\alpha _2^*} \right) - {g_2}\left( {\theta |{\alpha _2}} \right) - {K_2}{\delta _2}}\\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + {\omega _2} + {v _{{c_2}}} + {v _{{p_2}}}}, \end{array} \end{aligned}$$
(69)

From (31), we can write:

$$\begin{aligned} g_1^ * \left( {\theta |\alpha _1^*} \right) - {g_1}\left( {\theta |{\alpha _1}} \right)= & {} {\tilde{\alpha }} _1^T{\Phi _{{\xi _1}}}, \end{aligned}$$
(70)
$$\begin{aligned} g_2^ * \left( {\theta |\alpha _2^*} \right) - {g_2}\left( {\theta |{\alpha _2}} \right)= & {} {\tilde{\alpha }} _2^T{\Phi _{{\xi _2}}}. \end{aligned}$$
(71)

From (70)–(71), the equations (68)–(69) are written as:

$$\begin{aligned}{} & {} \begin{array}{*{20}{l}} {{{{\dot{\delta }} }_1} = {\tilde{\alpha }} _1^T{\Phi _{{\xi _1}}} - {K_1}{\delta _1}} { + {\omega _1} + {v _{{c_1}}} + {v _{{p_1}}}}, \end{array} \end{aligned}$$
(72)
$$\begin{aligned}{} & {} \begin{array}{*{20}{l}} {{{{\dot{\delta }} }_2} = {\tilde{\alpha }} _2^T{\Phi _{{\xi _2}}} - {K_2}{\delta _2}} {+ {\omega _2} + {v _{{c_2}}} + {v _{{p_2}}}}. \end{array} \end{aligned}$$
(73)

We consider the Lyapunov function as:

$$\begin{aligned} \begin{array}{l} V = \frac{1}{2}\delta _1^2 + \frac{1}{2}\delta _2^2 + \frac{1}{{2\gamma }}{\tilde{\alpha }} _1^T{\tilde{\alpha }} _1^{} + \frac{1}{{2\gamma }}{\tilde{\alpha }} _2^T{\tilde{\alpha }} _2^{}. \end{array} \end{aligned}$$
(74)

The derivative of (74) results in:

$$\begin{aligned} \begin{array}{*{20}{l}} {\dot{V} = \delta _1^{}{\dot{\delta }} _1^{} + \delta _2^{}{\dot{\delta }} _2^{}}{ - \frac{1}{\gamma }{\tilde{\alpha }} _1^T{\dot{\alpha }} _1^{} - \frac{1}{\gamma }{\tilde{\alpha }} _2^T{\dot{\alpha }} _2^{}}. \end{array} \end{aligned}$$
(75)

From (72), (73) and (75), \(\dot{V}\) is written as:

$$\begin{aligned} \begin{array}{*{20}{l}} {\dot{V} = - {K_1}\delta _1^2 - {K_2}\delta _2^2}{ + {\omega _1}\delta _1^{} + \delta _1^{}{v _{{c_1}}} + \delta _1^{}{v _{{p_1}}}}\\ {\,\,\,\,\,\,\,\,\,\,\,\,\, + {\omega _2}\delta _2^{} + \delta _2^{}{v _{{c_2}}} + \delta _2^{}{v _{{p_2}}}}{ + {\tilde{\alpha }} _1^T{\Phi _{{\xi _1}}}\delta _1^{} - \frac{1}{\gamma }{\tilde{\alpha }} _1^T{\dot{\alpha }} _1^{}}\\ {\,\,\,\,\,\,\,\,\,\,\,\,\, + {\tilde{\alpha }} _2^T{\Phi _{{\xi _2}}}\delta _2^{} - \frac{1}{\gamma }{\tilde{\alpha }} _2^T{\dot{\alpha }} _2^{}}. \end{array} \end{aligned}$$
(76)

Then the tuning rules are written as:

$$\begin{aligned} {\dot{\alpha }} _1^{}= & {} \gamma {\Phi _{{\xi _1}}}\delta _1^{} \end{aligned}$$
(77)
$$\begin{aligned} {\dot{\alpha }} _2^{}= & {} \gamma {\Phi _{{\xi _2}}}\delta _2^{} \end{aligned}$$
(78)

From (77) and (78), Eq. (76) becomes:

$$\begin{aligned} \begin{array}{*{20}{l}} {\dot{V} = - {K_1}\delta _1^2 - {K_2}\delta _2^2}{ + {\omega _1}\delta _1^{} + \delta _1^{}{v _{{c_1}}} + \delta _1^{}{v _{{p_1}}}}\\ {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + {\omega _2}\delta _2^{} + \delta _2^{}{v _{{c_2}}} + \delta _2^{}{v _{{p_2}}}}. \end{array} \end{aligned}$$
(79)
Fig. 6 The designed robot

Fig. 7 The designed setup

Fig. 8 Implementation performance

Fig. 9 Implementation performance: phase portrait

Regarding the characteristics of the MR, upper bounds \({{{\bar{\omega }}}_i}\) and \({{{\bar{v}}}_{{p_i}}}\) can be considered for \(\omega _i\) and \({{v}_{{p_i}}}\), \(i=1,2\); then we can write:

$$\begin{aligned} \begin{array}{*{20}{l}} {\dot{V} \le - {K_1}\delta _1^2 - {K_2}\delta _2^2}{ + {{{\bar{\omega }} }_1}\left| {\delta _1^{}} \right| + \delta _1^{}{v _{{c_1}}} + \left| {\delta _1^{}} \right| {{{\bar{v}} }_{{p_1}}}}\\ {\,\,\,\,\,\,\,\,\,\, + {{{\bar{\omega }} }_2}\left| {\delta _2^{}} \right| + \delta _2^{}{v _{{c_2}}} + \left| {\delta _2^{}} \right| {{{\bar{v}} }_{{p_2}}}}. \end{array} \end{aligned}$$
(80)

From (80), to eliminate the effect of terms \({{{{\bar{\omega }} }_1}\left| {\delta _1^{}} \right| + \left| {\delta _1^{}} \right| {{{\bar{v}}}_{{p_1}}}}\) and \({{{{\bar{\omega }} }_2}\left| {\delta _2^{}} \right| + \left| {\delta _2^{}} \right| {{{\bar{v}}}_{{p_2}}}}\), the compensators are:

$$\begin{aligned} {v_{{c_1}}}= & {} - \tanh \left( {\delta _1^{}} \right) \left( {{{{{\bar{\omega }} }}_1} + {{{\bar{v}}}_{{p_1}}}} \right) , \end{aligned}$$
(81)
$$\begin{aligned} {v_{{c_2}}}= & {} - \tanh \left( {\delta _2^{}} \right) \left( {{{{{\bar{\omega }} }}_2} + {{{\bar{v}}}_{{p_2}}}} \right) , \end{aligned}$$
(82)
Fig. 10 The switching modes

Considering (81)-(82), we have:

$$\begin{aligned} \dot{V} \le - {K_1}\delta _1^2 - {K_2}\delta _2^2. \end{aligned}$$
(83)

Then,

$$\begin{aligned} V \le \int _0^t { - {K_1}{{\left\| {\delta _1^{}\left( \tau \right) } \right\| }^2} - {K_2}{{\left\| {\delta _2^{}\left( \tau \right) } \right\| }^2}d\tau } \prec \infty . \end{aligned}$$
(84)

Inequality (84) shows that \({\delta _1} \in {\ell ^2}\) and \({\delta _2} \in {\ell ^2}\). Since the error derivatives are bounded, Barbalat's lemma then gives \({\delta _1},{\delta _2} \rightarrow 0\), and the asymptotic stability is proved.
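
For completeness, a discrete-time sketch of the adaptation laws (77)-(78) and the compensators (81)-(82) used in the proof is given below; the sampling time, adaptation gain, bound values, and vector length are assumed for illustration.

```python
import numpy as np

def adapt_rule_params(alpha, Phi, delta, gamma=2.0, dt=0.01):
    """Euler-discretized adaptation law, Eqs. (77)-(78): alpha_dot = gamma * Phi * delta."""
    return alpha + dt * gamma * Phi * delta

def compensator(delta, omega_bar=0.5, vp_bar=1.0):
    """Robustifying term, Eqs. (81)-(82), with assumed bounds omega_bar and vp_bar."""
    return -np.tanh(delta) * (omega_bar + vp_bar)

# Example: update the rule parameters of one NT3FS channel
alpha = np.zeros(8)                 # rule coefficients (length is illustrative)
Phi = np.full(8, 0.125)             # regressor Phi_xi from Eq. (33)
alpha = adapt_rule_params(alpha, Phi, delta=0.2)
v_c = compensator(0.2)
```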

Implementation

In this section, the suggested approach is examined on a real robot. The designed robot is depicted in Fig. 6; it communicates with a notebook over a wireless link for monitoring, and the setup is shown in Fig. 7. The robot has two standard wheels with a 7 cm diameter, each driven by a high-accuracy stepper motor, and two idle pins to keep it balanced. It carries eight Sharp sensors on its four sides and an MPU6050 accelerometer/gyroscope at its center of mass. The bottom plate of the robot, where the motors are mounted, is made of 1.5 mm aluminum. The precision of the two-phase stepper motors is 1.8 degrees per step. The chassis surface is elevated with four 7 cm spacers to hold the PCB. An NRF24L01 radio transceiver module is used for communication between the laptop and the robot.

To examine the feasibility, a complicated chaotic path-following task is considered. The setup is shown in Fig. 7, and the results are shown in Fig. 8. The designed scheme performs accurately, even in very noisy conditions, and the robot tracks the complicated chaotic path well. The suggested predictive approach decreases the tracking error and handles the natural perturbations. To better see the chaotic behavior, the phase portrait is shown in Fig. 9, where the robot is seen to follow the chaotic path closely. Such complicated paths can be used in various security applications: for example, the suggested robot can be used for patrolling, where the robot's path needs to be secure and unpredictable.

Table 2 Simulation conditions

Fig. 11 Output signals

Simulations

Fig. 12 Control signals

Fig. 13 Estimated signals along with the targets

Fig. 14 Phase portraits

The reference path is generated by the following chaotic system:

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {D_t^{0.98}{y _1} = 35\left[ {{y _2}{y _3} + \left( {{y _2} - {y _1}} \right) } \right] }\\ {D_t^{0.98}{y _2} = - 5{y _1}{y _3} + 25{y _1} + {y _2} + {y _4}}\\ {D_t^{0.98}{y _3} = - 4{y _3} + {y _1}{y _2}}\\ {D_t^{0.98}{y _4} = - 100{y _2}} \end{array}} \right. \end{aligned}$$
(85)
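
The paper does not specify the numerical solver used for (85); one common option for fractional-order systems is the Grünwald-Letnikov (GL) discretization, sketched below with an assumed step size and initial condition, and with the right-hand side taken from (85) as written above.

```python
import numpy as np

def chaos_rhs(y):
    """Right-hand side of the fractional hyperchaotic reference system, Eq. (85)."""
    y1, y2, y3, y4 = y
    return np.array([
        35.0 * (y2 * y3 + (y2 - y1)),
        -5.0 * y1 * y3 + 25.0 * y1 + y2 + y4,
        -4.0 * y3 + y1 * y2,
        -100.0 * y2,
    ])

def gl_integrate(y0, alpha=0.98, h=1e-3, n_steps=5000):
    """Explicit Grunwald-Letnikov scheme for D^alpha y = f(y)."""
    c = np.ones(n_steps + 1)              # GL binomial coefficients
    for j in range(1, n_steps + 1):
        c[j] = (1.0 - (alpha + 1.0) / j) * c[j - 1]
    Y = np.zeros((n_steps + 1, len(y0)))
    Y[0] = y0
    for k in range(1, n_steps + 1):
        memory = np.tensordot(c[1:k + 1], Y[k - 1::-1], axes=(0, 0))  # sum_j c_j y_{k-j}
        Y[k] = h**alpha * chaos_rhs(Y[k - 1]) - memory
    return Y

# Example reference trajectory from an arbitrary initial state
ref = gl_integrate(np.array([0.1, 0.1, 0.1, 0.1]))
```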

The switching modes are shown in Fig. 10; the references of the output signals are changed four times. The simulation conditions are given in Tables 1 and 2. It should be noted that the conditions are the same for both the IT2-FLS and the NT3FS (the number of MFs, the number of rules, and the values of the MF centers). The outputs are depicted in Fig. 11, which shows an excellent tracking response: the output signals track the target signals well. In spite of the time-varying references and the multiple switches between different signals, the robot tracks the desired path well. The control signals are depicted in Fig. 12 and contain no high-frequency fluctuations. The estimated signals are given in Fig. 13, and the phase portrait is shown in Fig. 14, where acceptable tracking accuracy can be seen. The chaotic path provides a secure route for robots, especially considering that the desired path is suddenly switched from one trajectory to another.

Table 3 The RMSE comparisons

Fig. 15 The estimation comparison of FLSs

To give a fair comparison, the effect of the controller is removed, and only the estimation of (15) is considered, without a controller; then we have:

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {{{\dot{{\hat{\chi }}} }_1} = {g_1}\left( {U|{\alpha _1}} \right) }\\ {{{\dot{{\hat{\chi }}} }_2} = {g_2}\left( {U|{\alpha _2}} \right) } \end{array}} \right. \end{aligned}$$
(86)

In addition, measurement noise with a mean of 0 and a variance of 0.04 is added. The RMSEs are provided in Table 3, and the comparison of phase portraits is depicted in Fig. 15. According to the RMSE values, the designed controller's performance surpasses that of the other related control methods. The proposed controller exhibits improved robustness to switching modes and dynamic uncertainties, and it shows a significant performance advantage under highly noisy conditions. The performance of the other controllers declines drastically as the measurement noise increases, whereas the accuracy of the proposed scheme remains largely unaffected by measurement errors. Even as the noise variance increases from 0 to 0.1, the suggested approach's RMSE increases only slightly, whereas the other FLSs experience an RMSE increase of over 50%.
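
The RMSE values in Table 3 follow the standard root-mean-square definition. A brief sketch of how such a comparison could be reproduced for the open-loop estimators (86) under the stated measurement noise (zero mean, variance 0.04) is given below; the estimator callables are hypothetical placeholders.

```python
import numpy as np

def rmse(x_true, x_hat):
    """Root-mean-square error over a trajectory."""
    x_true, x_hat = np.asarray(x_true), np.asarray(x_hat)
    return float(np.sqrt(np.mean((x_true - x_hat) ** 2)))

def compare_estimators(chi_true, U, estimators, noise_var=0.04, seed=0):
    """Feed noisy inputs to each open-loop estimator (Eq. (86)) and report its RMSE."""
    rng = np.random.default_rng(seed)
    U_noisy = U + rng.normal(0.0, np.sqrt(noise_var), size=np.shape(U))
    return {name: rmse(chi_true, est(U_noisy)) for name, est in estimators.items()}
```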

Conclusion

This article presents a new approach to controlling MRs. To manage the nonlinearities of the MR dynamics, a novel NT3FS is proposed. Additionally, the dynamics of the tracking error are identified through a BM model, an MPC controller based on the BM model is designed, and stability is scrutinized in the presence of multiple disturbances. Reference signals for the path of the MR are created using a hyperchaotic system comprised of four states; the MR's desired path is selected from these signals, resulting in sudden switches between four chaotic signals. The proposed intelligent controller is demonstrated to handle uncertainties efficiently and to steer the MR effectively along chaotic reference paths. Due to the unpredictable nature of these paths, the approach is highly promising for patrol robots. Compared with various FLSs and controllers, the new approach performs with heightened accuracy; the suggested method is found to be about 50% better in accuracy than the other methods. In future research, the optimality of NT3FSs will be further investigated.