The control of technical systems aims at safe and reliable system behavior and controllable system states. The depiction as a system puts the analysis on an abstracted level which allows covering many different technical systems described by their fundamental physics. On this abstracted level, a general analysis of closed-loop control issues is possible using several methods and techniques, and the resulting procedures are applicable to a large variety of system classes. The main purpose of any depiction and analysis of control systems is to achieve high performance, safe system behavior and reliable processes. Of course this also holds for haptic systems; here, stable system behavior and high transparency are the most important control law design goals. The abstract description used for a closed-loop control analysis starts with the mathematical formulation of the physical principles the system follows. As mentioned above, systems with different physical principles are covered by similar mathematical methods. The depiction by differential equations or systems of differential equations proves widely usable for the formulation of various system behaviors. Analogies allow transferring this system behavior into the technical context of a different system, provided that there exists a definite formulation of the system states that are of interest for the closed-loop control analysis. The mathematical formulation of the physical principles of the system, also denoted as modeling, is followed by the system analysis, covering the dynamic behavior and its characteristics. With this knowledge, a wide variety of design methods for control systems become applicable. Their main requirements are:

System stability:

The fundamental requirement of stability in any technical system is the main purpose of closed-loop control design. Especially for haptic systems, stability is the most important criterion in order to guarantee safe use of the device by the user.

Control quality:

This covers the tracking behavior of the system states with respect to demanded values. Every system is faced with external influences, also denoted as disturbances, which interfere with the demanded system inputs and disrupt the optimal system behavior. A control system is designed to compensate for this negative influence.

Dynamic behavior and performance:

In addition to the first two issues, the need for a certain system dynamic completes this list of requirements. With a view to haptic systems, the focus lies on the transmitted mechanical impedance, which determines the achievable degree of transparency.

 

Besides the quality of the control result in tracking the demanded values, the system behavior during changes of these demanded values is of interest. Also the control effort needed to achieve a certain control result is to be investigated. The major challenge of closed-loop control law design for haptic systems, as in other engineering disciplines, is to deal with different goals that are often in conflict with each other. Typically the obtained solution is not an optimal one, but rather a tradeoff between system requirements. In the following, Sect. 7.1 provides basic knowledge of linear and non-linear system description. Section 7.2 gives a short overview of system stability analysis. A recommendation for structuring the control law design process for haptic systems is given in Sect. 7.3. Subsequently, Sect. 7.4 focuses on common system descriptions for haptic systems and shows methods for designing control laws. Section 7.5 closes with a conclusion.

1 System Description

A variety of description methods can be applied for the mathematical formulation of systems with different physical principles. One of the main distinctions is drawn between methods for the description of linear and nonlinear systems, summarized in the following paragraphs. The description based on Single-Input-Single-Output-Systems (SISO) in the Laplace domain was already discussed in Sect. 4.3.

1.1 Linear State Space Description

Besides the formulation of system characteristics through transfer functions, the description of systems using the state space representation in the time domain also allows dealing with arbitrary linear systems. For Single-Input-Single-Output systems, a description by an nth order ordinary differential equation can be transformed into a set of n first order ordinary differential equations. In addition to the simplified usage of numerical algorithms for solving this set of differential equations, the major advantage is the applicability to Multi-Input-Multi-Output-Systems (MIMO): a correct and systematic model of their coupled system inputs, system states, and system outputs is comparably easy to achieve. In contrast to the system description in the Laplace domain by transfer functions G(s), the state space representation formulates the system behavior in the time domain. Two sets of equations are necessary for a complete state space system representation. These are denoted as the system equation

$$\begin{aligned} \dot{\textbf{x}} = \textbf{A}\textbf{x} + \textbf{B}\textbf{u} \end{aligned}$$
(7.1)

and the output equation

$$\begin{aligned} \textbf{y} = \textbf{C}\textbf{x} + \textbf{D}\textbf{u}. \end{aligned}$$
(7.2)

The vectors \(\textbf{u}\) and \(\textbf{y}\) describe the multidimensional system input and system output, respectively. Vector \(\textbf{x}\) denotes the inner system states.

As an example of the state space representation, the 2nd order mechanical oscillating system shown in Fig. 7.1 is examined. Assuming time invariant parameters, the description by a 2nd order differential equation is:

$$\begin{aligned} m\ddot{y}+d\dot{y}+ky=u \end{aligned}$$
(7.3)
Fig. 7.1  Second order oscillator: a scheme, b block diagram

The transformation of the 2nd order differential Eq. (7.3) into a set of two 1st order differential equations is done by choosing the integrator outputs as system states:

$$\begin{aligned} x_{1}=y\Rightarrow & {} \dot{x}_{1}=x_{2}\nonumber \\ x_{2}=\dot{y}\Rightarrow & {} \dot{x}_{2}=-\frac{k}{m}x_{1}-\frac{d}{m}x_{2}+\frac{1}{m}u \end{aligned}$$
(7.4)

Thus the system equation for the state space representation is as follows:

$$\begin{aligned} \left[ \begin{array}{c} \dot{x}_{1}\\ \dot{x}_{2} \end{array}\right] = \left[ \begin{array}{cc} 0 &{} 1 \\ -\frac{k}{m} &{} -\frac{d}{m} \end{array}\right] \left[ \begin{array}{c} x_{1}\\ x_{2} \end{array}\right] + \left[ \begin{array}{c} 0\\ \frac{1}{m} \end{array}\right] \,u \end{aligned}$$
(7.5)

The general form of the system equation is:

$$\begin{aligned} \dot{\textbf{x}}=\textbf{A}\,\textbf{x}+\textbf{B}\,\textbf{u} \end{aligned}$$
(7.6)

This set of equations contains the state space vector \(\textbf{x}\). Its components describe all inner variables of the process that are of interest but are not explicitly visible in a description by a transfer function. The system output is described by the output equation. In the given example as shown in Fig. 7.1, the system output y is equal to the inner state \(x_{1}\)

$$\begin{aligned} y=x_{1} \end{aligned}$$
(7.7)

which leads to the vector representation of

$$\begin{aligned} y=\left[ \begin{array}{cc} 1 &{} 0\\ \end{array}\right] \left[ \begin{array}{c} x_{1}\\ x_{2} \end{array}\right] \end{aligned}$$
(7.8)

The general form of the output equation is:

$$\begin{aligned} \textbf{y}=\textbf{C}\,\textbf{x}\,+\,\textbf{D}\,\textbf{u} \end{aligned}$$
(7.9)

which leads to the general state space representation that is applicable to single- as well as multi-input and -output systems. The structure of this representation is depicted in Fig. 7.2. Although not present in this example, matrix \(\textbf{D}\) denotes a direct feedthrough, which occurs in systems whose output signals y are directly affected by the input signals u without any time delay; such systems show a non-delayed step response. For further explanation on A, B, C and D, [38] is recommended. Note that in many teleoperation applications, where long distances exist between master device and slave device, significant time delays occur.

Fig. 7.2  State space description
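The state space representation lends itself directly to numerical analysis. The following minimal sketch (the parameter values for m, d and k are chosen arbitrarily for illustration) sets up the matrices of Eq. (7.5) for the oscillator of Fig. 7.1 and computes its step response and poles with SciPy:

```python
import numpy as np
from scipy import signal

m, d, k = 0.1, 0.5, 20.0                  # assumed example parameters

A = np.array([[0.0, 1.0],
              [-k / m, -d / m]])          # system matrix of Eq. (7.5)
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])                # output equation, Eq. (7.8)
D = np.array([[0.0]])                     # no direct feedthrough

sys = signal.StateSpace(A, B, C, D)
t, y = signal.step(sys)                   # step response of the oscillator
poles = np.linalg.eigvals(A)              # poles = eigenvalues of A
print("poles:", poles)
```

The eigenvalues of \(\textbf{A}\) are the system poles, which already points towards the stability analysis discussed in Sect. 7.2.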

1.2 Nonlinear System Description

A further challenge within the formulation of system behavior is the inclusion of nonlinear effects, especially if a subsequent system analysis and classification is needed. Although a mathematical description of nonlinear system behavior might be found quickly, the applicability of certain control design methods is an additional problem. Static non-linearities can easily be described by a serial coupling of a static non-linearity and a linear dynamic element, which can be used as a lumped element for closed-loop analysis. Herein two different models are distinguished. Figure 7.3 shows the block diagram consisting of a linear element with arbitrary subsystem dynamics followed by a static non-linearity.

Fig. 7.3  Wiener model

This configuration, also known as the Wiener model, is described by

$$\begin{aligned} \tilde{u}(s)= & {} \underline{G}(s)\cdot u(s)\\ y(s)= & {} f(\tilde{u}(s)). \end{aligned}$$

In comparison, Fig. 7.4 shows the configuration of the Hammerstein model, which reverses the order of the underlying static non-linearity and the linear dynamic subsystem.

Fig. 7.4  Hammerstein model

The corresponding mathematical formulation of this model is described by

$$\begin{aligned} \tilde{u}(s)= & {} f(u(s))\\ y(s)= & {} \underline{G}(s)\cdot \tilde{u}(s). \end{aligned}$$
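The difference between the two models becomes apparent in a simple discrete-time simulation. In the following sketch, the first-order lag and the saturation limit are assumed placeholders for \(\underline{G}(s)\) and f(.); the same two elements are simply applied in both orders:

```python
import numpy as np

def linear_lag(u, a=0.9):
    """Discrete first-order lag as a stand-in for the linear dynamics G(s)."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + (1.0 - a) * u[k]
    return y

def f_static(v, limit=1.0):
    """Static non-linearity f(.), here a simple saturation."""
    return np.clip(v, -limit, limit)

u = 2.0 * np.sin(np.linspace(0.0, 10.0, 500))
y_wiener = f_static(linear_lag(u))          # Wiener model: linear element first
y_hammerstein = linear_lag(f_static(u))     # Hammerstein model: non-linearity first
```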

More complex structures appear as soon as the dynamic behavior of a system is affected by non-linearities. Figure 7.5 shows as an example a system with an internal saturation. For this configuration both models cannot be applied as easily as for static non-linearities, in particular if a system description is needed that is usable for certain methods of system analysis and investigation.

Fig. 7.5  System with internal saturation

Typical examples of systems showing this kind of nonlinear behavior are electrical motors whose torque-current characteristic is affected by saturation effects and whose torque available for acceleration is thus limited to a maximum value.

This kind of system behavior is one example of how complicated the process of system modeling may become, as ordinary linear system description methods are not applicable to such a case. Nevertheless it is necessary to obtain a system formulation in which the system behavior and the system stability can be investigated successfully. To achieve a system description taking various system non-linearities into account, it is recommended to set up a nonlinear state space description, which offers a wide set of tools for the subsequent investigations. Derived from Eqs. (7.1) and (7.2), the nonlinear system description for single- respectively multi-input and -output systems is as follows:

$$\begin{aligned} \dot{\textbf{x}}= & {} \textbf{f}(\textbf{x}, \textbf{u}, t)\\ \textbf{y}= & {} \textbf{g}(\textbf{x}, \textbf{u}, t). \end{aligned}$$

This state space description is the most flexible way to obtain a usable mathematical formulation of a system's behavior consisting of static, dynamic and arbitrarily coupled non-linearities. In the following, these equations serve as a basis for the examples illustrating concepts of stability and control.
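As a small illustration of this formulation, the oscillator of Fig. 7.1 with an internal input saturation (cf. Fig. 7.5) can be written directly as \(\dot{\textbf{x}} = \textbf{f}(\textbf{x},\textbf{u},t)\) and integrated numerically; the parameter values and the explicit Euler integration are assumptions made for this sketch only:

```python
import numpy as np

m, d, k, u_max = 0.1, 0.5, 20.0, 1.0          # assumed parameters

def f(x, u, t):
    """System equation: oscillator driven through an internal saturation."""
    u_sat = np.clip(u, -u_max, u_max)
    return np.array([x[1], (-k * x[0] - d * x[1] + u_sat) / m])

def g(x, u, t):
    """Output equation y = g(x, u, t)."""
    return x[0]

dt, t_end = 1e-4, 2.0
x, ys = np.zeros(2), []
for i in range(int(t_end / dt)):
    t, u = i * dt, 5.0                        # large step input drives the saturation
    x = x + dt * f(x, u, t)                   # explicit Euler integration
    ys.append(g(x, u, t))
```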

1.2.1 Common Nonlinearities in Control Systems

In general, a control system can be divided into four parts—plant, actuators, sensors, and controller—as shown in Fig. 7.6. Any of these units can be linear or nonlinear.

Fig. 7.6  Block diagram of a robotic system

Due to centripetal and Coriolis forces, the plant or the physical robot is usually nonlinear. As this type of nonlinearity is continuous, it can be locally approximated to be a linear function. In many applications, since the operation range is small, this linearized model is effective and almost accurate.

On the other hand, some nonlinearities (hard nonlinearities) are discontinuous or hard to approximate. Regardless of the operation range, the magnitude and level of their effect on the system's performance determine whether they have to be considered or not. In the following, some of the common nonlinearities will be discussed.

Saturation

In linear control, it is assumed that increasing the input to a device results in a proportional increase of the output. In real systems, however, this holds only partially. For small inputs, the corresponding output is almost proportional, but when the input increases beyond a certain level, the output no longer increases proportionally, or may not increase at all. In other words, the output stays around a maximum value, and the device is said to be in saturation. Saturation is normally due to the physical limits of the device. For example, the properties of the magnets in a DC motor set the limit for its output torque, the supply voltage limits the output of an operational amplifier, and the length of a spring defines its force limit. The typical real saturation nonlinearity and the ideal saturation function are depicted in Fig. 7.7.

Fig. 7.7  The real and ideal saturation nonlinearity

Since the saturation nonlinearity does not change the phase of the input, one can consider it as a variable gain, where the gain decreases once saturation occurs. The exact effect of saturation on the system performance is rather complicated. Consider a system that is unstable in the linear range: saturation can limit the system signals, suppress their divergence, and result in a sustained oscillation. However, saturation can slow down a linearly stable system, since it acts as a variable gain that decreases as the input increases.

Dead-zone

Many practical devices do not respond to inputs below a certain level; only when the input's value exceeds a threshold is there an output. The dead-zone nonlinearity is shown in Fig. 7.8.

Fig. 7.8  Dead-zone nonlinearity

One common example is a diode. This electronic element does not pass any current if the input voltage is below its threshold (cut-in voltage), so the output current is almost zero, and if the voltage increases, the diode will behave like an ohmic resistance. Another example can be a DC motor that does not rotate until the input voltage exceeds a minimum level and the produced torque becomes bigger than the static friction on the motor’s shaft.

Some possible effects of the dead-zone in a control system are reducing the positioning accuracy, introducing a limit cycle, leading to instability due to zero response in the dead-zone, and reducing chattering of an ideal relay.

Backlash

The clearance of mechanical gears or transmission systems results in zero output for a certain range of input (the gap) when the direction of movement is reversed. Consider the gear shown in Fig. 7.9: due to several reasons, such as rapid operation and unavoidable manufacturing errors, backlash exists. When the rotating direction of the driving gear changes, the driven gear does not rotate at all until the driving gear makes contact with it again; during this period, the rotation of the driven gear is zero. After contact is re-established, the driven gear follows the rotation of the driver. Consequently, if the driver performs a periodic rotation, the driven gear's rotation describes a closed path as shown in Fig. 7.9.

Fig. 7.9  Backlash in a gear and the input-output relation

The most important characteristic of backlash is its multi-valued nature. It means that the output depends both on the current input value and on its past values. Due to multi-valued nonlinearities like backlash, the system will store energy that can lead to chattering or sustained oscillation or even instability.

Relay or on-off nonlinearity

Consider a saturation with zero linear range and vertical slope; this is called an ideal relay, where the output can be maximum positive, off, or maximum negative (Fig. 7.10).

Fig. 7.10  Ideal relay

An example is the temperature control of a domestic heating system using a thermostat. The heating system turns on whenever the temperature is below the set-point and turns off when it is above that. Because of its discontinuous nature, the system will oscillate or chatter around the set-point with high frequency. To reduce the chattering frequency, as shown in Fig. 7.11, practical relays have a definite amount of dead-zone.

Fig. 7.11  Practical relay

Since a larger input is needed to close such a relay, a relay can perform as shown in Fig. 7.12, depending on its dead-zone range.

Fig. 7.12  Relay output: a no dead-zone, b significant dead-zone

Friction

When two mechanical surfaces slide or tend to slide over each other, a friction force acts in the direction opposite to the motion. The special case is static or Coulomb friction. Considering the relative velocity between the two surfaces as the input, the resulting force, i.e. the output, is shown in Fig. 7.13.

Fig. 7.13  Coulomb friction force as a function of velocity

In practice, where stiction and viscous damping commonly exist as well, the output can be depicted as in Fig. 7.14. As shown in this figure, the stiction force is bigger than the Coulomb force, which makes the total friction a complex nonlinearity.

Fig. 7.14  Friction model considering Coulomb, stiction, and viscous friction
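For simulation studies, the static characteristics discussed above can be expressed as simple functions. The following sketch collects them; all numerical limits are arbitrary example values, and the friction characteristic is a simplified static velocity map rather than a full dynamic friction model:

```python
import numpy as np

def saturation(u, u_max=1.0):
    """Ideal saturation (Fig. 7.7)."""
    return np.clip(u, -u_max, u_max)

def dead_zone(u, delta=0.2):
    """Dead-zone (Fig. 7.8): no output for inputs below the threshold delta."""
    return np.where(np.abs(u) <= delta, 0.0, u - np.sign(u) * delta)

def ideal_relay(u, U=1.0):
    """Ideal relay (Fig. 7.10)."""
    return U * np.sign(u)

def practical_relay(u, U=1.0, delta=0.2):
    """Relay with dead-zone (Fig. 7.11), hysteresis neglected."""
    return np.where(np.abs(u) <= delta, 0.0, U * np.sign(u))

def friction(v, F_c=1.0, F_s=1.5, b=0.2, v_eps=1e-3):
    """Static map of Coulomb, stiction and viscous friction vs. velocity (Fig. 7.14)."""
    stick = np.abs(v) < v_eps
    return np.where(stick, F_s * np.sign(v), F_c * np.sign(v) + b * v)
```

Backlash is deliberately omitted here: being multi-valued, it requires an internal state (the last contact position) and cannot be written as a memoryless map.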

Dealing with these nonlinearities requires a more sophisticated controller design; two well-known and highly efficient control techniques for this purpose are adaptive control and sliding mode control.

1.2.2 Adaptive and Sliding Mode Control (SMC) for Controlling Nonlinearities

Almost all modeled systems contain uncertainties due to intended simplifications, such as unmodeled high-order dynamics or the linearization of a nonlinear phenomenon, or due to inaccuracies of the system's parameters. Neglecting these uncertainties has an adverse effect on the control system; hence, they should be considered in the controller design. Since the performance of linear controllers is limited by, for example, the waterbed effect, nonlinearities need to be dealt with by nonlinear controllers. Two well-known and effective approaches to handle nonlinearity and uncertainty are sliding mode control (SMC) and adaptive control. These two methods will be discussed in the following sections.

Sliding Mode Control

Sliding Mode Control (SMC) is a nonlinear control technique, which presents desirable characteristics such as accuracy, robustness, and fast dynamic response. The design of this controller is done in two parts:

  1. 1.

    A sliding surface, which fulfills the design specifications.

  2. 2.

    A control law to move the system's states to the designed surface.

This design procedure brings two main advantages: the possibility of a tailored dynamic response, and robustness to nonlinearity, uncertainty, and disturbance. In other words, SMC is capable of controlling a nonlinear process subject to external disturbance and model uncertainty. For the design of the SMC, the system model can be considered as a nonlinear SISO system as follows:

$$\begin{aligned} \dot{\textbf{x}}= & {} \textbf{f}(\textbf{x},t)+\textbf{b}(\textbf{x},t)\,u\end{aligned}$$
(7.10)
$$\begin{aligned} y= & {} g(\textbf{x}, t) \end{aligned}$$
(7.11)

where u is the scalar input, y is the scalar output, and \(x \in R^n\) is the state vector. The ideal controller is one for which y tracks \(y_d\) (the desired output) and the tracking error \((e=y_d-y)\) tends to a small vicinity of zero after a finite time (transient response).

To design an SMC, the first step is defining the sliding surface \(\sigma (t)\) in such a way that zero error results in \(\sigma (t)=0\) and \(\sigma (t) \dot{\sigma } (t)<0\) is fulfilled otherwise. A common form of \(\sigma (t)\), which depends on only one parameter, is:

$$\begin{aligned} \sigma (t) = \left( \frac{d}{dt}+\lambda \right) ^{n-1} e(t) \end{aligned}$$
(7.12)

where \(\lambda >0\) is a constant. For example, in the case of n=3, with n the order of the system, the sliding surface is:

$$\begin{aligned} \sigma = \ddot{e}+2 \lambda \dot{e} + \lambda ^2 e \end{aligned}$$
(7.13)

The second step is defining a control law that steers the system's states onto the sliding surface, making \(\sigma =0\) in finite time. There are several approaches for defining the control law; the two most common ones, the standard or first-order control law and the second-order law, are discussed in the next sections. Independent of the selected approach, SMC allows designing the controller based on an estimate of the original system's dynamics.

First-order SMC

The following formula is one of the simplest SMC control laws. In this model, the control input is a discontinuous function of \(\sigma \):

$$\begin{aligned} u=-Usgn(\sigma ) \end{aligned}$$
(7.14)

where sgn(.) is the signum function and \(U>0\) is a sufficiently large constant. Therefore, the control signal is:

$$\begin{aligned} u = {\left\{ \begin{array}{ll} -U &{} \sigma < 0\\ U &{} \sigma > 0 \end{array}\right. } \end{aligned}$$
(7.15)

As a result, the \(\sigma \) variable would change typically as shown in Fig. 7.15.

Fig. 7.15  Typical time response of the \(\sigma \) variable

As seen in Fig. 7.15, the system performs high-frequency chattering in a small vicinity of the desired surface rather than sliding on it. This high-frequency switching can cause oscillations, especially in the control of mechanical systems.

Since the chattering phenomenon is caused by the discontinuous sign function, a smoothed, continuous approximation of this function can be rather effective. Two common examples are the smoothed saturation and the hyperbolic tangent:

$$\begin{aligned} u = -U\,\mathrm{sat}(\sigma ,\varepsilon ) = -U\, \frac{\sigma }{|\sigma | +\varepsilon } \qquad \varepsilon >0,\ \varepsilon \approx 0 \end{aligned}$$
(7.16)
$$\begin{aligned} u = -U\tanh \left( \frac{\sigma }{\varepsilon }\right) \qquad \varepsilon >0,\ \varepsilon \approx 0 \end{aligned}$$
(7.17)

A comparison of the smoothed saturation and the sign function is depicted in Fig. 7.16.

Fig. 7.16  Comparison of the sign function and its smoothed saturation alternative

However, smoothing the sign function increases the tracking error and decreases the robustness. Another solution is the use of a higher-order SMC.
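The chattering of the discontinuous law and the effect of smoothing can be reproduced with a few lines of simulation. The sketch below assumes a double-integrator plant \(\ddot{y}=u\) tracking \(y_d(t)=\sin t\); note that the error is taken as \(e=y-y_d\) here so that the switching law of Eq. (7.14) is stabilizing for this plant, and all gains are arbitrary example values:

```python
import numpy as np

lam, U, eps = 5.0, 10.0, 0.05            # assumed tuning parameters
dt = 1e-4
t = np.arange(0.0, 5.0, dt)
y_d, yd_dot = np.sin(t), np.cos(t)       # desired trajectory and its derivative

def run_smc(smooth):
    y, y_dot, out = 0.5, 0.0, []
    for k in range(len(t)):
        e = y - y_d[k]                    # sign convention: e = y - y_d
        e_dot = y_dot - yd_dot[k]
        sigma = e_dot + lam * e           # sliding surface for n = 2
        if smooth:
            u = -U * np.tanh(sigma / eps) # smoothed law, Eq. (7.17)
        else:
            u = -U * np.sign(sigma)       # discontinuous law, Eq. (7.14)
        y_dot += dt * u                   # plant: y_ddot = u
        y += dt * y_dot
        out.append(y)
    return np.array(out)

y_sign = run_smc(smooth=False)            # control input chatters at high frequency
y_tanh = run_smc(smooth=True)             # continuous input, slightly larger error
```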

Second-order SMC

Second-order SMC is capable of completely eliminating the chattering phenomenon without sacrificing robustness. The first-order SMC steers the system's states such that \(\sigma (t)=0\) when the error is zero, while the second-order SMC additionally forces the derivative of \(\sigma (t)\) to zero. There exist many well-known formulations of second-order sliding mode laws, such as the integral sliding surface, the PID surface, and the super-twisting algorithm. As an example, the super-twisting second-order SMC can be defined as:

$$\begin{aligned} {\left\{ \begin{array}{ll} u = -V \sqrt{|\sigma |}sgn(\sigma ) + w \\ \dot{w} = -Wsgn(\sigma ) \end{array}\right. } \end{aligned}$$
(7.18)

An effective tuning guide for the parameters is:

$$\begin{aligned} V = \sqrt{U}, \quad W = 1.1\,U \end{aligned}$$
(7.19)

where \(U>0\) is a constant that should be chosen sufficiently large. Consider the comparison of the linear PI controller and the super-twisting second-order SMC depicted in Fig. 7.17: the super-twisting algorithm can be seen as a nonlinear PI controller.

Fig. 7.17  Block diagram of a linear PI controller and the super-twisting SMC

The control signal produced by the second-order SMC is obviously continuous; therefore, the system performs without chattering.
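A minimal simulation of the super-twisting law acts directly on the sliding variable. The sketch below assumes a channel \(\dot{\sigma } = d(t) + u\) with an unknown but smooth disturbance d(t), and reads the tuning of Eq. (7.19) as \(V=\sqrt{U}\), \(W=1.1\,U\); these choices are illustrative only:

```python
import numpy as np

U = 10.0
V, W = np.sqrt(U), 1.1 * U           # tuning according to Eq. (7.19)
dt = 1e-4
t = np.arange(0.0, 5.0, dt)

sigma, w, trace = 1.0, 0.0, []
for k in range(len(t)):
    d = 0.5 * np.sin(2.0 * t[k])                        # bounded smooth disturbance
    u = -V * np.sqrt(abs(sigma)) * np.sign(sigma) + w   # Eq. (7.18): continuous signal
    w += dt * (-W * np.sign(sigma))                     # integral ("twisting") part
    sigma += dt * (d + u)
    trace.append(sigma)
# sigma and its derivative converge to zero without chattering in u
```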

Adaptive Control

Another approach to the control of a nonlinear system, which can improve the system output under uncertainty, is adaptive control. The basis of this approach is to estimate the system's parameters or uncertainties based on measured signals of the system. Adaptive control therefore belongs to the field of nonlinear control.

This method is useful for systems experiencing a wide range of parameter changes, such as a robotic manipulator designed to manipulate loads of various weights. Adaptive control is mainly used in systems where nonlinearity exists or where the variation and uncertainty of the parameters are inevitable. The most important requirement of adaptive control is that the parameter adaptation is performed significantly faster than the change of the system parameters. In practice, this requirement is usually fulfilled, since a rapidly changing parameter indicates that the modeling is incomplete and that this dynamic behavior should instead be covered by the model itself.

There exists another method for controlling nonlinearity and uncertainty, which is the robust control method. Although both methods deal with nonlinearity, there are some differences. In the case of slowly varying parameters, the performance of adaptive control is significantly better than that of the robust control method. The reason is that adaptive control estimates the varying parameters and redesigns the controller according to these changes; thus, its performance improves over time, while the robust control method is conservative with consistent performance. Moreover, robust control requires an estimation of the nonlinearity or uncertainty, while adaptive control can be designed with little or no prior estimation. On the other hand, compared to adaptive control, robust control is capable of dealing with disturbances, fast varying parameters, and unmodeled dynamics. Therefore, a combination of these two methods can be a good solution, especially when there is an external part, such as in rehabilitation systems [1].

As mentioned, the strength of adaptive control is that the controller learns and adjusts its parameters to enhance the tracking performance. There exist two main methods for this learning and adjustment process: model-reference adaptive control (MRAC) and the self-tuning controller (STC). In this book, a brief explanation of these methods is presented to provide an overview of the tools that can be used in the field of haptics.

Model-Reference Adaptive Control

In this method, it is assumed that the structure of the plant's model is known, but some parameters are unknown. A reference model defines the ideal response of the system, and the adaptation law adjusts the controller parameters so that the closed loop responds like the reference model (Fig. 7.18).

Fig. 7.18  Model-Reference Adaptive Control structure

The reference model should reflect the expected performance of the system in terms of both time and frequency domain characteristics. Furthermore, this expected performance must be achievable, considering the known structure of the plant, its order, and its relative degree. In addition, the designed controller should be capable of reproducing the reference model's performance when the plant's model is exactly known.
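To make the adaptation idea concrete, the following sketch implements a textbook Lyapunov-based MRAC for a first-order plant; the plant and model parameters, the adaptation gain, and the controller structure \(u=\theta _r r+\theta _y y\) are standard textbook choices and not taken from this chapter:

```python
import numpy as np

a_p, b_p = 1.0, 2.0            # "true" plant y_dot = a_p*y + b_p*u (a_p unknown to the controller)
a_m, b_m = -4.0, 4.0           # reference model ym_dot = a_m*ym + b_m*r
gamma, dt = 5.0, 1e-3          # adaptation gain and integration step
t = np.arange(0.0, 20.0, dt)
r = np.sign(np.sin(0.5 * t))   # square-wave reference

y, ym, th_r, th_y = 0.0, 0.0, 0.0, 0.0
for k in range(len(t)):
    u = th_r * r[k] + th_y * y            # adjustable controller
    e = y - ym                            # error w.r.t. the reference model
    th_r += dt * (-gamma * e * r[k])      # adaptation laws (sign(b_p) = +1 assumed)
    th_y += dt * (-gamma * e * y)
    y  += dt * (a_p * y + b_p * u)        # plant
    ym += dt * (a_m * ym + b_m * r[k])    # reference model
# th_r and th_y adjust so that the closed loop behaves like the reference model
```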

Self-Tuning Controller (STC)

In the pole-placement method the controller is designed based on the plant's parameters; these parameters can be estimated using the input-output data of the plant (Fig. 7.19). The controller parameters are then updated to control the estimated plant.

Fig. 7.19  Self-Tuning Controller structure

The adaptation process in this method is different from the MRAC method. MRAC tries to adjust the controller parameters to make the system’s response as close as possible to the reference model. However, STC estimates the plant’s parameters and adjusts the controller’s parameters based on the estimated plant.

Here, the procedure of designing an adaptive controller is explained through an example. In [1], an adaptive law was designed for a sliding mode controlled wearable hand rehabilitation robot to overcome the stiffness variation of the patients’ hand. Using the Lyapunov function, not only the stability of the system is guaranteed but also the adaptive law is derived. The Lyapunov function was considered as:

$$\begin{aligned} V = \frac{1}{2}\sigma ^ 2 + \frac{1}{2} \widetilde{F}^2 \end{aligned}$$
(7.20)

where \(\sigma \) is the sliding surface and \(\widetilde{F} = F_{int}-\hat{F}_{int}\) is the estimation error of the user's interaction force, \(\hat{F}_{int}\) being the estimate. Assuming that \(F_{int}\) changes slowly, the adaptive law and the adaptive controller equation were derived based on the stability criterion of the Lyapunov method, i.e. \(\dot{V}<0\).

2 System Stability

As mentioned above, one of the most important goals of control design is the stabilization of systems or processes during their life cycle, whether operational or disabled. Due to the close coupling of haptic systems to a human user via a human machine interface, safety becomes most relevant. Consequently, the focus of this chapter lies on system stability and its analysis using certain methods applicable to many systems. The underlying system description has to represent the system's behavior correctly and has to be aligned with the applied investigation technique. For the investigation of systems, subsystems, closed-loop systems, and single- or multi-input-output systems, a wide variety of different methods exists. The most important ones shall be introduced in this chapter.

2.1 Analysis of Linear System Stability

The stability analysis of linear time invariant systems is easily done by the investigation of the system poles, i.e. the roots of the denominator of the system transfer function G(s), which correspond to the eigenvalues of the system matrix. The decisive factor is the sign of the real part of these system poles: a negative sign indicates a stable eigenvalue, a positive sign denotes an unstable eigenvalue. The correspondence to the system stability becomes obvious when looking at the homogeneous part of the solution of the ordinary differential equation describing the system behavior. As an example, a system shall be described by

$$\begin{aligned} T\dot{y}(t) + y(t) = K u(t). \end{aligned}$$
(7.21)

The homogeneous part of the solution y(t) is derived using

$$\begin{aligned} y_{h} = e^{\lambda t}\quad \text { with }\lambda = -\frac{1}{T}. \end{aligned}$$
(7.22)

As can be seen clearly, the pole \(\lambda =-\frac{1}{T}\) has a negative sign only if the time constant T has a positive sign. In this case the homogeneous part of y(t) vanishes for \(t\rightarrow \infty \), while it grows exponentially beyond any limit if the pole \(\lambda =-\frac{1}{T}\) is unstable. This section will not deal with the basic theoretical background of linear system stability, as it belongs to the basics of control theory; the focus of this section is the application of certain stability analysis methods. A distinction is made between methods for the direct stability analysis of a system or subsystem and techniques for closed-loop stability analysis. For the direct stability analysis of linear systems, the investigation of the pole placement in the complex plane is fundamental. Besides the explicit calculation of the system poles or eigenvalues, the Routh-Hurwitz criterion allows determining the system stability and the pole placement without explicitly calculating the poles, which in many cases simplifies the stability analysis. For the analysis of the closed-loop stability, the determination of the closed-loop pole placement is also a possible approach. Additional methods leave room for further design aspects and extend the basic stability analysis. Well-known examples of such techniques are

  • Root locus method

  • Nyquist’s stability criterion.

The applicability of both methods will be discussed in the following without looking at the exact derivation.

2.1.1 Root Locus Method

The root locus offers the opportunity to investigate the pole placement in the complex plane in dependence on a certain system parameter. Examples of such parameters are varying time constants or variable system gains. Within the root locus method for closed-loop stability analysis and control design, the gain of the open loop is often of interest. In Eq. (7.23), \(G_R\) denotes the transfer function of the controller, and \(G_S\) describes the behavior of the system to be controlled.

$$\begin{aligned} -G_o = G_R G_S \end{aligned}$$
(7.23)

Using the root locus method, it is possible to apply predefined sketching rules whenever the dependency of the closed loop pole placement on the open loop gain K is of interest. The closed loop transfer function \(G_g\) is depicted by Eq. (7.24)

$$\begin{aligned} G_g = \frac{G_R G_S}{1 + G_R G_S} \end{aligned}$$
(7.24)

As an example an integrator system with a second order delay (IT\(_2\)) described by Eq. (7.25)

$$\begin{aligned} G_S = \frac{1}{s}\cdot \frac{1}{1 + s}\cdot \frac{1}{1 + 4s} \end{aligned}$$
(7.25)

is examined. The controller transfer function is \(G_R = K_R\). Thus the open-loop transfer function is

$$\begin{aligned} -G_o = G_R G_S = \frac{K_R}{s(1+s)(1+4s)}. \end{aligned}$$
(7.26)

Using the sketching rules which can be found in various examples in the literature [37, 48], the root locus graph shown in Fig. 7.20 is derived. The graph indicates that small gains \(K_R\) lead to a stable closed-loop system, since all roots have a negative real part. A rising \(K_R\) leads to two of the roots crossing the imaginary axis, and the closed-loop system becomes unstable. This simple example shows that the method can easily be integrated into a control design process, as it delivers a stability statement for the closed-loop system while only requiring an examination of the open-loop system. This is also one of the advantages of the Nyquist stability criterion: there too, the definition of the open-loop system is sufficient to derive a stability analysis of the system in a closed-loop arrangement.

Fig. 7.20  IT\(_2\) root locus
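The qualitative statement of the root locus can be checked numerically by computing the closed-loop poles for several gains. With Eq. (7.26), the closed-loop characteristic equation is \(s(1+s)(1+4s)+K_R=4s^3+5s^2+s+K_R=0\); the gain values below are arbitrary sample points:

```python
import numpy as np

for K in (0.1, 0.5, 1.0, 1.25, 2.0):
    poles = np.roots([4.0, 5.0, 1.0, K])          # 4s^3 + 5s^2 + s + K_R = 0
    stable = bool(np.all(poles.real < 0.0))
    print(f"K_R = {K:5.2f}  poles = {np.round(poles, 3)}  stable = {stable}")
# small gains keep all poles in the left half-plane; around K_R = 1.25 two poles
# cross the imaginary axis and the closed loop becomes unstable
```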

2.1.2 Nyquist’s Stability Criterion

This section will concentrate on the simplified Nyquist stability criterion investigating the open loop frequency response described by

$$\begin{aligned} -G_o(j\omega ) = G_R(j\omega ) G_S(j\omega ). \end{aligned}$$

The Nyquist stability criterion is based on the characteristic behavior of amplitude and phase of the open-loop frequency response. As an example we use the already introduced IT\(_2\)-system controlled by a proportional controller \(G_R = K_R\). The Bode plot of the frequency response is shown in Fig. 7.21. The stability condition which has to be met is given by the phase of the open-loop frequency response: \(\varphi (\omega ) > -180^{\circ }\) wherever the amplitude \(A(\omega )\) of the frequency response is above 0 dB. As shown in Fig. 7.21, the choice of the controller gain \(K_R\) shifts the amplitude plot of the open-loop frequency response vertically without affecting its phase. For most applications the specific requirement of a sufficient phase margin \(\varphi _R\) is compulsory; the resulting phase margin is also shown in Fig. 7.21. All such requirements have to be met in the closed control loop and must be determined in order to choose the correct control design method. In this simplified example, amplitude and phase of the open-loop frequency response depend only on the proportional controller gain \(K_R\), which is sufficient to establish system stability including a certain phase margin. More complex control structures such as PI, PIDT\(_n\) or Lead-Lag extend the possibilities for control design to meet further requirements.

Fig. 7.21  IT\(_2\) frequency response
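The phase margin of this example can also be evaluated numerically. The following sketch uses SciPy's Bode computation for the open-loop transfer function of Eq. (7.26); the controller gain is an assumed example value:

```python
import numpy as np
from scipy import signal

K_R = 0.5                                                   # assumed controller gain
G_o = signal.TransferFunction([K_R], [4.0, 5.0, 1.0, 0.0])  # K_R / (s(1+s)(1+4s))

w = np.logspace(-2, 1, 2000)
w, mag_db, phase_deg = signal.bode(G_o, w)

idx = np.argmax(mag_db < 0.0)                # first frequency with |G_o| below 0 dB
print(f"gain crossover: {w[idx]:.3f} rad/s")
print(f"phase margin  : {180.0 + phase_deg[idx]:.1f} deg")
```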

This section showed the basic principle of the simplified Nyquist criterion being applicable to stable open loop systems. For an investigation of unstable open loop systems the general form of the Nyquist criterion must be used, which itself is not introduced in this book. For this basic knowledge it is recommended to consult [37, 48].

2.2 Analysis of Non-linear System Stability

The application of all previous approaches for the analysis of system stability is limited to linear time invariant systems. Nearly all real systems, however, show nonlinear effects or consist of nonlinear subsystems. One approach to deal with these nonlinear systems is the linearization at a fixed operating point: all further investigations are focused on this point, and the application of the previously presented methods becomes possible. If these methods are not sufficient, extended techniques for the stability analysis of nonlinear systems must be applied. The following examples represent completely different approaches:

  • Principle of the harmonic balance

  • Phase plane analysis

  • Popov criterion and circle criterion

  • Lyapunov’s direct method

  • System passivity analysis.

Without dealing with the mathematical background or exact proofs, the principles and the application of selected techniques shall be demonstrated. A complete treatment of this topic would be too extensive at this point due to the wide variety of the underlying methods. For further detailed explanation, [18,19,20, 34, 45, 49] are recommended.

2.2.1 Popov criterion

As a preliminary example, the analysis of closed-loop systems can be carried out by applying the Popov criterion or the circle criterion, respectively. Figure 7.22 shows the block diagram of the corresponding closed-loop structure of the system to be analyzed.

Fig. 7.22  Nonlinear closed loop system

The block diagram consists of a linear transfer function \(\underline{G}(s)\) with arbitrary dynamics and a static non-linearity f(.). The state space formulation of \(\underline{G}(s)\) is as follows:

$$\begin{aligned} \dot{\textbf{x}}= & {} \textbf{A}\textbf{x} + \textbf{B}\tilde{u}\\ \textbf{y}= & {} \textbf{C}\textbf{x} \end{aligned}$$

Thus we find for the closed loop system description:

$$\begin{aligned} \dot{\textbf{x}}= & {} \textbf{A}\textbf{x} - \textbf{B}f(y)\\ \textbf{y}= & {} \textbf{C}\textbf{x}. \end{aligned}$$

In case that \(f(y) = k\cdot y\), this nonlinear system reduces to a linear system whose stability can be examined by evaluating the system's eigenvalues. For an arbitrary nonlinear function f(y), the complexity of the problem increases. A first constraint on f(y) is therefore that it lies within a sector bounded by a straight line through the origin with gradient k. Figure 7.23 shows an example of such a nonlinear function f(y). This constraint is expressed by the following inequality:

$$\begin{aligned} 0 \le f(y) \le k y. \end{aligned}$$

The Popov criterion provides an intuitive approach to the stability analysis of the presented example. The equilibrium at the origin (\(\dot{\textbf{x}}=\textbf{x}=\textbf{0}\)) is asymptotically stable if:

  • the linear subsystem \(\underline{G}(s)\) is asymptotically stable and fully controllable,

  • the nonlinear function meets the presented sector condition as shown in Fig. 7.23,

  • for an arbitrarily small number \(\rho \ge 0\) there exists a positive number \(\alpha \), so that the following inequality is satisfied:

    $$\begin{aligned} \forall \omega \ge 0\quad \text{ Re }[(1+j\alpha \omega )\underline{G}(j\omega )] + \frac{1}{k} \ge \rho \end{aligned}$$
    (7.27)
Fig. 7.23  Sector condition

Equation (7.27) formulates the condition also known as the Popov inequality. With

$$\begin{aligned} \underline{G}(j\omega ) = \text{ Re }(\underline{G}(j\omega )) + j\text{ Im }(\underline{G}(j\omega )) \end{aligned}$$
(7.28)

Eq. (7.27) leads to

$$\begin{aligned} \text{ Re }(\underline{G}(j\omega )) - \alpha \omega \text{ Im }(\underline{G}(j\omega )) + \frac{1}{k} \ge \rho \end{aligned}$$
(7.29)

With an additional definition of a related transfer function

$$\begin{aligned} G^{*} = \text{ Re }(\underline{G}(j\omega )) + j\omega \text{ Im }(\underline{G}(j\omega )) \end{aligned}$$
(7.30)

Eq. (7.29) states that the plot in the complex plane of \(\underline{G}^{*}\), the so called Popov plot, has to be located in a sector with an upper limit described by \(y=\frac{1}{\alpha }(x+\frac{1}{k})\). Figure 7.24 shows an example for the Popov plot of a system in the complex plane constrained by the sector condition. The close relation to the Nyquist criterion for the stability analysis of linear systems becomes quite obvious here. While the Nyquist criterion examines the plot of \(\underline{G}(j\omega )\) referred to the critical point (-1|0), the location of the Popov plot is checked for a sector condition defined by a straight line limit.

Fig. 7.24  Popov plot
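Checking the Popov inequality numerically amounts to evaluating Eq. (7.29) over a frequency grid. The linear subsystem and the sector gain in the following sketch are assumed example values, not taken from the text:

```python
import numpy as np

w = np.logspace(-2, 2, 1000)
s = 1j * w
G = 1.0 / (s**2 + 2.0 * s + 2.0)          # assumed stable linear subsystem G(s)

re_G = G.real
popov_im = w * G.imag                     # modified imaginary part, cf. Eq. (7.30)

alpha, k, rho = 1.0, 5.0, 0.0             # sector gain k and Popov parameters
ok = np.all(re_G - alpha * popov_im + 1.0 / k >= rho)   # Eq. (7.29) on the grid
print("Popov inequality satisfied:", bool(ok))
```

Plotting re_G against popov_im yields the Popov plot of Fig. 7.24, which can then be checked graphically against the straight-line limit.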

The application of the Popov criterion has the distinct advantage that a result of the stability analysis can be obtained without an exact formulation of the non-linearity within the system. The only constraints on the nonlinear subsystem are the sector condition and memoryless transfer behavior. The most complicated aspect of this kind of analysis is formulating the considered system structure in such a way that the Popov criterion can be applied. For completeness, the circle criterion shall be mentioned, whose sector condition is not represented by a single straight line; instead

$$\begin{aligned} k_1 \le \frac{f(y)}{y} \le k_2. \end{aligned}$$

defines the new sector condition. For additional explanation on these constraints and the application of the circle criterion it is recommended to consider [34, 45, 49].

2.2.2 Lyapunov’s Direct Method

As a second example for the stability analysis of nonlinear systems, the direct method of Lyapunov is introduced. The basic principle is that any stable system, linear or nonlinear, that tends towards a stable steady state has to dissipate the total system energy continuously. Thus it is possible to obtain a stability result by verifying the characteristics of a function representing the energy stored in the system. Lyapunov's direct method generalizes this approach: the system energy is evaluated by means of an artificial scalar function which does not have to describe the physically stored energy exactly, but is used as an energy-like function of a dissipative system. These functions are called Lyapunov functions V(x). For the examination of the system stability the already mentioned state space description of a nonlinear system is used:

$$\begin{aligned} {\dot{\text{ x }}}= & {} {\text{ f }}({\text{ x }}, {\text{ u }}, t)\\ {\text{ y }}= & {} {\text{ g }}({\text{ x }}, {\text{ u }}, t). \end{aligned}$$

By the definition of Lyapunov’s theorem the equilibrium at the phase plane origin \(\dot{\textbf{x}}=\textbf{x}=\textbf{0}\) is globally, asymptotically stable if

  1. 1.

    a positive definite scalar function \(V(\textbf{x})\) with \(\textbf{x}\) as the system state vector exists, meaning that \(V(\textbf{0}) = 0\) and \(V(\textbf{x}) > 0\,\forall \,\textbf{x} \ne \textbf{0}\),

  2. 2.

    \(\dot{V}\) is negative definite, meaning \(\dot{V}(\textbf{x}) < 0\,\forall \,\textbf{x} \ne \textbf{0}\),

  3. 3.

    \(V(\textbf{x})\) is not limited, meaning \(V(\textbf{x}) \rightarrow \infty \) as \(\parallel x \parallel \rightarrow \infty \).

If these conditions are met in a bounded area at the origin only, the system is locally asymptotically stable.

As a clarifying example the following nonlinear first order system

$$\begin{aligned} \dot{x} + f(x) = 0 \end{aligned}$$
(7.31)

is evaluated. Herein f(x) denotes any continuous function of the same sign as its scalar argument x, so that \(x\cdot f(x)>0\) for \(x\ne 0\) and \(f(0)=0\). Applying these constraints, a Lyapunov function candidate can be found, described by

$$\begin{aligned} V=x^2. \end{aligned}$$
(7.32)

The time derivative of V(x) provides

$$\begin{aligned} \dot{V}=2x\dot{x}=-2 x f(x). \end{aligned}$$
(7.33)

Due to the assumed characteristics of f(x), all conditions of Lyapunov's direct method are satisfied; thus the system has a globally asymptotically stable equilibrium at the origin. Although the exact function f(x) is not known, the fact that its graph lies in the first and third quadrants only is sufficient for \(\dot{V}(x)\) to be negative definite. As a second example, a second-order autonomous system is examined, depicted by its state space formulation

$$\begin{aligned} \dot{x}_1= & {} x_2 - x_1(x_1^2 + x_2^2)\\ \dot{x}_2= & {} -x_1 - x_2(x_1^2 + x_2^2). \end{aligned}$$

In this example the system has an equilibrium at the origin too. Consequently the following Lyapunov function candidate can be found

$$\begin{aligned} V(x_1,x_2) = x_1^2 + x_2^2. \end{aligned}$$
(7.34)

Thus the corresponding time derivative is

$$\begin{aligned} \dot{V}(x_1,x_2) =2x_1\dot{x_1} + 2x_2\dot{x_2} = -2(x_1^2 + x_2^2)^2. \end{aligned}$$
(7.35)

Hence \(V(x_1, x_2)\) is positive definite and \(\dot{V}(x_1, x_2)\) is negative definite. Thus the equilibrium at the origin is globally, asymptotically stable for the system.
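The algebra of this example is easily verified with a computer algebra system; the following sketch reproduces Eq. (7.35) symbolically:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x1_dot = x2 - x1 * (x1**2 + x2**2)            # system equations of the example
x2_dot = -x1 - x2 * (x1**2 + x2**2)

V = x1**2 + x2**2                             # Lyapunov candidate, Eq. (7.34)
V_dot = sp.expand(2 * x1 * x1_dot + 2 * x2 * x2_dot)
print(sp.factor(V_dot))                       # -> -2*(x1**2 + x2**2)**2
```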

A quite difficult aspect of using Lyapunov's direct method is finding Lyapunov function candidates. No general algorithm with a guaranteed solution exists, which is a major disadvantage of this method. Slotine [45] proposes several structured approaches to obtain Lyapunov function candidates, namely

  • Krasovskii’s method and

  • the variable gradient method.

Besides these, Slotine provides additional possibilities to involve the system's physical principles in the procedure of determining Lyapunov function candidates when analyzing more complex nonlinear dynamic systems.

2.2.3 Passivity in Dynamic Systems

As another method for the stability analysis of dynamic systems, the passivity formalism is introduced in this subsection. It extends the idea of Lyapunov's direct method, the evaluation of energy dissipation in dynamic systems, to combinations of systems. The passivity formalism is also based on nonlinear positive definite storage functions \(V(\textbf{x})\) with \(V(\textbf{0})=0\) representing the overall system energy. The time derivative of this energy determines the system's passivity. As an example, the general formulation of a system

$$\begin{aligned} {\dot{\text{ x }}}= & {} {\text{ f }}({\text{ x }}, {\text{ u }}, t)\\ {\text{ y }}= & {} {\text{ g }}({\text{ x }}, {\text{ u }}, t). \end{aligned}$$

is considered. This system is passive concerning the external supply rate \(S = \textbf{y}^T\textbf{u}\) if the inequality condition

$$\begin{aligned} \dot{V}(\textbf{x}) \le \textbf{y}^T\textbf{u} \end{aligned}$$
(7.36)

is satisfied. Khalil distinguishes several cases of system passivity depending on certain system characteristics  (Lossless, Input Strictly Passive, Output Strictly Passive, State Strictly Passive, Strictly Passive) [34]. If a system is passive concerning the external supply rate S, it is stable in the sense of Lyapunov.

The combination of passive systems in parallel or feedback structures inherits the passivity from its passive subsystems. With the close relation of system passivity to stability in the sense of Lyapunov, the examination of the overall system stability is possible by verifying the passivity of the subsystems. Based on this evaluation it can be concluded that the overall system is passive, always under the assumption that a correct system structure (a parallel or feedback combination) was built.

As an illustrating example the RLC circuit taken from [34] is analyzed in the following. The circuit structure is shown in Fig. 7.25.

Fig. 7.25  Passivity analysis of an RLC network

The system’s state vector is defined by

$$\begin{aligned} i_L= & {} x_1\\ u_C= & {} x_2. \end{aligned}$$

The input u represents the supply voltage U; the current i is observed as the output y. The resistors are described by the corresponding voltage-current characteristics:

$$\begin{aligned} i_1= & {} f_1(u_{R1})\\ i_3= & {} f_3(u_{R3}) \end{aligned}$$

For the resistor which is coupled in series with the inductor the following behavior is assumed

$$\begin{aligned} U_{R2} = f_2(i_L) = f_2(x_1). \end{aligned}$$
(7.37)

Thus the nonlinear system is described by the differential equation:

$$\begin{aligned} L\dot{x_1}= & {} u - f_2(x_1) - x_2\\ C\dot{x_2}= & {} x_1 - f_3(x_2)\\ y= & {} x_1 + f_1(u) \end{aligned}$$

The presented RLC circuit is passive as long as the condition

$$\begin{aligned} V(\textbf{x}(t)) - V(\textbf{x}(0)) \le \int ^{t}_{0}u(\tau )y(\tau )d\tau \end{aligned}$$
(7.38)

is satisfied. In this example the energy stored in the system is described by the storage function

$$\begin{aligned} V(\textbf{x}(t)) = \frac{1}{2}Lx_1^2 + \frac{1}{2}Cx_2^2. \end{aligned}$$
(7.39)

Equation (7.38) leads to the condition for passivity:

$$\begin{aligned} \dot{V}(\textbf{x}(t),u(t)) \le u(t)y(t) \end{aligned}$$
(7.40)

which means that the power supplied to the system must be equal to or higher than the time derivative of the stored energy. Using \(V(\textbf{x})\) in the condition for passivity provides

$$\begin{aligned} \dot{V}(\textbf{x},u(t))= & {} Lx_1\dot{x_1} + Cx_2\dot{x}_2\\= & {} x_1\left( u-f_2(x_1)-x_2\right) + x_2\left( x_1-f_3(x_2)\right) \\= & {} x_1\left( u-f_2(x_1)\right) + x_2f_3(x_2)\\= & {} \left( x_1+f_1(u)\right) u - uf_1(u) - x_1f_2(x_1) -x_2f_3(x_2)\\= & {} uy - uf_1(u) - x_1f_2(x_1) -x_2f_3(x_2) \end{aligned}$$

and finally

$$\begin{aligned} u(t)y(t) = \dot{V}(\textbf{x},u(t)) + uf_1(u) + x_1f_2(x_1) + x_2f_3(x_2). \end{aligned}$$
(7.41)

If \(f_1\), \(f_2\) and \(f_3\) are passive subsystems, i.e. all functions describing the corresponding characteristics of the resistors lie in the first and third quadrants only, then \(\dot{V}(\textbf{x},u(t))\le u(t)y(t)\) holds and the RLC circuit is passive. Any coupling of this passive system to other passive systems in parallel or feedback structures again results in a passive system. For passivity analysis and stability evaluation this method provides a structured procedure and a very high flexibility.

In conclusion it is necessary to mention that all methods for stability analysis introduced in this section show certain advantages and disadvantages concerning their applicability, information value and complexity, regardless of whether linear or nonlinear systems are considered. Whenever a stability analysis is to be carried out, the applicability of a specific method should be checked individually. Due to its limited scope, this section can only give a short overview of the introduced methods and techniques and explicitly does not claim to be a detailed description. For further study the reader is invited to consult the proposed literature.

3 Control of Multi-input Systems

Regarding their inputs and outputs, four types of systems are distinguished: SISO (single input, single output), SIMO (single input, multiple outputs), MISO (multiple inputs, single output), and MIMO (multiple inputs, multiple outputs). In multivariable systems, loop interactions result in unexpected coupling effects between the variables and make these systems complicated to control.

Many practical systems are multi-input and often nonlinear, such as most robotic manipulators, cars, and aircraft. In these systems, designing a feedback control that fulfills the desired performance and robustness characteristics becomes more challenging. Here, two control tasks, position control and trajectory control, will be discussed.

Fig. 7.26  Two-link manipulator

Consider the simple planar robotic manipulator with only two links depicted in Fig. 7.26. The system's dynamic model can be written as:

$$\begin{aligned} M(q)\ddot{q} + C(q,\dot{q})+g(q) =\tau \end{aligned}$$
(7.42)

where M(q) is the inertia matrix of the manipulator, \(C(q,\dot{q})\) is the vector of centripetal and Coriolis torques, g(q) is the vector of gravitational torques, and \(\tau \) is the vector of actuator torques. As can be seen from the above equation, the system is strongly nonlinear (the Coriolis and centripetal terms are always nonlinear) with coupled dynamics, which makes it challenging to design a feedback control structure.

As a solution, using actuators with high gear ratios can effectively reduce the influence of the nonlinear and coupled dynamics. However, the backlash and friction of the gears, which are hard nonlinearities, adversely affect the performance of the system, for example the tracking and force control accuracy.

3.1 Position Control

Assume that the two-link manipulator moves in the horizontal plane, thus \(g(q)=0\), and that it is required to move to a defined stationary position \(q_d\). The simplest feedback law to achieve position control is the joint PD controller, which controls each joint independently based on its position error and its time derivative:

$$\begin{aligned} \tau _i = -k_{pi} \tilde{q_i} - k_{di} \dot{q_i} \end{aligned}$$
(7.43)

where \(k_{pi}>0\), \(k_{di}>0\), \(\tilde{q}_i = q_i - q_{di}\) is the position error, and \(\dot{q}_i\) is the velocity of the i-th joint. This control structure can be seen as a spring and a damper connected to each joint, where the neutral position of the springs is the desired position. As a result, the system performs a damped oscillation towards the desired position. The stability can be checked by considering the total mechanical energy of the system as a Lyapunov function:

$$\begin{aligned} V = \frac{1}{2}(\dot{q}^TM\dot{q} + \tilde{q}^TK_p \tilde{q} ) \end{aligned}$$
(7.44)

where \(K_p\) is the matrix of the P controller coefficients, which is diagonal and positive definite. The derivative of the Lyapunov function along the system trajectories is:

$$\begin{aligned} \dot{V} = - \dot{q}^TK_d\dot{q} \le 0 \end{aligned}$$
(7.45)

where \(K_d\) is the matrix of the D controller coefficients which, like \(K_p\), is diagonal and positive definite. As can be seen from the above equation, \(\dot{V}\) is the energy dissipated by the D controller, i.e. by the virtual damping. The time response of such a controlled system resembles that of a damped mass-spring system. However, one should expect a significant variation of the time response characteristics when such a highly nonlinear plant is controlled with constant controller parameters. Other solutions are sliding mode control and adaptive control, which are capable of dealing with the nonlinearity more effectively.
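The damped-oscillation behavior of the joint PD law can be reproduced in simulation. The sketch below uses the standard dynamics of a planar two-link arm in the horizontal plane (g(q)=0); all link parameters, gains and the explicit Euler integration are assumptions made for illustration:

```python
import numpy as np

m1, m2 = 1.0, 1.0                       # assumed link masses
l1, lc1, lc2 = 0.5, 0.25, 0.25          # link length and centre-of-mass distances
I1, I2 = 0.05, 0.05                     # link inertias

Kp = np.diag([50.0, 30.0])              # diagonal, positive definite P gains
Kd = np.diag([10.0, 6.0])               # diagonal, positive definite D gains

q_d = np.array([0.8, -0.4])             # desired stationary joint position
q, qd = np.zeros(2), np.zeros(2)

dt, t_end = 1e-3, 5.0
for _ in range(int(t_end / dt)):
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    M = np.array([[I1 + I2 + m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2),
                   I2 + m2*(lc2**2 + l1*lc2*c2)],
                  [I2 + m2*(lc2**2 + l1*lc2*c2),
                   I2 + m2*lc2**2]])                      # inertia matrix M(q)
    h = m2 * l1 * lc2 * s2
    C = np.array([-h * (2*qd[0]*qd[1] + qd[1]**2),
                   h * qd[0]**2])                         # Coriolis/centripetal torques

    tau = -Kp @ (q - q_d) - Kd @ qd                       # joint PD law, Eq. (7.43)
    qdd = np.linalg.solve(M, tau - C)                     # dynamics, Eq. (7.42) with g(q)=0

    qd = qd + dt * qdd                                    # explicit Euler integration
    q = q + dt * qd

print("final position error:", q - q_d)                   # decays towards zero
```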

3.2 Trajectory Control

Now consider that the desired position changes with respect to time. Due to the strong nonlinearity of the manipulator in Fig. 7.26 and its equations of motion, the PID-SISO controller structure cannot satisfy the desired tracking performance. One solution is to use the general form of the linear PID controller for a MIMO system. In [11], a PID-MIMO control law is tuned based on trial and error. The advantage of PID-MIMO over PID-SISO is that PID-MIMO uses the errors of all joints to calculate the input of each joint; in other words, the \(K_p\) and \(K_d\) matrices are not diagonal, but symmetric and positive definite. As a result, the tracking performance improves significantly compared to PID-SISO (PID).

Fig. 7.27  The PID-SISO (PID) controller structure (a) and the PID-MIMO controller structure (b)

Figure 7.28 depicts the tracking error of the two-linkage robot, which shows that in a multi-input system, utilizing a single-input PID results in significant tracking error.

Fig. 7.28  Tracking error comparison of PID and PID-MIMO

Other effective control methods are robust control and adaptive control. However, there is a difference in their performance. As mentioned previously, since the nonlinearity of the system is known and modeled, adaptive control can lead to better performance: robust control considers the nonlinearity as uncertainty, which is a much more conservative technique compared to estimating the nonlinearity as done in adaptive control. Therefore, in this case, adaptive control can result in superior performance. However, for haptic or rehabilitation systems, where the robot interacts with an unknown environment or with the user's commands, a robust adaptive controller enhances the system performance, as discussed in [1].

4 Control Law Design for Haptic Systems

As introduced in the beginning of this chapter, control design is a fundamental and necessary aspect within the development of haptic systems. Besides the techniques for system description and stability analysis the need for control design and the applicable design rules become obvious. Especially for the control design of a haptic system it is necessary to deal with several aspects and conditions to be satisfied during the design process. The following sections present several control structures and design schemes in order to set up a basic knowledge about the toolbox for analytic control design of haptic systems. This also involves some of the already introduced methods for system formulation and stability analysis, as these form the basis for most control design methods.

4.1 Structuring of the Control Design

As introduced in Chap. 6, various structures of haptic systems exist. Demands on the control of these structures are derived in the following.

Open-loop impedance controlled:

The user experiences an impression of force which is directly commanded via an open loop based only on a demand value. In Chap. 6 the basic scheme of this structure is shown by Fig. 6.2.

Closed-loop impedance controlled:

As it can be seen in Fig. 6.4, the user also experiences an impression of force which is fed back to a controller. Here a specific control design will be needed.

Open-loop admittance controlled:

In this scheme, the user experiences an impression of a defined position. In the open loop arrangement this position again is directly commanded based only on a demand value. Figure 6.6 shows the corresponding structure of this haptic scheme.

Closed-loop admittance controlled:

This last version, depicted in Fig. 6.8, differs significantly in that the force the user applies to the interface is fed back and compared with a demand value. This results in a closed loop arrangement that incorporates the user and his or her transfer characteristics. In contrast to the closed loop impedance controlled scheme, this structure uses a force as demanded value \(\underline{S}_F\), which is compared with the detected \(\underline{S}_S\), while the system output is still a position \(\underline{x}_{out}\). As a consequence, incorporating the user into the closed loop behavior is more complex than in a closed loop impedance controlled scheme.

  All of these structures can basically be implemented in a haptic interaction as shown in Fig. 2.33. From this, all necessary control loops of the overall telemanipulation system become evident:

  • On the haptic interface side, a control loop is closed incorporating the user; this loop is relevant as long as the user’s reaction is fed back to the central interface module for any further data processing or control.

  • On the process/environment side, a closed loop also exists if measurable process signals (reactions, disturbances) are fed back to the central interface module for data processing or control.

  • Underneath these top-level control loops, various subsystem control loops exist which also have a major impact on the overall system. As an example, each electrical actuator will most likely be embedded in a cascaded control structure with current, speed and position control.

It becomes obvious that the design of a control system for a telemanipulation system with a haptic interface is complex and versatile. Consequently, a generally valid procedure for control design cannot be given. The control structures must be designed step by step, involving the following controllers:

  1. 1.

    Design of all controllers for the subsystem actuators

  2. 2.

    Design of a top level controller for the haptic interface

  3. 3.

    Design of a top level controller for the manipulator/VR-environment

  4. 4.

    Design of the system controller that connects interface and manipulator or VR environment.

The strict separation proposed above might not be the only way of structuring the overall system. Depending on the application and functionality, the purposes of the different controllers and control levels might conflict with each other or simply overlap. Therefore it is recommended to set up the underlying system structure and to define all applied control schemes according to their required functionality.

Looking at the control of haptic systems, a similar structure can be established. For both the control of the process manipulation and the haptic display or interface, the central interface module has to generate demand values for force or position, which are followed by the controllers underneath. These demand values derive from a calculation defined by the designed control laws. To obtain such control laws, a variety of methods and techniques for structural design and optimization can be applied, depending on the requirements. The following subsections give an overview of typical requirements for closed control loop behavior, followed by examples of control design.

4.2 Requirement Definition

Besides the fundamental need for system stability with sufficient stability margins, additional requirements can be set up to achieve a certain closed loop behavior, such as dynamics or precision. These requirements can be quantified by means of certain characteristics of the closed loop step response.

Figure 7.29 shows the general form of a typical closed loop step response and its main characteristics. As it can be seen the demanded value is reached and the basic control requirement is satisfied.

Fig. 7.29
figure 29

Closed loop step response requirements

Table 7.1 Parameter for control quality requirements

Additional characteristics are discussed and listed in Table 7.1. For all of these characteristics a quantitative definition of requirements is possible. For example, the number and amplitude of overshoots shall not exceed defined limits; their frequency spectrum is of special interest for the control design of haptic systems. As analyzed in Chap. 3, the user’s impedance shows a significant frequency range which must not be excited within the control loop of the haptic device. Nevertheless, a certain cut-off frequency has to be reached to establish a good dynamic performance. All these issues apply equally to the requirements for the control design of the process manipulation. In addition to the requirements derived from the step response due to setpoint changes, it is necessary to formulate requirements concerning the closed loop behavior under disturbances originating from the process. Especially when interpreting the user’s reaction as a disturbance within the overall system description, a requirement set for the disturbance reaction of the control loop has to be established. As can be seen in Fig. 7.30, similar characteristics exist to determine the disturbance reaction quantitatively and qualitatively. In most cases the step response behavior and the disturbance reaction cannot satisfy all requirements simultaneously, as they often conflict with each other due to the limited flexibility of the applied optimization method. It is therefore recommended to estimate the relevance of step response and disturbance reaction in order to choose the most beneficial optimization approach. Even though requirements can be quantified, not all of them can be used directly in a predefined optimization method. In most cases requirements have to be adjusted before specific control design and optimization methods can be applied. As an example, the time \(T_\text {res}\) as depicted above cannot be used directly and must be translated into a requirement on the closed loop dynamics, characterized by a definite pole placement.

Fig. 7.30
figure 30

Closed loop disturbance response requirements

Furthermore, simulation techniques and tests allow iteration within the design procedure to obtain an optimal control law. However, this convenient way of analyzing system behavior and testing designed control laws may tempt the designer to abandon the analytic system and control design strategy in favor of a mere trial-and-error approach, which should be avoided.

4.3 General Control Law Design

This section presents some possible types of controllers and control structures that might be used in the control schemes discussed above. Several methods exist for the optimization of the controller parameters; they are introduced here. Depending on the underlying system description, several approaches to setting up controllers and control structures are possible. This section presents classic PID control, additional control structures such as disturbance compensation and feedforward, as well as state feedback and observer based state space control.

4.3.1 Classic PID-Control

Probably the most frequently used controller is the parallel combination of a proportional (P), an integrating (I) and a derivative (D) element. This combination is used in several variants, including a pure P controller, a PI combination, a PD combination or the complete PID structure. The PID structure combines the advantages of all individual components. The corresponding controller transfer function is described by

$$\begin{aligned} \underline{G}_R = K_R\left( 1 + \frac{1}{T_N s} + T_V s\right) . \end{aligned}$$
(7.46)

Figure 7.31 shows the equivalent block diagram of a PID controller structure. Adjustable parameters in this controller are the proportional gain \(K_R\), the integrator time constant \(T_N\) and the derivative time \(T_V\).

Fig. 7.31
figure 31

PID block diagram

With optimized parameter adjustment a wide variety of control tasks can be handled. This configuration offers, on the one hand, the high dynamics of the proportional controller; on the other hand, the integrating component guarantees a high-precision step response with a remaining error \(x_d = 0\) for \(t \rightarrow \infty \). The derivative component finally provides an additional degree of freedom that can be used for a specific pole placement of the closed loop system.
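A discrete-time sketch of Eq. (7.46) may help to relate the three parameters to an implementation; the class below is an illustrative assumption (sample time, anti-windup and derivative filtering are omitted) and not code from the source.

```python
# Sketch of a discrete PID controller in the parameterization of Eq. (7.46):
# u = K_R * (e + (1/T_N) * integral(e) + T_V * de/dt).
class PIDController:
    def __init__(self, K_R, T_N, T_V, dt):
        self.K_R, self.T_N, self.T_V, self.dt = K_R, T_N, T_V, dt
        self.e_int = 0.0      # running integral of the control error
        self.e_prev = 0.0     # previous error for the derivative part

    def update(self, e):
        self.e_int += e * self.dt
        de = (e - self.e_prev) / self.dt
        self.e_prev = e
        return self.K_R * (e + self.e_int / self.T_N + self.T_V * de)

# usage: pid = PIDController(K_R=2.0, T_N=0.5, T_V=0.05, dt=1e-3); u = pid.update(w - y)
```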

As major design techniques the following examples shall be introduced:  

Root Locus Method:

The strength of this method lies in the targeted pole placement for the closed loop system, directly taking into account the dependence on the proportional gain \(K_R\). By a reasonable choice of \(T_N\) and \(T_V\), the additional system zeros are influenced, which directly affects the resulting shape of the root locus and thus the stability behavior. In addition, the overall system dynamics can be designed.

Integral Criterion:

The second method for optimizing the closed loop step response or disturbance reaction is the minimization of an integral criterion. The basic procedure is as follows: the tracking error \(x_d\) due to changes of the demanded set point or a process disturbance is integrated (and possibly weighted over time). This time integral is minimized by adjusting the controller parameters. If this minimization converges, the result is a set of optimized controller parameters (a numerical sketch follows this list).

  For any additional theoretical background concerning controller optimization the reader is referred to the literature on control theory and control design [37, 38].
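The sketch below illustrates the integral-criterion idea numerically: the gains of a PI controller for an assumed second-order plant are tuned by minimizing the integrated squared tracking error (ISE) of the closed loop step response. Plant, criterion and starting values are assumptions chosen for illustration only.

```python
# Sketch of controller tuning by minimizing an integral criterion (here: ISE).
# Assumed plant G_S = 1/(s+1)^2 and PI controller G_R = K_R (1 + 1/(T_N s)).
import numpy as np
from scipy.signal import TransferFunction, step
from scipy.optimize import minimize

def ise(params):
    K_R, T_N = params
    if K_R <= 0 or T_N <= 0:
        return 1e6                                     # keep the search in the valid region
    num = [K_R * T_N, K_R]                             # open loop numerator K_R (T_N s + 1)
    den_ol = np.polymul([T_N, 0.0], [1.0, 2.0, 1.0])   # open loop denominator T_N s (s+1)^2
    t = np.linspace(0.0, 20.0, 2000)
    t, y = step(TransferFunction(num, np.polyadd(den_ol, num)), T=t)  # closed loop step
    return np.trapz((1.0 - y) ** 2, t)                 # time integral of the squared error x_d

result = minimize(ise, x0=[1.0, 1.0], method="Nelder-Mead")
print("optimized K_R, T_N:", result.x)
```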

4.3.2 Additional Control Structures

In addition to the described PID controller, further control structures extend the influence on the control result without affecting the system stability. The following paragraphs therefore present disturbance compensation and the direct feedforward of auxiliary process variables.

Disturbance Compensation

The basic principle of disturbance compensation assumes that if a disturbance on the process is measurable and its influence is known, this knowledge can be used to establish compensation by corresponding evaluation and processing. Figure 7.32 shows a simplified scheme of this additional control structure.

Fig. 7.32
figure 32

Simplified disturbance compensation

In this scheme a disturbance signal z is assumed to affect the closed loop via a disturbance transfer function \(\underline{G}_{D}\). Measuring the disturbance signal and processing it with the compensator transfer function \(\underline{G}_{C}\) results in a compensation of the disturbance interference. Assuming an optimal design of the compensator transfer function, the interference caused by the disturbance is completely cancelled. The optimal compensator transfer function is given by

$$\begin{aligned} \underline{G}_C = - \frac{\underline{G}_{D}}{\underline{G}_S}. \end{aligned}$$
(7.47)

This method assumes that a mathematically exact and practically realizable inversion of \(\underline{G}_{S}\) exists. For those cases where this assumption does not hold, the optimal compensator \(\underline{G}_{C}\) must be approximated. Furthermore, Fig. 7.32 clearly shows that this additional control structure does not have any influence on the closed loop stability and can therefore be designed independently. Besides the practicability, the additional effort should be taken into account: it will definitely increase due to the sensors needed to measure the disturbance signals and the additional cost of realizing the compensator.
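As a small illustration of Eq. (7.47), the compensator can be computed symbolically; the first-order plant and disturbance transfer functions below are assumptions chosen so that the resulting \(\underline{G}_C\) is proper and hence realizable.

```python
# Symbolic sketch of the compensator design of Eq. (7.47): G_C = -G_D / G_S.
import sympy as sp

s = sp.symbols('s')
G_S = 2 / (sp.Rational(1, 2) * s + 1)   # assumed plant transfer function
G_D = 1 / (2 * s + 1)                   # assumed disturbance transfer function

G_C = sp.simplify(-G_D / G_S)           # ideal compensator
print(G_C)                              # a proper transfer function -> realizable
```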

Auxiliary Input Feedforward

A structure similar to disturbance compensation is the feedforward of auxiliary input variables. This principle is based on the knowledge of additional process variables that are used to influence the closed loop behavior without affecting the system stability. Figure 7.33 shows an example of the feedforward of the demanded setpoint w to the controller output u using a feedforward filter \(\underline{G}_{FF}\).

Fig. 7.33
figure 33

Feedforward of auxiliary input variables

4.3.3 State Space Control

In accordance with the techniques for the description of multi-input multi-output systems discussed earlier in this chapter, state space control provides additional features to cover the special characteristics of those systems. As described before, multi-input multi-output systems are preferably described by state space models. This mathematical formulation enables the developer to implement a control structure that controls the internal system states to demanded values. A major advantage is that the design methods for state space control use an overall approach for control design and optimization instead of a step-by-step design for each system state. This approach makes it possible to deal with strongly coupled multi-input multi-output systems of high complexity and to design a state space controller for all states simultaneously. This section presents the fundamental state space control structures, covering state feedback control as well as observer based state space control. For more detailed procedures and for design and optimization methods the reader is referred to [38, 49].

State Feedback Control

As shown in Fig. 7.34, this basic structure for state space control uses a feedback of the system states \(\textbf{x}\). Similar to the depiction in Fig. 7.2, the considered system is given in state space description using the matrices \(\textbf{A}\), \(\textbf{B}\), \(\textbf{C}\) and \(\textbf{D}\). The system states \(\textbf{x}\) are fed back, weighted by the matrix \(\textbf{K}\), to the vector of demanded values, which is filtered by the matrix \(\textbf{V}\). The result is the system input vector \(\textbf{u}\). Neither \(\textbf{V}\) nor \(\textbf{K}\) has to be a square matrix, since the state space description allows different dimensions for the state vector, the vector of demanded values and the system input vector (a minimal numerical sketch is given after Fig. 7.34).

Fig. 7.34
figure 34

State feedback control
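A minimal numerical sketch of the structure in Fig. 7.34 is given below; the system matrices, the desired pole locations and the choice of the prefilter \(\textbf{V}\) for unity steady-state gain are assumptions for illustration, not values from the source.

```python
# Sketch of state feedback design: K by pole placement, V as a static prefilter
# that yields unity steady-state gain from the demand w to the output y.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = place_poles(A, B, [-4.0, -5.0]).gain_matrix        # desired closed-loop poles
V = -1.0 / (C @ np.linalg.inv(A - B @ K) @ B)          # prefilter: y -> w in steady state
print("K =", K, " V =", V)
```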

Observer Based State Space Control

The state space control structure discussed above requires complete knowledge of all system states, which means that all of them have to be measured and processed to be used in the control algorithm. From a practical point of view this is not always possible, due to technical limits as well as cost and effort. As a result the developer is faced with the challenge of establishing a state space control without complete knowledge of the system states. As a solution, those system states that cannot be measured due to technical difficulties or significant cost are estimated using a state observer; the corresponding structure is shown in Fig. 7.35.

Fig. 7.35
figure 35

Observer based state space control

In this structure a system model is computed in parallel to the real system. This system model is described as accurately as possible by the corresponding parameter matrices \(\textbf{A}^*\), \(\textbf{B}^*\), \(\textbf{C}^*\) and \(\textbf{D}^*\). The model input is the same input vector \(\textbf{u}\). Thus the model provides an estimate of the real system states \(\textbf{x}^*\) and an estimated system output vector \(\textbf{y}^*\). By comparing this estimated output vector \(\textbf{y}^*\) with the real output \(\textbf{y}\), which is assumed to be measurable, the estimation error is fed back, weighted by the matrix \(\textbf{L}\). This results in a correction of the estimated system states \(\textbf{x}^*\). Any estimation error in the system states or the output vector due to differing initial states is corrected, and the estimated states \(\textbf{x}^*\) are weighted by the matrix \(\textbf{K}\) and fed back for control.

This structure of an observer based state space control uses the Luenberger observer. In this configuration all real system states are assumed to be non-measurable, so the state space control relies completely on estimated values. In practice, the feedback of measurable system states is combined with the observer based estimation of the remaining system states. In [38, 49], examples of observer based state space control structures as well as methods for observer design are discussed in more detail.
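The observer gain \(\textbf{L}\) of Fig. 7.35 can, for instance, be found by pole placement on the dual system; the sketch below, with assumed matrices and pole locations, also shows one Euler integration step of the estimator equation.

```python
# Sketch of a Luenberger observer: place the poles of (A - L C) via the dual
# problem (A^T, C^T), then propagate the state estimate x_hat.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T   # observer poles faster than the plant

def observer_step(x_hat, u, y, dt):
    # x_hat' = A x_hat + B u + L (y - C x_hat), integrated with one explicit Euler step
    return x_hat + dt * (A @ x_hat + B @ u + L @ (y - C @ x_hat))
```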

Example: Cascade Control of a Linear Drive

As an example of controller design, the cascade control of a linear drive built up of an EC motor and a ball screw is considered in this section, based on [32]. The model includes non-linear effects due to friction, temperature changes and a non-linear efficiency of the ball screw.

Fig. 7.36
figure 36

Equivalent circuit of the considered EC-motor with attached ball screw to transform rotary into translational movement

A schematic representation of the EC motor is given in Fig. 7.36, in which only one phase is illustrated for simplification. The motor is supplied with the voltage \(u_{DC}\). The resistance R and the inductance L represent the stator winding of the motor. The angular speed of the rotor \(\underline{\omega }_\text { M}\) generates a back electromotive force (back-EMF) \(u_{EMF}\). The mechanical properties of the motor are described by the motor torque \(\underline{M}_\text { e}\), the load torque \(\underline{M}_\text { L}\) and the moment of inertia of the rotor J. Mesh analysis yields the equation for the electrical part of the motor

$$\begin{aligned} u_{DC}&= Ri + L \frac{di}{dt} + u_{\textit{EMF}} \end{aligned}$$
(7.48)

which can be written in the frequency domain as

$$\begin{aligned} \underline{U}_\text { DC} - \underline{U}_\text { EMF}&= \underline{I}_\text { } (R + sL) \end{aligned}$$
(7.49)

The back electromotive force \(u_{EMF}\) depends on the angular speed of the rotor \(\omega _M\), the back-EMF constant \(k_e\) and the parameter \(F(\phi _e)\), which describes the dependence of the back-EMF on the electrical angle \(\phi _e\):

$$\begin{aligned} u_{EMF}&= k_e \omega _M F(\phi _e) \end{aligned}$$
(7.50)

The motor torque \(M_e\) generated by the motor current i correlates with the mechanical load \(M_L\) and the angular acceleration \(\frac{d\omega _M}{dt}\) of the rotor with the moment of inertia J. It follows:

$$\begin{aligned} M_e&= \frac{i \cdot u_{EMF}}{\omega _M} = ik_eF(\phi _e)= J\frac{d\omega _M}{dt} + M_L \end{aligned}$$
(7.51)

In the frequency domain the mechanical properties of the motor are described by

$$\begin{aligned} M_e - M_L&= sJ\omega _M. \end{aligned}$$
(7.52)

The model takes three different types of non-linearities into account: friction, temperature changes and a non-linear efficiency of the ball screw. The friction is modeled as the sum of a static friction torque \(K_F\) and a dynamic friction torque \(k_F \cdot \omega _M\). The equilibrium of moments at the rotor can now be written as

$$\begin{aligned} M_e - M_L - K_F&= (k_F + sJ)\omega _M. \end{aligned}$$
(7.53)
Fig. 7.37
figure 37

a Equivalent thermal circuit of the EC motor, b efficiency of the ball screw depending on the mechanical load

The influence of temperature changes on the motor parameters is modeled by the thermal equivalent circuit shown in Fig. 7.37a. The temperature change of the stator winding \(\varDelta T_W\) can be determined by

$$\begin{aligned} \varDelta T_W&= \frac{R_{th1} T_{th2}s + R_{th1}R_{th2}}{T_{th1}T_{th2}s^2 + (T_{th1}+T_{th2})s}P_{el} + \frac{R_{th2}}{T_{th1}T_{th2}s^2 + (T_{th1}+T_{th2} + R_{th2}C_{th1})s+1}P_{fric}. \end{aligned}$$
(7.54)

with

$$\begin{aligned} T_{th1} = R_{th1}C_{th1} \quad \text {and} \quad T_{th2}= R_{th2}C_{th2} \end{aligned}$$
(7.55)

The resulting resistance of the stator winding \(R_*\) and the back-EMF constant \(k_{e*}\) can be derived with knowledge of the temperature coefficients \(\alpha _R\), \(\alpha _k\) from

$$\begin{aligned} R_*&= R(1+ \alpha _R \varDelta T_W), \quad k_{e*} = k_e(1+\alpha _k \varDelta T_W). \end{aligned}$$
(7.56)

The efficiency of the ball screw depends on the mechanical load of the linear drive. Its qualitative characteristic is shown in Fig. 7.37b and can be included in the model as a lookup table. The resulting model can be implemented, for example, in Matlab/Simulink and used for simulation and controller design. In this example a cascade controller is chosen (Fig. 7.38). It consists of an inner loop for current control, a middle loop for velocity control and an outer loop for position control. P or PI controllers are used for the individual control loops (a simplified numerical sketch is given after Fig. 7.38).

Fig. 7.38
figure 38

Structure of cascade controller of EC motor
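A compact time-domain sketch of this cascade structure is given below; motor parameters, controller gains and the explicit Euler integration are illustrative assumptions and do not reproduce the values of [32]. Friction torque, temperature effects and the ball screw efficiency are omitted for brevity.

```python
# Sketch of the cascade of Fig. 7.38: position P loop -> speed PI loop -> current
# PI loop, acting on a simplified motor model (Eqs. (7.48) and (7.53) with
# M_L = K_F = 0 and F(phi_e) = 1). All values are assumptions.
R, L_w, k_e, J, k_F = 1.0, 1e-3, 0.05, 1e-4, 1e-4
Kp_pos = 20.0
Kp_vel, Ki_vel = 0.5, 5.0
Kp_cur, Ki_cur = 20.0, 2000.0

dt, x_d = 1e-5, 1.0                                   # step size and demanded position [rad]
phi = omega = i = int_vel = int_cur = 0.0
for _ in range(int(0.5 / dt)):
    omega_d = Kp_pos * (x_d - phi)                    # outer loop: position (P)
    e_vel = omega_d - omega
    int_vel += e_vel * dt
    i_d = Kp_vel * e_vel + Ki_vel * int_vel           # middle loop: speed (PI)
    e_cur = i_d - i
    int_cur += e_cur * dt
    u = Kp_cur * e_cur + Ki_cur * int_cur             # inner loop: current (PI)
    di = (u - R * i - k_e * omega) / L_w              # electrical part, Eq. (7.48)
    domega = (k_e * i - k_F * omega) / J              # mechanical part, simplified Eq. (7.53)
    i += di * dt
    omega += domega * dt
    phi += omega * dt
print("position after 0.5 s [rad]:", phi)
```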

5 Control of Teleoperation Systems

In the previous sections an overview of system description and control aspects in general, which can be used for the design of local and global control laws, was given. The focus of this section lies on special methods used for the modeling of haptic systems and the stability analysis of bilateral telemanipulators. In contrast to Sect. 7.4, special tools for the development of control laws are presented here, which are based upon the two-port hybrid representation of bilateral telemanipulators (Sect. 7.5.1). Subsequently, in Sect. 7.5.2 a definition of transparency will be introduced, which can be used to analyze the performance of a haptic system depending on the system characteristics and the chosen control law. In Sect. 7.5.3 the general control model for telemanipulators will be introduced to close the gap between the closed loop representation, known from general control theory and used in Sects. 7.1–7.4, and the two-port hybrid representation. In Sect. 7.5.4 it will be shown how a stable and safe operation of the haptic system can be achieved. Furthermore, the design of stable control laws in the presence of time delays will be presented in Sect. 7.5.5.

5.1 Two-Port Representation

In general, a haptic system is a bilateral telemanipulator in which a user handles a master device to control a slave device that interacts with an environment. A common representation of a bilateral telemanipulator is the general two-port model shown in Fig. 7.39.

Fig. 7.39
figure 39

General Two-Port-Model of a telemanipulator

User and environment are represented by one-ports, characterized by their mechanical impedances \(\underline{Z}_\text { H}\) and \(\underline{Z}_\text { E}\) as they can be seen as passive elements [33], see Chap. 3. The mechanical impedance \(\underline{Z}_\text { }\) is defined by Eq. (7.57)

$$\begin{aligned} \underline{Z}_\text { } = \frac{\underline{F}_\text { }}{\underline{v}_\text { }} \end{aligned}$$
(7.57)

The user manipulates the master device, which controls the slave device. The slave interacts with the environment. The behavior of the telemanipulator is described by its hybrid matrix \(\textbf{H}\) [21, 43]. The coupling of user action and interaction with the environment is thus described by the following hybrid matrix, taking into account the forces and velocities at the master and slave side and the properties of the haptic system.

$$\begin{aligned} \begin{pmatrix} \underline{F}_\text { H} \\ -\underline{v}_\text { E} \end{pmatrix} = \begin{pmatrix} \underline{h}_\text { 11} & \underline{h}_\text { 12} \\ \underline{h}_\text { 21} & \underline{h}_\text { 22} \end{pmatrix} \begin{pmatrix} \underline{v}_\text { H} \\ \underline{F}_\text { E} \end{pmatrix} \end{aligned}$$
(7.58)

In this case, the four h-parameters represent

$$\begin{aligned} \begin{pmatrix} \underline{h}_\text { 11} & \underline{h}_\text { 12} \\ \underline{h}_\text { 21} & \underline{h}_\text { 22} \end{pmatrix} = \begin{pmatrix} \text {input impedance} & \text {backward force gain} \\ \text {forward velocity gain} & \text {output admittance} \end{pmatrix} \end{aligned}$$
(7.59)

Please note that the velocity of the slave \(\underline{v}_\text { E}\) is taken into account with a negative sign. This is done to fulfill the convention for general two-ports, where the flow always points into a port. The hybrid two-port representation shown above is often used to determine stability criteria and to describe performance properties of bilateral telemanipulators. Besides this formulation with force as flow variable (also found in [21, 35], for example), velocity is used as flow variable in other two-port descriptions of bilateral telemanipulators [25]. As long as the coupling is defined by the impedance formulation given in Eq. (7.57), both variants of the two-port description are interchangeable.

5.2 Transparency

Besides system stability, performance is an important design criterion in the development of haptic systems. The function of a haptic system is to provide high-fidelity force feedback of the contact force at the slave side to the user manipulating the master device of the telemanipulator. One parameter often used to evaluate the haptic sensation presented to the user is transparency. If the user interacts directly with the environment, he or she experiences a haptic sensation which is determined by the mechanical impedance \(\underline{Z}_\text { E}\) of the environment. If the user is coupled to the environment via a telemanipulation system, he or she experiences a force impression which is determined by the backward force gain and the mechanical input impedance of the master device. It is desirable that the haptic sensation for the user of the telemanipulator is the same as when interacting directly with the environment. Therefore, the telemanipulator has to display the mechanical impedance of the environment \(\underline{Z}_\text { E}\) at the master device. Assume that \(h_{12} = h_{21} = 1\), so there is no scaling of velocity or force. Then the following conditions have to hold to reach full transparency:

$$\begin{aligned} \underline{F}_\text { H} = \underline{F}_\text { E} \quad \text {and} \quad \underline{v}_\text { H} = \underline{v}_\text { E}. \end{aligned}$$
(7.60)

From this it follows that for perfect transparency [35]

$$\begin{aligned} \underline{Z}_\text { H} = \underline{Z}_\text { E} \end{aligned}$$
(7.61)

Therefore the force experienced by the user at the master device is

$$\begin{aligned} \underline{F}_\text { H} = \underline{h}_\text { 11} \underline{v}_\text { H} + \underline{h}_\text { 12} \underline{F}_\text { E} \end{aligned}$$

and for the velocity at the slave side holds

$$\begin{aligned} -\underline{v}_\text { E} = \underline{h}_\text { 21} \underline{v}_\text { H} + \underline{h}_\text { 22} \underline{F}_\text { E}. \end{aligned}$$

Therefore the mechanical impedance displayed by the master and felt by the user is described by

$$\begin{aligned} \underline{Z}_\text { T} = \frac{\underline{F}_\text { T}}{\underline{v}_\text { T}} = \frac{\underline{h}_\text { 11} \underline{v}_\text { H} + \underline{h}_\text { 12} \underline{F}_\text { E} }{\frac{\underline{v}_\text { E} - \underline{h}_\text { 22} \underline{F}_\text { E}}{\underline{h}_\text { 21}}} \end{aligned}$$
(7.62)

By analyzing Eq. (7.62), the conditions for perfect transparency can be derived: the input impedance at the master side and the output admittance at the slave side have to be zero. It follows that for perfect transparency, in the case of no scaling, the hybrid matrix has to take the form

$$\begin{aligned} \textbf{H} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \end{aligned}$$

where the negative sign of \(h_{21}\) results from the sign convention chosen for \(\underline{v}_\text { E}\) in Eq. (7.58); its magnitude remains one, i.e. there is no scaling. It is obvious that perfect transparency is in practice not achievable without further measures, due to the non-zero input impedance \(\underline{h}_\text { 11}\) and output admittance \(\underline{h}_\text { 22}\) of the manipulator system. If the input impedance were zero, the user would not feel the mechanical properties of the master device (mass, friction, compliance). An output admittance of zero corresponds to an ideally stiff slave device.
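Eliminating \(\underline{v}_\text { E}\) and \(\underline{F}_\text { E}\) from the two-port relations with the environment modeled as \(\underline{F}_\text { E} = \underline{Z}_\text { E}\,\underline{v}_\text { E}\) gives a closed-form expression for the transmitted impedance, which the hedged sketch below evaluates numerically; the numerical values are assumptions.

```python
# Sketch: impedance displayed at the master, derived from the hybrid two-port
# relations (7.58) with the environment modeled as F_E = Z_E * v_E.
def transmitted_impedance(h11, h12, h21, h22, Z_E):
    # Z_T = (h11 + (h11*h22 - h12*h21) * Z_E) / (1 + h22 * Z_E)
    return (h11 + (h11 * h22 - h12 * h21) * Z_E) / (1.0 + h22 * Z_E)

# ideal hybrid matrix (h11 = h22 = 0, h12 = 1, h21 = -1): the environment is displayed unchanged
print(transmitted_impedance(0.0, 1.0, -1.0, 0.0, Z_E=3 + 4j))   # -> (3+4j)
```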

5.2.1 A Perception-Oriented Consideration of Transparency

To obtain a transparent system, the system engineer has two options: work on the control structure, as described in the following sections, or consider the perception capabilities of the human user in the definition of transparency. The latter is the focus of this section, which is based on the more detailed elaborations in [27]. It has to be noted that this approach still lacks experimental evaluation.

Up to now, transparency as defined in Eqs. (7.60) and (7.61) is a binary criterion: a system is either transparent, if all conditions are fulfilled, or not transparent, if one of the equalities does not hold. Notwithstanding this binary formulation, one can define the absolute transparency error \(\underline{e}_\text { T}\) according to Heredia et al. as shown in Eq. (7.63) [30]

$$\begin{aligned} \underline{e}_\text { T} = \underline{Z}_\text { H} - \underline{Z}_\text { E} \end{aligned}$$
(7.63)

and the relative transparency error \(\underline{e'}_\text { T}\) as shown in Eq. (7.64)

$$\begin{aligned} \underline{e'}_\text { T} = \frac{\underline{Z}_\text { H} - \underline{Z}_\text { E}}{\underline{Z}_\text { H}} \end{aligned}$$
(7.64)

When analyzed along the whole intended dynamic range and in all relevant \(\hookrightarrow \) DoF of the haptic system, Eqs. (7.63) and (7.64) allow for the quantitative comparison of different haptic systems and can give insight in the relevant ranges of frequency that have to be optimized for a more transparent system. They also provide the basis for the integration of perception properties in the assessment of transparency.

From the above-mentioned definitions of transparency (Eqs. (7.60) and (7.61)) one can conclude that \(\underline{e}_\text { T} = \underline{e'}_\text { T} {\mathop {=}\limits ^{!}}0\) has to hold to fulfill the requirement of transparency. On the other hand, it is obvious that a human user will not perceive all possible mechanical impedances, since the perception capabilities are limited as shown in Sect. 2.1. To obtain a quantified admissible range for \(\underline{e}_\text { T}\) and \(\underline{e'}_\text { T}\), a thought experimentFootnote 1 is conducted in the following [46].

Experiment Assumptions

The following assumptions are made for the thought experiment about the user and the teleoperation scenario:

  1. 1.

    Linear behavior of haptic perception as discussed in Sect. 2.1.4.2 is assumed, which holds for a wide range of tool-mediated teleoperation scenarios. Super-threshold perception properties like masking are neglected.

  2. 2.

    For each user there exists a known mechanical impedance \(\underline{Z}_\text { user}\). This impedance generally depends on external parameters like temperature and contact force, as shown in Chap. 3. All of these parameters are assumed to be known and invariant over the course of the experiment. Further, a set of frequency-dependent sensory thresholds for deflections and forces exists; they are labeled \(d_\theta \) and \(F_\theta \) respectively. Both thresholds can be coupled using the mechanical impedance of the user and the angular frequency \(\omega = 2 \pi f\) of the haptic signal as stated in Eq. (7.65) [28].

    $$\begin{aligned} |\underline{Z}_\text { user}| = \left| \frac{F_\theta }{j\omega d_\theta }\right| \end{aligned}$$
    (7.65)
  3. 3.

    The user is able to impose an interaction force \(\underline{F}_\text { user,int}\) or deflection \(\underline{d}_\text { user,int}\) on the teleoperation system that does not necessarily trigger a sensation event at the contact point. This is for example possible by the movement of an arm, while only the fingertips are in contact with the teleoperation system.

  4. 4.

    The teleoperation system is perfectly transparent, i.e. \(|\underline{e}_\text { T}| = 0\) for all frequencies. The system is able to read and display forces and deflections reproducible below the absolute thresholds of the user.

  5. 5.

    The environment is considered passive for simplification reasons.

Thought Experiment

For the experiment, an impedance type system is assumed, i.e. the user imposes a deflection on the haptic interface of the teleoperation system and interaction forces measured are displayed to the user. First, we assume an environment impedance \(\underline{Z}_\text { E} < \underline{Z}_\text { user}\). Further evaluation leads to Eq. (7.66).

$$\begin{aligned} \underline{Z}_\text { E} = \frac{\underline{F}_\text { E}}{j \omega \underline{d}_\text { E}} < \frac{\underline{F}_\text { user}}{j \omega \underline{d}_\text { user}} = \underline{Z}_\text { user} \end{aligned}$$
(7.66)

For an impedance type system, the user can be modeled as a source of deflection or velocity. In that case, the deflection induced in the teleoperation system equals the deflection of the environment, \(\underline{d}_\text { user,int} = \underline{d}_\text { H} = \underline{d}_\text { E}\). With Eq. (7.66) this leads to \(\underline{F}_\text { H} = \underline{F}_\text { E}<\underline{F}_\text { user}\). Assuming that the deflection \(\underline{d}_\text { user,int}\) imposed by the user is smaller than the user’s detection threshold \(d_\theta \) (assumption no. 3), the resulting force displayed to the user \(|\underline{F}_\text { user}|\) is smaller than the individual force threshold \(F_\theta \) according to Eq. (7.65).

This experiment can easily be extended to admittance type systems. Descriptively, the result can be interpreted as the environment “evading” manipulation, as for example a hand moving slowly in free air: the arm muscles serve as a deflection source moving the hand, but the interaction forces of the air molecules are too small to be detected.

For large environment impedances, the inequalities above are reversed. In that case, the forces or deflections resulting from the interaction are larger than the detection threshold and the user will feel an interaction with the environment.

Experiment Analysis

From the experiment one can reason that the user impedance limits the transparency error function from Eq. (7.64): environment impedances lower than the user impedance are neglected, as shown in Eq. (7.67).

$$\begin{aligned} \underline{e'}_\text { T} = \frac{\underline{Z}_\text { H} - \max {(\underline{Z}_\text { E},\underline{Z}_\text { user})}}{\max {(\underline{Z}_\text { E},\underline{Z}_\text { user})}} \end{aligned}$$
(7.67)

If the user impedance is greater than the environment impedance, the user impedance is used, since the user will not feel any haptic stimuli generated by the lower environment impedance. If the user impedance is smaller than the environment impedance, the environment impedance is used as a reference for the transparency error.

Up to now, only absolute detection thresholds were considered, which describe the detection properties of haptic perception. In a second step, the discrimination properties shall be considered in more detail. It is assumed that a system is transparent enough for satisfactory usage if the errors are smaller than the differences that can be detected by the user. This difference can be described in a conservative way by the \(\hookrightarrow \) JND as defined in Sect. 2.1. With that, a limit can be imposed on Eq. (7.67) as given by Eq. (7.68)

$$\begin{aligned} \underline{e'}_\text { T} = \frac{\underline{Z}_\text { t} - \max {(\underline{Z}_\text { e},\underline{Z}_\text { user})}}{\max {(\underline{Z}_\text { e},\underline{Z}_\text { user})}} < c_\text {JND(z)} \end{aligned}$$
(7.68)

This limit \(c_\text {JND(z)}\) is defined as the JND of an arbitrary mechanical impedance. Although this value cannot be measured directly, it can be bounded either by the JNDs of ideal components like springs, masses and viscous dampers (see Sect. 2.1 for values) or by the JNDs of forces and deflections (since a change in impedance can be detected if the resulting force or deformation for a fixed imposed deflection or force, respectively, exceeds the JND). With known values, this leads to a probably sufficient limit of \(\left| \underline{e'}_\text { T}\right| \le 3\,{\text {dB}}\).

Equation (7.68) thus provides a perception-based error term for the transparency of haptic teleoperation systems. One has to keep in mind the assumptions of the underlying thought experiment and the fact that the experimental evaluation of this approach is still the focus of current research activities by the authors.
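A small numerical sketch of Eq. (7.68) is given below; taking the maximum of the impedance magnitudes and expressing the 3 dB bound as a relative error are interpretation choices of this sketch, not prescriptions from the source.

```python
# Sketch: perception-oriented transparency check following Eq. (7.68).
def transparent_enough(Z_T, Z_E, Z_user, jnd_db=3.0):
    Z_ref = Z_E if abs(Z_E) >= abs(Z_user) else Z_user        # max(Z_E, Z_user) by magnitude
    e_rel = abs(Z_T - Z_ref) / abs(Z_ref)                     # |e'_T|
    return e_rel, e_rel <= 10.0 ** (jnd_db / 20.0) - 1.0      # assumed 3 dB bound

print(transparent_enough(Z_T=1.2 + 0.1j, Z_E=1.0 + 0.0j, Z_user=0.2 + 0.0j))
```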

5.3 General Control Model for Teleoperators

In principle, a telemanipulator system can be divided into three different layers as shown in Fig. 7.40. The first layer contains the mechanical, electrical and local control properties of the master device. The second layer represents the communication channels between master and slave and therefore any occurring time delays. The third layer describes the mechanical, electrical and local control properties of the slave device. As mentioned before, the dynamic behavior of a master and, accordingly, a slave device (first and third layer) is determined by its mechanical and electrical characteristics. Depending on the type of actuator used in the master or slave device, a distinction is made between impedance and admittance devices. Impedance devices receive a force command and apply a force to their environment. In contrast, admittance devices receive a velocity command and behave as a velocity source interacting with the environment (see Chap. 6).

Fig. 7.40
figure 40

Schematic illustration of a telemanipulator

Customarily, the dominant parameters are the mass and friction of the device. Compliance can be minimized by a well-considered mechanical design. In addition, it can be assumed that the dynamic characteristics of the electronics can be disregarded, because the mechanical design dominates the overall performance of the device. A local controller may extend the usable frequency range of the device and can guarantee its stable operation. In addition, it is possible to change the characteristics of the device from impedance behavior to admittance behavior and vice versa [25].

The second layer describes the characteristics of the communication channel. The significant physical values which have to be transmitted between master and slave manipulator are the forces and velocities at the master and slave side. Therefore, telemanipulators exhibit at least two and up to four communication channels for transmitting these values. These communication paths may be afflicted with a significant time delay T, which can cause instability of the whole system.

Fig. 7.41
figure 41

System block diagram of a general telemanipulator in impedance-impedance-architecture as shown in [24]

Figure 7.41 shows the system block diagram of a general four-channel bilateral telemanipulator using impedance type actuators, for instance electric motors, for the master and slave manipulator [24, 35]. In total there are four possible combinations of impedance and admittance devices: impedance-impedance, impedance-admittance, admittance-impedance and admittance-admittance.

In this section the impedance-impedance architecture is used due to its widespread application and the high availability of suitable hardware. The forces of user and environment \(\underline{F}_\text { H}\) and \(\underline{F}_\text { E}\) are independent values. The mechanical impedances of user and environment are described by \(\underline{Z}_\text { H}\) and \(\underline{Z}_\text { E}\). The communication layer consists of four transmission elements \(C_1\), \(C_2\), \(C_3\) and \(C_4\) for transmitting the contact forces and velocities \(\underline{v}_\text { H}\), \(\underline{F}_\text { E}\), \(\underline{F}_\text { H}\) and \(\underline{v}_\text { E}\) between master and slave side. \(\underline{Z}_\text { m}^{-1}\) and \(\underline{Z}_\text { s}^{-1}\) represent the mechanical admittances of the master and slave manipulator. In addition, \(C_\text {mP}\) and \(C_\text {sP}\) are local master and slave position controllers and \(C_\text {mF}\) and \(C_\text {sF}\) are local force controllers.

The dynamics of the four-channel architecture are described by the following equations:

$$\begin{aligned} \underline{F}_\text { CM}&= C_\text {mF}\underline{F}_\text { H} - C_\text {4}e^{-sT}\underline{v}_\text { E} - C_\text {2}e^{-sT}\underline{F}_\text { E} - C_\text {mP}\underline{v}_\text { H} \\ \underline{F}_\text { CS}&= C_\text {1}e^{-sT}\underline{v}_\text { H} + C_\text {3}e^{-sT}\underline{F}_\text { H} - C_\text {sF}\underline{F}_\text { E} - C_\text {sP}\underline{v}_\text { E} \\ \underline{Z}_\text { s}\underline{v}_\text { E}&= \underline{F}_\text { CS} - \underline{F}_\text { E} \\ \underline{Z}_\text { m}\underline{v}_\text { H}&= \underline{F}_\text { CM} + \underline{F}_\text { H} \\ \end{aligned}$$

So the closed loop dynamics of the telemanipulator are represented by

(7.69)
(7.70)

As presented in Sect. 7.5.1, it is common to describe the dynamics of a telemanipulator by a two-port representation, and several stability analysis methods can be applied to the two-port model. From Eqs. (7.69) and (7.70) together with (7.58), the following h-parameters are obtained:

$$\begin{aligned} \underline{h}_\text { 11}&= \frac{(\underline{Z}_\text { m} + C_{mP})\cdot (\underline{Z}_\text { s} + C_{sP}) + C_1 C_4e^{-2sT}}{(1 + C_{mF})\cdot (\underline{Z}_\text { s} + C_{sP}) -C_3 C_4e^{-2sT}} \end{aligned}$$
(7.71)
$$\begin{aligned} \underline{h}_\text { 12}&= \frac{C_2(\underline{Z}_\text { s} + C_{sP})e^{-sT} - C_4(1 + C_{sF})e^{-sT}}{(1 + C_{mF})\cdot (\underline{Z}_\text { s} + C_{sP}) -C_3 C_4e^{-2sT}} \end{aligned}$$
(7.72)
$$\begin{aligned} \underline{h}_\text { 21}&= -\frac{C_3(\underline{Z}_\text { m} + C_{mP})e^{-sT} + C_1(1 + C_{mF})e^{-sT}}{(1 + C_{mF})\cdot (\underline{Z}_\text { s} + C_{sP}) -C_3 C_4e^{-2sT}} \end{aligned}$$
(7.73)
$$\begin{aligned} \underline{h}_\text { 22}&= \frac{(1 + C_{sF})\cdot (1 + C_{mF}) - C_2 C_3e^{-2sT}}{(1 + C_{mF})\cdot (\underline{Z}_\text { s} + C_{sP}) -C_3 C_4e^{-2sT}} \end{aligned}$$
(7.74)

With Eq. (7.62) and Eqs. (7.71)–(7.74) the impedance transmitted to the user \(\underline{Z}_\text { T}\) is given by Eq. (7.75) [25].

(7.75)

Perfect transparency is achievable if the time delay T is negligible. The controllers must then fulfill the following conditions, which are known as the transparency-optimized control law [24, 35]:

$$\begin{aligned} C_1&= \underline{Z}_\text { s} + C_\text {sP} \nonumber \\ C_2&= 1 + C_\text {mF} \nonumber \\ C_3&= 1 + C_\text {sF} \nonumber \\ C_4&= -\left( \underline{Z}_\text { m} + C_\text {mP}\right) \nonumber \\ C_2, C_3&\ne 0 \end{aligned}$$
(7.76)

By using the local position and force controllers of master and slave, \(C_\text {mP}\), \(C_\text {sP}\), \(C_\text {mF}\) and \(C_\text {sF}\), perfect transparency can be achieved with only three communication channels. In this case the force feedback channel from slave to master \(C_2\) can be omitted [24, 26].

The most common control architecture is the forward-flow architecture [21], also known as force feedback or position-force architecture [35], which uses the two channels \(C_1\) and \(C_2\); \(C_3\) and \(C_4\) are set to zero. The position or velocity \(\underline{v}_\text { H}\) at the master manipulator is transmitted to the slave. The slave manipulator feeds back the contact force between manipulator and environment \(\underline{F}_\text { E}\). Due to the uncompensated impedances of master and slave devices, perfect transparency is not achievable by telemanipulators built in the basic forward-flow architecture. This architecture has been described and analyzed by many authors [8, 9, 21, 22, 25, 35].
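The following numerical sanity check (with arbitrary, assumed parameter values at a single frequency) evaluates Eqs. (7.71)–(7.74) for zero time delay under the transparency-optimized law of Eq. (7.76); the hybrid matrix then reduces to \(h_{11}=h_{22}=0\), \(h_{12}=1\), \(h_{21}=-1\).

```python
# Sketch: Eqs. (7.71)-(7.74) evaluated with T = 0 and the transparency-optimized
# controllers of Eq. (7.76). All numerical values are assumptions.
Z_m, Z_s = 2.0 + 1.5j, 1.0 + 0.8j                  # master/slave impedances at one frequency
C_mP, C_sP, C_mF, C_sF = 4.0, 6.0, 0.5, 0.7        # assumed local position/force controllers

C1, C2, C3, C4 = Z_s + C_sP, 1 + C_mF, 1 + C_sF, -(Z_m + C_mP)   # Eq. (7.76)
den = (1 + C_mF) * (Z_s + C_sP) - C3 * C4                        # common denominator (T = 0)

h11 = ((Z_m + C_mP) * (Z_s + C_sP) + C1 * C4) / den
h12 = (C2 * (Z_s + C_sP) - C4 * (1 + C_sF)) / den
h21 = -(C3 * (Z_m + C_mP) + C1 * (1 + C_mF)) / den
h22 = ((1 + C_sF) * (1 + C_mF) - C2 * C3) / den
print(h11, h12, h21, h22)        # approximately 0, 1, -1, 0
```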

5.4 Stability Analysis of Teleoperators

Besides the general stability analysis for dynamic systems from Sect. 7.2, several approaches for the stability analysis of haptic devices have been published. Most of them use the two-port representation introduced in Sect. 7.5.1 for stability analysis and controller design and were derived from classical network theory and communications technology. The following section gives an introduction to the most important of them and also presents methods to guarantee stability of the system under time delay.

5.4.1 Passivity

The concept of passivity for dynamic systems has been introduced in Sect. 7.2.2. Within this subsection the focus is on the application of this concept to the stability analysis of haptic devices. Assume the two-port representation of a telemanipulator as presented in Fig. 7.40. Furthermore, it shall be assumed that the energy stored in the system at time \(t=0\) is \(V(t=0)=0\). The power \(P_\text {in}\) at the input of the system at a time t is given by the product of the force \(F_\text {H}(t)\) applied by the user to the master and the master velocity \(v_\text {H}(t)\).

$$\begin{aligned} P_\text {in}&= F_\text {H}(t) \cdot v_\text {H}(t) \end{aligned}$$

Accordingly the power \(P_\text {out}\) at the output of the telemanipulator is given by the contact force of the slave \(F_\text {E}(t)\) manipulating the environment times the velocity of the slave \(v_\text {E}(t)\)

$$\begin{aligned} P_\text {out}&= F_\text {E}(t) \cdot v_\text {E}(t) \end{aligned}$$

Thus the telemanipulator is passive and therefore stable as long as the following inequality is fulfilled.

$$\begin{aligned} \int _0^t{\left( P_\text {in}(\tau ) - P_\text {out}(\tau ) \right) {\text {d}}\tau } = \int _0^t{\left( F_\text {H}(\tau ) \cdot v_\text {H}(\tau ) - F_\text {E}(\tau ) \cdot v_\text {E}(\tau ) \right) {\text {d}}\tau } \ge V(t) \end{aligned}$$
(7.77)

Alternatively the criterion can be expressed in the form of the time derivative of Eq. (7.77)

$$\begin{aligned} F_\text {H}(t) \cdot v_\text {H}(t) - F_\text {E}(t) \cdot v_\text {E}(t) \ge \dot{V}(t) \end{aligned}$$
(7.78)

From Eq. (7.77) and Eq. (7.78), respectively, it can be seen that the telemanipulator must not generate energy in order to be passive. Thus, a very simple method to obtain a stable telemanipulator system is to implement additional damping, which, however, decreases the performance of the system.
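In discrete time, the integral of Eq. (7.77) can be monitored online from sampled signals, which is the core idea of a passivity observer; the following sketch (function name and the simple rectangular integration are assumptions) accumulates the net energy entering the two-port.

```python
# Sketch: running energy balance of Eq. (7.77) from sampled forces and velocities.
import numpy as np

def net_energy(F_H, v_H, F_E, v_E, dt):
    p_in = np.asarray(F_H) * np.asarray(v_H)     # power supplied by the user at the master
    p_out = np.asarray(F_E) * np.asarray(v_E)    # power delivered to the environment
    return np.cumsum(p_in - p_out) * dt          # running value of the integral (V(0) = 0)

# usage: E = net_energy(F_H, v_H, F_E, v_E, dt=1e-3); the recorded behavior is
# consistent with passivity as long as np.all(E >= 0) holds.
```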

In the frequency domain, passivity of the system can be analyzed by using the immittance matrix of the transfer function [8, 9, 13,14,15, 40, 42, 43]. A system is passive, and hence inherently stable, if the immittance matrix G(s) of the n-port network is positive real. The criteria for positive realness of the immittance matrix, which have to be satisfied, are [7, 29]:

  1. 1.

    G(s) has real elements for real s

  2. 2.

    The elements of G(s) have no poles in \({\text {Re}}(s)>0\) and poles on the \(j\omega \)-axis are simple, and such that the associated residue matrix is non-negative definite Hermitian

  3. 3.

    For any real value of \(\omega \) such that no element of \(G(j\omega )\) has a pole for this value, \(G(j\omega )+G^H(j\omega )\) is non-negative definite Hermitian, where \(G^H\) denotes the conjugate transpose

    For real rational G(s), points 1 and 3 may be replaced by

  4. 4.

    \(G(s)+G^H(s)\) is non-negative definite Hermitian in \({\text {Re}}(s)>0\).

User and environment can be seen as passive [33]. Therefore, if passivity of the telemanipulator system can be proven, the whole closed loop of user, telemanipulator and environment is guaranteed to be passive and hence stable. It has been shown that a robust (passive) control law and transparency are conflicting objectives in the design of telemanipulators [35]. In many cases the haptic sensation presented to the user will be poor if a fixed damping value is used to guarantee passivity of the telemanipulator. Thus, a newer approach combines a passivity based control law with improved performance by implementing a passivity observer and a passivity controller. The passivity controller increases the damping of the system only when needed to guarantee stability. A further benefit of this concept is that no parameter estimation for the dynamic model of the telemanipulator has to be done and, if considered, uncertainties can be compensated [23, 44].

5.4.2 Absolute Stability Criterion (Llewellyn)

A stability criterion for linear two-ports has been derived by Llewellyn [12, 29, 36]. His motivation was the investigation of generalized transmission lines and active networks. Later, several authors used the criteria formulated by Llewellyn to analyze the stability of telemanipulators or to design control laws for bilateral teleoperation [3,4,5, 25]. The criterion is formulated in the frequency domain, and it is assumed that the two-port is linear and time-invariant, at least locally [2]. A linear two-port is absolutely stable if and only if there exists no set of passive terminations for which the system is unstable.

The following criteria provide both necessary and sufficient conditions for absolute stability for linear two-ports.

  1. 1.

    G(s) has no poles in the right half s-plane, only simple poles on the imaginary axis

  2. 2.

    \({\text {Re}}(g_{11})>0\), \({\text {Re}}(g_{22})>0\)

  3. 3.

    \(2\,{\text {Re}}(g_{11})\,{\text {Re}}(g_{22}) \ge \left| g_{12}\,g_{21}\right| + {\text {Re}}(g_{12}\,g_{21})\) for all real values of \(\omega \).

Conditions 1 and 2 guarantee passivity of the system when there is no coupling between master and slave. This case occurs when master or slave is free or clamped. Condition 3 guarantees stability if master and slave are coupled.

These criteria may be applied to any type of immittance matrix, i.e. the impedance matrix, admittance matrix, hybrid matrix or inverse hybrid matrix. If the criteria are fulfilled for one form of the immittance matrix, they are fulfilled for the other three forms as well. A network for which \(h_{21} = - h_{12}\) holds, which is equivalent to \(z_{21}=z_{12}\), is said to be reciprocal. In this particular case the tests for passivity and absolute stability coincide. A passive network will always be absolutely stable, but an absolutely stable network is not necessarily passive. A two-port which is not absolutely stable is potentially unstable, but this does not mean that it is definitely unstable, as shown in Fig. 7.42 (conditions 2 and 3 are sketched numerically after the figure).

Fig. 7.42
figure 42

Stability-Activity diagram
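For a sampled immittance matrix, conditions 2 and 3 can be checked numerically; the sketch below states condition 3 in the form given above and uses assumed example values (condition 1 on the pole locations has to be verified separately).

```python
# Sketch: Llewellyn's conditions 2 and 3 for one frequency sample of G(jw).
def llewellyn_conditions(g11, g12, g21, g22):
    cond2 = g11.real > 0 and g22.real > 0
    cond3 = 2 * g11.real * g22.real >= abs(g12 * g21) + (g12 * g21).real
    return cond2 and cond3

# an assumed, well-damped example: passes both conditions
print(llewellyn_conditions(1.0 + 0.2j, 0.5 + 0j, 0.5 + 0j, 1.0 + 0.1j))   # True
```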

5.5 Effects of Time Delay

When master and slave are far apart from each other, communication data have to be transmitted over a long distance with significant time delays, which can lead to instability unless the bandwidth of the signals entering the communication block is severely limited. The reason for this is a non-passive communication block [8]: energy is generated inside the communication block.

5.5.1 Scattering Theory

Anderson [8,9,10] used scattering theory to find a stable control law for bilateral teleoperation systems with time delay. Scattering variables are well known from transmission line theory. The scattering operator S maps effort plus flow into effort minus flow and is defined in terms of an incident wave \(F(t)+v(t)\) and a reflected wave \(F(t)-v(t)\):

$$\begin{aligned} F(t)-v(t) = S(t)\left( F(t)+v(t)\right) \end{aligned}$$

For LTI systems S can be expressed in the frequency domain as follows:

$$\begin{aligned} F(s)-v(s) = S(s)\left( F(s)+v(s)\right) \end{aligned}$$

In the case of a two-port, the scattering matrix can be related to the hybrid matrix \(\textbf{H}(s)\) by a loop transformation, which leads to

$$\begin{aligned} S(s) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \left( \textbf{H}(s) - \textbf{I}\right) \left( \textbf{H}(s) + \textbf{I}\right) ^{-1} \end{aligned}$$

To ensure passivity of the system the reflected wave must not carry higher energy content than the incident wave. Therefore a system is passive if and only if the norm of its scattering operator S(s) is less than or equal to one [8].

$$\begin{aligned} \left\| S(s) \right\| _\infty \le 1 \end{aligned}$$

5.5.2 Wave Variables

Wave variables were used by Niemeyer [40, 42] to design a robust control strategy for bilateral telemanipulation with time delay. The approach separates the total power flow into two parts, one representing the power flowing into the system and the other the power flowing out of the system. These two parts are then associated with input and output waves. This approach is also valid for non-linear systems. Assume the two-port shown in Fig. 7.43 using \(\dot{x}_M\) and \(F_S\) as inputs.

Fig. 7.43
figure 43

Wave based teleoperator model

Therefore the power flow through the two-port can be written as

$$\begin{aligned} P(t)&=\dot{x}^T_M F_M - \dot{x}^T_S F_S = \frac{1}{2}u^T_M u_M - \frac{1}{2} v_M^T v_M + \frac{1}{2} u_S^T u_S - \frac{1}{2} v_S^T v_S. \end{aligned}$$

Here the vectors \(\textbf{u}_M\) and \(\textbf{u}_S\) are input waves, which increase the power flow into the system. Analogously, \(\textbf{v}_M\) and \(\textbf{v}_S\) are output waves, which decrease the power flow into the system. Note that the velocity is denoted here as \(\dot{x}\). The transformation from power variables to wave variables is described by

$$\begin{aligned} u_M&= \frac{1}{\sqrt{2b}}(F_M + b \dot{x}_M)\\ u_S&= \frac{1}{\sqrt{2b}}(F_S - b \dot{x}_S)\\ v_M&= \frac{1}{\sqrt{2b}}(F_M - b \dot{x}_M)\\ v_S&= \frac{1}{\sqrt{2b}}(F_S + b \dot{x}_S) \end{aligned}$$

The wave impedance b relates velocity to force and represents an opportunity to tune the behavior of the system. Large values of b lead to an increased force feedback at the cost of high inertial forces. Small values of b lower any unwanted sensations, so fast movements are possible, but they also decrease the force impression of contact forces between slave and environment [41]. The wave transformation can be inverted to provide the power variables as functions of the wave variables.

$$\begin{aligned} F_M&= \sqrt{\frac{b}{2}}(u_M + v_M)\\ F_S&= \sqrt{\frac{b}{2}}(u_S + v_S)\\ \dot{x}_M&= \frac{1}{\sqrt{2b}}(u_M - v_M)\\ \dot{x}_S&= -\frac{1}{\sqrt{2b}}(u_S - v_S) \end{aligned}$$

By transmitting the wave variables instead of the power variables the system remains stable even if the time-delay T is not known [40]. Note that when the actual time-delay T is reduced to zero, transmitting wave variables is identical to transmitting velocity and force.
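The listed relations translate directly into an encode/decode pair; the scalar sketch below merely restates them for the master side and checks the round trip (the function names are assumptions).

```python
# Sketch: master-side wave transformation with wave impedance b.
import math

def encode_master(F_M, xdot_M, b):
    u_M = (F_M + b * xdot_M) / math.sqrt(2 * b)   # wave travelling towards the slave
    v_M = (F_M - b * xdot_M) / math.sqrt(2 * b)   # wave returning to the master
    return u_M, v_M

def decode_master(u_M, v_M, b):
    F_M = math.sqrt(b / 2) * (u_M + v_M)
    xdot_M = (u_M - v_M) / math.sqrt(2 * b)
    return F_M, xdot_M

print(decode_master(*encode_master(1.0, 0.2, b=5.0), b=5.0))   # -> (1.0, 0.2) round trip
```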

6 Control of Rehabilitation Robots

In this section, some control strategies are explained briefly while avoiding mathematical formulations. A rehabilitation robot needs to fulfill two requirements to be effective and comfortable. First, high-accuracy trajectory tracking is needed to precisely follow the trajectory predefined by the physiotherapist. Second, harsh interaction forces or torques during the therapy must be avoided, since the patient usually is not able to control his or her muscles, so unpredicted movements occur. Therefore, the robot must suppress these undesired interactions in such a way that the patient does not experience any harsh force or torque. Some strategies to meet these requirements are discussed in the following sections.

6.1 Control Strategies

The first controller choice is the well-known PID controller due to its simple structure and tuning rules. However, due to the highly nonlinear characteristics of rehabilitation robots, PID, fuzzy-PID, or adaptive-PID controllers result in significant undesired overshoot and response delay. Overshoot increases the discomfort of the patients and, if it is too large, can cause harm to them. Therefore, a highly robust and stable control structure such as SMC is needed. Many variations of SMC are used in this field, such as adaptive SMC, terminal SMC, and super-twisting nonsingular terminal SMC. The main drawback of SMC is the chattering phenomenon due to the signum function and high-frequency switching when the system reaches the sliding surface.

In [6], a super-twisting nonsingular terminal sliding mode control (ST-NTSMC) is designed to guarantee the predefined trajectory tracking accuracy of a knee and ankle rehabilitation robot (KARR). As mentioned previously, the super-twisting algorithm eliminates the chattering of SMC while keeping the tracking accuracy. The nonsingular terminal SMC is used to enhance the convergence speed and steady-state tracking of linear SMC without singularity. In rehabilitation, the goal is to track the joint trajectory predefined by the physiotherapist while considering the patient's condition: post-stroke patients, for example, may move their muscles involuntarily and exert torques on the robot, which are undesirable and could result in an uncomfortable situation or even worsen the patient's condition. Using admittance control before the ST-NTSMC can suppress this problem. As depicted in Fig. 7.44, instead of feeding the reference trajectory directly to the ST-NTSMC loop, a modified trajectory is used as the input of the SMC loop. This modification is done by measuring the interaction torque and applying it to a dynamic model to calculate the resulting change of the trajectory using Eq. (7.79).

$$\begin{aligned} M\ddot{\tilde{x}} + C\dot{\tilde{x}}+K\tilde{x} = \tau _{int} \end{aligned}$$
(7.79)

where \(\tilde{x} = x_r-x_m\), \(x_r\) is the predefined trajectory, and \(x_m\) is the modified smooth trajectory that the ST-NTSMC will follow. The parameters of this dynamic model define the smoothness of the trajectory change. As a result, the predefined trajectory is adjusted in the direction of the interaction torque to eliminate uncomfortable forces/torques.

Fig. 7.44 The control structure of the rehabilitation robot [6]

As a result, the system allows deviation from the reference trajectory when an undesired interaction torque occurs, while accurately tracking the predefined trajectory when there is no interaction torque.
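
The trajectory modification itself can be obtained by integrating equation (7.79) sample by sample. The following sketch shows one possible discrete-time implementation; the function name and parameter values are illustrative, not those used in [6].

```python
import numpy as np

def admittance_filter(tau_int, dt, M=1.0, C=10.0, K=50.0):
    """Integrate M*x'' + C*x' + K*x = tau_int (Eq. 7.79) sample by sample.

    Returns the trajectory modification x_tilde = x_r - x_m for a sequence
    of measured interaction torques.  M, C and K shape the smoothness of
    the trajectory change; the values given here are illustrative only.
    """
    tau_int = np.asarray(tau_int, dtype=float)
    x, dx = 0.0, 0.0
    x_tilde = np.zeros_like(tau_int)
    for k, tau in enumerate(tau_int):
        ddx = (tau - C * dx - K * x) / M   # solve Eq. (7.79) for x''
        dx += ddx * dt                     # semi-implicit Euler step
        x += dx * dt
        x_tilde[k] = x
    return x_tilde

# The modified reference fed to the ST-NTSMC loop is then x_m = x_r - x_tilde.
```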

In [39], a fuzzy SMC is used for a hand rehabilitation robot. In this structure, the fuzzy controller is utilized to reduce the chattering of the SMC. The inputs of the fuzzy controller are S and \(\dot{S}\) (the sliding surface and its time derivative), and the output \((u_{fa})\) is a control signal that compensates for the abrupt variation of the SMC's control signal caused by the sign function and returns the sliding variables to the desired surface (Fig. 7.45). Experimental results show that the average chattering of the fuzzy SMC is about \(25\%\) of that of the original SMC.

Fig. 7.45 Fuzzy sliding mode controller structure [39]
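
The following sketch illustrates the basic idea of such a fuzzy corrective term with a zero-order Sugeno system over S and \(\dot{S}\). The membership partitions and the rule table are illustrative assumptions, not the rule base identified in [39].

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with peak at b and feet at a and c."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def memberships(x):
    """Negative / zero / positive partition over the normalized range [-1, 1]."""
    return np.array([tri(x, -2.0, -1.0, 0.0),
                     tri(x, -1.0,  0.0, 1.0),
                     tri(x,  0.0,  1.0, 2.0)])

def fuzzy_correction(s, s_dot, u_max=1.0, s_range=1.0, sd_range=5.0):
    """Zero-order Sugeno sketch of the corrective signal u_fa.

    Inputs are the sliding variable s and its derivative, normalized and
    clipped to [-1, 1]; the rule table pushes the state back towards
    s = 0 with a smooth output instead of a hard switch.
    """
    mu_s = memberships(np.clip(s / s_range, -1.0, 1.0))
    mu_sd = memberships(np.clip(s_dot / sd_range, -1.0, 1.0))
    # rule consequents: rows indexed by s (neg/zero/pos), columns by s_dot
    out = np.array([[ 1.0,  1.0,  0.5],
                    [ 0.5,  0.0, -0.5],
                    [-0.5, -1.0, -1.0]])
    w = np.outer(mu_s, mu_sd)                       # rule firing strengths
    return u_max * float(np.sum(w * out) / (np.sum(w) + 1e-9))
```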

To handle the variation of the interaction force during the therapy and create a smooth trajectory tracking performance, an adaptive law is proposed in [1] to estimate the interaction force (Fig. 7.46). The adaptive law is derived such that it fulfills the Lyapunov stability criterion and is a function of S and the robot's physical characteristics.

Fig. 7.46 Adaptive fuzzy sliding mode controller structure [1] (© Springer Nature, all rights reserved)

These examples of control strategies used in rehabilitation robots illustrate the importance of both tracking accuracy and the smoothness of the interaction force or torque. The latter is the more important and should be considered in the controller design.

6.2 Friction and Backlash Compensation

Practical systems are not ideal and are subject to friction (viscous and/or Coulomb). In addition, depending on the mechanical design and transmission mechanism, they can exhibit backlash as well. As discussed previously, backlash and Coulomb friction are hard nonlinearities and should be taken into consideration and suppressed during the controller design.

In [6], the Coulomb friction is considered and modelled as:

$$\begin{aligned} F(\dot{\theta })=C\dot{\theta } + F_f sign(\dot{\theta }) \end{aligned}$$
(7.80)

where C is the viscous friction coefficient and \( F_f\) is the Coulomb friction. Furthermore, since precise modeling of a nonlinear system is not practical, this model is considered the nominal model and the total friction is expressed as:

$$\begin{aligned} F(\dot{\theta })=C\dot{\theta } + F_f sign(\dot{\theta }) + \varDelta F(\dot{\theta }) \end{aligned}$$
(7.81)

where \(\varDelta F(\dot{\theta })\) is the uncertainty of the friction model. Using a robust controller such as ST-NTSMC (or, more generally, SMC), the system performs robustly with high tracking accuracy. It is important to mention that the accuracy of the nominal model directly affects the performance of the system.
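
One common way to use such a nominal model is as a feedforward term, so that the robust controller only has to reject the residual uncertainty \(\varDelta F(\dot{\theta })\). The following minimal sketch illustrates this; the parameter values are illustrative, and the smooth sign approximation is an assumption rather than part of the model in [6].

```python
import numpy as np

def friction_feedforward(theta_dot, C=0.05, F_f=0.3, eps=1e-3):
    """Nominal friction torque of Eq. (7.80), used as a feedforward term.

    With this term added to the controller output, the robust (SMC-type)
    part only has to reject the residual uncertainty Delta_F of Eq. (7.81).
    C and F_f are illustrative values; tanh() is a smooth stand-in for
    sign() that avoids exciting chattering around zero velocity.
    """
    return C * theta_dot + F_f * np.tanh(theta_dot / eps)

# usage: tau_cmd = tau_smc + friction_feedforward(theta_dot_measured)
```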

Considering backlash, the situation is worse, since the backlash nonlinearity depends not only on the current input but also on its history. In [16, 17] a cable mechanism is used for motion transmission. The mechanism is called a Bowden-cable transmission, and its input-output \((\phi _{in}-\phi _{out})\) relation can be expressed as:

$$\begin{aligned} \dot{\phi }_{out}= {\left\{ \begin{array}{ll} c_1\dot{\phi }_{in} \quad \dot{\phi }_{in} > 0 \\ c_2\dot{\phi }_{in} \quad \dot{\phi }_{in} < 0 \end{array}\right. } \end{aligned}$$
(7.82)

or

$$\begin{aligned} \phi _{out}= {\left\{ \begin{array}{ll} c_1(\phi _{in}-B_1) \quad \dot{\phi }_{in} > 0 \\ c_2(\phi _{in}+B_2) \quad \dot{\phi }_{in} < 0 \end{array}\right. } \end{aligned}$$
(7.83)

Figure 7.47 depicts an example of the input-output relation of this mechanism and illustrates the parameters of equation (7.83).

Fig. 7.47 Input-output relation of the Bowden-cable transmission

For the controller design, this nonlinear backlash relation is approximated by equation (7.84):

$$\begin{aligned} \phi _{out} = \alpha _{\phi } \phi _{in} + D \end{aligned}$$
(7.84)

where \(\alpha _{\phi }>0\) is the slope of the backlash hysteresis and the dead-zone is treated as the model uncertainty D. An adaptive controller is then designed that estimates \(\alpha _{\phi }\) and D from the tracking error \((\phi _{in}-\phi _{out})\). Experimental results show that, whether the backlash is constant or variable (due to the flexibility of the sheaths), the adaptive compensation significantly enhances the tracking accuracy and reduces the error by a factor of five.
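
The following sketch illustrates the underlying idea with a simple normalized-gradient update of \(\alpha _{\phi }\) and D and the corresponding inverse compensation; it is a sketch of the concept under these assumptions, not the exact adaptation law of [16, 17].

```python
class BacklashCompensator:
    """Adaptive compensation of the linear backlash parametrization
    phi_out = alpha_phi * phi_in + D of Eq. (7.84).

    A normalized-gradient update estimates alpha_phi and D from the
    measured transmission output; the commanded input is then
    pre-distorted with the inverse model.
    """

    def __init__(self, alpha0=1.0, D0=0.0, gamma=0.5):
        self.alpha, self.D, self.gamma = alpha0, D0, gamma  # illustrative gains

    def adapt(self, phi_in, phi_out_measured):
        """Update the estimates from one (input, measured output) pair."""
        e = phi_out_measured - (self.alpha * phi_in + self.D)  # prediction error
        norm = 1.0 + phi_in ** 2                               # normalization
        self.alpha += self.gamma * e * phi_in / norm
        self.D += self.gamma * e / norm
        self.alpha = max(self.alpha, 1e-3)   # keep the slope invertible

    def compensate(self, phi_out_desired):
        """Pre-distort the commanded input so that the transmission
        output approaches the desired joint angle."""
        return (phi_out_desired - self.D) / self.alpha
```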

Compensating the backlash of this cable transmission allows the actuator(s) to be placed away from the joint(s), which reduces the inertia of the rehabilitation robot.

In summary, a robust controller such as SMC is needed for trajectory tracking. Moreover, a control strategy such as adaptive control or admittance control is required to allow the system to respond smoothly to undesired interaction forces/torques from the patient, which are common. In addition, depending on the mechanical design, friction and/or backlash should be considered and compensated effectively to ensure accurate and smooth tracking. Finally, since there are uncertainties in the environment and in the patient interaction, adaptive control, if designed properly, can significantly enhance the performance of the system.

7 Conclusion

The control design for haptic devices confronts the developing engineer with complex, manifold challenges. Given the fundamental requirement to establish a safe, reliable, and well-defined influence on all structures, subsystems, or processes the haptic system is composed of, an analytical approach to control system design is indispensable. It provides a wide variety of methods and techniques to cover the many issues that arise during this design process. This chapter intends to introduce the fundamental theoretical background. It presents several tasks, functions, and aspects the developer will have to focus on, as well as methods and techniques that serve as useful tools for the system's analysis and the process of control design.

Starting with an abstracted view of the overall system, the control design process is based on an investigation and mathematical formulation of the system's behavior. A wide variety of methods exists for this system description, depending on the degree of complexity. Besides methods for the description of linear or linearized systems, this chapter introduced techniques to represent nonlinear system behavior. Furthermore, the analysis of multi-input multi-output systems is based on the state-space description, which was presented here, too. All of these techniques aim, on the one hand, at a mathematical representation of the analyzed system that is as exact as possible; on the other hand, they need to yield a system description to which further control design procedures are applicable. These two requirements lead to a tradeoff between establishing an exact system formulation and keeping the effort for analysis and control design reasonable.

Within the system analysis of haptic systems, overall system stability is the most important aspect; it has to be guaranteed and proven to be robust against model uncertainties. The compendium of methods for stability analysis contains techniques that are applicable to linear or nonlinear system behavior, corresponding to their underlying principles, which of course limit their usability. The more complex the mathematical formulation of the system becomes, the higher the effort for system analysis gets. This conflicts directly with the fact that a stability analysis based on a simplified system description can only provide a proof of stability for this simplified model of the real system. Therefore, the impact of all simplifying assumptions must be evaluated to guarantee the robustness of the system stability.

The actual objective in establishing a control scheme for haptic systems is the final design of controllers and control structures that have to be implemented in the system at various levels to perform various functions. Besides the design of applicable controllers or control structures, the optimization of adjustable parameters is also part of this design process. As shown by many examples in the literature on control design, a comprehensive collection of control design techniques and optimization methods exists that enables the developer to cover the emerging challenges and satisfy the various requirements within the development of haptic systems as far as automatic control is concerned.

Recommended Background Reading

  • [25] Hashtrudi-Zaad, K. & Salcudean, S.: Analysis of Control Architectures for Teleoperation Systems with Impedance/Admittance Master and Slave Manipulators. In: The International Journal of Robotics Research, SAGE Publications, 2001.

    Thorough analysis of different control schemes for impedance and admittance type systems.

  • [31] Hirche, S. & Buss, M.: Human Perceived Transparency with Time Delay. In: Manuel Ferre et al. (eds.), Advances in Telerobotics, Springer, 2007.

    Analysis of the effects of time delay on transparency and the perception of compliance and mass.

  • [47] Tavakoli, M.; Patel, R.; Moallem, M. & Aziminejad, A.: Haptics for Teleoperated Surgical Robotic Systems. World Scientific Publishing, Shanghai, 2008.

    Description and design of a minimally invasive surgical robot with haptic feedback, including an analysis of stability issues and the effect of time delay.