
A neural network-based approach for variable admittance control in human–robot cooperation: online adjustment of the virtual inertia

  • Original Research Paper
  • Published in Intelligent Service Robotics

Abstract

This paper proposes an approach to variable admittance control in human–robot collaboration based on the online training of a neural network. The virtual inertia is an important factor for system stability, and its tuning is investigated as a means of improving human–robot cooperation. The design of the variable virtual inertia controller is analyzed, and the choice of the neural network type and of its inputs and output is justified. The error backpropagation analysis of the designed system is elaborated, since the end-effector velocity error depends only indirectly on the output of the multilayer feedforward neural network. The performance of the proposed controller is experimentally investigated, and its generalization ability is evaluated by conducting cooperative tasks with multiple subjects on the KUKA LWR manipulator, under conditions and tasks different from those used for the neural network training. Finally, a comparative study between the proposed method and previously published ones is presented.
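The backpropagation analysis mentioned in the abstract can be sketched schematically. In the sketch below, the one-hidden-layer tanh network follows the paper's nomenclature (weights bij and b'1j, sums h'j and O'1), but the scalar plant sensitivity `dV_dm` and the linearized output-to-inertia slope `dm_do` are illustrative placeholders, not the authors' exact design; the point is only how the plant term ∂V/∂m enters the chain rule between the velocity error and the network weights:

```python
import math

def tanh_deriv(z):
    # derivative of tanh evaluated at the pre-activation z
    return 1.0 - math.tanh(z) ** 2

def gradients(x, b, b_out, e, dV_dm, dm_do):
    """Backpropagated gradients for E = e**2 with e = V_ref - V.

    The error depends on the network output o only indirectly, through the
    virtual inertia m and the robot velocity V, so the chain rule reads
    dE/dw = 2*e * (-1) * dV/dm * dm/do * do/dw.
    """
    h = [sum(w * xi for w, xi in zip(row, x)) for row in b]        # h'_j
    y = [math.tanh(hj) for hj in h]                                # y'_j
    O1 = sum(w, ) if False else sum(w * yj for w, yj in zip(b_out, y))  # O'_1
    delta_k = 2.0 * e * (-1.0) * dV_dm * dm_do * tanh_deriv(O1)    # delta'_k
    delta_j = [tanh_deriv(h[j]) * b_out[j] * delta_k for j in range(len(b))]
    grad_out = [delta_k * y[j] for j in range(len(b))]             # dE/db'_1j
    grad_hid = [[delta_j[j] * xi for xi in x] for j in range(len(b))]  # dE/db_ij
    return grad_out, grad_hid
```

A training step would then apply w ← w − η·grad (plus a momentum term α·Δw_prev), with ∂V/∂m evaluated online from the admittance model, as in the paper's Eq. (22).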


Notes

  1. The NN is completely trained with the third subject.

Abbreviations

F :

The force applied by the human hand (N)

m :

The virtual inertia coefficient of the admittance controller of the robot (kg)

c :

The virtual damping coefficient of the admittance controller of the robot (Ns/m)

V a :

The velocity of the admittance controller of the robot (m/s)

V :

The actual velocity of robot end-effector (m/s)

x(t):

The position of the minimum jerk trajectory at time t (m)

x 0 :

The initial position of the minimum jerk trajectory, x0 = 0.0 m

x f :

The final position of the minimum jerk trajectory (m)

t :

The time (s)

t f :

The duration of motion (s)

τ :

The normalized time, equal to t/tf

\(V_{\text{jerk}} = \dot{x}\left( t \right)\) :

The velocity of minimum jerk trajectory model (m/s)

TF:

The transfer function

e :

The velocity error, i.e., the difference between the minimum jerk trajectory velocity (the reference) and the robot end-effector velocity (the actual) (m/s)

x i :

The ith input of the neural network, where i = 0, 1, 2, 3

b ij :

The weights between the input i and the hidden neuron j in the neural network hidden layer

\(h^{\prime}_{j}\) :

The weighted sum of the inputs xi with the weights bij at hidden neuron j

\(\varphi_{j} \left( \cdot \right)\) :

The hyperbolic tangent activation function in the neural network hidden layer

\(y^{\prime}_{j}\) :

The output of the hidden layer at each hidden neuron j

n :

The number of hidden neurons in the neural network hidden layer

\(\varphi_{k} \left( \cdot \right)\) :

The hyperbolic tangent activation function at the neural network output layer

\(b^{\prime}_{1j}\) :

The weights between the output neuron (in the output layer) and hidden neuron j (in the hidden layer)

\(O^{\prime}_{1}\) :

The weighted sum of the hidden-layer outputs \(y^{\prime}_{j}\) with the weights \(b^{\prime}_{1j}\)

E :

The instantaneous error energy, i.e., the square of the error (m²/s²)

\(\eta\) :

The learning rate parameter of the backpropagation algorithm

\(\alpha\) :

The momentum constant \(\left( { 0 \le \alpha < 1} \right)\)

\(\delta^{\prime}_{k}\) :

The local gradient of the output neuron

\(\delta^{\prime}_{j}\) :

The local gradient for hidden neuron j

\(\dot{q}\) :

The joints’ velocities

\(J\left( q \right)\) :

The 6 × 6 Jacobian matrix

m crit :

The lowest allowable value of the virtual inertia of the admittance controller of the robot

MSE:

The mean squared error from the neural network training

NN:

The neural network

MLFFNN:

Multilayer feedforward neural network

VAC:

Variable admittance controller

VVIC:

Variable virtual inertia admittance control

LDI:

Low damping and inertia system

MDI:

Medium damping and inertia system

HDI:

High damping and inertia system
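To make the nomenclature concrete, the following minimal Python sketch (with arbitrary example values; function names are illustrative) implements the admittance law \(m\dot{V}_{a} + cV_{a} = F\) and the minimum-jerk velocity profile of Flash and Hogan [33] used as the reference:

```python
import math

def admittance_step(v_a, F, m, c, dt):
    """One explicit Euler step of the admittance law m * dVa/dt + c * Va = F."""
    return v_a + dt * (F - c * v_a) / m

def min_jerk_velocity(t, x0, xf, tf):
    """Velocity of the minimum-jerk trajectory
    x(t) = x0 + (xf - x0) * (10*tau**3 - 15*tau**4 + 6*tau**5), tau = t / tf."""
    tau = t / tf
    return (xf - x0) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / tf
```

The reference velocity Vjerk peaks at 1.875·(xf − x0)/tf at mid-motion and vanishes at both ends; the velocity error e compares the end-effector velocity against this profile.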

References

  1. Dautenhahn K (2007) Methodology and themes of human–robot interaction: a growing research field. Int J Adv Robot Syst 4(1):103–108

  2. Moniz AB, Krings B (2016) Robots working with humans or humans working with robots? Searching for social dimensions in new human–robot interaction in industry. Societies 6(3):1–21

  3. De Santis A, Siciliano B, De Luca A, Bicchi A (2008) An atlas of physical human–robot interaction. Mech Mach Theory 43(3):253–270

  4. Khatib O, Yokoi K, Brock O, Chang K, Casal A (1999) Robots in human environments: basic autonomous capabilities. Int J Rob Res 18(7):684–696

  5. Hogan N (1985) Impedance control: an approach to manipulation: Part I theory; Part II implementation; Part III applications. J Dyn Syst Meas Control 107(1):1–24

  6. Sam Ge S, Li Y, Wang C (2014) Impedance adaptation for optimal robot–environment interaction. Int J Control 87(2):249–263

  7. Song P, Yu Y, Zhang X (2019) A tutorial survey and comparison of impedance control on robotic manipulation. Robotica 37:1–36

  8. Adams RJ, Hannaford B (1999) Stable haptic interaction with virtual environments. IEEE Trans Robot Autom 15(3):465–474

  9. Magrini E, Flacco F, De Luca A (2015) Control of generalized contact motion and force in physical human–robot interaction. In: 2015 IEEE international conference on robotics and automation (ICRA), Washington, pp 2298–2304

  10. Newman WS, Zhang Y (1994) Stable interaction control and Coulomb friction compensation using natural admittance control. J Robot Syst 1(1):3–11

  11. Surdilovic D (1996) Contact stability issues in position based impedance control: theory and experiments. In: Proceedings of the 1996 IEEE international conference on robotics and automation, pp 1675–1680

  12. Duchaine V, Gosselin M (2007) General model of human–robot cooperation using a novel velocity based variable impedance control. In: Second joint EuroHaptics conference and symposium on haptic interfaces for virtual environment and teleoperator systems (WHC’07), pp 446–451

  13. Du Z, Wang W, Yan Z, Dong W, Wang W (2017) Variable admittance control based on fuzzy reinforcement learning for minimally invasive surgery manipulator. Sensors 17(4):1–15

  14. Dimeas F, Aspragathos N (2014) Fuzzy learning variable admittance control for human–robot cooperation. In: 2014 IEEE/RSJ international conference on intelligent robots and systems (IROS 2014), pp 4770–4775

  15. Tsumugiwa T, Yokogawa R, Hara K (2001) Variable impedance control with regard to working process for man-machine cooperation-work system. In: Proceedings of the 2001 IEEE/RSJ international conference on intelligent robots and systems, pp 1564–1569

  16. Lecours A, Mayer-St-Onge B, Gosselin C (2012) Variable admittance control of a four-degree-of-freedom intelligent assist device. In: 2012 IEEE international conference on robotics and automation, pp 3903–3908

  17. Okunev V, Nierhoff T, Hirche S (2012) Human-preference-based control design: adaptive robot admittance control for physical human–robot interaction. In: 2012 IEEE RO-MAN: the 21st IEEE international symposium on robot and human interactive communication, pp 443–448

  18. Landi CT, Ferraguti F, Sabattini L, Secchi C, Bonfè M, Fantuzzi C (2017) Variable admittance control preventing undesired oscillating behaviors in physical human–robot interaction. In: 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 3611–3616

  19. Colonnese N, Okamura A (2012) M-width: stability and accuracy of haptic rendering of virtual mass. In: Robotics: science and systems 2012

  20. Colonnese N, Okamura AM (2015) M-width: stability, noise characterization, and accuracy of rendering virtual mass. Int J Robot Res 34(6):781–798

  21. Keemink AQ, Van Der Kooij H, Stienen AH (2018) Admittance control for physical human–robot interaction. Int J Robot Res 37:1–24

  22. Dimeas F, Aspragathos N (2016) Online stability in human–robot cooperation with admittance control. IEEE Trans Haptics 9(2):267–278

  23. Landi CT, Ferraguti F, Sabattini L, Secchi C, Fantuzzi C (2017) Admittance control parameter adaptation for physical human–robot interaction. In: 2017 IEEE international conference on robotics and automation (ICRA), pp 2911–2916

  24. Bascetta L, Ferretti G (2019) Ensuring safety in hands-on control through stability analysis of the human–robot interaction. Robot Comput Integr Manuf 57:197–212

  25. Aydin Y, Tokatli O, Patoglu V, Basdogan C (2018) Stable physical human–robot interaction using fractional order admittance control. IEEE Trans Haptics 11(3):464–475

  26. Sharkawy A-N, Koustoumpardis PN, Aspragathos N (2018) Variable admittance control for human–robot collaboration based on online neural network training. In: 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS 2018)

  27. Haykin S (2009) Neural networks and learning machines, 3rd edn. Pearson, London

  28. Rad AB, Bui TW, Li V, Wong YK (2000) A new on-line PID tuning method using neural networks. IFAC Proc Vol 33(4):443–448 (IFAC workshop on digital control: past, present and future of PID control)

  29. Elbelady SA, Fawaz HE, Aziz AMA (2016) Online self tuning PID control using neural network for tracking control of a pneumatic cylinder using pulse width modulation piloted digital valves. Int J Mech Mechatron Eng IJMME-IJENS 16(3):123–136

  30. Hernández-Alvarado R, García-Valdovinos LG, Salgado-Jiménez T, Gómez-Espinosa A, Fonseca-Navarro F (2016) Neural network-based self-tuning PID control for underwater vehicles. Sensors 16(9):1429

  31. Singh A, Yang L, Levine S (2017) GPLAC: generalizing vision-based robotic skills using weakly labeled images. In: Proceedings of the IEEE international conference on computer vision, pp 5852–5861

  32. Sharkawy A-N, Koustoumpardis PN, Aspragathos N (2020) Human–robot collisions detection for safe human–robot interaction using one multi-input–output neural network. Soft Comput 24(9):6687–6719

  33. Flash T, Hogan N (1985) The coordination of arm movements: an experimentally confirmed mathematical model. J Neurosci 5(7):1688–1703

  34. Sharkawy A-N, Aspragathos N (2018) Human–robot collision detection based on neural networks. Int J Mech Eng Robot Res 7(2):150–157

  35. Sharkawy A-N, Koustoumpardis PN, Aspragathos N (2018) Manipulator collision detection and collided link identification based on neural networks. In: Nikos A, Panagiotis K, Vassilis M (eds) Advances in service and industrial robotics. RAAD 2018. Mechanisms and machine science. Springer, Cham, pp 3–12

  36. Lu S, Chung JH, Velinsky SA (2005) Human–robot collision detection and identification based on wrist and base force/torque sensors. In: Proceedings of the 2005 IEEE international conference on robotics and automation, pp 3796–3801

  37. Eski I, Erkaya S, Savas S, Yildirim S (2011) Fault detection on robot manipulators using artificial neural networks. Robot Comput Integr Manuf 27(1):115–123

  38. Ito M, Noda K, Hoshino Y, Tani J (2006) Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model. Neural Netw 19(3):323–337

  39. Nielsen MA (2015) Neural networks and deep learning. Determination Press, Baltimore

  40. Sassi MA, Otis MJD, Campeau-Lecours A (2017) Active stability observer using artificial neural network for intuitive physical human–robot interaction. Int J Adv Robot Syst 14(4):1–16

  41. De Momi E, Kranendonk L, Valenti M, Enayati N, Ferrigno G (2016) A neural network-based approach for trajectory planning in robot–human handover tasks. Front Robot AI 3:1–10

  42. Anderson D, McNeill G (1992) Artificial neural networks technology: a DACS state-of-the-art report. Utica, New York

  43. Jeatrakul P, Wong KW (2009) Comparing the performance of different neural networks for binary classification problems. In: 2009 8th international symposium on natural language processing, SNLP’09, pp 111–115

  44. Xie T, Yu H, Wilamowski B (2011) Comparison between traditional neural networks and radial basis function networks. In: Proceedings—ISIE 2011: 2011 IEEE international symposium on industrial electronics, pp 1194–1199

  45. Kurban T, Beşdok E (2009) A comparison of RBF neural network training algorithms for inertial sensor based terrain classification. Sensors 9(8):6312–6329

  46. Wang X, Ding Y, Shao H (1998) The improved radial basis function neural network and its application. Artif Life Robot 2(1):8–11

  47. Chiang YM, Chang LC, Chang FJ (2004) Comparison of static-feedforward and dynamic-feedback neural networks for rainfall-runoff modeling. J Hydrol 290(3–4):297–311

  48. Pascanu R, Mikolov T, Bengio Y (2013) On the difficulty of training recurrent neural networks. In: Proceedings of the 30th international conference on machine learning

  49. Bengio Y, Simard P, Frasconi P (1994) Learning long-term dependencies with gradient descent is difficult. IEEE Trans Neural Netw 5(2):157–166

  50. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL (eds) Parallel distributed processing: explorations in the microstructure of cognition. MIT Press, Cambridge, pp 318–362

  51. Kwon S, Kim J (2011) Real-time upper limb motion estimation from surface electromyography and joint angular velocities using an artificial neural network for human–machine cooperation. IEEE Trans Inf Technol Biomed 15(4):522–530

  52. Mavroidis C, Flanz J, Dubowsky S, Drouet P, Goitein M (1998) High performance medical robot requirements and accuracy analysis. Robot Comput Integr Manuf 14(5):329–338

  53. Mirsepassi T (1958) Graphical evaluation of a convolution integral. JSTOR, New York, pp 202–212


Acknowledgements

The authors would like to thank the volunteers who participated in the experiments, and Prof. Papadopoulos Polycarpos, Computational Fluid Dynamics Lab., Department of Engineering Sciences, University of Patras, for his help in checking the provided mathematical analysis. Abdel-Nasser Sharkawy is funded by the “Egyptian Cultural Affairs and Missions Sector” and the “Hellenic Ministry of Foreign Affairs Scholarship” for his Ph.D. study in Greece.

Author information

Corresponding author

Correspondence to Abdel-Nasser Sharkawy.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


1.1 First part: derivation of the part \(\frac{\partial V }{\partial m }\)

Equation (22) is derived as follows.

Equation (21) can be rewritten as,

$$\begin{aligned} \frac{\partial V\left( s \right)}{\partial m} & = \left( {\frac{ - 1}{{m^{2} }}} \right)\left( {\left( {F\left( s \right)} \right)\left( {\frac{1}{{\left( {s + \frac{c}{m}} \right)}}} \right) - \left( {\frac{c}{m}} \right)\left( {F\left( s \right)} \right)\left( {\frac{1}{{\left( {s + \frac{c}{m}} \right)^{2} }}} \right)} \right) \\ & = \left( {\frac{ - 1}{{m^{2} }}} \right)\left( {P_{1} \left( s \right) - \left( {\frac{c}{m}} \right)P_{2} \left( s \right)} \right) \\ \end{aligned}$$
(A.1)

Taking the inverse Laplace transform of (A.1) and using the convolution theorem, we obtain

$$\begin{aligned} P_{2} \left( t \right) & = \mathop \int \limits_{0}^{t} F\left( u \right) G_{1} \left( {t - u} \right){\text{d}}u \\ & = \mathop \int \limits_{0}^{t} F\left( u \right)\left( {t - u} \right)e^{{\frac{ - c}{m}\left( {t - u} \right)}} {\text{d}}u \\ \end{aligned}$$
(A.2)

It follows from the convolution theorem that \(\mathop \int \nolimits_{0}^{t} F\left( u \right) G_{1} \left( {t - u} \right){\text{d}}u = \mathop \int \nolimits_{0}^{t} F\left( {t - u} \right) G_{1} \left( u \right){\text{d}}u\), so (A.2) becomes

$$\begin{aligned} P_{2} \left( t \right) & = \mathop \smallint \limits_{0}^{t} F\left( {t - u} \right)G_{1} \left( u \right){\text{d}}u \\ & = \mathop \smallint \limits_{0}^{t} F\left( {t - u} \right)ue^{{\frac{{ - c}}{m}u}} {\text{d}}u \\ \end{aligned}$$
(A.3)

Based on the approach proposed by Mirsepassi [53], in which a graphical evaluation of a convolution integral is presented, the interval [0, t] is divided into small segments from 0 to \(n\Delta t\), where \(t = n\Delta t\); (A.3) can then be written as

$$P_{2} \left( t \right) = \mathop \int \limits_{0}^{t} F\left( {t - u} \right) G_{1} \left( u \right){\text{d}}u = \left[ {\mathop \int \limits_{0}^{\Delta t} F_{n} G_{1} \left( u \right){\text{d}}u + \mathop \int \limits_{\Delta t}^{2\Delta t} F_{n - 1} G_{1} \left( u \right){\text{d}}u + \cdots + \mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} F_{n - i + 1} G_{1} \left( u \right){\text{d}}u + \cdots + \mathop \int \limits_{{\left( {n - 1} \right)\Delta t}}^{n\Delta t} F_{1} G_{1} \left( u \right){\text{d}}u} \right]$$
(A.4)

Equation (A.4) is rewritten as follows,

$$\begin{aligned} P_{2} \left( t \right) & = \mathop \sum \limits_{i = 1}^{i = n} F_{n - i + 1} \mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} G_{1} \left( u \right){\text{d}}u \\ & = \mathop \sum \limits_{i = 1}^{i = n} F_{n - i + 1} \mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} ue^{{\frac{ - c}{m}u}} {\text{d}}u \\ \end{aligned}$$
(A.5)

where the integral \(\mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} ue^{{\frac{ - c}{m}u}} {\text{d}}u\) is evaluated using integration by parts:

$$\begin{aligned} \mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} ue^{{\frac{ - c}{m}u}} {\text{d}}u & = \left[ {\frac{ - m}{c}u e^{{\frac{ - c}{m}u}} - \left( {\frac{m}{c}} \right)^{2} e^{{\frac{ - c}{m}u}} } \right]_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} \\ & = \frac{ - m}{c}i\Delta te^{{\frac{ - c}{m}i\Delta t}} - \left( {\frac{m}{c}} \right)^{2} e^{{\frac{ - c}{m} i\Delta t}} - \frac{ - m}{c}\left( {i - 1} \right)\Delta t e^{{\frac{ - c}{m} \left( {i - 1} \right)\Delta t}} + \left( {\frac{m}{c}} \right)^{2} e^{{\frac{ - c}{m} \left( {i - 1} \right)\Delta t}} \\ & = \frac{ - m}{c}\left( {i\Delta te^{{\frac{ - c}{m}i\Delta t}} + \frac{m}{c}e^{{\frac{ - c}{m} i\Delta t}} - \left( {i - 1} \right)\Delta t e^{{\frac{ - c}{m} \left( {i - 1} \right)\Delta t}} - \frac{m}{c}e^{{\frac{ - c}{m} \left( {i - 1} \right)\Delta t}} } \right) \\ \end{aligned}$$
(A.6)

Substituting (A.6) into (A.5), we get

$$P_{2} \left( t \right) = \frac{ - m}{c}\mathop \sum \limits_{i = 1}^{i = n} F_{n - i + 1} \left( {i\Delta te^{{\frac{ - c}{m}i\Delta t}} + \frac{m}{c}e^{{\frac{ - c}{m} i\Delta t}} - \left( {i - 1} \right)\Delta t e^{{\frac{ - c}{m} \left( {i - 1} \right)\Delta t}} - \frac{m}{c}e^{{\frac{ - c}{m} \left( {i - 1} \right)\Delta t}} } \right)$$
(A.7)

The inverse Laplace transform of \(P_{1} \left( s \right)\) is calculated as

$$\begin{aligned} P_{1} \left( t \right) & = \mathop \int \limits_{0}^{t} F\left( {t - u} \right) G_{2} \left( u \right){\text{d}}u \\ & = \mathop \int \limits_{0}^{t} F\left( {t - u} \right) e^{{\frac{ - c}{m}u}} {\text{d}}u \\ \end{aligned}$$
(A.8)

Equation (A.8) is evaluated in the same way as (A.3), so

$$\begin{aligned} P_{1} \left( t \right) & = \mathop \sum \limits_{i = 1}^{i = n} F_{n - i + 1} \mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} G_{2} \left( u \right){\text{d}}u \\ & = \mathop \sum \limits_{i = 1}^{i = n} F_{n - i + 1} \mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} e^{{\frac{ - c}{m}u}} {\text{d}}u \\ \end{aligned}$$
(A.9)

The integral \(\mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} e^{{\frac{ - c}{m}u}} {\text{d}}u\) evaluates directly to

$$\begin{aligned} \mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} e^{{\frac{ - c}{m}u}} {\text{d}}u & = \frac{ - m}{c}\left[ {e^{{\frac{ - c}{m}u}} } \right]_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} \\ & = \frac{ - m}{c}\left( {e^{{\frac{ - c}{m}i\Delta t}} - e^{{\frac{ - c}{m}\left( {i - 1} \right)\Delta t}} } \right) \\ \end{aligned}$$
(A.10)

By substituting (A.10) into (A.9), we get

$$P_{1} \left( t \right) = \frac{ - m}{c}\mathop \sum \limits_{i = 1}^{i = n} F_{n - i + 1} \left( {e^{{\frac{ - c}{m}i\Delta t}} - e^{{\frac{ - c}{m}\left( {i - 1} \right)\Delta t}} } \right)$$
(A.11)

Substituting (A.7) and (A.11) into the inverse Laplace transform of (A.1), we get

$$\begin{aligned} \frac{\partial V}{\partial m} & = \left( {\frac{ - 1}{{m^{2} }}} \right)\left( {P_{1} - \left( {\frac{c}{m}} \right)P_{2} } \right) = \left( {\frac{ - 1}{{m^{2} }}} \right)P_{1} + \left( {\frac{c}{{m^{3} }}} \right)P_{2} \\ \frac{\partial V}{\partial m} & = \frac{1}{{m^{2} }}\mathop \sum \limits_{i = 1}^{i = n} F_{n - i + 1} \left( {\left( {i - 1} \right)\Delta t e^{{\frac{ - c}{m} \left( {i - 1} \right)\Delta t}} - i\Delta t e^{{\frac{ - c}{m}i\Delta t}} } \right) \\ \end{aligned}$$
(A.12)
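For a piecewise-constant force history and zero initial velocity, both the closed-form velocity and (A.12) reduce to finite sums that can be checked numerically. A minimal Python sketch (function names and sample layout are illustrative):

```python
import math

def velocity(F, m, c, dt):
    """Closed-form V(t) of m*dV/dt + c*V = F with V(0) = 0.

    F[k-1] holds the force sample F_k, assumed constant over segment k,
    so that t = len(F) * dt.
    """
    a = c * dt / m
    n = len(F)
    return sum(F[n - i] * (math.exp(-a * (i - 1)) - math.exp(-a * i))
               for i in range(1, n + 1)) / c

def dV_dm(F, m, c, dt):
    """Eq. (A.12): partial derivative of V(t) with respect to the virtual inertia m."""
    a = c * dt / m
    n = len(F)
    s = sum(F[n - i] * ((i - 1) * dt * math.exp(-a * (i - 1))
                        - i * dt * math.exp(-a * i))
            for i in range(1, n + 1))
    return s / m**2
```

A central finite difference of `velocity` with respect to m reproduces `dV_dm` to numerical precision, which is a useful sanity check on the derivation.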

The way to implement the algorithm of (A.12) or (22) in real time is illustrated in the “second part”.

1.2 Second part: algorithm for Eq. (22)

From Eq. (22) or (A.12), it can be seen that after many loops n, some terms of the sum become very small (approximately zero). Evaluating the equation online with a very large n is impractical, because the computation time would grow excessively and the robot would stall, so the terms that approximate zero should be neglected. To identify at which loop count the resulting error becomes negligible, real data with different loop counts n are used to evaluate the equation: the sum is computed from i = 1 to i = n, the very small terms from i = n − z up to i = n are ignored, and the resulting error percentage is calculated. Figure 14 shows representative results of this process for Eq. (22), which calculates \(\frac{\partial V}{\partial m}\); as the diagrams show, the error percentage decreases with increasing n. In our experiments, the equation is evaluated with n = 20, where the error percentage is low (3.75%), in order to shorten the computation time so that the robot moves smoothly.
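The truncation can be sketched as follows: terms with large i carry a factor \(e^{{\frac{ - c}{m}i\Delta t}}\) and decay quickly, so only the most recent terms of the sum need to be kept online. A hedged Python sketch, assuming the force history is stored most-recent-last (the default n_keep = 20 matches the value used in the experiments):

```python
import math

def dV_dm_truncated(F, m, c, dt, n_keep=20):
    """Eq. (A.12) with the sum truncated to its n_keep most recent terms.

    F holds force samples most-recent-last; term i pairs F[-i] (age i) with a
    weight that decays like e^(-c*i*dt/m), so old samples can be dropped.
    """
    a = c * dt / m
    n = min(len(F), n_keep)
    s = sum(F[-i] * ((i - 1) * dt * math.exp(-a * (i - 1))
                     - i * dt * math.exp(-a * i))
            for i in range(1, n + 1))
    return s / m**2
```

When \(e^{{\frac{ - c}{m}n\Delta t}}\) is already small at n = 20, the dropped tail contributes only a small fraction of the full sum, of the order of the error percentages reported in Fig. 14.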

Fig. 14 The error percentage obtained by ignoring the small terms of Eq. (22), which calculates \(\frac{\partial V}{\partial m}\), for different numbers of loops n


About this article


Cite this article

Sharkawy, AN., Koustoumpardis, P.N. & Aspragathos, N. A neural network-based approach for variable admittance control in human–robot cooperation: online adjustment of the virtual inertia. Intel Serv Robotics 13, 495–519 (2020). https://doi.org/10.1007/s11370-020-00337-4
