Abstract
This paper proposes an approach for variable admittance control in human–robot collaboration based on the online training of a neural network. The virtual inertia is an important factor for system stability, and its tuning is investigated to improve human–robot cooperation. The design of the variable virtual inertia controller is analyzed, and the choice of the neural network type and of its inputs and output is justified. The error backpropagation analysis of the designed system is elaborated, since the end-effector velocity error depends only indirectly on the output of the multilayer feedforward neural network. The performance of the proposed controller is experimentally investigated, and its generalization ability is evaluated by conducting cooperative tasks with multiple subjects on the KUKA LWR manipulator, under conditions and tasks different from those used for the neural network training. Finally, a comparative study between the proposed method and previously published ones is presented.
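The online-training scheme summarized above can be illustrated with a minimal sketch of a one-hidden-layer feedforward network (tanh activations, as in the nomenclature below) trained by backpropagation with momentum. This is not the authors' code: the layer sizes, learning rate, momentum constant, and the supplied gradient of the error energy E are illustrative assumptions, and in the paper the network output would drive the virtual inertia m rather than stand alone.

```python
import numpy as np

rng = np.random.default_rng(0)

class MLFFNN:
    """One-hidden-layer feedforward net with tanh activations,
    trained online by backpropagation with momentum (sketch)."""
    def __init__(self, n_in=4, n_hidden=6, eta=0.01, alpha=0.9):
        self.b = rng.uniform(-0.5, 0.5, (n_hidden, n_in))  # input->hidden weights b_ij
        self.b1 = rng.uniform(-0.5, 0.5, n_hidden)         # hidden->output weights b'_1j
        self.eta, self.alpha = eta, alpha
        self.db = np.zeros_like(self.b)                    # momentum terms
        self.db1 = np.zeros_like(self.b1)

    def forward(self, x):
        self.x = np.asarray(x, float)
        self.h = self.b @ self.x          # h'_j: weighted sums at hidden layer
        self.y = np.tanh(self.h)          # y'_j = phi_j(h'_j)
        self.O = self.b1 @ self.y         # O'_1: weighted sum at output neuron
        return np.tanh(self.O)            # network output phi_k(O'_1)

    def backward(self, dE_dout):
        # local gradients delta'_k (output) and delta'_j (hidden); tanh' = 1 - tanh^2
        dk = dE_dout * (1.0 - np.tanh(self.O) ** 2)
        dj = dk * self.b1 * (1.0 - self.y ** 2)
        # gradient-descent updates with momentum alpha
        self.db1 = self.alpha * self.db1 - self.eta * dk * self.y
        self.db = self.alpha * self.db - self.eta * np.outer(dj, self.x)
        self.b1 += self.db1
        self.b += self.db

net = MLFFNN()
out = net.forward([0.1, 0.2, -0.1, 0.05])
net.backward(dE_dout=0.06)   # e.g. from E = e^2, with an assumed error value
```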
Notes
The NN is completely trained using the data of the third subject.
Abbreviations
- F: The force applied by the human hand (N)
- m: The virtual inertia coefficient of the robot's admittance controller (kg)
- c: The virtual damping coefficient of the robot's admittance controller (Ns/m)
- \(V_{a}\): The velocity of the robot's admittance controller (m/s)
- V: The actual velocity of the robot end-effector (m/s)
- x(t): The position of the minimum jerk trajectory at time t (m)
- \(x_{0}\): The initial position of the minimum jerk trajectory, \(x_{0} = 0.0\) m
- \(x_{f}\): The final position of the minimum jerk trajectory (m)
- t: Time (s)
- \(t_{f}\): The duration of the motion (s)
- \(\tau\): The normalized time, equal to \(t/t_{f}\)
- \(V_{\text{jerk}} = \dot{x}\left( t \right)\): The velocity of the minimum jerk trajectory model (m/s)
- TF: The transfer function
- e: The velocity error, i.e., the difference between the minimum jerk trajectory velocity (the reference) and the robot end-effector velocity (the actual) (m/s)
- \(x_{i}\): The ith input of the neural network, where i = 0, 1, 2, 3
- \(b_{ij}\): The weight between input i and hidden neuron j of the neural network hidden layer
- \(h^{\prime}_{j}\): The sum of the products of the weights \(b_{ij}\) and the inputs \(x_{i}\)
- \(\varphi_{j} \left( \cdot \right)\): The hyperbolic tangent activation function at the neural network hidden layer
- \(y^{\prime}_{j}\): The output of hidden neuron j of the hidden layer
- n: The number of neurons in the neural network hidden layer
- \(\varphi_{k} \left( \cdot \right)\): The hyperbolic tangent activation function at the neural network output layer
- \(b^{\prime}_{1j}\): The weight between the output neuron (at the output layer) and hidden neuron j (at the hidden layer)
- \(O^{\prime}_{1}\): The sum of the products of the hidden-layer outputs \(y^{\prime}_{j}\) and the weights \(b^{\prime}_{1j}\)
- E: The instantaneous error energy, i.e., the square of the error (m²/s²)
- \(\eta\): The learning rate of the backpropagation algorithm
- \(\alpha\): The momentum constant \(\left( { 0 \le \alpha < 1} \right)\)
- \(\delta^{\prime}_{k}\): The local gradient of the output neuron
- \(\delta^{\prime}_{j}\): The local gradient of hidden neuron j
- \(\dot{q}\): The joint velocities
- \(J\left( q \right)\): The 6 × 6 Jacobian matrix
- \(m_{\text{crit}}\): The lowest value of the virtual inertia of the robot's admittance controller
- MSE: The mean squared error of the neural network training
- NN: Neural network
- MLFFNN: Multilayer feedforward neural network
- VAC: Variable admittance controller
- VVIC: Variable virtual inertia admittance control
- LDI: Low damping and inertia system
- MDI: Medium damping and inertia system
- HDI: High damping and inertia system
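The quantities defined above fit together in a short sketch: the admittance model \(m\dot{V}_{a} + cV_{a} = F\) integrated with explicit Euler, the minimum jerk velocity of Flash and Hogan [33] as the reference, and their difference as the velocity error e. The numerical values of m, c, F, \(x_{f}\), and \(t_{f}\) below are illustrative assumptions, not the paper's experimental settings.

```python
def minimum_jerk_velocity(t, x0, xf, tf):
    """Velocity of the minimum jerk trajectory of Flash and Hogan [33]:
    x(t) = x0 + (xf - x0)(10 tau^3 - 15 tau^4 + 6 tau^5), with tau = t/tf."""
    tau = t / tf
    return (xf - x0) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / tf

def admittance_step(Va, F, m, c, dt):
    """One explicit-Euler step of the admittance model m*dVa/dt + c*Va = F."""
    return Va + dt * (F - c * Va) / m

# illustrative run: constant hand force against a minimum jerk reference
x0, xf, tf, dt = 0.0, 0.5, 2.0, 0.001
m, c = 4.0, 20.0            # assumed virtual inertia (kg) and damping (Ns/m)
F = 10.0                    # assumed constant applied hand force (N)
Va = 0.0
for k in range(int(tf / dt)):
    t = k * dt
    Va = admittance_step(Va, F, m, c, dt)           # admittance velocity V_a
    e = minimum_jerk_velocity(t, x0, xf, tf) - Va   # velocity error e
```

With a constant force, the admittance velocity settles toward F/c with time constant m/c, which is why the ratio of virtual inertia to damping governs how sluggish or reactive the cooperation feels.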
References
Dautenhahn K (2007) Methodology and themes of human–robot interaction: a growing research field. Int J Adv Robot Syst 4(1):103–108
Moniz AB, Krings B (2016) Robots working with humans or humans working with robots? Searching for social dimensions in new human–robot interaction in industry. Societies 6(3):1–21
De Santis A, Siciliano B, De Luca A, Bicchi A (2008) An atlas of physical human—robot interaction. Mech Mach Theory 43(3):253–270
Khatib O, Yokoi K, Brock O, Chang K, Casal A (1999) Robots in human environments: basic autonomous capabilities. Int J Rob Res 18(7):684–696
Hogan N (1985) Impedance control: an approach to manipulation: Part I theory; Part II implementation; Part III applications. J Dyn Syst Meas Control 107(1):1–24
Sam Ge S, Li Y, Wang C (2014) Impedance adaptation for optimal robot—environment interaction. Int J Control 87(2):249–263
Song P, Yu Y, Zhang X (2019) A tutorial survey and comparison of impedance control on robotic manipulation. Robotica 37:1–36
Adams RJ, Hannaford B (1999) Stable haptic interaction with virtual environments. IEEE Trans Robot Autom 15(3):465–474
Magrini E, Flacco F, De Luca A (2015) Control of generalized contact motion and force in physical human-robot interaction. In: 2015 IEEE international conference on robotics and automation (ICRA) Washington, pp 2298–2304
Newman WS, Zhang Y (1994) Stable interaction control and Coulomb friction compensation using natural admittance control. J Robot Syst 1(1):3–11
Surdilovic D (1996) Contact stability issues in position based impedance control: theory and experiments. In: Proceedings of the 1996 IEEE international conference on robotics and automation, pp 1675–1680
Duchaine V, Gosselin M (2007) General model of human-robot cooperation using a novel velocity based variable impedance control. In: Second joint EuroHaptics conference and symposium on haptic interfaces for virtual environment and teleoperator systems (WHC’07), pp 446–451
Du Z, Wang W, Yan Z, Dong W, Wang W (2017) Variable admittance control based on fuzzy reinforcement learning for minimally invasive surgery manipulator. Sensors 17(4):1–15
Dimeas F, Aspragathos N (2014) Fuzzy learning variable admittance control for human-robot cooperation. In: 2014 IEEE/RSJ international conference on intelligent robots and systems (IROS 2014), pp 4770–4775
Tsumugiwa T, Yokogawa R, Hara K (2001) Variable impedance control with regard to working process for man-machine cooperation-work system. In: Proceedings of the 2001 IEEE/RSJ international conference on intelligent robots and systems, pp 1564–1569
Lecours A, Mayer-st-onge B, Gosselin C (2012) Variable admittance control of a four-degree-of-freedom intelligent assist device. In: 2012 IEEE international conference on robotics and automation, pp 3903–3908
Okunev V, Nierhoff T, Hirche S (2012) Human-preference-based control design: adaptive robot admittance control for physical human-robot interaction. In: 2012 IEEE RO-MAN: The 21st IEEE international symposium on robot and human interactive communication, pp 443–448
Landi CT, Ferraguti F, Sabattini L, Secchi C, Bonfè M, Fantuzzi C (2017) Variable admittance control preventing undesired oscillating behaviors in physical human-robot interaction. In: 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 3611–3616
Colonnese N, Okamura A (2012) M-width: stability and accuracy of haptic rendering of virtual mass. In: Robotics: Science and Systems 2012
Colonnese N, Okamura AM (2015) M-width: stability, noise characterization, and accuracy of rendering virtual mass. Int J Robot Res 34(6):781–798
Keemink AQ, Van Der Kooij H, Stienen AH (2018) Admittance control for physical human–robot interaction. Int J Robot Res 37:1–24
Dimeas F, Aspragathos N (2016) Online stability in human-robot cooperation with admittance control. IEEE Trans Haptics 9(2):267–278
Landi CT, Ferraguti F, Sabattini L, Secchi C, Fantuzzi C (2017) Admittance control parameter adaptation for physical human-robot interaction. In: 2017 IEEE international conference on robotics and automation (ICRA), pp 2911–2916
Bascetta L, Ferretti G (2019) Ensuring safety in hands-on control through stability analysis of the human-robot interaction. Robot Comput Integr Manuf 57:197–212
Aydin Y, Tokatli O, Patoglu V, Basdogan C (2018) Stable physical human-robot interaction using fractional order admittance control. IEEE Trans Haptics 11(3):464–475
Sharkawy A-N, Koustoumpardis PN, Aspragathos N (2018) Variable admittance control for human–robot collaboration based on online neural network training. In: 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS 2018)
Haykin S (2009) Neural networks and learning machines, 3rd edn. Pearson, London
Rad AB, Bui TW, Li V, Wong YK (2000) A new on-line PID tuning method using neural networks. IFAC Proc Vol IFAC Work Digit Control Past Present Futur PID Control 33(4):443–448
Elbelady SA, Fawaz HE, Aziz AMA (2016) Online self tuning PID control using neural network for tracking control of a pneumatic cylinder using pulse width modulation piloted digital valves. Int J Mech Mechatron Eng IJMME-IJENS 16(3):123–136
Hernández-Alvarado R, García-Valdovinos LG, Salgado-Jiménez T, Gómez-Espinosa A, Fonseca-Navarro F (2016) Neural network-based self-tuning PID control for underwater vehicles. Sensors 16(9):1429
Singh A, Yang L, Levine S (2017) GPLAC: generalizing vision-based robotic skills using weakly labeled images. In: Proceedings of the IEEE international conference on computer vision, pp 5852–5861
Sharkawy AN, Koustoumpardis PN, Aspragathos N (2020) Human–robot collisions detection for safe human–robot interaction using one multi-input–output neural network. Soft Comput 24(9):6687–6719
Flash T, Hogan N (1985) The coordination of arm movements: an experimentally confirmed mathematical model. J Neurosci 5(7):1688–1703
Sharkawy A-N, Aspragathos N (2018) Human-robot collision detection based on neural networks. Int J Mech Eng Robot Res 7(2):150–157
Sharkawy A-N, Koustoumpardis PN, Aspragathos N (2018) Manipulator collision detection and collided link identification based on neural networks. In: Nikos A, Panagiotis K, Vassilis M (eds) Advances in service and industrial robotics. RAAD 2018. Mechanisms and machine science. Springer, Cham, pp 3–12
Lu S, Chung JH, Velinsky SA (2005) Human-robot collision detection and identification based on wrist and base force/torque sensors. In: Proceedings of the 2005 IEEE international conference on robotics and automation, pp 3796–3801
Eski I, Erkaya S, Savas S, Yildirim S (2011) Fault detection on robot manipulators using artificial neural networks. Robot Comput Integr Manuf 27(1):115–123
Ito M, Noda K, Hoshino Y, Tani J (2006) Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model. Neural Netw 19(3):323–337
Nielsen MA (2015) Neural networks and deep learning. Determination Press, Baltimore
Sassi MA, Otis MJD, Campeau-Lecours A (2017) Active stability observer using artificial neural network for intuitive physical human–robot interaction. Int J Adv Robot Syst 14(4):1–16
De Momi E, Kranendonk L, Valenti M, Enayati N, Ferrigno G (2016) A neural network-based approach for trajectory planning in robot-human handover tasks. Front Robot AI 3(June):1–10
Anderson D, McNeill G (1992) Artificial neural networks technology: a DACS state-of-the-art report. Utica, New York
Jeatrakul P, Wong KW (2009) Comparing the performance of different neural networks for binary classification problems. In: 2009 8th international symposium on natural language processing, SNLP’09, pp 111–115
Xie T, Yu H, Wilamowski B (2011) Comparison between traditional neural networks and radial basis function networks. In: Proceedings—ISIE 2011: 2011 IEEE international symposium on industrial electronics, pp 1194–1199
Kurban T, Beşdok E (2009) A comparison of RBF neural network training algorithms for inertial sensor based terrain classification. Sensors 9(8):6312–6329
Wang X, Ding Y, Shao H (1998) The improved radial basis function neural network and its application. Artif Life Robot 2(1):8–11
Chiang YM, Chang LC, Chang FJ (2004) Comparison of static-feedforward and dynamic-feedback neural networks for rainfall-runoff modeling. J Hydrol 290(3–4):297–311
Pascanu R, Mikolov T, Bengio Y (2013) On the difficulty of training recurrent neural networks. In: Proceedings of the 30th international conference on machine learning
Bengio Y, Simard P, Frasconi P (1994) Learning long-term dependencies with gradient descent is difficult. IEEE Trans Neural Netw 5(2):157–166
Rumelhart DE, Hinton GE, Williams RJ (1986) Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL (eds) Parallel distributed processing: exploration of the microstructure of cognition. MIT Press, Cambridge, pp 318–362
Kwon S, Kim J (2011) Real-time upper limb motion estimation from surface electromyography and joint angular velocities using an artificial neural network for human-machine cooperation. IEEE Trans Inf Technol Biomed 15(4):522–530
Mavroidis C, Flanz J, Dubowsky S, Drouet P, Goitein M (1998) High performance medical robot requirements and accuracy analysis. Robot Comput Integr Manuf 14(5):329–338
Mirsepassi T (1958) Graphical evaluation of a convolution integral. JSTOR, New York, pp 202–212
Acknowledgements
The authors would like to thank the volunteers who participated in the experiments, as well as Prof. Papadopoulos Polycarpos, Computational Fluid Dynamics Lab., Department of Engineering Sciences, University of Patras, for his help in checking the mathematical analysis. Abdel-Nasser Sharkawy is funded by the “Egyptian Cultural Affairs and Missions Sector” and the “Hellenic Ministry of Foreign Affairs Scholarship” for Ph.D. study in Greece.
Appendix
1.1 First part: derivation of the part \(\frac{\partial V }{\partial m }\)
Equation (22) is derived through the following steps.
Equation (21) can be rewritten as,
By taking the inverse Laplace transform of (A.1) and using the convolution theorem, we get
It is known from the convolution theorem that \(\mathop \int \nolimits_{0}^{t} F\left( u \right) G_{1} \left( {t - u} \right){\text{d}}u = \mathop \int \nolimits_{0}^{t} F\left( {t - u} \right) G_{1} \left( u \right){\text{d}}u\), so (A.2) becomes
Following the approach proposed by Mirsepassi [53], where a graphical evaluation of a convolution integral is presented, if the interval [0, t] is divided into small segments from 0 to \(n\Delta t\), where \(t = n\Delta t\), then (A.3) can be written as
Equation (A.4) is rewritten as follows,
where the part \(\mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} ue^{{\frac{ - c}{m}u}} {\text{d}}u\) is calculated using integration by parts as
By substituting (A.6) into (A.5), we get
The inverse Laplace transform of \(P_{1} \left( s \right)\) is calculated as
Equation (A.8) is calculated in the same way as (A.3), so
The part \(\mathop \int \limits_{{\left( {i - 1} \right)\Delta t}}^{i\Delta t} e^{{\frac{ - c}{m}u}} {\text{d}}u\) is calculated easily as
By substituting (A.10) into (A.9), we get
By substituting (A.7) and (A.11) into the inverse Laplace transform of (A.1), we get
The real-time implementation of the algorithm of (A.12), i.e., of (22), is illustrated in the second part.
1.2 Second part: algorithm for Eq. (22)
From Eq. (22) or (A.12), it is noted that after many loops n, some terms of the sum become very small (approximately zero) and can be neglected. Evaluating the full equation online with a very large n is impractical, since the calculation time would grow so long that the robot would stop, so the near-zero terms should be dropped. To identify the loop at which neglecting these terms keeps the error small, the equation is evaluated on real data for several fixed values of n. The equation is computed from i = 1 to i = n, the terms from i = n − z up to i = n that have very small values (approximately zero) are ignored, and the resulting error percentage is calculated. Figure 14 shows some results of this process for Eq. (22), which computes \(\frac{\partial V}{\partial m}\); as the diagrams show, the error percentage decreases with increasing n. In our experiments, this equation is computed with n = 20, where the error percentage is low (3.75%), in order to shorten the calculation time so that the robot moves smoothly.
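The truncation described above can be sketched as follows. This is schematic rather than the paper's exact (A.12): the actual kernel also contains \(u e^{-cu/m}\) terms from the integration by parts, and the segment integrals are evaluated in closed form in the paper, whereas here a simple rectangle rule and the bare exponential \(e^{-cu/m}\) stand in for them. The function name and the sample values are hypothetical.

```python
import math

def truncated_convolution(f_hist, m, c, dt, n_keep=20):
    """Approximate a convolution sum over [0, t] divided into segments of
    width dt, keeping only the n_keep most recent segments: because the
    kernel decays as e^(-c*u/m), older terms are approximately zero."""
    total = 0.0
    N = min(n_keep, len(f_hist))
    for i in range(1, N + 1):
        u = i * dt
        g = math.exp(-c * u / m)        # decaying kernel sample (illustrative)
        total += f_hist[-i] * g * dt    # rectangle rule on segment [(i-1)dt, i*dt]
    return total

# with c/m = 5 and dt = 0.05 s, 20 segments already capture most of the sum
hist = [1.0] * 400                      # hypothetical input history
approx = truncated_convolution(hist, m=4.0, c=20.0, dt=0.05, n_keep=20)
```

Keeping a fixed window of n = 20 terms bounds the per-cycle computation, which is the property the text exploits to run the update online without stalling the robot.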
Cite this article
Sharkawy, AN., Koustoumpardis, P.N. & Aspragathos, N. A neural network-based approach for variable admittance control in human–robot cooperation: online adjustment of the virtual inertia. Intel Serv Robotics 13, 495–519 (2020). https://doi.org/10.1007/s11370-020-00337-4