Robotic constant-force grinding control with a press-and-release model and model-based reinforcement learning

  • Tie Zhang (corresponding author)
  • Meng Xiao
  • Yanbiao Zou
  • Jiadong Xiao


When a workpiece is ground by a robot, the grinding force signal is prone to overshoot during the impact stage and to instability during the processing stage. This paper proposes a force control algorithm for the impact and processing stages of robotic constant-force grinding based on a press-and-release model and model-based reinforcement learning. For the impact stage, a press-and-release model that compensates for robot deformation is established; it indirectly controls the magnitude of the force signal by regulating the ratio of the pressing time to the release time, thereby preventing overshoot. For the processing stage, model-based reinforcement learning is applied to obtain the optimal processing parameters quickly: through iterative experiments, the learned model is updated and the search for optimal processing parameters continues until the normal force reaches the desired state. Experimental results show that force control based on the proposed algorithm converges quickly; in the impact and processing stages, the normal force converges to the set range after 4 and 3 iterations, respectively. The normal force is also more stable than under position control, and the surface roughness Ra of the workpiece is reduced by 30.68%.
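The processing-stage loop the abstract describes (run a trial, update the model from the result, then search the model for better parameters until the normal force reaches the desired state) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the target force, tolerance, the `run_trial` plant, and the local linear model are all assumptions made for the example.

```python
TARGET_FORCE = 20.0   # desired normal force in N (illustrative value)
TOLERANCE = 0.5       # acceptable band around the target, in N

def run_trial(feed_rate):
    """Stand-in for one grinding trial: returns the measured normal force.
    In reality this would command the robot and read a force sensor;
    here a hidden linear plant plays that role."""
    return 4.0 * feed_rate + 2.0

def fit_model(p, f):
    """Model-update step: fit a local linear model force ~= a * param + b
    through the two most recent trials."""
    a = (f[-1] - f[-2]) / (p[-1] - p[-2])
    return a, f[-1] - a * p[-1]

# Two initial exploratory trials to seed the model.
params = [1.0, 6.0]
forces = [run_trial(p) for p in params]

for _ in range(10):  # iterative experiments
    a, b = fit_model(params, forces)
    candidate = (TARGET_FORCE - b) / a  # parameter the model predicts hits the target
    measured = run_trial(candidate)
    params.append(candidate)
    forces.append(measured)
    if abs(measured - TARGET_FORCE) <= TOLERANCE:
        break  # normal force has reached the desired state
```

The loop alternates between learning (refitting the model from trial data) and acting (running the parameter the model recommends), which is the defining pattern of model-based reinforcement learning; the paper's version of course uses its own model class and search procedure.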


Industrial robot · Grinding · Force control · Reinforcement learning · Model-based


Funding information

This research was funded by the National Science and Technology Major Project of China (2015ZX04005006), the Science and Technology Planning Project of Guangdong Province, China (2014B090921004, 2015B010918002), and the Science and Technology Planning Project of Zhongshan City (2016F2FC0006).



Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  • Tie Zhang (corresponding author)¹
  • Meng Xiao¹
  • Yanbiao Zou¹
  • Jiadong Xiao¹

  1. School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, People’s Republic of China
