Adaptive obstacle avoidance in path planning of collaborative robots for dynamic manufacturing

Journal of Intelligent Manufacturing

Abstract

Gaussian Mixture Modelling (GMM)/Gaussian Mixture Regression (GMR) is a prominent learning-from-demonstration technique for human–robot collaboration. However, GMM/GMR is ineffective in dynamic manufacturing, where randomly appearing obstacles raise safety concerns. In this paper, an improved GMM/GMR-based path-planning approach for collaborative robots (cobots) is designed to achieve adaptive obstacle avoidance in dynamic manufacturing. The approach is realised through three innovative steps: (i) new criteria for assessing the quality of cobot paths produced by GMM/GMR are defined; (ii) based on these criteria, the demonstrations and parameters of GMM/GMR are adaptively amended to eliminate collisions and safety risks between the cobot and obstacles; (iii) a fruit fly optimisation algorithm is incorporated into GMM/GMR to improve computational efficiency. Case studies of varying complexity are used to validate the approach in terms of feature retention from demonstrations, regression-path smoothness and obstacle-avoidance effectiveness. Results of the case studies and benchmarking analyses show that the approach is robust and efficient for dynamic manufacturing applications.
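Although the full method is behind the paywall, the GMM/GMR backbone the abstract refers to can be sketched: fit a joint Gaussian mixture over time-stamped demonstration points with EM, then condition on time to regress a mean path. The following Python sketch is a minimal, hypothetical rendering using scikit-learn's GaussianMixture and standard Gaussian conditioning; the component count K, the (t, x, y, z) encoding and all function names are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_gmm(demos, K=5):
    """Fit a K-component joint GMM to stacked (t, x, y, z) demonstration points."""
    data = np.vstack(demos)  # each demo: (T, 4) array of [t, x, y, z]
    return GaussianMixture(n_components=K, covariance_type="full",
                           random_state=0).fit(data)

def gmr(gmm, ts):
    """Regress a mean 3-D path by conditioning the joint GMM on time (GMR)."""
    path = []
    for t in ts:
        # Responsibility of each Gaussian cluster for this time step
        h = np.array([w * norm.pdf(t, m[0], np.sqrt(c[0, 0]))
                      for w, m, c in zip(gmm.weights_, gmm.means_,
                                         gmm.covariances_)])
        h /= h.sum()
        # Weighted sum of the clusters' conditional means:
        # mu_xyz + Sigma_xyz,t / Sigma_t,t * (t - mu_t)
        point = sum(hi * (m[1:] + c[1:, 0] / c[0, 0] * (t - m[0]))
                    for hi, m, c in zip(h, gmm.means_, gmm.covariances_))
        path.append(point)
    return np.array(path)

# Hypothetical usage: several recorded demos, each a (100, 3) xyz array
# demos = [np.column_stack([np.linspace(0, 1, 100), xyz]) for xyz in recorded]
# path = gmr(fit_gmm(demos, K=6), np.linspace(0, 1, 200))
```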




Abbreviations

GMM: Gaussian mixture modelling

GMR: Gaussian mixture regression

FFO: Fruit fly optimisation

EM: Expectation-maximisation

LfD: Learning from demonstrations

\(K\): The total number of Gaussian clusters

\(\boldsymbol{\mu}_i\): The mean of the i-th Gaussian cluster

\(\boldsymbol{\sigma}_i\): The covariance of the i-th Gaussian cluster

\(N_i(\boldsymbol{p}_j \mid \boldsymbol{\mu}_i, \boldsymbol{\sigma}_i)\): The probability density function of the i-th Gaussian cluster

\(LL\): The log-likelihood of the demonstrations

\(\boldsymbol{p}_j\): A constituent point of the demonstrations

\(\boldsymbol{a}_{i1}\): Eigenvector/direction of the major axis of the i-th Gaussian cluster

\(\boldsymbol{a}_{i2}\): Eigenvector/direction of the minor axis of the i-th Gaussian cluster

\(\sqrt{\lambda_{i1}}\): Length of the major axis of the i-th Gaussian cluster

\(\sqrt{\lambda_{i2}}\): Length of the minor axis of the i-th Gaussian cluster

\(D\): The dimension of the demonstration data

\(t\): Time-step

\(x_j, y_j, z_j\): The coordinates of the j-th point in the original demonstrations

\(x'_j, y'_j, z'_j\): The coordinates of the j-th point in the Gaussian noise-enhanced demonstrations

\(\mu_{x\text{-}noise}, \mu_{y\text{-}noise}, \mu_{z\text{-}noise}\): Means of the Gaussian noise in the x-, y- and z-dimensions

\(\sigma_{x\text{-}noise}, \sigma_{y\text{-}noise}, \sigma_{z\text{-}noise}\): Variances of the Gaussian noise in the x-, y- and z-dimensions

\(r_x, r_y, r_z\): Random values serving as the noise in the x-, y- and z-dimensions

\(K(i)\): The curvature at the i-th point of a path

\(\widetilde{K_{path}}\): The overall average curvature of a path (see the sketch after this list)

\(N\): The number of demonstrations

\(\boldsymbol{var}_{d(y)}[t]\): The variance of the t-th points of the demonstrations in the y-dimension

\(\boldsymbol{var}_{d(x)}, \boldsymbol{var}_{d(y)}, \boldsymbol{var}_{d(z)}\): Variances of all the points in the demonstrations in the x-, y- and z-dimensions

\(\boldsymbol{var}_{d\_r(y)}[t]\): The variance of the t-th points of the demonstrations and the regression path in the y-dimension

\(\boldsymbol{var}_{d\_r(x)}, \boldsymbol{var}_{d\_r(y)}, \boldsymbol{var}_{d\_r(z)}\): Variances of all the points of the demonstrations and the regression path in the x-, y- and z-dimensions

\(x''_j, y''_j, z''_j\): The new coordinates of the j-th point of the noise-enhanced demonstrations after the translation process

\(\boldsymbol{U}\): A unit vector of the points' moving direction

\(d_{safety}\): The pre-set safety distance

\(RN\): A random value following a standard Gaussian distribution

\(R_w\): The width of the enveloping area of the obstacle

\(dis\): The minimum distance between the new regression path and the dangerous points of the obstacle

\(SF\): The pre-set safety threshold

\(S_i, \Delta_i\): The flying distance of an individual fruit fly
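To make the noise-enhancement symbols \(x'_j, y'_j, z'_j\) and the smoothness criterion \(K(i)\), \(\widetilde{K_{path}}\) above concrete, here is a minimal Python sketch. It assumes per-dimension Gaussian noise added to each demonstration point and the standard finite-difference curvature formula \(|v \times a| / |v|^3\); it is an illustrative reading of these definitions, not the authors' paywalled implementation.

```python
import numpy as np

def add_gaussian_noise(demo, mu=(0.0, 0.0, 0.0),
                       sigma=(1e-4, 1e-4, 1e-4), seed=None):
    """Noise-enhance a demonstration: x'_j = x_j + r_x, etc., with the
    noise r drawn independently per dimension from N(mu_noise, sigma_noise).
    `demo` is an (N, 3) array; `sigma` is given as a variance."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(mu, np.sqrt(sigma), size=demo.shape)
    return demo + noise

def path_curvature(path):
    """Discrete curvature K(i) at each point of an (N, 3) path, using
    |v x a| / |v|^3 with finite-difference velocity and acceleration."""
    v = np.gradient(path, axis=0)                 # velocity estimate
    a = np.gradient(v, axis=0)                    # acceleration estimate
    num = np.linalg.norm(np.cross(v, a), axis=1)
    den = np.linalg.norm(v, axis=1) ** 3 + 1e-12  # guard against zero speed
    return num / den

def mean_path_curvature(path):
    """Overall average curvature of a path (the smoothness criterion above)."""
    return path_curvature(path).mean()
```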


Acknowledgements

This research was sponsored by the National Natural Science Foundation of China (Project No. 51975444) and partially funded by UK industrial and research partners (Unipart Powertrain Application Ltd. and the Institute of Digital Engineering).

Author information

Corresponding author

Correspondence to Weidong Li.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Hu, Y., Wang, Y., Hu, K. et al. Adaptive obstacle avoidance in path planning of collaborative robots for dynamic manufacturing. J Intell Manuf 34, 789–807 (2023). https://doi.org/10.1007/s10845-021-01825-9

