A Comparison of Four Neural Networks Algorithms on Locomotion Intention Recognition of Lower Limb Exoskeleton Based on Multi-source Information

  • Research Article
  • Published in: Journal of Bionic Engineering

Abstract

Lower Limb Exoskeletons (LLEs) are receiving increasing attention for supporting activities of daily living. In such active systems, an intelligent controller may be indispensable. In this paper, we propose a locomotion intention recognition system based on time-series data sets derived from human motion signals. Comprising input data and Deep Learning (DL) algorithms, the framework detects and predicts users' movement patterns, making it possible to recognize locomotion modes in advance so that the LLE can provide smooth and seamless assistance. Pre-processed motion data from eight subjects were used as input to classify four scenes: Standing/Walking on Level Ground (S/WOLG), Up the Stairs (US), Down the Stairs (DS), and Walking on Grass (WOG). Among the four algorithms compared (CNN, CNN-LSTM, ResNet, and ResNet-Att), ResNet performed best, with evaluation indicators approaching 100%. The proposed locomotion intention system is expected to significantly improve the safety and effectiveness of LLEs owing to its high accuracy and predictive performance.
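The pipeline described above feeds fixed-length windows of multi-channel motion signals to the classifiers. The following is a minimal sketch of that segmentation step, not the paper's implementation: the `(T, C)` signal layout, the window and stride lengths, and the majority-vote labeling are all illustrative assumptions.

```python
import numpy as np

def segment_windows(signals, labels, window=128, stride=64):
    """Slice synchronized multi-channel motion signals into fixed-length windows.

    signals: (T, C) array, T time steps over C sensor channels
    labels:  (T,) integer locomotion-mode label per time step
             (e.g. 0=S/WOLG, 1=US, 2=DS, 3=WOG)
    Returns X of shape (N, window, C) and y of shape (N,), where each
    window's label is the majority vote of its per-sample labels.
    """
    X, y = [], []
    for start in range(0, len(signals) - window + 1, stride):
        seg = signals[start:start + window]
        lab = labels[start:start + window]
        X.append(seg)
        # Majority vote: the most frequent mode within the window.
        y.append(np.bincount(lab).argmax())
    return np.stack(X), np.array(y)
```

Windows produced this way can then be passed to any of the compared networks (CNN, CNN-LSTM, ResNet, ResNet-Att) as `(window, channels)` inputs; overlapping strides are a common choice for enlarging the training set.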


Data Availability

All data and materials related to the study can be obtained by contacting the author at 213332822@st.usst.edu.cn.


Acknowledgements

The authors gratefully acknowledge the financial support of the Shanghai Science and Technology Innovation Action Plan (19DZ2203600).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Duojin Wang.

Ethics declarations

Conflict of interest

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wang, D., Gu, X. & Yu, H. A Comparison of Four Neural Networks Algorithms on Locomotion Intention Recognition of Lower Limb Exoskeleton Based on Multi-source Information. J Bionic Eng 21, 224–235 (2024). https://doi.org/10.1007/s42235-023-00435-w
