
Behavioral control task supervisor with memory based on reinforcement learning for human—multi-robot coordination systems

  • Research Article
  • Published:
Frontiers of Information Technology & Electronic Engineering

Abstract

In this study, a novel reinforcement learning task supervisor (RLTS) with memory in a behavioral control framework is proposed for human—multi-robot coordination systems (HMRCSs). Existing HMRCSs suffer from high decision-making time cost and large task tracking errors caused by repeated human intervention, which restricts the autonomy of multi-robot systems (MRSs). Moreover, existing task supervisors in the null-space-based behavioral control (NSBC) framework rely on many manually formulated priority-switching rules, which makes it difficult to realize an optimal behavioral priority adjustment strategy when multiple robots perform multiple tasks. The proposed RLTS with memory integrates a deep Q-network (DQN) and a long short-term memory (LSTM) knowledge base within the NSBC framework to achieve an optimal behavioral priority adjustment strategy in the presence of task conflict and to reduce the frequency of human intervention. Specifically, the proposed RLTS memorizes the human intervention history when the robot system is not confident in an emergency, and reloads that history when it encounters a situation that a human operator has previously resolved. Simulation results demonstrate the effectiveness of the proposed RLTS. Finally, an experiment with a group of mobile robots subject to external noise and disturbances validates the effectiveness of the proposed RLTS with memory in uncertain real-world environments.
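
To make the mechanism described above concrete, the following minimal Python sketch illustrates one way such a supervisor loop could operate: a Q-value-based policy selects a behavioral priority ordering for the NSBC framework, defers to a human operator when its confidence (here taken as the margin between the two best Q-values) is low, and memorizes that intervention so it can be reloaded the next time the same situation recurs. This is an illustrative sketch under stated assumptions, not the authors' implementation; the names PRIORITY_ORDERINGS, CONFIDENCE_THRESHOLD, q_values, and ask_human are hypothetical placeholders, and the LSTM knowledge base is abstracted as a dictionary keyed by the situation.

```python
"""Illustrative sketch of the decision loop described in the abstract.

A Q-value-based supervisor picks a behavioral priority ordering for the NSBC
framework; when its confidence is low it defers to a human operator and stores
the intervention in a memory keyed by the situation, replaying that choice the
next time the same situation recurs. All names and thresholds are hypothetical
placeholders, not taken from the paper.
"""
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import random

# A "situation" is abstracted here as a discretized tuple of task-conflict flags.
Situation = Tuple[int, ...]

CONFIDENCE_THRESHOLD = 0.3  # assumed margin between the top two Q-values
PRIORITY_ORDERINGS: List[Tuple[str, ...]] = [
    ("obstacle_avoidance", "formation_keeping", "move_to_goal"),
    ("formation_keeping", "obstacle_avoidance", "move_to_goal"),
    ("move_to_goal", "obstacle_avoidance", "formation_keeping"),
]


def q_values(situation: Situation) -> List[float]:
    """Placeholder for the trained DQN: one Q-value per candidate ordering."""
    rng = random.Random(hash(situation))  # deterministic per situation for the demo
    return [rng.uniform(0.0, 1.0) for _ in PRIORITY_ORDERINGS]


def ask_human(situation: Situation) -> int:
    """Placeholder for a human operator selecting a priority ordering index."""
    print(f"Human intervention requested for situation {situation}")
    return 0


@dataclass
class MemoryBasedSupervisor:
    # Stands in for the LSTM knowledge base: maps previously handled situations
    # to the priority ordering the human operator chose.
    intervention_memory: Dict[Situation, int] = field(default_factory=dict)

    def select_priority(self, situation: Situation) -> Tuple[str, ...]:
        # 1) Reload a remembered human decision if this situation was seen before.
        if situation in self.intervention_memory:
            return PRIORITY_ORDERINGS[self.intervention_memory[situation]]

        # 2) Otherwise query the (placeholder) DQN and measure its confidence
        #    as the margin between the best and second-best Q-values.
        q = q_values(situation)
        best = max(range(len(q)), key=q.__getitem__)
        margin = q[best] - sorted(q)[-2]
        if margin >= CONFIDENCE_THRESHOLD:
            return PRIORITY_ORDERINGS[best]

        # 3) Low confidence: defer to the human and memorize the intervention.
        choice = ask_human(situation)
        self.intervention_memory[situation] = choice
        return PRIORITY_ORDERINGS[choice]


if __name__ == "__main__":
    supervisor = MemoryBasedSupervisor()
    print(supervisor.select_priority((1, 0, 1)))  # may trigger a human query
    print(supervisor.select_priority((1, 0, 1)))  # replayed from memory if it did
```

In the paper, the knowledge base is an LSTM network and the action values come from a trained DQN; both are stubbed out here so the sketch stays self-contained and runnable.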



Author information

Contributions

Jie HUANG and Zhibin MO designed the research. Zhibin MO and Zhenyi ZHANG processed the data and drafted the paper. Jie HUANG and Yutao CHEN helped organize the paper. Jie HUANG, Zhibin MO, and Yutao CHEN revised and finalized the paper.

Corresponding author

Correspondence to Yutao Chen (陈宇韬).

Ethics declarations

Jie HUANG, Zhibin MO, Zhenyi ZHANG, and Yutao CHEN declare that they have no conflict of interest.

Additional information

Project supported by the National Natural Science Foundation of China (No. 61603094)

List of supplementary materials

Video S1 Behavioral control based on reinforcement learning for human—multi-robot coordination systems

About this article

Cite this article

Huang, J., Mo, Z., Zhang, Z. et al. Behavioral control task supervisor with memory based on reinforcement learning for human—multi-robot coordination systems. Front Inform Technol Electron Eng 23, 1174–1188 (2022). https://doi.org/10.1631/FITEE.2100280

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1631/FITEE.2100280

Key words

CLC number
