Computer and Modernization ›› 2023, Vol. 0 ›› Issue (06): 33-38. doi: 10.3969/j.issn.1006-2475.2023.06.006

• Algorithm Design and Analysis •

An Experience Replay Strategy Based on Mixed Samples

LAI Jian-bin, FENG Gang

  1. School of Computer Science, South China Normal University, Guangzhou 510635, Guangdong, China
  • Received: 2022-07-11  Revised: 2022-08-24  Online: 2023-06-28  Published: 2023-06-28
  • Corresponding author: FENG Gang (1967—), male, from Hengyang, Hunan, associate professor, Ph.D., research interests: deep reinforcement learning. E-mail: fenggang@scnu.edu.com
  • About the author: LAI Jian-bin (1997—), male, from Guangzhou, Guangdong, master's degree candidate, research interests: deep reinforcement learning. E-mail: laijianbin67@163.com

Abstract: Experience replay has become an important component of deep reinforcement learning algorithms: it not only accelerates convergence but also improves the agent's performance. Mainstream experience replay strategies speed up learning with uniform sampling, prioritized experience replay, expert experience replay, and similar methods. To further improve the utilization of experience samples in deep reinforcement learning, this paper proposes an experience replay strategy based on mixed samples (ER-MS). The strategy combines two mechanisms: immediate learning of the latest experience and review of successful experience. The agent learns immediately from the newest samples generated by its interaction with the environment, while an additional experience buffer stores the samples of successful episodes for replay. Experiments show that ER-MS combined with the DDPG algorithm achieves better results on OpenAI MuJoCo tasks.

Key words: experience replay, deep reinforcement learning, expert experience
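
The abstract only names the two mechanisms of ER-MS (immediate learning of the latest experience and replay of successful episodes kept in an extra buffer) and does not give the mixing ratio, buffer sizes, or success criterion. The following is therefore a minimal Python sketch under assumed values: the class name MixedSampleReplay, the success_ratio parameter, and the buffer capacities are illustrative choices, not the paper's.

```python
import random
from collections import deque

class MixedSampleReplay:
    """Sketch of an ER-MS-style replay: a main buffer for all transitions plus an
    extra buffer that keeps the transitions of successful episodes."""

    def __init__(self, capacity=100_000, success_capacity=20_000, success_ratio=0.25):
        self.main_buffer = deque(maxlen=capacity)             # every transition
        self.success_buffer = deque(maxlen=success_capacity)  # transitions from successful episodes
        self.success_ratio = success_ratio                    # share of each batch drawn from successes (assumed)
        self._episode = []                                    # transitions of the episode in progress

    def store(self, transition, episode_done, episode_success=False):
        """Add one (s, a, r, s', done) tuple; when an episode ends and is judged
        successful, copy its transitions into the success buffer."""
        self.main_buffer.append(transition)
        self._episode.append(transition)
        if episode_done:
            if episode_success:
                self.success_buffer.extend(self._episode)
            self._episode = []

    def sample(self, batch_size, latest_transition=None):
        """Return a mixed batch: samples from successful episodes, uniform samples
        from the main buffer, and optionally the newest transition so that it is
        learned immediately."""
        n_success = min(int(batch_size * self.success_ratio), len(self.success_buffer))
        batch = random.sample(self.success_buffer, n_success) if n_success else []
        n_main = min(batch_size - len(batch), len(self.main_buffer))
        batch += random.sample(self.main_buffer, n_main) if n_main else []
        if latest_transition is not None:
            batch.append(latest_transition)   # immediate learning of the latest sample
        return batch
```

In a DDPG-style training loop one would call store() after every environment step and pass the newest transition to sample() so that each update sees it at least once; how the paper actually weights the two buffers is not stated in the abstract.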
