A Performance-Based Start State Curriculum Framework for Reinforcement Learning
| Authors | |
|---|---|
| Publication date | 2020 |
| Book title | AAMAS'20 |
| Book subtitle | Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems : May 9-13, 2020, Auckland, New Zealand |
| ISBN (electronic) | |
| Event | 19th International Conference on Autonomous Agents and MultiAgent Systems |
| Pages (from-to) | 1503-1511 |
| Publisher | Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems |
| Organisations | |
| Abstract | Sparse reward problems present a challenge for reinforcement learning (RL) agents. Previous work has shown that choosing start states according to a curriculum can significantly improve learning performance. We observe that many existing curriculum generation algorithms rely on two key components: performance measure estimation and a start state selection policy. We therefore propose a unifying framework for performance-based start state curricula in RL, which makes it possible to analyze and compare how these two key components influence learning performance. Furthermore, we introduce a new start state selection policy based on spatial gradients of the performance measure. We conduct extensive empirical evaluations to compare performance-based start state curricula and to investigate the influence of performance measure model choice and estimation. Benchmarking on difficult robotic navigation tasks and a high-dimensional robotic manipulation task, we demonstrate state-of-the-art performance of our novel spatial gradient curriculum. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://dl.acm.org/doi/10.5555/3398761.3398934 ; http://ifaamas.org/Proceedings/aamas2020/pdfs/p1503.pdf |
| Other links | http://www.ifaamas.org/Proceedings/aamas2020/ |
| Downloads | p1503 (Final published version) |
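The abstract's core idea — selecting start states where the spatial gradient of an estimated performance measure is large (i.e., on the frontier between regions the agent already masters and regions it does not) — can be illustrated with a minimal sketch. This is an illustrative reconstruction from the abstract alone, not the authors' algorithm: the tabular performance estimates, the neighborhood-difference gradient approximation, and the greedy tie-broken selection are all assumptions made for the example.

```python
import random

def select_start_state(perf, neighbors):
    """Pick a start state with maximal spatial performance gradient.

    perf:      dict mapping each state to an estimated performance
               measure (e.g., success rate from that start state).
    neighbors: dict mapping each state to its spatially adjacent states.
    """
    def gradient(s):
        # Approximate the spatial gradient at s by the largest absolute
        # performance difference to any neighboring state.
        return max(abs(perf[s] - perf[n]) for n in neighbors[s])

    scores = {s: gradient(s) for s in perf}
    best = max(scores.values())
    # Break ties uniformly at random among maximal-gradient states.
    return random.choice([s for s, g in scores.items() if g == best])

# Toy example: five states on a line; the agent succeeds from states
# near the goal (high perf) and fails from distant ones (low perf).
perf = {0: 1.0, 1: 0.875, 2: 0.125, 3: 0.0, 4: 0.0}
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

start = select_start_state(perf, neighbors)  # frontier state: 1 or 2
```

In this toy setting the sharpest performance drop lies between states 1 and 2, so the curriculum proposes starts at that frontier rather than from already-solved or hopelessly distant states.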
