EMO: Episodic Memory Optimization for Few-Shot Meta-Learning

Open Access
Authors Yingjun Du, Jiayi Shen, Xiantong Zhen, Cees G. M. Snoek
Publication date 2023
Journal Proceedings of Machine Learning Research
Event 2nd Conference on Lifelong Learning Agents
Volume 232
Number of pages 20
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Few-shot meta-learning presents a challenge for gradient descent optimization due to the limited number of training samples per task. To address this issue, we propose episodic memory optimization for meta-learning, which we call EMO, inspired by the human ability to recall past learning experiences from the brain's memory. EMO retains the gradient history of previously experienced tasks in external memory, enabling few-shot learning in a memory-augmented way. By learning to retain and recall the learning process of past training tasks, EMO nudges parameter updates in the right direction, even when the gradients provided by a limited number of examples are uninformative. We prove theoretically that our algorithm converges for smooth, strongly convex objectives. EMO is generic, flexible, and model-agnostic, making it a simple plug-and-play optimizer that can be seamlessly embedded into existing optimization-based few-shot meta-learning approaches. Empirical results show that EMO scales well across most few-shot classification benchmarks and improves the performance of optimization-based meta-learning methods, resulting in accelerated convergence.
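For intuition only, the sketch below illustrates the general idea the abstract describes (store the gradients of past tasks in an external memory and recall them to steady noisy few-shot updates); it is not the authors' EMO algorithm. The class name, the plain gradient averaging, and the mixing weight beta are all illustrative assumptions.

```python
import numpy as np

class EpisodicGradientMemory:
    """Hypothetical sketch of an external memory of past-task gradients.
    Illustrates the idea in the abstract, not the paper's actual method."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.gradients = []  # gradients recorded from past tasks

    def store(self, grad):
        # Keep a bounded history of past-task gradients (FIFO eviction).
        if len(self.gradients) >= self.capacity:
            self.gradients.pop(0)
        self.gradients.append(np.copy(grad))

    def recall(self):
        # Aggregate stored gradients; plain averaging is an assumption here.
        if not self.gradients:
            return None
        return np.mean(self.gradients, axis=0)


def memory_augmented_step(params, grad, memory, lr=0.01, beta=0.5):
    """One update blending the current few-shot gradient with recalled
    past-task gradients, so a noisy gradient is nudged by memory."""
    recalled = memory.recall()
    direction = grad if recalled is None else beta * grad + (1.0 - beta) * recalled
    memory.store(grad)
    return params - lr * direction


# Usage: a stream of tasks whose few-shot gradients share a common
# descent direction but are individually very noisy.
rng = np.random.default_rng(0)
params = np.zeros(4)
memory = EpisodicGradientMemory(capacity=50)
shared_direction = np.ones(4)
for task in range(20):
    noisy_grad = shared_direction + rng.normal(scale=2.0, size=4)
    params = memory_augmented_step(params, noisy_grad, memory)
print(params)  # drifts along -shared_direction despite per-task noise
```

Because the recalled average accumulates the direction shared across tasks while the per-task noise cancels out, the blended update stays informative even when any single few-shot gradient is not, which is the intuition the abstract appeals to.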
Document type Article
Note Proceedings of The 2nd Conference on Lifelong Learning Agents, 22-25 August 2023, McGill University, Montréal, Québec, Canada
Language English
Published at https://doi.org/10.48550/arXiv.2306.05189
Published at https://proceedings.mlr.press/v232/du23a.html