Biologically plausible reinforcement learning of deep cognitive processing

Open Access
Authors
  • A.R. van den Berg
Supervisors
Award date 01-07-2025
Number of pages 134
Organisations
  • Faculty of Science (FNWI) - Swammerdam Institute for Life Sciences (SILS)
Abstract
Can we enable biologically plausible neural networks to learn complex cognitive functions? In this thesis, I developed novel learning rules and training methods inspired by neuroscience, and investigated how well artificial neural networks trained with these techniques align with the behaviour and neural activity of animals performing the same tasks. In the second chapter, I designed a simplified model trained with a local learning rule to perform tasks that require the flexible use of memory, both within trials and across learning experiences through meta-learning. I demonstrated that these networks exhibit important characteristics also observed in animals trained on these tasks. In the third chapter, I extended this learning rule to deeper architectures to investigate how memories are represented and maintained across the different layers of the network. In the fourth chapter, I accelerated and improved the learning dynamics of networks trained with reinforcement learning so that they can scale to larger, more complex problems such as ImageNet. Finally, I summarise these findings, place them in a broader context, and delineate the challenges and opportunities that remain for the field. Overall, the chapters in this thesis contribute to the advancement of more flexible and scalable biologically plausible neural networks for deep cognitive control.
Document type PhD thesis
Language English