LieGG: Studying Learned Lie Group Generators

Open Access
Authors
Publication date 2023
Host editors
  • S. Koyejo
  • S. Mohamed
  • A. Agarwal
  • D. Belgrave
  • K. Cho
  • A. Oh
Book title 36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Book subtitle New Orleans, Louisiana, USA, 28 November-9 December 2022
ISBN
  • 9781713871088
ISBN (electronic)
  • 9781713873129
Series Advances in Neural Information Processing Systems
Event Thirty-sixth Conference on Neural Information Processing Systems
Volume | Issue number 33
Pages (from-to) 25212-25223
Publisher San Diego, CA: Neural Information Processing Systems Foundation
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Symmetries built into a neural network have proven highly beneficial for a wide range of tasks, as they remove the need to learn them from data. We depart from the position that when symmetries are not built into a model a priori, it is advantageous for robust networks to learn symmetries directly from the data to fit a task function. In this paper, we present a method to extract symmetries learned by a neural network and to evaluate the degree to which the network is invariant to them. With our method, we can explicitly retrieve learned invariances in the form of the generators of the corresponding Lie groups, without prior knowledge of symmetries in the data. We use the proposed method to study how symmetry properties depend on a neural network's parameterization and configuration. We find that a network's ability to learn symmetries generalizes over a range of architectures; however, the quality of the learned symmetries depends on the depth and the number of parameters.
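To illustrate the general idea described in the abstract, the sketch below shows one way learned Lie-group generators could be recovered from a trained scalar-output network: a matrix A is an infinitesimal symmetry of f if grad_x f(x)^T A x vanishes on the data, so candidate generators can be read off from the near-nullspace of a matrix built from per-sample constraints. This is a minimal, hedged sketch under these assumptions, not the authors' implementation; the function name candidate_generators and all variable names are illustrative.

    # Illustrative sketch (not the paper's code): recover candidate Lie-group
    # generators A of a scalar-valued network f from the approximate nullspace
    # of stacked constraints  vec(grad_x f(x) x^T)^T vec(A) = 0.
    import torch

    def candidate_generators(f, xs, num_generators=1):
        """xs: (N, n) batch of inputs (ideally N >= n*n); f: (N, n) -> (N,) scalars.

        Returns the right singular vectors of the constraint matrix with the
        smallest singular values, reshaped to n x n generator candidates.
        Smaller singular values indicate a stronger learned invariance.
        """
        xs = xs.detach().requires_grad_(True)
        out = f(xs).sum()                                   # per-sample gradients via a summed scalar
        grads = torch.autograd.grad(out, xs)[0]             # (N, n)
        constraints = torch.einsum('bi,bj->bij', grads, xs) # (N, n, n) outer products grad f(x) x^T
        E = constraints.reshape(xs.shape[0], -1)            # (N, n*n) constraint matrix
        _, svals, Vh = torch.linalg.svd(E)
        n = xs.shape[1]
        gens = Vh[-num_generators:].reshape(num_generators, n, n)
        return gens, svals[-num_generators:]

    # Toy check: the radial function ||x|| is rotation invariant in 2D, so the
    # recovered generator should be (a multiple of) [[0, 1], [-1, 0]].
    if __name__ == "__main__":
        f = lambda x: x.norm(dim=-1)
        xs = torch.randn(256, 2)
        gens, svals = candidate_generators(f, xs)
        print(gens[0], svals)

The singular values returned alongside the generators play the role of an invariance measure: a value near zero means the corresponding generator direction leaves the network's output (approximately) unchanged over the sampled data.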
Document type Conference contribution
Note With supplemental file
Language English
Published at https://papers.nips.cc/paper_files/paper/2022/hash/a120382cf4e2e06d94d7ae7ac96fbe25-Abstract-Conference.html
Other links https://www.proceedings.com/68431.html