Discrete Latent Structure in Neural Networks

Open Access
Authors
  • Vlad Niculae
  • Caio F. Corro
  • Nikita Nangia
  • Tsvetomila Mihaylova
  • André F.T. Martins
Publication date 02-06-2025
Journal Foundations and Trends in Signal Processing
Volume 19, Issue 2
Pages 99-211
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Many types of data from fields including natural language processing, computer vision, and bioinformatics are well represented by discrete, compositional structures such as trees, sequences, or matchings. Latent structure models are a powerful tool for learning to extract such representations, offering a way to incorporate structural bias, discover insight about the data, and interpret decisions. However, effective training is challenging, as neural networks are typically designed for continuous computation. This text explores three broad strategies for learning with discrete latent structure: continuous relaxation, surrogate gradients, and probabilistic estimation. Our presentation relies on consistent notations for a wide range of models. As such, we reveal many new connections between latent structure learning strategies, showing how most consist of the same small set of fundamental building blocks, but use them differently, leading to substantially different applicability and properties.
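As a rough illustration (not drawn from the monograph itself), the sketch below contrasts how the three named strategies each produce a gradient for the same discrete choice. PyTorch, the 5-class categorical latent variable, and the quadratic downstream loss are all assumptions made here for the example.

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    num_classes = 5
    scores = torch.randn(num_classes, requires_grad=True)  # unnormalized scores

    def downstream_loss(z):
        # Toy objective: prefer a one-hot vector concentrated on index 2.
        target = F.one_hot(torch.tensor(2), num_classes).float()
        return ((z - target) ** 2).sum()

    # 1) Continuous relaxation: replace argmax with softmax, differentiate exactly.
    z_soft = F.softmax(scores, dim=-1)
    g_relax = torch.autograd.grad(downstream_loss(z_soft), scores)[0]

    # 2) Surrogate gradient (straight-through): discrete one-hot on the forward
    #    pass, softmax Jacobian on the backward pass.
    p = F.softmax(scores, dim=-1)
    z_hard = F.one_hot(p.argmax(), num_classes).float()
    z_st = z_hard + p - p.detach()   # value equals z_hard; gradient flows via p
    g_st = torch.autograd.grad(downstream_loss(z_st), scores)[0]

    # 3) Probabilistic estimation (score function / REINFORCE): sample a discrete
    #    structure, then weight the log-probability gradient by the observed loss.
    dist = torch.distributions.OneHotCategorical(logits=scores)
    z = dist.sample()
    surrogate = downstream_loss(z).detach() * dist.log_prob(z)
    g_sf = torch.autograd.grad(surrogate, scores)[0]

    print(g_relax, g_st, g_sf, sep="\n")

All three routes yield a gradient with respect to the same score vector: the relaxation is exact for the relaxed objective but never commits to a discrete structure, the straight-through estimator is biased but keeps the forward pass discrete, and the score-function estimator is unbiased but typically high-variance.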
Document type Review article
Note Publisher Copyright: ©2025 V. Niculae et al.
Language English
Published at
  • https://doi.org/10.1561/2000000134 (publisher)
  • https://doi.org/10.48550/arXiv.2301.07473 (arXiv preprint)
Other links https://www.scopus.com/pages/publications/105007165623