Compositional Concept Generalization with Variational Quantum Circuits

Authors
  • Hala Hawashin
  • Mina Abbaszadeh
  • Nicholas Joseph
  • Beth Pearson
  • M. Lewis
  • Mehrnoosh Sadrzadeh
Publication date 2025
Book title 2025 IEEE International Conference on Quantum Artificial Intelligence (QAI)
Book subtitle QAI 2025: 2-5 November 2025, Napoli, Italy: proceedings
ISBN
  • 9798331569877
ISBN (electronic)
  • 9798331569860
Event 2025 IEEE International Conference on Quantum Artificial Intelligence
Pages (from-to) 34-40
Number of pages 7
Publisher Los Alamitos, California: IEEE Computer Society
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Compositional generalization is a key facet of human cognition but is lacking in current AI tools such as vision-language models. Previous work examined whether a compositional tensor-based sentence semantics can overcome this challenge, but reported negative results. We conjecture that the increased training efficiency of quantum models will improve performance on these tasks. We interpret the representations of compositional tensor-based models in Hilbert spaces and train Variational Quantum Circuits to learn these representations on an image captioning task that requires compositional generalization. We use two image encoding techniques: a multi-hot encoding (MHE) of binary image vectors and an angle/amplitude encoding of image vectors taken from the vision-language model CLIP. We achieve good proof-of-concept results with noisy MHE encodings. Performance on CLIP image vectors is more mixed, but still outperforms classical compositional models.
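To make the abstract's pipeline concrete, the following is a minimal NumPy sketch of angle encoding followed by one variational circuit layer, simulated directly with state vectors. It is an illustration of the general technique only, not the authors' implementation: the feature values, parameter values, circuit layout (per-qubit RY rotations plus a ring of CNOTs), and the Z-expectation readout are all illustrative assumptions.

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation matrix.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def angle_encode(features):
    # Angle encoding: each feature becomes an RY rotation angle
    # applied to |0>, using one qubit per feature.
    state = np.array([1.0])
    for x in features:
        state = np.kron(state, ry(x) @ np.array([1.0, 0.0]))
    return state

def cnot(n, ctrl, tgt):
    # Full CNOT matrix on an n-qubit register (qubit 0 = most significant bit).
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[ctrl]:
            bits[tgt] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1.0
    return U

def vqc_layer(state, params):
    # One variational layer: a trainable RY on each qubit,
    # then a ring of CNOTs for entanglement.
    n = len(params)
    U = np.array([[1.0]])
    for theta in params:
        U = np.kron(U, ry(theta))
    state = U @ state
    for q in range(n):
        state = cnot(n, q, (q + 1) % n) @ state
    return state

def expect_z0(state, n):
    # Expectation of Pauli-Z on qubit 0: probability-weighted
    # sign of the leading bit of each basis state.
    probs = np.abs(state) ** 2
    signs = np.array([1.0 if (i >> (n - 1)) & 1 == 0 else -1.0
                      for i in range(2 ** n)])
    return float(probs @ signs)

# Encode a 3-component image feature vector and evaluate the circuit.
features = np.array([0.3, 1.2, 2.0])   # e.g. rescaled CLIP components (illustrative)
params = np.array([0.1, 0.5, -0.4])    # trainable circuit parameters
state = vqc_layer(angle_encode(features), params)
print(expect_z0(state, 3))
```

In training, the scalar expectation value would be compared to a label and `params` updated by an optimizer; a multi-hot encoding would instead map each active attribute to a fixed rotation (e.g. angle π for a 1-bit, 0 for a 0-bit) before the same variational layers.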
Document type Conference contribution
Language English
Published at https://doi.org/10.1109/QAI63978.2025.00013
Downloads
Compositional_Concept_Generalization_with_Variational_Quantum_Circuits (Embargo up to 2026-07-23) (Final published version)