Evaluating Compositional Generalisation in VLMs and Diffusion Models

Open Access
Authors
  • Beth Pearson
  • Bilal Boulbarss
  • Michael Wray
  • M. Lewis
Publication date 2025
Host editors
  • Lea Frermann
  • Mark Stevenson
Book title The 14th Joint Conference on Lexical and Computational Semantics: Proceedings of the Conference (*SEM 2025)
Book subtitle StarSEM 2025: November 8-9, 2025
ISBN (electronic)
  • 9798891763401
Event 14th Joint Conference on Lexical and Computational Semantics
Pages (from-to) 122–133
Number of pages 12
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
A fundamental aspect of the semantics of natural language is that novel meanings can be formed from the composition of previously known parts. Vision-language models (VLMs) have made significant progress in recent years; however, there is evidence that they are unable to perform this kind of composition. For example, given an image of a red cube and a blue cylinder, a VLM such as CLIP is likely to incorrectly label the image as a red cylinder or a blue cube, indicating that it represents the image as a ‘bag-of-words’ and fails to capture compositional semantics. Diffusion models have recently gained significant attention for their impressive generative abilities, and zero-shot classifiers based on diffusion models have been shown to perform competitively with CLIP on certain compositional tasks. We explore whether the generative Diffusion Classifier has improved compositional generalisation abilities compared to discriminative models. We assess three models—Diffusion Classifier, CLIP, and ViLT—on their ability to bind objects with attributes and relations in both zero-shot learning (ZSL) and generalised zero-shot learning (GZSL) settings. Our results show that the Diffusion Classifier and ViLT perform well at concept binding tasks, but that all models struggle significantly with the relational GZSL task, underscoring the broader challenges VLMs face with relational reasoning. Analysis of CLIP embeddings suggests that the difficulty may stem from overly similar representations of relational concepts such as left and right. Code and dataset are available at [link redacted for anonymity].
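The embedding-similarity analysis mentioned at the end of the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the vectors below are stand-in placeholders, whereas the paper's analysis would use text embeddings produced by CLIP's text encoder for captions that differ only in a relational word.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors: in the actual analysis these would be CLIP text
# embeddings of captions differing only in the relational term,
# e.g. "a red cube to the left of a blue cylinder" vs. "... right ...".
emb_left = np.array([0.71, 0.70, 0.05])
emb_right = np.array([0.70, 0.71, 0.06])

# Near-identical embeddings mean the encoder can barely distinguish
# the two relations, which is the failure mode the abstract describes.
print(f"similarity: {cosine_similarity(emb_left, emb_right):.3f}")
```

A similarity close to 1.0 for such caption pairs would indicate that the text encoder collapses opposite relations onto nearly the same representation.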
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2025.starsem-1.9
Other links https://github.com/otmive/diffusion_classifier_clip