IPO: Interpretable Prompt Optimization for Vision-Language Models

Open Access
Authors
Publication date 2025
Host editors
  • A. Globerson
  • L. Mackey
  • D. Belgrave
  • A. Fan
  • U. Paquet
  • J. Tomczak
  • C. Zhang
Book title 38th Conference on Neural Information Processing Systems (NeurIPS 2024)
Book subtitle 10-15 December 2024, Vancouver, Canada
ISBN (electronic)
  • 9798331314385
Series Advances in Neural Information Processing Systems
Event The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)
Pages (from-to) 126725-126766
Number of pages 42
Publisher Neural Information Processing Systems Foundation
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Pre-trained vision-language models like CLIP adapt remarkably well to various downstream tasks. Nonetheless, their performance depends heavily on the specificity of the input text prompts, which requires skillful prompt template engineering. To avoid such manual engineering, current approaches to prompt optimization learn the prompts through gradient descent, treating them as adjustable parameters. However, these methods tend to overfit the base classes seen during training and produce prompts that are no longer understandable by humans. This paper introduces a simple but interpretable prompt optimizer (IPO) that uses large language models (LLMs) to generate textual prompts dynamically. We introduce a Prompt Optimization Prompt that not only guides LLMs in creating effective prompts but also stores past prompts with their performance metrics, providing rich in-context information. Additionally, we incorporate a large multimodal model (LMM) to condition on visual content by generating image descriptions, which enhance the interaction between the textual and visual modalities. This allows for the creation of dataset-specific prompts that improve generalization performance while maintaining human comprehension. Extensive testing across 11 datasets shows that IPO not only improves the accuracy of existing gradient-descent-based prompt learning methods but also considerably enhances the interpretability of the generated prompts. By leveraging the strengths of LLMs, our approach keeps the prompts human-understandable, facilitating greater transparency and oversight of vision-language models.
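The abstract's core loop can be sketched roughly as follows: keep a history of (prompt, score) pairs, hand that history back to a language model as in-context information, and retain the best-scoring prompt. This is a minimal illustration only, not the paper's implementation; `propose_prompt` and `evaluate_prompt` are hypothetical stand-ins for the LLM call and the CLIP-based accuracy evaluation described in the abstract.

```python
def propose_prompt(history):
    """Hypothetical LLM call: the paper conditions an LLM on past prompts
    and their scores; here we merely extend the best prompt so far."""
    best_prompt, _ = max(history, key=lambda pair: pair[1])
    return best_prompt + " in detail"

def evaluate_prompt(prompt):
    """Hypothetical scorer: a toy proxy standing in for zero-shot
    classification accuracy on a downstream dataset."""
    return min(len(prompt) / 40.0, 1.0)

def optimize(seed_prompt, steps=3):
    """Iteratively propose and score prompts, keeping the full history
    so each proposal can draw on rich in-context information."""
    history = [(seed_prompt, evaluate_prompt(seed_prompt))]
    for _ in range(steps):
        candidate = propose_prompt(history)
        history.append((candidate, evaluate_prompt(candidate)))
    return max(history, key=lambda pair: pair[1])

best, score = optimize("a photo of a {class}")
```

In the paper the history and image descriptions are packed into the "Prompt Optimization Prompt" given to the LLM; the stub above only mimics that feedback loop's shape.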
Document type Conference contribution
Note With supplementary ZIP-file
Language English
Published at https://doi.org/10.52202/079017-4025
Published at https://openreview.net/forum?id=WPPC7FHtaM
Published at https://papers.nips.cc/paper_files/paper/2024/hash/e52e4de8689a9955b6d3ff421d019387-Abstract-Conference.html
Other links https://www.proceedings.com/79017.html
Downloads
IPO (Accepted author manuscript)
079017-4025open (Final published version)
Supplementary materials