Cache & Distil: Optimising API Calls to Large Language Models

Open Access
Authors
  • G. Ramírez
  • M. Lindemann
  • A. Birch
  • I. Titov
Publication date 2024
Host editors
  • L.-W. Ku
  • A. Martins
  • V. Srikumar
Book title The 62nd Annual Meeting of the Association for Computational Linguistics : Findings of the Association for Computational Linguistics: ACL 2024
Book subtitle ACL 2024 : August 11-16, 2024
ISBN (electronic)
  • 9798891760998
Event Findings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Pages (from-to) 11838–11853
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Large-scale deployment of generative AI tools often depends on costly API calls to a Large Language Model (LLM) to fulfil user queries, a process that also exposes the request stream to external providers. To curtail the frequency of these calls, one can employ a local, smaller language model (a student) which is continuously trained on the responses of the LLM. This student gradually gains proficiency in independently handling an increasing number of user requests, a process we term neural caching. The crucial element in neural caching is a policy that decides which requests should be processed by the student alone and which should be redirected to the LLM, subsequently aiding the student’s learning. In this study, we focus on classification tasks, and we consider a range of classic Active Learning-based selection criteria as the policy. Our experiments suggest that Margin Sampling and Query by Committee bring consistent benefits over other policies and baselines across tasks and budgets.
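
The policy described in the abstract is an uncertainty-based routing rule. As an illustration only (not code released with the paper), the sketch below shows how a Margin Sampling policy could decide whether a classification request is answered by the local student or forwarded to the LLM; the function name, the threshold value, and the use of PyTorch are assumptions made for this example.

    import torch

    def route_with_margin_sampling(student_logits: torch.Tensor, threshold: float = 0.2) -> bool:
        # Margin Sampling: measure the gap between the two most probable classes
        # under the student model. A small gap signals uncertainty, so the request
        # is redirected to the LLM, whose answer can also be logged as a new
        # training example for the student (the neural-caching setup).
        probs = torch.softmax(student_logits, dim=-1)
        top2 = torch.topk(probs, k=2).values
        margin = (top2[0] - top2[1]).item()
        return margin < threshold  # True -> call the LLM; False -> use the student's prediction

Under a fixed budget, the threshold (or an equivalent budget-aware rule) controls how many requests reach the LLM; a Query by Committee policy would instead compare the predictions of several student models and escalate to the LLM when they disagree.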
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2024.findings-acl.704