On the Consistency of Multilingual Context Utilization in Retrieval-Augmented Generation

Open Access
Authors
Publication date 2025
Host editors
  • D.I. Adelani
  • C. Arnett
  • D. Ataman
  • T.A. Chang
  • H. Gonen
  • R. Raja
  • F. Schmidt
  • D. Stap
  • J. Wang
Book title The 5th Workshop on Multilingual Representation Learning (MRL 2025): Proceedings of the Workshop
Book subtitle MRL 2025: November 8–9, 2025
ISBN (electronic)
  • 9798891763456
Event 5th Workshop on Multilingual Representation Learning
Pages (from-to) 199–225
Number of pages 27
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Retrieval-augmented generation (RAG) with large language models (LLMs) has demonstrated strong performance on multilingual question-answering (QA) tasks by leveraging relevant passages retrieved from corpora. In multilingual RAG (mRAG), the retrieved passages may be written in languages other than that of the user's query, making it challenging for LLMs to effectively utilize the provided information. Recent research suggests that retrieving passages from multilingual corpora can improve RAG performance, particularly for low-resource languages. However, the extent to which LLMs can leverage different kinds of multilingual contexts to generate accurate answers, independently of retrieval quality, remains understudied. In this paper, we conduct an extensive assessment of LLMs' ability to (i) make consistent use of a relevant passage regardless of its language, (ii) respond in the expected language, and (iii) focus on the relevant passage even when multiple 'distracting passages' in different languages are provided in the context. Our experiments with four LLMs across three QA datasets covering 48 languages reveal a surprising ability of LLMs to extract relevant information from passages written in a language different from that of the query, but a much weaker ability to produce a full answer in the correct language. Our analysis, based on both accuracy and feature attribution techniques, further shows that distracting passages negatively impact answer quality regardless of their language, with distractors in the query language exerting a slightly stronger influence. Taken together, our findings deepen the understanding of how LLMs utilize context in mRAG systems, providing directions for future improvements. All code and data are released at https://github.com/Betswish/mRAG-Context-Consistency.
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2025.mrl-main.15
Other links https://github.com/Betswish/mRAG-Context-Consistency
Downloads
2025.mrl-main.15 (Final published version)