How Far can 100 Samples Go? Unlocking Zero-Shot Translation with Tiny Multi-Parallel Data

Open Access
Authors
Publication date 2024
Host editors
  • L.-W. Ku
  • A. Martins
  • V. Srikumar
Book title The 62nd Annual Meeting of the Association for Computational Linguistics: Findings of the Association for Computational Linguistics: ACL 2024
Book subtitle ACL 2024: August 11-16, 2024
ISBN (electronic)
  • 9798891760998
Event Findings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Pages (from-to) 15092-15108
Number of pages 17
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Zero-shot translation aims to translate between language pairs not seen during training in Multilingual Machine Translation (MMT) and is widely considered an open problem. A common, albeit resource-intensive, solution is to add as many related translation directions as possible to the training corpus. In this paper, we show that for an English-centric model, surprisingly large zero-shot improvements can be achieved by simply fine-tuning with a very small amount of multi-parallel data. For example, on the EC30 dataset, we obtain an overall improvement of up to +21.7 ChrF++ on non-English directions (870 in total) by using only 100 multi-parallel samples, while preserving English-centric translation quality. This performance exceeds M2M100 by an average of 5.9 ChrF++ on the non-English directions involved. When investigating the effect of fine-tuning data size on translation quality, we find that even a small, randomly sampled set of fine-tuning directions is sufficient to achieve comparable improvements. The resulting non-English performance is close to the complete-translation upper bound. Even in a minimal setting, fine-tuning with only a single sample, the well-known off-target issue is almost completely resolved, which explains part, but not all, of the observed improvement in translation quality.
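The recipe described above, fine-tuning an English-centric MMT model on a handful of multi-parallel sentences so that every non-English direction receives a few supervised examples, can be sketched briefly. The following is a hypothetical illustration rather than the authors' code: it assumes the public M2M100 checkpoint from HuggingFace transformers as a stand-in for an English-centric model, placeholder sentences instead of real EC30 data, and illustrative hyperparameters.

```python
# Minimal sketch (not the authors' implementation): fine-tune a multilingual
# MT model on a tiny multi-parallel sample set. Model choice, data, and
# hyperparameters below are all illustrative assumptions.
import torch
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "facebook/m2m100_418M"  # assumed stand-in for an English-centric MMT model
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

# A tiny multi-parallel set: the same sentence aligned across several languages.
# Real data would come from a corpus such as EC30; these rows are placeholders.
multi_parallel = [
    {"de": "Das Haus ist alt.", "fr": "La maison est vieille.", "nl": "Het huis is oud."},
]

# Expand each multi-parallel row into every ordered non-English pair.
pairs = []
for row in multi_parallel:
    for src in row:
        for tgt in row:
            if src != tgt:
                pairs.append((src, row[src], tgt, row[tgt]))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for src_lang, src_text, tgt_lang, tgt_text in pairs:
    tokenizer.src_lang = src_lang
    tokenizer.tgt_lang = tgt_lang
    batch = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
    loss = model(**batch).loss  # standard cross-entropy fine-tuning
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The design point mirrored here is that a multi-parallel row over n languages yields n*(n-1) ordered training pairs, so even a few underlying sentences expose the model to every non-English direction at minimal cost.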
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2024.findings-acl.896