The MUNCH (Metaphor Understanding Challenge) Dataset

Creators
Publication date 2024
Description
A dataset for testing large language models' capability to understand metaphors as cross-domain mappings.

Overview: input data for testing LLMs are in tasks/
  • Prompts: prompts.md
  • Paraphrase generation: generation.json
  • Paraphrase judgement:
    ◦ Word judgement: input sentences and options are formatted the same way for all 3 conditions: word_judge.json
    ◦ Sentence judgement: sent_judge_{implicit,msent,mword}.json
    ◦ Always shuffle the 2 given options when formatting your prompts.

Gold labels (human annotations) are in correct_answers/
  • For the paraphrase generation task: for_generation.csv
  • For the paraphrase judgement task: for_judgement.csv
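The instruction to shuffle the two options before formatting a judgement prompt can be sketched as follows. This is only an illustration: the field names (`sentence`, `options`) and the prompt wording are assumptions, not part of the dataset documentation — check the actual JSON schema in tasks/ before use.

```python
import random

def format_judgement_prompt(item, rng=None):
    """Build a two-option judgement prompt with shuffled option order.

    `item` is assumed to be a dict with hypothetical keys "sentence"
    and "options" (a list of exactly two paraphrase candidates).
    """
    rng = rng or random.Random()
    options = list(item["options"])
    rng.shuffle(options)  # per the dataset instructions, always shuffle the 2 options
    return (
        f"Sentence: {item['sentence']}\n"
        f"A. {options[0]}\n"
        f"B. {options[1]}\n"
        "Which option is the better paraphrase?"
    )

# Illustrative example item (not taken from the dataset)
item = {
    "sentence": "He attacked my argument.",
    "options": ["He criticized my argument.", "He physically struck my argument."],
}
print(format_judgement_prompt(item, rng=random.Random(0)))
```

Shuffling per prompt avoids a position bias in the model's answers; record which label (A or B) ended up holding the gold option so responses can be scored against correct_answers/.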
Publisher GitHub
Organisations
  • Interfaculty Research - Institute for Logic, Language and Computation (ILLC)
Document type Dataset
Related publication Metaphor Understanding Challenge Dataset for LLMs
Other links https://github.com/xiaoyuisrain/metaphor-understanding-challenge