Unverified Commit 6f7b4a05 authored by Julen Etxaniz, committed by GitHub

Add BertaQA dataset tasks (#1964)



* add bertaqa tasks

* rename basquetrivia --> bertaqa; make template stub not .yaml

* add bertaqa entry to lm_eval/tasks/README.md

---------
Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
parent d14b36e8
@@ -20,6 +20,7 @@
| [bbh](bbh/README.md) | Tasks focused on deep semantic understanding through hypothesization and reasoning. | English, German |
| [belebele](belebele/README.md) | Language understanding tasks in a variety of languages and scripts. | Multiple (122 languages) |
| benchmarks | General benchmarking tasks that test a wide range of language understanding capabilities. | |
| [bertaqa](bertaqa/README.md) | Trivia QA on Basque local culture, in English and Basque. | English, Basque, Basque (MT) |
| [bigbench](bigbench/README.md) | Broad tasks from the BIG-bench benchmark designed to push the boundaries of large models. | Multiple |
| [blimp](blimp/README.md) | Tasks testing grammatical phenomena to evaluate language model's linguistic capabilities. | English |
| [ceval](ceval/README.md) | Tasks that evaluate language understanding and reasoning in an educational context. | Chinese |
lm_eval/tasks/bertaqa/README.md
# BertaQA
### Paper
Title: BertaQA: How Much Do Language Models Know About Local Culture?

Abstract: https://arxiv.org/abs/2406.07302

Large Language Models (LLMs) exhibit extensive knowledge about the world, but most evaluations have been limited to global or anglocentric subjects. This raises the question of how well these models perform on topics relevant to other cultures, whose presence on the web is not that prominent. To address this gap, we introduce BertaQA, a multiple-choice trivia dataset that is parallel in English and Basque. The dataset consists of a local subset with questions pertinent to the Basque culture, and a global subset with questions of broader interest. We find that state-of-the-art LLMs struggle with local cultural knowledge, even as they excel on global topics. However, we show that continued pre-training in Basque significantly improves the models' performance on Basque culture, even when queried in English. To our knowledge, this is the first solid evidence of knowledge transfer from a low-resource to a high-resource language. Our analysis sheds light on the complex interplay between language and knowledge, and reveals that some prior findings do not fully hold when reassessed on local topics. Our dataset and evaluation code are available under open licenses at https://github.com/juletx/BertaQA.

Homepage: https://github.com/juletx/BertaQA
### Citation
```
@misc{etxaniz2024bertaqa,
title={BertaQA: How Much Do Language Models Know About Local Culture?},
author={Julen Etxaniz and Gorka Azkune and Aitor Soroa and Oier Lopez de Lacalle and Mikel Artetxe},
year={2024},
eprint={2406.07302},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
- `bertaqa`: Group of BertaQA tasks.
#### Tasks
- `bertaqa_eu`: Trivia questions in Basque.
- `bertaqa_en`: Trivia questions in English, human-translated from Basque.
- `bertaqa_en_mt_*`: Trivia questions in English, machine-translated from Basque with different models.
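
For example, a minimal sketch of running the Basque split through the harness's Python API (not part of this PR; the checkpoint and few-shot count are illustrative, and `lm_eval --tasks bertaqa_eu ...` is the CLI equivalent):

```python
# Sketch only: evaluate an example HF model on bertaqa_eu.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # HuggingFace causal LM backend
    model_args="pretrained=HiTZ/latxa-7b-v1.1",  # example checkpoint, not prescribed by the task
    tasks=["bertaqa_eu"],
    num_fewshot=5,  # illustrative; the task does not fix a few-shot count
)
print(results["results"]["bertaqa_eu"])  # accuracy on the Basque split
```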
### Checklist
For adding novel benchmarks/datasets to the library:
- [ ] Is the task an existing benchmark in the literature?
- [ ] Have you referenced the original paper that introduced the task?
- [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
- [ ] Is the "Main" variant of this task clearly denoted?
- [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
- [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm_eval/tasks/bertaqa/_bertaqa_template

# Shared settings included (via `include`) by every BertaQA task config.
group: bertaqa
dataset_path: HiTZ/BertaQA       # HuggingFace dataset repository
dataset_name: null               # overridden per task: en, eu, or en_mt_*
validation_split: null
test_split: test
fewshot_split: test              # few-shot examples are drawn from the test split
output_type: multiple_choice
doc_to_choice: ["A", "B", "C"]   # the three answer options
doc_to_target: answer            # dataset field holding the gold answer
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
metadata:
  version: 0.0
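
Since the template sets `output_type: multiple_choice` with `doc_to_choice: ["A", "B", "C"]`, each question is scored by comparing the model's likelihood of the three letter continuations. The following is an illustrative sketch of that scoring loop, not the harness's internals; `score_continuation` is a hypothetical stand-in for a model's log-likelihood call, and the gold `answer` is assumed to be an index into the choices:

```python
# Sketch: argmax-of-loglikelihood scoring for a 3-way multiple-choice task.
CHOICES = ["A", "B", "C"]

def predict(prompt, score_continuation):
    """Return the index of the choice letter the model finds most likely."""
    scores = [score_continuation(prompt, f" {letter}") for letter in CHOICES]
    return max(range(len(CHOICES)), key=scores.__getitem__)

def accuracy(docs, score_continuation):
    """Mean accuracy of predictions against each doc's gold answer index."""
    correct = sum(predict(d["prompt"], score_continuation) == d["answer"] for d in docs)
    return correct / len(docs)

# Toy check with a dummy scorer that always prefers " B".
docs = [{"prompt": "Question: ...\nAnswer:", "answer": 1}]
assert accuracy(docs, lambda prompt, cont: 1.0 if cont == " B" else 0.0) == 1.0
```

Each per-task config below includes this template and overrides only the task name, the dataset configuration, and the prompt: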
task: bertaqa_en
include: _bertaqa_template
dataset_name: en
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_gemma-7b
include: _bertaqa_template
dataset_name: en_mt_gemma-7b
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_hitz
include: _bertaqa_template
dataset_name: en_mt_hitz
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_itzuli
include: _bertaqa_template
dataset_name: en_mt_itzuli
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_latxa-13b-v1.1
include: _bertaqa_template
dataset_name: en_mt_latxa-13b-v1.1
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_latxa-13b-v1
include: _bertaqa_template
dataset_name: en_mt_latxa-13b-v1
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_latxa-70b-v1.1
include: _bertaqa_template
dataset_name: en_mt_latxa-70b-v1.1
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_latxa-70b-v1
include: _bertaqa_template
dataset_name: en_mt_latxa-70b-v1
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_latxa-7b-v1.1
include: _bertaqa_template
dataset_name: en_mt_latxa-7b-v1.1
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_latxa-7b-v1
include: _bertaqa_template
dataset_name: en_mt_latxa-7b-v1
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_llama-2-13b
include: _bertaqa_template
dataset_name: en_mt_llama-2-13b
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_llama-2-70b
include: _bertaqa_template
dataset_name: en_mt_llama-2-70b
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_llama-2-7b
include: _bertaqa_template
dataset_name: en_mt_llama-2-7b
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_madlad
include: _bertaqa_template
dataset_name: en_mt_madlad
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_en_mt_nllb
include: _bertaqa_template
dataset_name: en_mt_nllb
doc_to_text: "Question: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nAnswer:"

task: bertaqa_eu
include: _bertaqa_template
dataset_name: eu
doc_to_text: "Galdera: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nErantzuna:"