Unverified commit cb069004 authored by zxcvuser, committed by GitHub

Add new benchmark: Catalan bench (#2154)



* Add catalan_bench

* added flores_ca.yaml

* Updated some task groupings and readme

* Fix create_yamls_flores_ca.py

---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
parent c887796d
@@ -26,6 +26,7 @@
| [bertaqa](bertaqa/README.md) | Local Basque cultural trivia QA tests in English and Basque languages. | English, Basque, Basque (MT) |
| [bigbench](bigbench/README.md) | Broad tasks from the BIG-bench benchmark designed to push the boundaries of large models. | Multiple |
| [blimp](blimp/README.md) | Tasks testing grammatical phenomena to evaluate language model's linguistic capabilities. | English |
| [catalan_bench](catalan_bench/README.md) | Collection of tasks in Catalan encompassing various evaluation areas. | Catalan |
| [ceval](ceval/README.md) | Tasks that evaluate language understanding and reasoning in an educational context. | Chinese |
| [cmmlu](cmmlu/README.md) | Multi-subject multiple choice question tasks for comprehensive academic assessment. | Chinese |
| code_x_glue | Tasks that involve understanding and generating code across multiple programming languages. | Go, Java, JS, PHP, Python, Ruby |
@@ -125,4 +126,3 @@
| [xnli_eu](xnli_eu/README.md) | Cross-lingual Natural Language Inference tasks in Basque. | Basque |
| [xstorycloze](xstorycloze/README.md) | Cross-lingual narrative understanding tasks to predict story endings in multiple languages. | Russian, Simplified Chinese, Spanish, Arabic, Hindi, Indonesian, Telugu, Swahili, Basque, Burmese |
| [xwinograd](xwinograd/README.md) | Cross-lingual Winograd schema tasks for coreference resolution in multiple languages. | English, French, Japanese, Portuguese, Russian, Chinese |
# CatalanBench
### Paper
CatalanBench is a benchmark for evaluating language models on Catalan tasks. That is, it evaluates the ability of a language model to understand and generate Catalan text. CatalanBench combines pre-existing, open datasets with datasets developed exclusively for this benchmark. All the details of CatalanBench will be published in a paper soon.
The new evaluation datasets included in CatalanBench are:
| Task | Category | Homepage |
|:-------------:|:-----:|:-----:|
| ARC_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/arc_ca |
| MGSM_ca | Math | https://huggingface.co/datasets/projecte-aina/mgsm_ca |
| OpenBookQA_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/openbookqa_ca |
| Parafraseja | Paraphrasing | https://huggingface.co/datasets/projecte-aina/Parafraseja |
| PIQA_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/piqa_ca |
| SIQA_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/siqa_ca |
| XStoryCloze_ca | Commonsense Reasoning | https://huggingface.co/datasets/projecte-aina/xstorycloze_ca |
The datasets included in CatalanBench that were released in previous publications are:
| Task | Category | Paper title | Homepage |
|:-------------:|:-----:|:-------------:|:-----:|
| Belebele_ca | Reading Comprehension | [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884) | https://huggingface.co/datasets/facebook/belebele |
| caBREU | Summarization | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/caBreu |
| CatalanQA | Question Answering | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/catalanqa |
| CatCoLA | Linguistic Acceptability | CatCoLA: Catalan Corpus of Linguistic Acceptability | https://huggingface.co/datasets/nbel/CatCoLA |
| COPA-ca | Commonsense Reasoning | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/COPA-ca |
| CoQCat | Question Answering | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/CoQCat |
| FLORES_ca | Translation | [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) | https://huggingface.co/datasets/facebook/flores |
| PAWS-ca | Paraphrasing | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/PAWS-ca |
| TE-ca | Natural Language Inference | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/teca |
| VeritasQA_ca | Truthfulness | VeritasQA: A Truthfulness Benchmark Aimed at Multilingual Transferability | TBA |
| WNLI-ca | Natural Language Inference | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/wnli-ca |
| XNLI-ca | Natural Language Inference | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/xnli-ca |
| XQuAD-ca | Question Answering | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/xquad-ca |
### Citation
Paper for CatalanBench coming soon.
<!--```bibtex
@inproceedings{baucells-2024-iberobench,
title = "IberoBench: A Benchmark for LLM Evaluation in Iberian Languages",
author = "Baucells, Irene and
AUTHORS, ADD",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
year = "2024",
publisher = "Association for Computational Linguistics",
}
```
-->
### Groups and Tasks
#### Groups
- `catalan_bench`: All tasks included in CatalanBench.
- `flores_ca`: All FLORES translation tasks from or to Catalan.
#### Tags
- `cabreu`: Three CaBREU tasks, one for each type of summary (extractive, abstractive and extreme).
- `phrases_va`: Two Phrases_va tasks for language adaptation between Catalan and Valencian.
#### Tasks
The following tasks evaluate models on the CatalanBench datasets using various scoring methods.
- `arc_ca_challenge`
- `arc_ca_easy`
- `belebele_cat_Latn`
- `cabreu`
- `catalanqa`
- `catcola`
- `copa_ca`
- `coqcat`
- `flores_ca`
- `flores_ca-de`
- `flores_ca-en`
- `flores_ca-es`
- `flores_ca-eu`
- `flores_ca-fr`
- `flores_ca-gl`
- `flores_ca-it`
- `flores_ca-pt`
- `flores_de-ca`
- `flores_en-ca`
- `flores_es-ca`
- `flores_eu-ca`
- `flores_fr-ca`
- `flores_gl-ca`
- `flores_it-ca`
- `flores_pt-ca`
- `mgsm_direct_ca`
- `openbookqa_ca`
- `parafraseja`
- `paws_ca`
- `phrases_ca`
- `piqa_ca`
- `siqa_ca`
- `teca`
- `veritasqa_gen_ca`
- `veritasqa_mc1_ca`
- `veritasqa_mc2_ca`
- `wnli_ca`
- `xnli_ca`
- `xquad_ca`
- `xstorycloze_ca`
Some of these tasks are taken from benchmarks already available in LM Evaluation Harness. These are:
- `belebele_cat_Latn`: Belebele Catalan
### Checklist
* [x] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation?
* [ ] Yes, original implementation contributed by author of the benchmark
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
tag: arc_ca
dataset_path: projecte-aina/arc_ca
output_type: multiple_choice
training_split: null
validation_split: validation
test_split: test
doc_to_text: "Pregunta: {{question}}\nResposta:"
doc_to_target: "{{choices.label.index(answerKey)}}"
doc_to_choice: "{{choices.text}}"
should_decontaminate: true
doc_to_decontamination_query: "Pregunta: {{question}}\nResposta:"
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
metadata:
version: 1.0
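To make the Jinja templates in the `arc_ca` config above concrete, here is a minimal sketch of how they resolve for a single (made-up) document; the field names follow the `doc_to_*` keys in the YAML, while the example question and choices are invented for illustration:

```python
# Made-up example document with the fields the arc_ca templates reference.
doc = {
    "question": "Quina és la capital de Catalunya?",
    "choices": {"text": ["Girona", "Barcelona", "Lleida"], "label": ["A", "B", "C"]},
    "answerKey": "B",
}

# doc_to_text: "Pregunta: {{question}}\nResposta:"
prompt = f"Pregunta: {doc['question']}\nResposta:"

# doc_to_target: "{{choices.label.index(answerKey)}}" -> index of the gold choice
gold = doc["choices"]["label"].index(doc["answerKey"])

# doc_to_choice: "{{choices.text}}" -> the candidate continuations scored by the model
choices = doc["choices"]["text"]

print(gold, choices[gold])  # 1 Barcelona
```

The harness scores each choice as a continuation of the prompt; `acc` checks whether the highest-likelihood choice matches the gold index, and `acc_norm` length-normalizes the log-likelihoods first.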
tag: cabreu
dataset_path: projecte-aina/caBreu
dataset_name: null
output_type: generate_until
test_split: test
training_split: train
validation_split: validation
process_docs: !function utils.process_doc_cabreu
metric_list:
- metric: bleu
aggregation: bleu
higher_is_better: true
- metric: !function utils.rouge1
aggregation: !function utils.rouge1_agg
higher_is_better: true
metadata:
version: 1.0
task: arc_ca_challenge
dataset_name: ARC-Challenge
include: _arc_ca_common_yaml
task: arc_ca_easy
dataset_name: ARC-Easy
include: _arc_ca_common_yaml
include: _cabreu_common_yaml
task: cabreu_abstractive
description: "Examina el text següent i genera'n un resum abstractiu, expressant el significat del text original d'una manera més natural i concisa.\n"
doc_to_text: >-
Text: {{content}}
Resum:
doc_to_target: '{{summaries["abstractive"]["a1"]}}'
include: _cabreu_common_yaml
task: cabreu_extractive
description: "Examina el text següent i genera'n un resum extractiu, utilitzant les frases o oracions més rellevants del text original.\n"
doc_to_text: >-
Text: {{content}}
Resum:
doc_to_target: '{{summaries["extractive"]["a1"]}}'
include: _cabreu_common_yaml
task: cabreu_extreme
description: "Examina el text següent i genera'n un resum que sigui el més concís possible i que preservi el significat del text original.\n"
doc_to_text: >-
Text: {{content}}
Resum:
doc_to_target: '{{summaries["extreme"]["a1"]}}'
group: catalan_bench
task:
- belebele_cat_Latn
- xnli_ca
- catcola
- copa_ca
- openbookqa_ca
- parafraseja
- paws_ca
- piqa_ca
- siqa_ca
- teca
- wnli_ca
- arc_ca_easy
- arc_ca_challenge
- xstorycloze_ca
- xquad_ca
- catalanqa
- coqcat
- flores_ca
- cabreu
- mgsm_direct_ca
- phrases_va
metadata:
version: 1.0
task: catalanqa
dataset_path: projecte-aina/catalanqa
dataset_name: null
output_type: generate_until
training_split: train
validation_split: validation
test_split: test
doc_to_text: "Context: {{context}}\n\nPregunta: {{question}}\n\nResposta:"
doc_to_target: '{{answers[0]["text"]}}'
target_delimiter: ' '
process_results: !function utils.process_results_qa
generation_kwargs:
until:
- "\n"
do_sample: false
temperature: 0.0
metric_list:
- metric: exact_match
aggregation: mean
higher_is_better: true
- metric: f1
aggregation: mean
higher_is_better: true
metadata:
version: 1.0
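The `catalanqa` config delegates scoring to `utils.process_results_qa`, whose implementation is not shown in this diff. As a rough guide, SQuAD-style QA scoring typically normalizes both strings and computes exact match plus token-level F1; the sketch below illustrates that convention and is an assumption, not the harness's actual code:

```python
# Hedged sketch of SQuAD-style exact-match and token-F1 scoring
# (illustrative only; utils.process_results_qa may differ).
import re
import string
from collections import Counter


def normalize(s: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    return re.sub(r"\s+", " ", s).strip()


def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))


def token_f1(pred: str, gold: str) -> float:
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(g)
    return 2 * prec * rec / (prec + rec)
```

For example, `token_f1("la ciutat de Barcelona", "Barcelona")` gives partial credit (0.4) while `exact_match` gives 0.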
task: catcola
dataset_path: nbel/CatCoLA
output_type: multiple_choice
training_split: train
validation_split: validation
test_split: null
doc_to_text: "{{Sentence}}\nPregunta: Té sentit aquesta frase?\nResposta:"
doc_to_target: label
doc_to_choice: ["no", "sí"]
metric_list:
- metric: mcc
- metric: acc
metadata:
version: 1.0
task: copa_ca
dataset_path: projecte-aina/COPA-ca
dataset_name: null
output_type: multiple_choice
training_split: train
validation_split: validation
test_split: test
process_docs: !function utils.process_docs_copa_ca
doc_to_text: '{{premise[:-1].strip() + " " + {"cause": "perquè", "effect": "i per tant"}[question]}}'
doc_to_target: '{{choice1 if label == 0 else choice2}}'
doc_to_choice: '{{[choice1, choice2]}}'
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
metadata:
version: 1.0
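The `copa_ca` prompt template splices the premise and a question-dependent connector. A minimal sketch of how it resolves for one invented document (note that `utils.process_docs_copa_ca`, not shown here, may further transform the fields):

```python
# Made-up COPA-ca document illustrating the doc_to_text/doc_to_target templates.
doc = {
    "premise": "L'home va perdre l'autobús.",
    "question": "effect",
    "choice1": "Va arribar tard a la feina.",
    "choice2": "Va comprar un cotxe nou.",
    "label": 0,
}

# doc_to_text drops the premise's final period and appends the connector:
# "perquè" (because) for causes, "i per tant" (and therefore) for effects.
connector = {"cause": "perquè", "effect": "i per tant"}[doc["question"]]
prompt = doc["premise"][:-1].strip() + " " + connector

# doc_to_target selects the gold alternative; doc_to_choice offers both.
target = doc["choice1"] if doc["label"] == 0 else doc["choice2"]
choices = [doc["choice1"], doc["choice2"]]

print(prompt)  # L'home va perdre l'autobús i per tant
```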
task: coqcat
dataset_path: projecte-aina/CoQCat
output_type: generate_until
training_split: train
validation_split: validation
test_split: test
doc_to_text: '{{story+"\n\n"}}{% for i in range(questions|length-1) %}{{"Q: "+questions[i]+"\n\n"+"A: "+answers["input_text"][i]+"\n\n"}}{% endfor %}{{"Q: "+questions[-1]+"\n\n"+"A:"}}'
doc_to_target: '{{ answers["input_text"][questions|length - 1] }}'
process_results: !function utils.process_results_coqcat
should_decontaminate: true
doc_to_decontamination_query: "{{story}} {{question.input_text|join('\n')}}"
generation_kwargs:
until:
- "\nQ:"
metric_list:
- metric: "em"
aggregation: mean
higher_is_better: true
- metric: "f1"
aggregation: mean
higher_is_better: true
metadata:
version: 1.0
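The `coqcat` template builds a conversational prompt: the story, then every question-answer turn except the last, then the final question left open for the model. A sketch of the same loop in plain Python, using an invented two-turn document:

```python
# Made-up CoQCat document mirroring the fields the Jinja template uses.
doc = {
    "story": "En Joan viu a Girona. Treballa de forner.",
    "questions": ["On viu en Joan?", "De què treballa?"],
    "answers": {"input_text": ["A Girona", "De forner"]},
}

# Equivalent of the doc_to_text Jinja loop: all turns but the last are
# rendered as completed Q/A pairs, the last question ends with a bare "A:".
parts = [doc["story"] + "\n\n"]
for i in range(len(doc["questions"]) - 1):
    parts.append("Q: " + doc["questions"][i] + "\n\n"
                 + "A: " + doc["answers"]["input_text"][i] + "\n\n")
parts.append("Q: " + doc["questions"][-1] + "\n\nA:")
prompt = "".join(parts)

# doc_to_target: the answer to the final question.
target = doc["answers"]["input_text"][len(doc["questions"]) - 1]
```

Generation stops at the `"\nQ:"` delimiter from `generation_kwargs`, so the model's answer to the last turn is captured in isolation.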
dataset_path: facebook/flores
dataset_name: all
output_type: generate_until
training_split: dev
validation_split: dev
test_split: devtest
fewshot_split: dev
target_delimiter: ''
generation_kwargs:
until:
- "\n"
metric_list:
- metric: bleu
aggregation: bleu
higher_is_better: true
- metric: ter
aggregation: ter
higher_is_better: false
- metric: chrf
aggregation: chrf
higher_is_better: true
metadata:
version: 1.0
dataset_kwargs:
trust_remote_code: true
"""
Script to generate task YAMLs for the FLORES-200 dataset.
Based on `tasks/translation/utils.py`.
"""
import argparse
import yaml
from langcodes import Language
# constants
_LANGUAGES = [
"ace_Arab",
"bam_Latn",
"dzo_Tibt",
"hin_Deva",
"khm_Khmr",
"mag_Deva",
"pap_Latn",
"sot_Latn",
"tur_Latn",
"ace_Latn",
"ban_Latn",
"ell_Grek",
"hne_Deva",
"kik_Latn",
"mai_Deva",
"pbt_Arab",
"spa_Latn",
"twi_Latn",
"acm_Arab",
"bel_Cyrl",
"eng_Latn",
"hrv_Latn",
"kin_Latn",
"mal_Mlym",
"pes_Arab",
"srd_Latn",
"tzm_Tfng",
"acq_Arab",
"bem_Latn",
"epo_Latn",
"hun_Latn",
"kir_Cyrl",
"mar_Deva",
"plt_Latn",
"srp_Cyrl",
"uig_Arab",
"aeb_Arab",
"ben_Beng",
"est_Latn",
"hye_Armn",
"kmb_Latn",
"min_Arab",
"pol_Latn",
"ssw_Latn",
"ukr_Cyrl",
"afr_Latn",
"bho_Deva",
"eus_Latn",
"ibo_Latn",
"kmr_Latn",
"min_Latn",
"por_Latn",
"sun_Latn",
"umb_Latn",
"ajp_Arab",
"bjn_Arab",
"ewe_Latn",
"ilo_Latn",
"knc_Arab",
"mkd_Cyrl",
"prs_Arab",
"swe_Latn",
"urd_Arab",
"aka_Latn",
"bjn_Latn",
"fao_Latn",
"ind_Latn",
"knc_Latn",
"mlt_Latn",
"quy_Latn",
"swh_Latn",
"uzn_Latn",
"als_Latn",
"bod_Tibt",
"fij_Latn",
"isl_Latn",
"kon_Latn",
"mni_Beng",
"ron_Latn",
"szl_Latn",
"vec_Latn",
"amh_Ethi",
"bos_Latn",
"fin_Latn",
"ita_Latn",
"kor_Hang",
"mos_Latn",
"run_Latn",
"tam_Taml",
"vie_Latn",
"apc_Arab",
"bug_Latn",
"fon_Latn",
"jav_Latn",
"lao_Laoo",
"mri_Latn",
"rus_Cyrl",
"taq_Latn",
"war_Latn",
"arb_Arab",
"bul_Cyrl",
"fra_Latn",
"jpn_Jpan",
"lij_Latn",
"mya_Mymr",
"sag_Latn",
"taq_Tfng",
"wol_Latn",
"arb_Latn",
"cat_Latn",
"fur_Latn",
"kab_Latn",
"lim_Latn",
"nld_Latn",
"san_Deva",
"tat_Cyrl",
"xho_Latn",
"ars_Arab",
"ceb_Latn",
"fuv_Latn",
"kac_Latn",
"lin_Latn",
"nno_Latn",
"sat_Olck",
"tel_Telu",
"ydd_Hebr",
"ary_Arab",
"ces_Latn",
"gaz_Latn",
"kam_Latn",
"lit_Latn",
"nob_Latn",
"scn_Latn",
"tgk_Cyrl",
"yor_Latn",
"arz_Arab",
"cjk_Latn",
"gla_Latn",
"kan_Knda",
"lmo_Latn",
"npi_Deva",
"shn_Mymr",
"tgl_Latn",
"yue_Hant",
"asm_Beng",
"ckb_Arab",
"gle_Latn",
"kas_Arab",
"ltg_Latn",
"nso_Latn",
"sin_Sinh",
"tha_Thai",
"zho_Hans",
"ast_Latn",
"crh_Latn",
"glg_Latn",
"kas_Deva",
"ltz_Latn",
"nus_Latn",
"slk_Latn",
"tir_Ethi",
"zho_Hant",
"awa_Deva",
"cym_Latn",
"grn_Latn",
"kat_Geor",
"lua_Latn",
"nya_Latn",
"slv_Latn",
"tpi_Latn",
"zsm_Latn",
"ayr_Latn",
"dan_Latn",
"guj_Gujr",
"kaz_Cyrl",
"lug_Latn",
"oci_Latn",
"smo_Latn",
"tsn_Latn",
"zul_Latn",
"azb_Arab",
"deu_Latn",
"hat_Latn",
"kbp_Latn",
"luo_Latn",
"ory_Orya",
"sna_Latn",
"tso_Latn",
"azj_Latn",
"dik_Latn",
"hau_Latn",
"kea_Latn",
"lus_Latn",
"pag_Latn",
"snd_Arab",
"tuk_Latn",
"bak_Cyrl",
"dyu_Latn",
"heb_Hebr",
"khk_Cyrl",
"lvs_Latn",
"pan_Guru",
"som_Latn",
"tum_Latn",
]
LANGUAGE_PAIRS = [
(a, b) for idx, a in enumerate(_LANGUAGES) for b in _LANGUAGES[idx + 1 :]
]
LANGUAGES_OF_INTEREST = [
"cat_Latn",
"spa_Latn",
"eng_Latn",
"glg_Latn",
"eus_Latn",
"ita_Latn",
"deu_Latn",
"por_Latn",
"fra_Latn",
]
MAIN_LANG = "cat_Latn"
LANGUAGE_PAIRS = [
(a, b)
for (a, b) in LANGUAGE_PAIRS
if a in LANGUAGES_OF_INTEREST
and b in LANGUAGES_OF_INTEREST
and "cat_Latn" in (a, b)
]
# auxiliary functions
def code_to_language_name(code):
return Language.make(language=Language.get(code)["language"]).display_name()
def code_to_short_name(code):
return Language.get(code)["language"]
def jinja_var(s):
return "{{" + s + "}}"
def doc_to_text(src: str, tgt: str) -> str:
src_name, tgt_name = map(code_to_language_name, [src, tgt])
return f"""\
{src_name} sentence: {jinja_var('sentence_' + src)}
{tgt_name} sentence:"""
def doc_to_target(tgt: str) -> str:
return f"{jinja_var('sentence_' + tgt)}"
# main function
def gen_lang_yamls(output_dir: str, overwrite: bool) -> None:
"""
Generate a YAML file for each translation direction.
"""
err = []
for src, tgt in LANGUAGE_PAIRS:
# do both translation directions for each lang pair
for src, tgt in [(src, tgt), (tgt, src)]:
lang_pair_name = f"{code_to_short_name(src)}-{code_to_short_name(tgt)}"
yaml_file_name = f"flores_{lang_pair_name}.yaml"
try:
with open(
f"{output_dir}/{yaml_file_name}",
"w" if overwrite else "x",
encoding="utf-8",
) as outfile:
print(f"Creating {yaml_file_name}...")
outfile.write("# File generated by `create-yamls.py`\n")
yaml.dump(
{
# "group": [f"{BENCH_NAME}_bench", f"{BENCH_NAME}_bench_flores"],
# "group": "flores_ca",
"include": "_flores_common_yaml",
"task": f"flores_{lang_pair_name}",
"doc_to_text": doc_to_text(src, tgt),
"doc_to_target": doc_to_target(tgt),
},
outfile,
sort_keys=False,
)
except FileExistsError:
err.append(yaml_file_name)
if len(err) > 0:
raise FileExistsError(
"Files were not created because they already exist:"
f" {', '.join(err)}"
"\nUse flag --overwrite to overwrite them."
)
def main() -> None:
parser = argparse.ArgumentParser()
parser.add_argument(
"--overwrite",
default=False,
action="store_true",
help="Overwrite files if they already exist",
)
parser.add_argument(
"--output-dir", default=".", help="Directory to write yaml files to"
)
args = parser.parse_args()
gen_lang_yamls(output_dir=args.output_dir, overwrite=args.overwrite)
if __name__ == "__main__":
main()
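A quick sanity check of the pair filtering in the script above: with nine languages of interest and the constraint that Catalan appears in every pair, eight undirected pairs survive, and since `gen_lang_yamls` emits both directions, sixteen `flores_*.yaml` files are generated:

```python
# Reproduce the script's pair filtering to confirm the expected file count.
langs = ["cat_Latn", "spa_Latn", "eng_Latn", "glg_Latn", "eus_Latn",
         "ita_Latn", "deu_Latn", "por_Latn", "fra_Latn"]
pairs = [(a, b) for i, a in enumerate(langs) for b in langs[i + 1:]]
pairs = [(a, b) for (a, b) in pairs if "cat_Latn" in (a, b)]
directed = [(s, t) for (a, b) in pairs for (s, t) in ((a, b), (b, a))]
print(len(pairs), len(directed))  # 8 16
```

Those sixteen directed pairs match the `flores_ca-*` and `flores_*-ca` task names listed in the README.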
# File generated by `create-yamls.py`
include: _flores_common_yaml
task: flores_ca-de
doc_to_text: 'Catalan sentence: {{sentence_cat_Latn}}
German sentence:'
doc_to_target: '{{sentence_deu_Latn}}'
# File generated by `create-yamls.py`
include: _flores_common_yaml
task: flores_ca-en
doc_to_text: 'Catalan sentence: {{sentence_cat_Latn}}
English sentence:'
doc_to_target: '{{sentence_eng_Latn}}'
# File generated by `create-yamls.py`
include: _flores_common_yaml
task: flores_ca-es
doc_to_text: 'Catalan sentence: {{sentence_cat_Latn}}
Spanish sentence:'
doc_to_target: '{{sentence_spa_Latn}}'
# File generated by `create-yamls.py`
include: _flores_common_yaml
task: flores_ca-eu
doc_to_text: 'Catalan sentence: {{sentence_cat_Latn}}
Basque sentence:'
doc_to_target: '{{sentence_eus_Latn}}'