Unverified Commit 28bb45fb authored by JorgeDeCorte, committed by GitHub

Add multilingual HellaSwag task (#1228)



* add hellaswag_nl

* add other languages and update readme to hellaswag

* refactor as new task

* update readme

* add endline to yaml files and readme.md

* add group, change folder location and update yaml file

* rename default hellaswag yaml file

* fix whitespace error in some labels

* downgrade log level of whitespace checking

---------
Co-authored-by: JorgeDeCorte <jorge.decorte@ravago.be>
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
parent e7c03d0c
```diff
@@ -698,11 +698,11 @@ class ConfigurableTask(Task):
             )
             if delimiter_has_whitespace and choice_has_whitespace:
-                eval_logger.warning(
-                    f'Both target_delimiter and target choice: "{choice}" have whitespace'
+                eval_logger.debug(
+                    f'Both target_delimiter "{self.config.target_delimiter}" and target choice: "{choice}" have whitespace'
                 )
             elif (not delimiter_has_whitespace) and (not choice_has_whitespace):
-                eval_logger.warning(
+                eval_logger.debug(
                     f'Both target_delimiter "{self.config.target_delimiter}" and target choice: "{choice}" do not have whitespace, ignore if the language you are evaluating on does not require/use whitespace'
                 )
```
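For context on this change: the harness joins `target_delimiter` between the context and each answer choice, so whitespace on both sides produces a doubled space, while whitespace on neither side fuses the target onto the prompt. Some of the translated HellaSwag labels carry leading whitespace (hence the "fix whitespace error in some labels" commit and the `lstrip()` in the config below), which made the old `warning` noisy; this PR demotes it to `debug`. A minimal standalone sketch of the same check, with a hypothetical function name and a plain `logging` logger rather than the harness's exact code path:

```python
# Standalone sketch of the whitespace check above (hypothetical helper,
# not the harness's exact implementation).
import logging

eval_logger = logging.getLogger("lm-eval")

def check_target_whitespace(target_delimiter: str, choices: list[str]) -> None:
    # Trailing whitespace on the delimiter vs. leading whitespace on each choice.
    delimiter_has_whitespace = target_delimiter != target_delimiter.rstrip()
    for choice in choices:
        choice_has_whitespace = choice != choice.lstrip()
        if delimiter_has_whitespace and choice_has_whitespace:
            # Both sides carry whitespace: the prompt would contain a double space.
            eval_logger.debug(
                f'Both target_delimiter "{target_delimiter}" and target choice: "{choice}" have whitespace'
            )
        elif not delimiter_has_whitespace and not choice_has_whitespace:
            # Neither side carries whitespace: the choice would fuse onto the context.
            eval_logger.debug(
                f'Both target_delimiter "{target_delimiter}" and target choice: "{choice}" do not have whitespace'
            )
```

With the harness's default delimiter `" "` and a label such as `" 0"`, the first branch fires; that is the benign case this PR stops surfacing at `warning` level.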
# Multilingual HellaSwag

### Paper

Title: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`

Abstract: https://arxiv.org/abs/2307.16039

A key technology for the development of large language models (LLMs) involves instruction tuning that helps align the models' responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning characterize supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which are currently applied to produce the best commercial LLMs (e.g., ChatGPT). To improve the accessibility of LLMs for research and development efforts, various instruction-tuned open-source LLMs have also been introduced recently, e.g., Alpaca, Vicuna, to name a few. However, existing open-source LLMs have only been instruction-tuned for English and a few popular languages, thus hindering their impacts and accessibility to many other languages in the world. Among a few very recent work to explore instruction tuning for LLMs in multiple languages, SFT has been used as the only approach to instruction-tune LLMs for multiple languages. This has left a significant gap for fine-tuned LLMs based on RLHF in diverse languages and raised important questions on how RLHF can boost the performance of multilingual instruction tuning. To overcome this issue, we present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages. Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research. We also present benchmark datasets to enable the evaluation of generative LLMs in multiple languages. Our experiments demonstrate the advantages of RLHF for multilingual instruction over SFT for different base models and datasets. Our framework and resources are released at https://github.com/nlp-uoregon/Okapi.

Homepage: `https://github.com/nlp-uoregon/Okapi`
### Citation
```
@article{dac2023okapi,
title={Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback},
author={Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu},
journal={arXiv e-prints},
pages={arXiv--2307},
year={2023}
}
```
### Groups and Tasks

#### Groups

- hellaswag_multilingual

#### Tasks

- `hellaswag_{ar,bn,ca,da,de,es,eu,fr,gu,hi,hr,hu,hy,id,it,kn,ml,mr,ne,nl,pt,ro,ru,sk,sr,sv,ta,te,uk,vi}`
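As a quick smoke test, any of these tasks can be run through the harness's post-refactor Python API; a minimal sketch, where the model choice (`gpt2`) and task pick are illustrative only:

```python
# Illustrative sketch: evaluate one of the new multilingual HellaSwag tasks.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                      # Hugging Face backend
    model_args="pretrained=gpt2",    # any lm-eval-supported model works here
    tasks=["hellaswag_fr"],
)
print(results["results"]["hellaswag_fr"])  # acc / acc_norm for the French split
```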
### Checklist

For adding novel benchmarks/datasets to the library:

* [x] Is the task an existing benchmark in the literature?
  * [x] Have you referenced the original paper that introduced the task?
  * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:

* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
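Next come the task configs. First, the shared defaults file, `_hellaswag_yaml`, which each per-language task file includes: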
group:
- hellaswag_multilingual
dataset_path: null
dataset_name: null
output_type: multiple_choice
training_split: null
validation_split: validation
test_split: null
process_docs: !function utils.process_docs
doc_to_text: "query"
doc_to_target: "{{label.lstrip()}}"
doc_to_choice: "choices"
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
metadata:
version: 1.0
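The `process_docs: !function utils.process_docs` hook maps raw dataset rows into the `query` / `choices` / `label` fields the templates above read. The actual helper ships in the task folder's `utils.py`; the following is only a sketch of its likely shape, assuming HellaSwag-style column names (`ctx`, `endings`, `label`) carried over by the Okapi translations:

```python
# Hypothetical sketch of utils.process_docs; the column names are assumptions,
# not the PR's actual implementation.
import datasets

def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
    def _process_doc(doc):
        return {
            "query": doc["ctx"],        # read by doc_to_text
            "choices": doc["endings"],  # read by doc_to_choice
            "label": doc["label"],      # read by doc_to_target (lstripped there)
        }
    return dataset.map(_process_doc)
```

Of the two metrics, `acc` takes the argmax of the raw per-choice log-likelihoods, while `acc_norm` first normalizes each log-likelihood by the byte length of the choice, which matters when answer options differ in length.

Each language then gets a thin file that includes these defaults and overrides the task name and dataset config (note they also override `validation_split` to `val`):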
include: _hellaswag_yaml
task: hellaswag_ar
dataset_path: alexandrainst/m_hellaswag
dataset_name: ar
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_bn
dataset_path: alexandrainst/m_hellaswag
dataset_name: bn
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_ca
dataset_path: alexandrainst/m_hellaswag
dataset_name: ca
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_da
dataset_path: alexandrainst/m_hellaswag
dataset_name: da
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_de
dataset_path: alexandrainst/m_hellaswag
dataset_name: de
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_es
dataset_path: alexandrainst/m_hellaswag
dataset_name: es
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_eu
dataset_path: alexandrainst/m_hellaswag
dataset_name: eu
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_fr
dataset_path: alexandrainst/m_hellaswag
dataset_name: fr
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_gu
dataset_path: alexandrainst/m_hellaswag
dataset_name: gu
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_hi
dataset_path: alexandrainst/m_hellaswag
dataset_name: hi
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_hr
dataset_path: alexandrainst/m_hellaswag
dataset_name: hr
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_hu
dataset_path: alexandrainst/m_hellaswag
dataset_name: hu
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_hy
dataset_path: alexandrainst/m_hellaswag
dataset_name: hy
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_id
dataset_path: alexandrainst/m_hellaswag
dataset_name: id
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_it
dataset_path: alexandrainst/m_hellaswag
dataset_name: it
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_kn
dataset_path: alexandrainst/m_hellaswag
dataset_name: kn
training_split: null
validation_split: val

include: _hellaswag_yaml
task: hellaswag_ml
dataset_path: alexandrainst/m_hellaswag
dataset_name: ml
training_split: null
validation_split: val
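For reference, `include` composes by loading the included defaults first and then applying the including file's keys on top, so per-language values win. Conceptually this is a shallow dict merge, as in this illustrative sketch (not the harness's actual loader code):

```python
# Illustrative shallow merge: per-language keys override _hellaswag_yaml defaults.
base = {"validation_split": "validation", "output_type": "multiple_choice"}
override = {"task": "hellaswag_ar", "dataset_name": "ar", "validation_split": "val"}
config = {**base, **override}
assert config["validation_split"] == "val"  # the override wins
```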