# DarijaHellaSwag

### Paper

Title: Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect

Abstract: [https://arxiv.org/abs/2409.17912](https://arxiv.org/abs/2409.17912)

DarijaHellaSwag is a challenging multiple-choice benchmark designed to evaluate machine reading comprehension and commonsense reasoning in Moroccan Darija. It is a translated version of the HellaSwag validation set, which presents scenarios where models must choose the most plausible continuation of a passage from four options.


Homepage: [https://huggingface.co/datasets/MBZUAI-Paris/DarijaHellaSwag](https://huggingface.co/datasets/MBZUAI-Paris/DarijaHellaSwag)

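The dataset can be inspected directly from the Hugging Face Hub. The sketch below is illustrative only: it assumes the dataset is publicly downloadable with the `datasets` library and does not hard-code split or column names, printing the schema instead (consult the dataset card for the exact fields).

```python
# Illustrative sketch (not from the paper): download DarijaHellaSwag and look at one example.
# Split and column names are discovered at runtime rather than assumed.
from datasets import load_dataset

ds = load_dataset("MBZUAI-Paris/DarijaHellaSwag")
print(ds)  # lists the available splits and their columns

split = next(iter(ds))   # whichever split the dataset ships with
print(ds[split][0])      # one item: a passage plus four candidate endings and a label
```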

### Citation

```
@article{shang2024atlaschatadaptinglargelanguage,
      title={Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect},
      author={Guokan Shang and Hadi Abdine and Yousef Khoubrane and Amr Mohamed and Yassine Abbahaddou and Sofiane Ennadir and Imane Momayiz and Xuguang Ren and Eric Moulines and Preslav Nakov and Michalis Vazirgiannis and Eric Xing},
      year={2024},
      eprint={2409.17912},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.17912},
}
```

### Groups and Tasks

#### Groups

- Not part of a group yet

#### Tasks

- `darijahellaswag` (see the example invocation below)

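A hedged sketch of evaluating this task with the lm-evaluation-harness Python API follows; the checkpoint name is only a placeholder and keyword arguments may differ slightly across harness versions.

```python
# Sketch only: run the darijahellaswag task through lm-evaluation-harness.
# "HuggingFaceTB/SmolLM-135M" is a placeholder; substitute any Hugging Face causal LM.
# Argument names follow the documented simple_evaluate() API but may vary between versions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=HuggingFaceTB/SmolLM-135M",
    tasks=["darijahellaswag"],
    batch_size=8,
)
print(results["results"]["darijahellaswag"])  # per-task metrics (e.g. acc, acc_norm)
```
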
### Checklist

For adding novel benchmarks/datasets to the library:

* [X] Is the task an existing benchmark in the literature?
  * [X] Have you referenced the original paper that introduced the task?
  * [X] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:

* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?