# New Task Guide

`lm-evaluation-harness` is a framework that strives to support a wide range of zero- and few-shot evaluation tasks on autoregressive language models (LMs).

This documentation page provides a walkthrough to get started creating your own task, in `lm-eval` versions v0.4.0 and later.

A more interactive tutorial is available as a Jupyter notebook [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/examples/lm-eval-overview.ipynb).

## Setup

If you haven't already, go ahead and fork the main repo, clone it, create a branch with the name of your task, and install the project requirements in your environment:

```sh
# After forking...
git clone https://github.com/<YOUR-USERNAME>/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout -b <task-name>
pip install -e ".[dev]"
```

In this document, we'll walk through the basics of implementing a static benchmark evaluation in two formats: a *generative* task which requires sampling text from a model, such as [`gsm8k`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/gsm8k/gsm8k.yaml), and a *discriminative*, or *multiple choice*, task where the model picks the most likely of several fixed answer choices, such as [`sciq`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/sciq/sciq.yaml).

## Creating a YAML file

To implement a new standard task, we'll need to write a YAML file which configures our task logic. We start by making a new empty YAML file. This file can have any name, but we recommend placing it in a subfolder of `lm_eval/tasks` titled by the dataset or task's shorthand name: for example,

```sh
touch lm_eval/tasks/<dataset_name>/<my_new_task_name>.yaml
```

Or, copy the template subfolder we provide from `templates/new_yaml_task`:

```sh
cp -r templates/new_yaml_task lm_eval/tasks/
```

and rename the folders and YAML file(s) as desired.

### Selecting and configuring a dataset

All data downloading and management is handled through the HuggingFace (**HF**) [`datasets`](https://github.com/huggingface/datasets) API. So, the first thing you should do is check to see if your task's dataset is already provided in their catalog [here](https://huggingface.co/datasets). If it's not there, please consider adding it to their Hub to make it accessible to a wider user base by following their [new dataset guide](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).

> [!TIP]
> To test your task, we recommend enabling verbose logging by running `export LOGLEVEL=DEBUG` in your shell before running the evaluation script. This will help you debug any issues that may arise.

Once you have a HuggingFace dataset prepared for your task, we want to point our new YAML task at this dataset:

```yaml
dataset_path: ... # the name of the dataset on the HF Hub.
dataset_name: ... # the dataset configuration to use. Leave `null` if your dataset does not require a config to be passed. See https://huggingface.co/docs/datasets/load_hub#configurations for more info.
dataset_kwargs: null # any extra keyword arguments that should be passed to the dataset constructor, e.g. `data_dir`.
```
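
These fields map directly onto the HF `datasets` loading call, so a quick sanity check (a minimal sketch; the identifiers below are placeholders for your own dataset's values) is to load the dataset yourself and confirm the splits look right:

```python
from datasets import load_dataset

# Placeholder values: mirror the dataset_path, dataset_name, and dataset_kwargs
# entries from your YAML with your own dataset's identifiers.
dataset = load_dataset(
    "gsm8k",  # dataset_path
    "main",   # dataset_name; omit if your dataset has no configuration
    # any dataset_kwargs (e.g. data_dir=...) would be passed as extra keyword arguments
)
print(dataset)  # prints the available splits and their sizes
```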

Next, we'd like to tell our task what the dataset's train, validation, and test splits are named, if they exist:

```yaml
training_split: <split name of training set, or `null`>
validation_split: <split name of val. set, or `null`>
test_split: <split name of test set, or `null`>
```

Tests will run on the `test_split` if it is available, and otherwise evaluate on the `validation_split`.

We can also specify from which split the task should retrieve few-shot examples via:

```yaml
fewshot_split: <split name to draw fewshot examples from, or `null`>
```

or by hardcoding them, either directly in the YAML file:

```yaml
fewshot_config:
  sampler: first_n
  samples: [
    {<sample 1>},
    {<sample 2>},
  ]
```

or by adding the function `list_fewshot_samples` in the associated utils.py file:

```python
def list_fewshot_samples() -> list[dict]:
  return [{<sample 1>}, {<sample 2>}]
```

See `lm_eval/tasks/minerva_math/minerva_math_algebra.yaml` for an example of the latter, and `lm_eval/tasks/gsm8k/gsm8k-cot.yaml` for an example of the former.

In this case, each sample must contain the same fields as the samples in the above sets--for example, if `doc_to_text` expects an `input` field when rendering input prompts, these provided samples must include an `input` key.
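
For instance (an illustrative sketch; the field names here are hypothetical and must match whatever your own templates read), if `doc_to_text` expects an `input` field and `doc_to_target` reads an `answer` field, each hardcoded sample needs both keys:

```python
# Hypothetical field names -- use whatever fields your prompt templates reference.
def list_fewshot_samples() -> list[dict]:
    return [
        {"input": "What is 2 + 2?", "answer": "4"},
        {"input": "What is 3 * 5?", "answer": "15"},
    ]
```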

If neither of the above options is set, we will default to using the train/validation/test splits, in that order.

Finally, our dataset may not be already in the exact format we want. Maybe we have to strip whitespace and special characters via a regex from our dataset's "question" field! Or maybe we just want to rename its columns to match a convention we'll be using for our prompts.

Let's create a python file in the directory where we're writing our YAML file:

```bash
touch lm_eval/tasks/<dataset_name>/utils.py
```

Now, in `utils.py` we'll write a function to process each split of our dataset (the following example is drawn from [the `hellaswag` task](../lm_eval/tasks/hellaswag/utils.py)):

```python
import datasets

# NOTE: `preprocess` (defined earlier in this utils.py file) is a small helper
# that cleans up each raw string before it is used in the prompt.


def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
    def _process_doc(doc):
        ctx = doc["ctx_a"] + " " + doc["ctx_b"].capitalize()
        out_doc = {
            "query": preprocess(doc["activity_label"] + ": " + ctx),
            "choices": [preprocess(ending) for ending in doc["endings"]],
            "gold": int(doc["label"]),
        }
        return out_doc

    return dataset.map(_process_doc)
```

Now, in our YAML config file, we'll use the `!function` constructor and tell the config where our preprocessing function can be found. At runtime, before doing anything else, we will preprocess our dataset with this function!

```yaml
process_docs: !function utils.process_docs
```
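
Before wiring this into the harness, it can be worth sanity-checking the function on the raw data yourself (a minimal sketch, run from the directory containing `utils.py`; the dataset and split here are just examples):

```python
from datasets import load_dataset

from utils import process_docs  # the file created above

raw = load_dataset("hellaswag", split="validation")
processed = process_docs(raw)
print(processed[0])  # inspect one processed document: query, choices, gold
```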

### Using Local Datasets

To load a local dataset for evaluation, you can specify data files in the `dataset_kwargs` field, such as the following for JSON files:

```yaml
dataset_path: json
dataset_name: null
dataset_kwargs:
  data_files: /path/to/my/json
```

Or with files already split into separate directories:

```yaml
dataset_path: arrow
dataset_kwargs:
  data_files:
    train: /path/to/arrow/train/data-00000-of-00001.arrow
    validation: /path/to/arrow/validation/data-00000-of-00001.arrow
```

Alternatively, if you have previously downloaded a dataset from the Hugging Face Hub (using `save_to_disk()`) and wish to use the local files, you will need to set `data_dir` under `dataset_kwargs` to point to that directory.

```yaml
dataset_path: hellaswag
dataset_kwargs:
  data_dir: hellaswag_local/
```
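
The local copy referenced above could have been produced ahead of time with something like the following sketch (the dataset name and output path are placeholders):

```python
from datasets import load_dataset

# One-time step: download the dataset and write a local copy to disk.
load_dataset("hellaswag").save_to_disk("hellaswag_local/")
```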

You can also set `dataset_path` as a directory path in your local system. This will assume that there is a loading script with the same name as the directory. [See datasets docs](https://huggingface.co/docs/datasets/loading#local-loading-script).

## Writing a Prompt Template

The next thing we need to do is decide what format to use when presenting the data to the LM. This is our **prompt**, where we'll define both an input and output format.

To write a prompt, users will use `doc_to_text`, `doc_to_target`, and `doc_to_choice` (the last is optional, depending on the task's output type).

`doc_to_text` defines the input string a model will be given while `doc_to_target` and `doc_to_choice` will be used to generate the target text. `doc_to_target` can be either a text string that refers to the target string or an integer that refers to the index of the correct label. When it is set as an index, `doc_to_choice` must also be set with the appropriate list of possible choice strings.

### Basic prompts

If a dataset is straightforward enough, users can enter the feature name directly. This assumes that no preprocessing is required. For example, in [Swag](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/swag/swag.yaml#L10-L11), `doc_to_text` and `doc_to_target` are each given the name of one of the dataset's features.

```yaml
doc_to_text: startphrase
doc_to_target: label
```

Hard-coding is also possible, as is the case in [SciQ](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/sciq/sciq.yaml#L11).

```yaml
doc_to_target: 3
```

`doc_to_choice` can be directly given a list of strings as the options (see [Toxigen](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/toxigen/toxigen.yaml#L11)).

```yaml
doc_to_choice: ['No', 'Yes']
```

If a dataset feature is already a list, you can set the name of the feature as `doc_to_choice` (see [Hellaswag](https://github.com/EleutherAI/lm-evaluation-harness/blob/e0eda4d3ffa10e5f65e0976161cd134bec61983a/lm_eval/tasks/hellaswag/hellaswag.yaml#L13)).

```yaml
doc_to_choice: choices
```

### Writing a prompt with Jinja 2

We support the [Jinja 2](https://jinja.palletsprojects.com/en/3.1.x/) templating language for writing prompts. In practice, this means you can take your dataset's columns and do many basic string manipulations to place each document into prompted format.

Take for example the dataset `super_glue/boolq`. As input, we'd like to use the features `passage` and `question` and string them together so that for a sample line `doc`, the model sees something in the format of:

```text
doc["passage"]
Question: doc["question"]?
Answer:
```

We do this by [writing](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/super_glue/boolq/default.yaml#L9C1-L9C61)

```yaml
doc_to_text: "{{passage}}\nQuestion: {{question}}?\nAnswer:"
```

Here, `{{passage}}` will be replaced by `doc["passage"]` and `{{question}}` with `doc["question"]` when the prompt template is rendered.
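
Conceptually, this rendering step works like filling a Jinja template from the document dict (a minimal sketch of the idea, not the harness's internal code path):

```python
from jinja2 import Template

doc = {
    "passage": "The Amazon is the largest rainforest on Earth.",
    "question": "is the amazon the largest rainforest",
}

doc_to_text = Template("{{passage}}\nQuestion: {{question}}?\nAnswer:")
print(doc_to_text.render(**doc))
# The Amazon is the largest rainforest on Earth.
# Question: is the amazon the largest rainforest?
# Answer:
```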

Our intended output is for the model to predict a single whitespace, and then the answer to the question. We do this via:

```yaml
doc_to_target: "{{answer}}"
```

> [!WARNING]
> We add `target_delimiter` between input and target, which defaults to `" "`, such that the full input-output string is `doc_to_text(doc) + target_delimiter + doc_to_target(doc)`. `doc_to_text` should not end with trailing whitespace, and `doc_to_target` should not start with leading whitespace. For multiple choice tasks, each answer choice is appended to the input with the delimiter.
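
To make the rule concrete, here is a small illustration (plain Python, with hard-coded strings standing in for the rendered templates):

```python
rendered_input = "The Amazon is the largest rainforest on Earth.\nQuestion: is the amazon the largest rainforest?\nAnswer:"  # no trailing space
rendered_target = "yes"  # no leading space
target_delimiter = " "   # the default

full_string = rendered_input + target_delimiter + rendered_target
# ...ends with "Answer: yes" -- the single space comes from the delimiter,
# which is why neither side should add its own whitespace around it.
```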

#### Multiple choice format

For tasks which are multiple choice (a fixed, finite set of label words per document) and evaluated by comparing the loglikelihoods of all label words (the `multiple_choice` task output type), we enforce a particular convention on prompt format.

An annotated example in the case of SciQ is as follows:

```yaml
doc_to_text: "{{support.lstrip()}}\nQuestion: {{question}}\nAnswer:" # This is the input portion of the prompt for this doc. It will have " {{choice}}" appended to it as target for each choice in answer_choices.
doc_to_target: 3 # this contains the index into the answer choice list of the correct answer.
doc_to_choice: "{{[distractor1, distractor2, distractor3, correct_answer]}}"
```

Task implementers are thus able to decide what the answer choices should be for a document, and what prompt format to use.
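
As a rough conceptual sketch of what the `multiple_choice` output type does (not the harness's actual internals; `loglikelihood` and the `doc_to_*` arguments are stand-in callables):

```python
def score_multiple_choice(doc, doc_to_text, doc_to_target, doc_to_choice, loglikelihood):
    """Conceptual sketch of multiple-choice scoring.

    `loglikelihood(context, continuation)` stands in for the model's
    log-probability of `continuation` given `context`.
    """
    context = doc_to_text(doc)    # e.g. "...\nAnswer:"
    choices = doc_to_choice(doc)  # e.g. ["no", "yes"]
    # Each choice is appended to the context with the target delimiter and scored.
    scores = [loglikelihood(context, " " + choice) for choice in choices]
    prediction = scores.index(max(scores))
    return {"acc": float(prediction == doc_to_target(doc))}
```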

The label index can also be sourced from a feature directly. For example, in `super_glue/boolq` the label index is defined in the feature `label`, so we can set `doc_to_target` to simply `label`. The options, or verbalizers, can then be written as a list `["no", "yes"]` whose order corresponds to the label index.

```yaml
doc_to_text: "{{passage}}\nQuestion: {{question}}?\nAnswer:"
doc_to_target: label
doc_to_choice: ["no", "yes"]
```

### Using Python Functions for Prompts

There may be cases where the prompt we want to implement is more easily expressed in Python than in Jinja 2. For this, we can use Python helper functions that are defined in the YAML config. Note that the helper script must be in the same directory as the YAML file.

A good example is WikiText, which requires many regex rules to clean its samples.

```python
import re


def wikitext_detokenizer(doc):
    string = doc["page"]
    # contractions
    string = string.replace("s '", "s'")
    string = re.sub(r"/' [0-9]/", r"/'[0-9]/", string)
    ...
    string = string.replace(" 's", "'s")

    return string
```

We can load this function in `doc_to_target` by using the `!function` operator followed by `<file name>.<function name>`. In the file [wikitext.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/wikitext/wikitext.yaml) we write:

```yaml
doc_to_target: !function preprocess_wikitext.wikitext_detokenizer
```

### Importing a Prompt from Promptsource

[Promptsource](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource) is a great repository for crowdsourced prompts for many datasets. We can load these prompts easily by using the `use_prompt` argument and filling it with the format `"promptsource:<name of prompt template>"`. To use this, `doc_to_text` and `doc_to_target` should be left undefined. This will fetch the template of the dataset defined in the YAML file.

For example, for SuperGLUE BoolQ, if we want to use the prompt template `GPT-3 Style`, we can add this to the YAML file:

```yaml
use_prompt: "promptsource:GPT-3 Style"
```

If you would like to run the evaluation on all prompt templates, you can specify it this way:

```yaml
use_prompt: "promptsource:*"
```

### Setting metrics

You're almost done! Now we need to choose how to score our task.

- *If this is a multiple choice task:* do you just want to check your model's accuracy in choosing the correct answer choice?
- *If this is a generation task:* do you just want to check how often your model outputs *exactly the ground-truth output string provided*?

If the answer to the above is no: you'll need to record what scoring metrics to use! Metrics can be listed in the following format:

```yaml
metric_list:
  - metric: <name of the metric here>
    aggregation: <name of the aggregation fn here>
    higher_is_better: <true or false>
  - metric: !function script.function
    aggregation: ...
    higher_is_better: ...
```

`aggregation` and `higher_is_better` can optionally be left out to default to the manually-set defaults if using a natively supported metric, otherwise it must be defined explicitly (for example, when using a custom metric implemented as a function).

For a full list of natively supported metrics and aggregation functions see [`docs/task_guide.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md). All metrics supported in [HuggingFace Evaluate](https://github.com/huggingface/evaluate/tree/main/metrics) can also be used, and will be loaded if a given metric name is not one natively supported in `lm-eval` or `hf_evaluate` is set to `true`.

### Optional, More Advanced Setup

Some tasks may require more advanced processing logic than is described in this guide.

As a heuristic check:

- Does your task require generating multiple free-form outputs per input document?
- Does your task require complex, multi-step post-processing of generated model outputs?
- Does your task require subsetting documents on the fly based on their content?
- Do you expect to compute metrics after applying multiple such processing steps on your model outputs?
- Does your task rely on metrics that need a custom implementation?

For more detail on the task system and advanced features, see [`docs/task_guide.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md). If none of the above apply to your task, it's time to continue on to checking your task's performance!

### Task name + tags (registering a task)

To test a task conveniently, it helps to *register* the task--that is, to give it a name and make the `lm-eval` library aware it exists!

If you're writing your YAML file inside the `lm_eval/tasks` folder, you just need to give your task a name! You can do this inside your YAML file:

```yaml
task: <name of the task>
```

Including a task name is mandatory.

It is often also convenient to label your task with several `tag` values, though this field is optional:

```yaml
tag:
  - tag1
  - tag2
```

This will add your task to the `tag1` and `tag2` tags, letting users know how to categorize it and, if desired, run all tasks under one of these tags at once, your task included.

If your task is not in the `lm_eval/tasks` folder, you'll need to tell the Eval Harness where to look for YAML files.

You can do this via the `--include_path` argument in `__main__.py`. This argument is used to initialize the `TaskManager` object, which you can also use in your own custom scripts.

```python
from lm_eval.tasks import TaskManager

task_manager = TaskManager(args.verbosity, include_path=args.include_path)
```

Passing `--tasks /path/to/yaml/file` is also accepted.
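
For example, a custom evaluation script might look roughly like this (a sketch; the model arguments and paths are placeholders, and keyword names may vary slightly between versions):

```python
import lm_eval
from lm_eval.tasks import TaskManager

# Make the harness aware of YAML task configs that live outside lm_eval/tasks.
task_manager = TaskManager(include_path="/path/to/my/task/yamls")

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["<name of the task>"],
    task_manager=task_manager,
)
```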

### Advanced Group Configs

While `tag` values are helpful when you want to be able to quickly and conveniently run a set of related tasks via `--tasks my_tag_name`, often, we wish to implement more complex logic. For example, the MMLU benchmark contains 57 *subtasks* that must all be *averaged* together in order to report a final 'MMLU score'.

Groupings of tasks might also use particular variants of a task--for example, we might want to default to evaluating a task as 5-shot when called as part of a given grouping, but not have a preference for number of shots when evaluating it as a standalone.

We implement this via **groups**, which are distinct from tags. Groups can be implemented via *group config* YAML files, which are laid out similarly but slightly differently to tasks' YAML configs.

The most basic form of group can be defined via a YAML config similar to the following:

```yaml
group: nli_tasks
task:
  - cb
  - anli_r1
  - rte
metadata:
  version: 1.0
```

This will behave almost identically to a `tag` that includes these 3 tasks, but with one key distinction: we'll print the `nli_tasks` group as a row (with no associated metrics) in our table of outputs, and visually show that these 3 tasks appear under its subheader.

Now, let's assume we actually want to report an aggregate score for `nli_tasks`. We would instead use a YAML config like the following:

```yaml
group: nli_tasks
task:
  - cb
  - anli_r1
  - rte
aggregate_metric_list:
  - metric: acc
    aggregation: mean
    weight_by_size: true # defaults to `true`. Set this to `false` to do a "macro" average (taking each subtask's average accuracy, and summing those accuracies and dividing by 3)--by default we do a "micro" average (retain all subtasks' per-document accuracies, and take the mean over all documents' accuracies to get our aggregate mean).
metadata:
  version: 1.0
```

Similar to our `metric_list` for listing out the metrics we want to calculate for a given task, we use an `aggregate_metric_list` field to specify which metric name to aggregate across subtasks, what aggregation function to use, and whether we should micro- or macro- average these metrics. See [./task_guide.md](./task_guide.md) for a full list of related sub-keys.
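
To make the micro/macro distinction concrete, here is a small worked example with made-up per-document results for the three subtasks:

```python
# Hypothetical per-document correctness (1 = correct) for subtasks of different sizes.
subtask_results = {
    "cb":      [1, 0, 1, 1],        # 4 docs, acc = 0.75
    "anli_r1": [0, 1],              # 2 docs, acc = 0.50
    "rte":     [1, 1, 1, 1, 1, 0],  # 6 docs, acc ~= 0.83
}

# Micro average (weight_by_size: true): pool every document, then take one mean.
pooled = [x for docs in subtask_results.values() for x in docs]
micro = sum(pooled) / len(pooled)                      # 9 / 12 = 0.75

# Macro average (weight_by_size: false): average each subtask, then average those means.
per_task = [sum(d) / len(d) for d in subtask_results.values()]
macro = sum(per_task) / len(per_task)                  # ~= 0.69
```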

> [!TIP]
> Currently, we only support the aggregation of group metrics that use `mean` (either micro- or macro-averaged) over their subtasks. If you require more complex aggregation rules, you may want to perform aggregation offline.

Group configs can be fairly complex! We can do various operations, such as defining new subtask(s) inline in our group YAML, overriding an existing task's specific config value, or nesting existing groups within our new group.

For example, let's build a config for evaluating MMLU and a few natural language inference tasks. For MMLU, we can write the name of the benchmark as a subtask under `task`. You can configure parameters such as `num_fewshot`. If the task being configured is a group such as `mmlu` or `super_glue`, the parameter set will be applied to all of its subtasks.

```yaml
group: nli_and_mmlu
task:
  - group: nli_tasks
    task:
      - cb
      - anli_r1
      - rte
    aggregate_metric_list:
      - metric: acc
        aggregation: mean
        higher_is_better: true
  - task: mmlu
    num_fewshot: 2
```

### Configuring python classes

There can be occasions when YAML-based tasks cannot accommodate how a task needs to be handled. LM-Eval supports manually implementing tasks, as was done prior to `0.4.x`. To register such a task, you can simply make a YAML with the name of the task in `task` and the class object in `class` using the `!function` prefix.

```yaml
task: squadv2
class: !function task.SQuAD2
```

This also applies to building group configurations with subtasks that are python classes.

```yaml
group: scrolls
task:
  - task: scrolls_qasper
    class: !function task.Qasper
  - task: scrolls_quality
    class: !function task.QuALITY
  - task: scrolls_narrativeqa
    class: !function task.NarrativeQA
  ...
```

You can also pass a custom argument to your class by accepting `config` in the custom class constructor.
Here's how to do it:

```yaml
task: 20_newsgroups
class: !function task.Unitxt
recipe: card=cards.20_newsgroups,template=templates.classification.multi_class.title
```

In this example, `recipe` is the custom argument for the `Unitxt` class.
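
A minimal sketch of what such a class might look like is below; the base class and constructor signature are assumptions about your installed version, so check `lm_eval/api/task.py` for the actual interface:

```python
# Sketch only: how a task class could pick up a custom `recipe` key from its YAML.
from lm_eval.api.task import ConfigurableTask  # assumed import path


class Unitxt(ConfigurableTask):
    def __init__(self, config: dict | None = None) -> None:
        super().__init__(config=config)
        # `recipe` is the extra key declared in the task YAML above.
        self.recipe = (config or {}).get("recipe")
```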

## Beautifying Table Display

To avoid conflicts, each task needs to be registered with a unique name. Because of this, slight variations of a task are still counted as distinct tasks and need to be named uniquely. This can be done by appending a prefix or suffix that refers to the variation, as in MMLU, where the templates used to evaluate the Flan variants are differentiated from the default by the prefix `mmlu_flan_*`. Printing the full task names can easily clutter the results table at the end of an evaluation, especially when you have a long list of tasks or are using a benchmark that comprises many tasks. To make the table more legible, you can use `task_alias` and `group_alias` to provide an alternative task name and group name to print. For example, in `mmlu_abstract_algebra.yaml` we set `task_alias` to `abstract_algebra`. In group configs, a `group_alias` for the group can also be set.

```yaml
"dataset_name": "abstract_algebra"
"description": "The following are multiple choice questions (with answers) about abstract\
  \ algebra.\n\n"
"include": "_default_template_yaml"
"task": "mmlu_abstract_algebra"
"task_alias": "abstract_algebra"
```

## Checking validity

After registering your task, you can now check on your data downloading and verify that the few-shot samples look as intended. Run the following command with your desired args:

```bash
python -m scripts.write_out \
    --output_base_path <path> \
    --tasks <your-task-name> \
    --sets <train | val | test> \
    --num_fewshot K \
    --num_examples N
```

Open the file specified at the `--output_base_path <path>` and ensure it passes
a simple eye test.

## Versioning

One key feature in LM Evaluation Harness is the ability to version tasks and groups--that is, mark them with a specific version number that can be bumped whenever a breaking change is made.

This version info can be provided by adding the following to your new task or group config file:

```yaml
metadata:
  version: 0
```

Now, whenever a change needs to be made to your task in the future, please increase the version number by 1 so that users can differentiate the different task iterations and versions.

If you are incrementing a task's version, please also consider adding a changelog to the task's README.md noting the date, PR number, what version you have updated to, and a one-liner describing the change.

For example:

- \[Dec 25, 2023\] (PR #999) Version 0.0 -> 1.0: Fixed a bug with answer extraction that led to underestimated performance.

## Checking performance + equivalence

It's now time to check models' performance on your task! In the evaluation harness, we intend to support a wide range of evaluation tasks and setups, but prioritize the inclusion of already-proven benchmarks following the precise evaluation setups in the literature where possible.

To enable this, we provide a checklist that should be completed when contributing a new task, to enable accurate book-keeping and to ensure that tasks added to the library are well-tested and, where applicable, precedented.

### Task Validity Checklist

The checklist is the following:

For adding novel benchmarks/datasets to the library:

- [ ] Is the task an existing benchmark in the literature?
  - [ ] Have you referenced the original paper that introduced the task?
  - [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:

- [ ] Is the "Main" variant of this task clearly denoted?
- [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
- [ ] Have you noted which, if any, published evaluation setups are matched by this variant?

It is recommended to include a filled-out copy of this checklist in the README.md for the subfolder you are creating, if you have created a new subfolder in `lm_eval/tasks`.

**Finally, please add a short description of your task(s), along with a link to its subfolder in lm_eval/tasks, to [`lm_eval/tasks/README.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/README.md) so that users can discover your task in the library, and follow the link to your README for more information about the variants supported, their task names, and the original source of the dataset and/or evaluation setup.**

## Submitting your task

You're all set! Now push your work and make a pull request to the `main` branch! Thanks for the contribution :). If there are any questions, please leave a message in the `#lm-thunderdome` channel on the EAI discord!