# `Task` Guide

The `Task` class is the foundation of all natural language tasks in the `lm-evaluation-harness` (harness). It encompasses everything you’d need to perform few-shot evaluation of an autoregressive language model. Here we’ll provide a step-by-step guide on how to subclass `Task` to create your very own task(s).

## Setup

If you haven't already, go ahead and fork the main repo, clone it, create a branch with the name of your task, and install the project requirements in your environment:

```sh
# After forking...
git clone https://github.com/<YOUR-USERNAME>/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout -b <task-name>
pip install -r requirements.txt
```

## Creating Your Task File

The first step in creating a task is to create a Python file in `lm_eval/tasks/`  with the task's name:

```sh
cd lm_eval/tasks
touch <task-name>.py
```

Then open the file and add a multiline docstring at the top of the file containing the name of the paper associated with your task(s) on one line, the paper’s URL on the next line, and its BibTeX entry after that. For example, for the QuAC dataset you’d write:

```python
"""
QuAC: Question Answering in Context
https://arxiv.org/abs/1808.07036

@article{choi2018quac,
  title={Quac: Question answering in context},
  author={Choi, Eunsol and He, He and Iyyer, Mohit and Yatskar, Mark and Yih, Wen-tau and Choi, Yejin and Liang, Percy and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:1808.07036},
  year={2018}
}
"""
```

Now let's walk through the actual implementation - from data handling to evaluation.

## Data Handling

### Downloading your Data

There are two standard approaches we follow for downloading data:

1. First, check whether your task's dataset is already provided by HuggingFace (__HF__) by searching their `datasets` catalog [here](https://huggingface.co/datasets). If it isn't, skip to approach 2. If it is, things are a bit easier: you can inherit from the `HFTask` class like so:

    ```python
    from . common import HFTask

    class TaskName(HFTask):
        DATASET_PATH = "..."
        DATASET_NAME = "..."
    ```
    where `DATASET_PATH` is the name of the benchmark/task dataset as listed by HF and `DATASET_NAME` is the name of what HF calls a “data instance” of the benchmark. If your task is not a benchmark containing data instances, just set `DATASET_NAME = None`.

2. If your task's dataset is not in HF's catalog, you'll have to override a few abstract methods of the `Task` base class. First, let's define our benchmark/task class and inherit from `Task`:

    ```python
    from lm_eval.base import Task
    from pathlib import Path

    class TaskName(Task):
        DATASET_PATH = Path("data/<task-name>")
    ```
    where `DATASET_PATH` is the local directory we'll download into.
    Now we need to override the following methods:

    ```python
    def download(self):
    ```
    This should download the dataset into the relative path specified by `DATASET_PATH`. The preferred approach is to use EleutherAI's [best-download](https://github.com/EleutherAI/best-download) package, which provides a `download_file` function that lets you validate complete data transmission through a checksum argument. The overall logic should be: if `DATASET_PATH` already exists, don’t download anything and just return; otherwise, create the `DATASET_PATH` directory and download into it. See this [task](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/logiqa.py#L9-L21) for an example, and the consolidated sketch after this list.

    Next up, we have to set some “flags”:

    ```python
    def has_training_docs(self):
        return # True/False
    def has_validation_docs(self):
        return # True/False
    def has_test_docs(self):
        return # True/False
    ```
    These methods should return `True`/`False` depending on whether your task dataset provides documents for that split. __Note__: if the test set doesn't have publicly available labels, please do not mark the task as having a test set.

    Lastly, we need to load the documents. In our terminology, a document (`doc`) is a single natural language data example stored in a Python `dict`, e.g. `{"question": "What is the capital of France?", "answer": "Paris"}`. Override the following methods to load your data splits from their storage location in `DATASET_PATH`:
    ```python
    def training_docs(self):
        return #...
    def validation_docs(self):
        return #...
    def test_docs(self):
        return #...
    ```
    These should return a Python iterable (`list` or `generator`) of `dict`s that can be queried for individual `doc` examples. __NOTE__: If your task doesn't have a train/validation/test set, remember to raise a `NotImplementedError` for that specific split.
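
Putting approach 2 together, here is a minimal, hypothetical sketch of a non-HF task. The URL, file names, and checksum are placeholders, the JSON-lines format is an assumption about your data, and the `download_file` argument order is assumed; adapt everything to your actual dataset:

```python
import json
from pathlib import Path

from best_download import download_file

from lm_eval.base import Task


class TaskName(Task):
    DATASET_PATH = Path("data/<task-name>")

    def download(self):
        # Skip the download if the data is already on disk.
        if self.DATASET_PATH.exists():
            return
        self.DATASET_PATH.mkdir(parents=True)
        for split in ["train.jsonl", "valid.jsonl"]:
            download_file(
                f"https://example.com/<task-name>/{split}",  # placeholder URL
                str(self.DATASET_PATH / split),
                "<expected-checksum>",  # placeholder checksum; argument order assumed
            )

    def has_training_docs(self):
        return True

    def has_validation_docs(self):
        return True

    def has_test_docs(self):
        # This hypothetical dataset has no publicly labeled test split.
        return False

    def _load_docs(self, filename):
        # Assumes each line of the file is a JSON object such as
        # {"question": "...", "answer": "..."}.
        with open(self.DATASET_PATH / filename) as f:
            for line in f:
                yield json.loads(line)

    def training_docs(self):
        return self._load_docs("train.jsonl")

    def validation_docs(self):
        return self._load_docs("valid.jsonl")

    def test_docs(self):
        raise NotImplementedError("This task has no labeled test split.")
```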

### Formatting your Few-Shot Examples

The harness is designed to facilitate task evaluations under the few-shot setting. Here we’ll format such examples.

<br>

⚠️  **Multiple-Choice Formatting**

If your task is **multiple-choice**, just inherit from the `MultipleChoiceTask` class we provide.

```python
from lm_eval.base import MultipleChoiceTask

class TaskName(..., MultipleChoiceTask):
```

This will require you to format your documents such that they contain `gold` and `choices` fields. They can also have other fields, but those will be ignored by `MultipleChoiceTask`. `choices` should be a list of possible continuations, and `gold` should be an integer specifying the index of the correct completion.
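
For instance, a converted `doc` might look like the following. Only `choices` and `gold` are consumed by `MultipleChoiceTask`; the `"query"` field is just an illustrative name your own `doc_to_text` could use to build the prompt:

```python
# A hypothetical multiple-choice document after formatting.
doc = {
    "query": "The capital city of France is",                    # illustrative prompt field
    "choices": [" London", " Paris", " Berlin", " Madrid"],      # candidate continuations
    "gold": 1,                                                   # index of the correct choice (" Paris")
}
```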

See [this task](https://github.com/EleutherAI/lm-evaluation-harness/blob/105fa9741ff660f6a62c2eef0d2facfde36dda41/lm_eval/tasks/sat.py#L56) for an example. When used in combination with `HFTask`, it may be useful to override [`_convert_standard`](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/common.py#L28), which will be applied to every document in the HF dataset. See [this task](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/headqa.py) for an example of this.

You can now skip ahead to <a href="#Registering-Your-Task">registering your task</a>.

⚠️  **End Multiple-Choice Formatting**

<br>

If your task is _not_ multiple-choice, override the following methods in your task class:

Format your document into a single query prompt __without the answer__ here. This method takes a single `doc` example of type `dict` with `str` key-value members. You should concatenate these `doc` item values together into a neatly formatted prompt.

```python
def doc_to_text(self, doc):
    return ""
```

Put the target answer of the prompt here, in the form: `" " + <answer>`.

```python
def doc_to_target(self, doc):
    return ""
```

Understand that the strings from `doc_to_text` and `doc_to_target` will be concatenated together to build up labeled examples in the k-shot setting where k > 0. Design with that in mind 👍.
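
As a hedged illustration, for a hypothetical QA `doc` like the one shown earlier (`{"question": "What is the capital of France?", "answer": "Paris"}`), these two methods might look like:

```python
# Illustrative only; assumes docs of the form
# {"question": "What is the capital of France?", "answer": "Paris"}.
def doc_to_text(self, doc):
    return f"Question: {doc['question']}\nAnswer:"

def doc_to_target(self, doc):
    # Leading space so the target concatenates cleanly after the prompt.
    return " " + doc["answer"]
```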

### Registering Your Task

Now's a good time to register your task to expose it for usage. All you'll need to do is import your task module in `lm_eval/tasks/__init__.py` and provide an entry in the `TASK_REGISTRY`  dictionary with the key as the name of your benchmark task (in the form it'll be referred to in the command line) and the value as the task class. See how it's done for other tasks in the [file](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/__init__.py).
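
A sketch of what that registration might look like (the module and class names here are placeholders; mirror the existing entries in the file):

```python
# lm_eval/tasks/__init__.py (illustrative)
from . import your_task_module  # hypothetical module name

TASK_REGISTRY = {
    # ... existing entries ...
    "<task-name>": your_task_module.TaskName,
}
```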

### Checking the Data

After registering your task, you can now check on your data downloading and verify that the few-shot samples look as intended. Run the following command with your desired args:

```bash
python -m scripts.write_out \
    --output_base_path <path> \
    --tasks <your-task> \
    --sets <train | val | test> \
    --num_fewshot K \
    --num_examples N \
    --description_dict_path <path>
```

Open the file specified by `--output_base_path <path>` and ensure it passes a simple eye test.

## Evaluation

**🛑**  If your task is a single-true multiple-choice task and you've correctly inherited from `MultipleChoiceTask`, then your job here is done; <a href="#Checking-the-Task-Performance">go ahead and check on the task performance!</a> 🛑

Now comes evaluation. The methods you'll need to implement are:

```python
def construct_requests(self, doc, ctx):
    """ Uses RequestFactory to construct Requests and returns an iterable of
    Requests which will be sent to the LM.

    :param doc:
        The document as returned from training_docs, validation_docs, or test_docs.
    :param ctx: str
        The context string, generated by fewshot_context. This includes the natural
        language description, as well as the few shot examples, and the question
        part of the document for `doc`.
    """
    return ...
```
If your task requires generating text, you'll need to return an `rf.greedy_until` request; otherwise, for classification tasks, an `rf.loglikelihood` request across all labels will do.
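
For example, a hypothetical yes/no classification task might issue one loglikelihood request per label (a sketch; `rf` comes from `lm_eval.base`, and the continuation strings are placeholders):

```python
# Sketch for a hypothetical yes/no classification task.
def construct_requests(self, doc, ctx):
    ll_yes, _ = rf.loglikelihood(ctx, " yes")
    ll_no, _ = rf.loglikelihood(ctx, " no")
    return ll_yes, ll_no
```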

```python
def process_results(self, doc, results):
    """Take a single document and the LM results and evaluates, returning a
    dict where keys are the names of submetrics and values are the values of
    the metric for that one document

    :param doc:
        The document as returned from training_docs, validation_docs, or test_docs.
    :param results:
        The results of the requests created in construct_requests.
    """
    return {}
```
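
Continuing the hypothetical yes/no sketch above (`doc["label"]` is an assumed field name, with `1` meaning "yes"), the `results` arrive in the same order the requests were returned from `construct_requests`:

```python
def process_results(self, doc, results):
    ll_yes, ll_no = results
    gold = doc["label"]  # assumed: 1 = "yes", 0 = "no"
    acc = 1.0 if (ll_yes > ll_no) == gold else 0.0
    return {"acc": acc}
```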

```python
def aggregation(self):
    """
    :returns: {str: [float] -> float}
        A dictionary where keys are the names of submetrics and values are
        functions that aggregate a list of metrics
    """
    return {}
```

See `lm_eval/metrics.py` for a few "built-in" aggregate metrics you can easily import.
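
For the running yes/no sketch, aggregation could simply average the per-document accuracies with the built-in `mean` helper:

```python
from lm_eval.metrics import mean  # normally imported at the top of the file

def aggregation(self):
    return {"acc": mean}
```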

```python
def higher_is_better(self):
    """
    :returns: {str: bool}
        A dictionary where keys are the names of submetrics and values are
        whether a higher value of the submetric is better
    """
    return {}
```
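
And for the same sketch, higher accuracy is better:

```python
def higher_is_better(self):
    return {"acc": True}
```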

Some tasks that are good examples of various ways evaluation can be implemented can be found here: [LAMBADA](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/lambada.py), [TriviaQA](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/triviaqa.py), [SQuAD](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/squad.py).

Tip: Feel free to create your own helper methods for your task!

### Checking the Task Performance

```sh
python main.py \
	--model gpt2 \
	--model_args device=<device-name> \
	--tasks <task-name> \
	--num_fewshot K
```

Set the limit size, `N` (passed via the `--limit N` argument), to a smallish number (e.g. 10) and try out the task under different `K`-shot settings. If you have an Nvidia GPU at your disposal, set `--model_args device=cuda:0`. If you have access to an OpenAI API key, you can also evaluate GPT-3 on various tasks with the following command:

```sh
export OPENAI_API_SECRET_KEY=YOUR_KEY_HERE
python main.py \
	--model gpt3 \
	--tasks <task-name> \
	--num_fewshot K
```

### Running Unit Tests

To run the entire test suite, use:

```sh
pytest
```

This is usually overkill; to run only the tests for your task, do:
```sh
pytest -k <task name>
```

## Versioning

Lastly, we need a bit of "version control". Tasks in the harness evolve over time: metrics get updated, data sources change, etc. It’s important to mark each task with a version attribute so users can document which implementation version was used to obtain their results. Add a `VERSION` attribute to your task right below the class name and set it to `0` (this is the first version/implementation of your task):

```python
class TaskName(...):
    VERSION = 0
```

## Submitting your Task

Although we currently do not follow a specific style guide, we'd appreciate it if you tidied up your file(s) with the `black` formatter (which should have been installed through `requirements.txt`). Keep things clean…ish 🙂.

Now push your work and make a pull request! Thanks for the contribution 👍. If there are any questions, leave a message in the `#lm-thunderdome` channel on the EAI discord.