# Language Model Evaluation Harness

## Notice to Users
(as of 6/15/23)
We have a revamp of the Evaluation Harness library internals staged on the [big-refactor](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor) branch! It is far along in progress, but before we start to move the `master` branch of the repository over to this new design with a new version release, we'd like to ensure that it's been tested by outside users and there are no glaring bugs.

We'd like your help to test it out! You can help by:
1. Trying out your current workloads on the big-refactor branch, and seeing if anything breaks or is counterintuitive,
2. Porting tasks supported in the previous version of the harness to the new YAML configuration format. Please check out our [task implementation guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/new_task_guide.md) for more information.

If you choose to port a task not yet completed according to [our checklist](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/lm_eval/tasks/README.md), you can contribute it by opening a PR with `[Refactor]` in the title that includes:
- A shell command to run the task on the `master` branch, and the resulting score
- A shell command to run the task on your PR branch against `big-refactor`, and the resulting score, to show that the two implementations produce equal results (see the example below).
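
For example, for a hypothetical port of `hellaswag` (the task and model named here are purely illustrative), the `master`-side command might look like the following, paired with the analogous command run from your PR branch:

```bash
# Run on the master branch and record the reported score in your PR;
# repeat the analogous command on your big-refactor PR branch.
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks hellaswag \
    --device cuda:0
```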

Lastly, as we carry out this switch to the new version over the next week, we will no longer accept new feature requests for the `master` branch beyond those already open, though we will continue to accept bug fixes to `master` and PRs to `big-refactor`. Feel free to reach out in the #lm-thunderdome channel of the EleutherAI Discord for more information.

## Overview

This project provides a unified framework to test generative language models on a large number of different evaluation tasks.

Features:

- 200+ tasks implemented. See the [task-table](./docs/task_table.md) for a complete list.
- Support for models loaded via [transformers](https://github.com/huggingface/transformers/) (including quantization via [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), and [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), with a flexible tokenization-agnostic interface.
- Support for commercial APIs including [OpenAI](https://openai.com), [goose.ai](https://goose.ai), and [TextSynth](https://textsynth.com/).
- Support for evaluation of adapters (e.g. LoRA) supported in [Hugging Face's PEFT library](https://github.com/huggingface/peft).
- Evaluating with publicly available prompts ensures reproducibility and comparability between papers.
- Task versioning to ensure reproducibility when tasks are updated.

## Install

To install the `lm-eval` refactor branch from the GitHub repository, run:

```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
git checkout big-refactor
pip install -e .
```

To get additional packages for multilingual tokenization and text segmentation, install the package with the `multilingual` extra:

```bash
pip install -e ".[multilingual]"
```

To support loading GPTQ quantized models, install the package with the `auto-gptq` extra:

```bash
pip install -e ".[auto-gptq]"
```

## Basic Usage

### Hugging Face `transformers`

To evaluate a model hosted on the [Hugging Face Hub](https://huggingface.co/models) (e.g. GPT-J-6B) on `hellaswag`, you can use the following command:

```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks hellaswag \
    --device cuda:0 \
    --batch_size 8
```

Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or of specifying the datatype for running a model:

```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
    --tasks lambada_openai,hellaswag \
    --device cuda:0 \
    --batch_size 8
```

### Multi-GPU Evaluation with Hugging Face `transformers`

To parallelize evaluation across multiple GPUs, you can launch evaluation via the `accelerate` library as follows:

```bash
accelerate launch main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks lambada_openai,arc_easy \
    --batch_size 16
```

### Evaluation of Seq2Seq Models

To evaluate encoder-decoder models loaded via Hugging Face's `AutoModelForSeq2SeqLM` class (such as T5), use `--model hf-seq2seq` instead. Support for this model type is currently pending.
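
Once support lands, an invocation should mirror the `hf-causal` examples above; a minimal sketch, where the checkpoint (`google/flan-t5-small`) and task are purely illustrative:

```bash
# Sketch only: hf-seq2seq support is still pending at the time of writing.
python main.py \
    --model hf-seq2seq \
    --model_args pretrained=google/flan-t5-small \
    --tasks lambada_openai \
    --device cuda:0
```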

> **Warning**: Choosing the wrong model type may produce erroneous outputs without raising an error.

### Commercial APIs

Our library also supports language models served via the OpenAI API:

```bash
export OPENAI_API_SECRET_KEY=YOUR_KEY_HERE
python main.py \
    --model openai \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag
```

While this functionality is officially maintained only for the OpenAI API, it tends to also work for other hosting services that expose the same API, such as [goose.ai](https://goose.ai), with minor modifications. We also have an implementation for the [TextSynth](https://textsynth.com/index.html) API, used via `--model textsynth`.
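
As a sketch, a TextSynth run could look like the following; we assume here that the engine name is passed via `engine=` and that the API key is read from a `TEXTSYNTH_API_SECRET_KEY` environment variable, analogous to the OpenAI setup below (check the TextSynth documentation for valid engine names):

```bash
# Assumed setup: engine name and env var are illustrative.
export TEXTSYNTH_API_SECRET_KEY=YOUR_KEY_HERE
python main.py \
    --model textsynth \
    --model_args engine=gptj_6B \
    --tasks lambada_openai
```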

To verify the data integrity of the tasks you're running in addition to running the tasks themselves, you can use the `--check_integrity` flag:

```bash
python main.py \
    --model openai \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag \
    --check_integrity
```

### Other Frameworks

A number of other libraries contain scripts for calling the eval harness through their library. These include [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py), [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/blob/main/examples/MoE/readme_evalharness.md), and [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/eval_harness.py).

💡 **Tip**: You can inspect what the LM inputs look like by running the following command:

```bash
python write_out.py \
    --tasks all_tasks \
    --num_fewshot 5 \
    --num_examples 10 \
    --output_base_path /path/to/output/folder
```

This will write out one text file for each task.

## Advanced Usage

For models loaded with the Hugging Face `transformers` library, any arguments provided via `--model_args` get passed to the relevant constructor directly. This means that anything you can do with `AutoModel` can be done with our library. For example, you can pass a local path via `pretrained=`, or use models finetuned with [PEFT](https://github.com/huggingface/peft) by taking the call you would run to evaluate the base model and adding `,peft=PATH` to the `model_args` argument:
```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/gpt-j-6b,peft=nomic-ai/gpt4all-j-lora \
    --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
    --device cuda:0
```

GPTQ quantized models can be loaded by specifying their file names via `,quantized=NAME` (or `,quantized=True` for default names) in the `model_args` argument:

```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=model-name-or-path,quantized=model.safetensors,gptq_use_triton=True \
    --tasks hellaswag
```

We support wildcards in task names; for example, you can run all of the machine-translated lambada tasks via `--tasks lambada_openai_mt_*`, as shown below.
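
For instance, the following command (the model choice is illustrative) evaluates every task matching the pattern; the pattern is quoted so the shell doesn't expand it against local file names:

```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks "lambada_openai_mt_*" \
    --device cuda:0
```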

We currently support only one prompt per task, which we strive to make the "standard" as defined by the benchmark's authors. If you would like to study how varying prompts causes changes in the evaluation score, check out the [BigScience fork](https://github.com/bigscience-workshop/lm-evaluation-harness) of this repo. We are currently working on upstreaming this capability to `master`.

## Implementing new tasks

To implement a new task in the eval harness, see [this guide](./docs/new_task_guide.md).

## Cite as

```
@software{eval-harness,
  author       = {Gao, Leo and
                  Tow, Jonathan and
                  Biderman, Stella and
                  Black, Sid and
                  DiPofi, Anthony and
                  Foster, Charles and
                  Golding, Laurence and
                  Hsu, Jeffrey and
                  McDonell, Kyle and
                  Muennighoff, Niklas and
                  Phang, Jason and
                  Reynolds, Laria and
                  Tang, Eric and
                  Thite, Anish and
                  Wang, Ben and
                  Wang, Kevin and
                  Zou, Andy},
  title        = {A framework for few-shot language model evaluation},
  month        = sep,
  year         = 2021,
  publisher    = {Zenodo},
  version      = {v0.0.1},
  doi          = {10.5281/zenodo.5371628},
  url          = {https://doi.org/10.5281/zenodo.5371628}
}
```