# Language Model Evaluation Harness

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10256836.svg)](https://doi.org/10.5281/zenodo.10256836)

## Announcement
**A new v0.4.0 release of lm-evaluation-harness is available!**

New updates and features include:

- Internal refactoring
- Config-based task creation and configuration
- Easier import and sharing of externally-defined task config YAMLs
- Support for Jinja2 prompt design, easy modification of prompts + prompt imports from Promptsource
- More advanced configuration options, including output post-processing, answer extraction, multiple LM generations per document, configurable fewshot settings, and more
- Speedups and new modeling libraries supported, including: faster data-parallel HF model usage, vLLM support, MPS support with HuggingFace, and more
- Logging and usability changes
- New tasks including CoT BIG-Bench-Hard, Belebele, user-defined task groupings, and more

Please see our updated documentation pages in `docs/` for more details.

Development continues on the `main` branch. We encourage you to tell us which features you'd like, suggest improvements to the library, or ask questions, either in issues or PRs on GitHub or in the [EleutherAI discord](https://discord.gg/eleutherai)!

## Overview

This project provides a unified framework to test generative language models on a large number of different evaluation tasks.

**Features:**
- Over 60 standard academic benchmarks for LLMs, with hundreds of subtasks and variants implemented.
- Support for models loaded via [transformers](https://github.com/huggingface/transformers/) (including quantization via [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), and [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), with a flexible tokenization-agnostic interface.
- Support for fast and memory-efficient inference with [vLLM](https://github.com/vllm-project/vllm).
- Support for commercial APIs including [OpenAI](https://openai.com) and [TextSynth](https://textsynth.com/).
- Support for evaluation on adapters (e.g. LoRA) supported in [HuggingFace's PEFT library](https://github.com/huggingface/peft).
- Support for local models and benchmarks.
- Evaluation with publicly available prompts ensures reproducibility and comparability between papers.
- Easy support for custom prompts and evaluation metrics.

The Language Model Evaluation Harness is the backend for 🤗 Hugging Face's popular [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), has been used in [hundreds of papers](https://scholar.google.com/scholar?oi=bibs&hl=en&authuser=2&cites=15052937328817631261,4097184744846514103,1520777361382155671,17476825572045927382,18443729326628441434,14801318227356878622,7890865700763267262,12854182577605049984,15641002901115500560,5104500764547628290), and is used internally by dozens of organizations including NVIDIA, Cohere, BigScience, BigCode, Nous Research, and Mosaic ML.

## Install

To install the `lm-eval` package from the GitHub repository, run:

```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

We also provide a number of optional dependencies for extended functionality. Extras can be installed via `pip install -e ".[NAME]"`.

| Name          | Use                                   |
|---------------|---------------------------------------|
| anthropic     | For using Anthropic's models          |
| dev           | For linting PRs and contributions     |
| gptq          | For loading models with GPTQ          |
| ifeval        | For running the IFEval task           |
| mamba         | For loading Mamba SSM models          |
| math          | For running math task answer checking |
| multilingual  | For multilingual tokenizers           |
| openai        | For using OpenAI's models             |
| optimum       | For running Intel OpenVINO models     |
| promptsource  | For using PromptSource prompts        |
| sentencepiece | For using the sentencepiece tokenizer |
| testing       | For running library test suite        |
| vllm          | For loading models with vLLM          |
| zeno          | For visualizing results with Zeno     |
| all           | Loads all extras (not recommended)    |
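
For example, to install the extras for vLLM and GPTQ support in one step (any names from the table above can be combined, comma-separated):

```bash
pip install -e ".[vllm,gptq]"
```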

## Basic Usage

### Hugging Face `transformers`

To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. GPT-J-6B) on `hellaswag`, you can use the following command (this assumes you are using a CUDA-compatible GPU):

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks hellaswag \
    --device cuda:0 \
    --batch_size 8
```

Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the Hub's `revisions` feature to store partially trained checkpoints, as well as specifying the datatype for running a model:

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
    --tasks lambada_openai,hellaswag \
    --device cuda:0 \
    --batch_size 8
```

Models loaded via either `transformers.AutoModelForCausalLM` (autoregressive, decoder-only GPT-style models) or `transformers.AutoModelForSeq2SeqLM` (encoder-decoder models such as T5) are supported.
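
For example, a seq2seq model can be evaluated with the same `hf` model type as decoder-only models (a sketch; the checkpoint and task here are illustrative):

```bash
lm_eval --model hf \
    --model_args pretrained=google/flan-t5-small \
    --tasks hellaswag \
    --device cuda:0 \
    --batch_size 8
```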

Batch size selection can be automated by setting the `--batch_size` flag to `auto`. This will automatically detect the largest batch size that fits on your device. On tasks where there is a large difference between the longest and shortest example, it can be helpful to periodically recompute the largest batch size to gain a further speedup. To do this, append `:N` to the flag to recompute the largest batch size `N` times. For example, to recompute the batch size 4 times, the command would be:

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
    --tasks lambada_openai,hellaswag \
    --device cuda:0 \
    --batch_size auto:4
```

The full list of supported arguments is provided [here](./docs/interface.md), and on the terminal by calling `lm_eval -h`. Alternatively, you can use `lm-eval` instead of `lm_eval`.

> [!Note]
> Just like you can provide a local path to `transformers.AutoModel`, you can also provide a local path to `lm_eval` via `--model_args pretrained=/path/to/model`

#### Multi-GPU Evaluation with Hugging Face `accelerate`

We support two main ways of using Hugging Face's [accelerate 🚀](https://github.com/huggingface/accelerate) library for multi-GPU evaluation.

To perform *data-parallel evaluation* (where each GPU loads a **separate full copy** of the model), we leverage the `accelerate` launcher as follows:

```bash
accelerate launch -m lm_eval --model hf \
    --tasks lambada_openai,arc_easy \
    --batch_size 16
```
(or via `accelerate launch --no-python lm_eval`).

For cases where your model can fit on a single GPU, this allows you to evaluate on K GPUs K times faster than on one.

**WARNING**: This setup does not work with FSDP model sharding, so in `accelerate config` FSDP must be disabled, or the NO_SHARD FSDP option must be used.

The second way of using `accelerate` for multi-GPU evaluation is when your model is *too large to fit on a single GPU.*

In this setting, run the library *outside of the `accelerate` launcher*, but pass `parallelize=True` to `--model_args` as follows:

```bash
lm_eval --model hf \
    --tasks lambada_openai,arc_easy \
    --model_args parallelize=True \
    --batch_size 16
```

This means that your model's weights will be split across all available GPUs.

For more advanced users or even larger models, we allow for the following arguments when `parallelize=True` as well:
- `device_map_option`: How to split model weights across available GPUs. Defaults to `"auto"`.
- `max_memory_per_gpu`: The maximum GPU memory to use per GPU when loading the model.
- `max_cpu_memory`: The maximum CPU memory to use when offloading model weights to RAM.
- `offload_folder`: A folder where model weights will be offloaded to disk if needed.
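
For example, a hedged sketch capping per-GPU memory while sharding a model across GPUs (the model and memory value are illustrative and should be tuned to your hardware):

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6b,parallelize=True,max_memory_per_gpu=20GiB \
    --tasks lambada_openai \
    --batch_size 16
```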

These two options (`accelerate launch` and `parallelize=True`) are mutually exclusive.

### Tensor + Data Parallel and Optimized Inference with `vLLM`

We also support vLLM for faster inference on [supported model types](https://docs.vllm.ai/en/latest/models/supported_models.html), which is especially useful when splitting a model across multiple GPUs. Inference can be run on a single GPU, or on multiple GPUs with tensor parallelism, data parallelism, or a combination of both. For example:

```bash
lm_eval --model vllm \
    --model_args pretrained={model_name},tensor_parallel_size={GPUs_per_model},dtype=auto,gpu_memory_utilization=0.8,data_parallel_size={model_replicas} \
    --tasks lambada_openai \
    --batch_size auto
```
For a full list of supported vLLM configurations, please reference our vLLM integration and the vLLM documentation.

vLLM occasionally differs in output from Huggingface. We treat Huggingface as the reference implementation, and provide a [script](./scripts/model_comparator.py) for checking the validity of vLLM results against HF.

### Model APIs and Inference Servers

Our library also supports the evaluation of models served via several commercial APIs, and we hope to implement support for the most commonly used performant local/self-hosted inference servers.

To call a hosted model, use:

```bash
export OPENAI_API_KEY=YOUR_KEY_HERE
lm_eval --model openai-completions \
    --model_args model=davinci \
    --tasks lambada_openai,hellaswag
```

We also support using your own local inference server, provided it mirrors the OpenAI Completions or ChatCompletions API.

```bash
lm_eval --model local-chat-completions --tasks gsm8k --model_args model=facebook/opt-125m,base_url=http://{yourip}:8000/v1
```
Note that for externally hosted models, configs such as `--device` and `--batch_size` should not be used and do not function. Just like you can use `--model_args` to pass arbitrary arguments to the model constructor for local models, you can use it to pass arbitrary arguments to the model API for hosted models. See the documentation of the hosting service for information on what arguments they support.

| API or Inference Server                                                                                                   | Implemented?                    | `--model <xxx>` name                                                | Models supported:                                                                             | Request Types:                                             |
|---------------------------------------------------------------------------------------------------------------------------|---------------------------------|---------------------------------------------------------------------|-----------------------------------------------------------------------------------------------|------------------------------------------------------------|
| OpenAI Completions                                                                                                        | :heavy_check_mark:              | `openai-completions`, `local-completions` | All OpenAI Completions API models                                            | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| OpenAI ChatCompletions                                                                                                    | :heavy_check_mark:        | `openai-chat-completions`, `local-chat-completions`                                                               | [All ChatCompletions API models](https://platform.openai.com/docs/guides/gpt)                 | `generate_until` (no logprobs)                             |
| Anthropic                                                                                                                 | :heavy_check_mark:              | `anthropic`                                                         | [Supported Anthropic Engines](https://docs.anthropic.com/claude/reference/selecting-a-model)  | `generate_until` (no logprobs)                             |
| Textsynth                                                                                                                 | :heavy_check_mark:                   | `textsynth`                                                         | [All supported engines](https://textsynth.com/documentation.html#engines)                     | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Cohere                                                                                                                    | [:hourglass: - blocked on Cohere API bug](https://github.com/EleutherAI/lm-evaluation-harness/pull/395) | N/A                                                                 | [All `cohere.generate()` engines](https://docs.cohere.com/docs/models)                        | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| [Llama.cpp](https://github.com/ggerganov/llama.cpp) (via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)) | :heavy_check_mark:              | `gguf`, `ggml`                                                      | [All models supported by llama.cpp](https://github.com/ggerganov/llama.cpp)                   | `generate_until`, `loglikelihood`, (perplexity evaluation not yet implemented) |
| vLLM                                                                                                                      | :heavy_check_mark:       | `vllm`                                                              | [Most HF Causal Language Models](https://docs.vllm.ai/en/latest/models/supported_models.html) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Mamba                       | :heavy_check_mark:       | `mamba_ssm`                                                                      | [Mamba architecture Language Models via the `mamba_ssm` package](https://huggingface.co/state-spaces) | `generate_until`, `loglikelihood`, `loglikelihood_rolling`                             |
| Huggingface Optimum (Causal LMs)                                                                                          | :heavy_check_mark:              | `openvino`                                                          | Any decoder-only AutoModelForCausalLM converted with Huggingface Optimum into OpenVINO™ Intermediate Representation (IR) format                               | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Your local inference server!                                                                                              | :heavy_check_mark:              | `local-completions` or `local-chat-completions` (using `openai-chat-completions` model type) | Any server address that accepts GET requests using HF models and mirrors OpenAI's Completions or ChatCompletions interface                                    | `generate_until`                                           |

Models which do not supply logits or logprobs can be used with tasks of type `generate_until` only, while local models, or APIs that supply logprobs/logits of their prompts, can be run on all task types: `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.

For more information on the different task `output_types` and model request types, see [our documentation](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/model_guide.md#interface).

### Other Frameworks

A number of other libraries contain scripts for calling the eval harness through their library. These include [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py), [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/blob/main/examples/MoE/readme_evalharness.md), and [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/eval_harness.py).

To create your own custom integration, you can follow instructions from [this tutorial](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md#external-library-usage).
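
As a minimal sketch of calling the harness as a library (assuming `lm_eval` is installed; the model and task names are illustrative):

```python
import lm_eval

# Register the library's stock tasks; without this, no tasks are found
# when using lm_eval as a library.
lm_eval.tasks.initialize_tasks()

# simple_evaluate handles model instantiation and evaluation in one call.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["lambada_openai"],
    num_fewshot=0,
)
print(results["results"])
```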

### Additional Features

If you have a Metal-compatible Mac, you can run the eval harness using the MPS backend by replacing `--device cuda:0` with `--device mps` (requires PyTorch version 2.1 or higher).
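
For example (any of the `hf` commands above work the same way; the model here is illustrative):

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks hellaswag \
    --device mps \
    --batch_size 8
```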

> [!Note]
> You can inspect what the LM inputs look like by running the following command:
> ```bash
> python write_out.py \
>     --tasks all_tasks \
>     --num_fewshot 5 \
>     --num_examples 10 \
>     --output_base_path /path/to/output/folder
> ```
> This will write out one text file for each task.

To verify the data integrity of the tasks you're running, in addition to executing the tasks themselves, you can use the `--check_integrity` flag:

```bash
lm_eval --model openai \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag \
    --check_integrity
```

## Advanced Usage Tips

For models loaded with the HuggingFace `transformers` library, any arguments provided via `--model_args` get passed to the relevant constructor directly. This means that anything you can do with `AutoModel` can be done with our library. For example, you can pass a local path via `pretrained=` or use models finetuned with [PEFT](https://github.com/huggingface/peft) by taking the call you would run to evaluate the base model and adding `,peft=PATH` to the `model_args` argument:
```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6b,parallelize=True,load_in_4bit=True,peft=nomic-ai/gpt4all-j-lora \
    --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
    --device cuda:0
```

[GPTQ](https://github.com/PanQiWei/AutoGPTQ)-quantized models can be loaded by specifying their file names in `,autogptq=NAME` (or `,autogptq=True` for default names) in the `model_args` argument:

```bash
lm_eval --model hf \
    --model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \
    --tasks hellaswag
```

We support wildcards in task names; for example, you can run all of the machine-translated lambada tasks via `--tasks lambada_openai_mt_*`.
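
For instance, a sketch running every matching machine-translated lambada task (the model is illustrative; the pattern is quoted to avoid shell globbing):

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks "lambada_openai_mt_*" \
    --device cuda:0
```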

To save evaluation results, provide an `--output_path`. We also support logging model responses with the `--log_samples` flag for post-hoc analysis.

Additionally, one can provide a directory with `--use_cache` to cache the results of prior runs. This allows you to avoid repeated execution of the same (model, task) pairs for re-scoring.
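
A sketch combining these flags (the model and paths are illustrative):

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks hellaswag \
    --output_path results/pythia-160m \
    --log_samples \
    --use_cache cache/pythia-160m
```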

For a full list of supported arguments, check out the [interface](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md) guide in our documentation!

> [!Tip]
> Running lm-evaluation-harness as an external library and can't find (almost) any tasks available? Run `lm_eval.tasks.initialize_tasks()` to load the library's stock tasks before calling `lm_eval.evaluate()` or `lm_eval.simple_evaluate()`!

## Visualizing Results

You can use [Zeno](https://zenoml.com) to visualize the results of your eval harness runs.

First, head to [hub.zenoml.com](https://hub.zenoml.com) to create an account and get an API key [on your account page](https://hub.zenoml.com/account).
Add this key as an environment variable:

```bash
export ZENO_API_KEY=[your api key]
```

You'll also need to install the `lm_eval[zeno]` package extra.

To visualize the results, run the eval harness with the `log_samples` and `output_path` flags.
We expect `output_path` to contain multiple folders that represent individual model names.
You can thus run your evaluation on any number of tasks and models and upload all of the results as projects on Zeno.

```bash
lm_eval \
    --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks hellaswag \
    --device cuda:0 \
    --batch_size 8 \
    --log_samples \
    --output_path output/gpt-j-6B
```

Then, you can upload the resulting data using the `zeno_visualize` script:

```bash
python scripts/zeno_visualize.py \
    --data_path output \
    --project_name "Eleuther Project"
```

This will use all subfolders in `data_path` as different models and upload all tasks within these model folders to Zeno.
If you run the eval harness on multiple tasks, the `project_name` will be used as a prefix and one project will be created per task.

You can find an example of this workflow in [examples/visualize-zeno.ipynb](examples/visualize-zeno.ipynb).

## How to Contribute or Learn More?

For more information on the library and how everything fits together, check out all of our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs)! We plan to post a larger roadmap of desired + planned library improvements soon, with more information on how contributors can help.

### Implementing new tasks

To implement a new task in the eval harness, see [this guide](./docs/new_task_guide.md).

In general, we follow this priority list for addressing concerns about prompting and other eval details:
1. If there is widespread agreement among people who train LLMs, use the agreed upon procedure.
2. If there is a clear and unambiguous official implementation, use that procedure.
3. If there is widespread agreement among people who evaluate LLMs, use the agreed upon procedure.
4. If there are multiple common implementations but not universal or widespread agreement, use our preferred option among the common implementations. As before, prioritize choosing from among the implementations found in LLM training papers.

These are guidelines and not rules, and can be overruled in special circumstances.

We try to prioritize agreement with the procedures used by other groups to decrease the harm when people inevitably compare runs across different papers despite our discouragement of the practice. Historically, we also prioritized the implementation from [Language Models are Few Shot Learners](https://arxiv.org/abs/2005.14165) as our original goal was specifically to compare results with that paper.

### Support

The best way to get support is to open an issue on this repo or join the [EleutherAI Discord server](https://discord.gg/eleutherai). The `#lm-thunderdome` channel is dedicated to developing this project and the `#release-discussion` channel is for receiving support for our releases. If you've used the library and have had a positive (or negative) experience, we'd love to hear from you!

## Cite as

```
@misc{eval-harness,
  author       = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
  title        = {A framework for few-shot language model evaluation},
  month        = 12,
  year         = 2023,
  publisher    = {Zenodo},
  version      = {v0.4.0},
  doi          = {10.5281/zenodo.10256836},
  url          = {https://zenodo.org/records/10256836}
}
```