Unverified commit 4de8a74e authored by Stella Biderman, committed by GitHub

Merge branch 'master' into json-task

parents 3226ed64 bda68845
name: Pull Request
on: [pull_request]
jobs:
  pre-commit:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: 3.8
      - uses: pre-commit/action@v2.0.3
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Build
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Cache
        uses: actions/cache@v2.1.3
        with:
          # A list of files, directories, and wildcard patterns to cache and restore
          path: |
            ~/.cache
          # An explicit key for restoring and saving the cache
          key: evaldata-cache-4
      - name: Set up Python 3.9
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest pytest-cov
          pip install -e .[dev,multilingual]
          # Install optional git dependencies
          pip install bleurt@https://github.com/google-research/bleurt/archive/b610120347ef22b494b6d69b4316e303f5932516.zip#egg=bleurt
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          pytest -vv --cov=lm_eval/ tests/
      - name: Upload to codecov
        run: |
          bash <(curl -s https://codecov.io/bash) -t $CODECOV_TOKEN
@@ -32,7 +32,7 @@ repos:
    rev: 22.3.0
    hooks:
      - id: black
        language_version: python3.9
  - repo: https://github.com/codespell-project/codespell
    rev: v2.1.0
    hooks:
# Language Model Evaluation Harness
![](https://github.com/EleutherAI/lm-evaluation-harness/workflows/Build/badge.svg)
[![codecov](https://codecov.io/gh/EleutherAI/lm-evaluation-harness/branch/master/graph/badge.svg?token=JSG3O2427J)](https://codecov.io/gh/EleutherAI/lm-evaluation-harness)
## Overview
This project provides a unified framework to test generative language models on a large number of different evaluation tasks.
Features:
- 200+ tasks implemented. See the [task-table](./docs/task_table.md) for a complete list.
- Support for models loaded via [transformers](https://github.com/huggingface/transformers/), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), and [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), with a flexible tokenization-agnostic interface.
- Support for commercial APIs including [OpenAI](https://openai.com), [goose.ai](https://goose.ai), and [TextSynth](https://textsynth.com/).
- Support for evaluation on adapters (e.g. LoRA) supported in [HuggingFace's PEFT library](https://github.com/huggingface/peft).
- Evaluating with publicly available prompts ensures reproducibility and comparability between papers.
- Task versioning to ensure reproducibility when tasks are updated.
## Install
To install `lm-eval` from the GitHub repository main branch, run:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
To install additional multilingual tokenization and text segmentation packages, you must install the package with the `multilingual` extra:
```bash
pip install -e ".[multilingual]"
```
## Basic Usage
> **Note**: When reporting results from eval harness, please include the task versions (shown in `results["versions"]`) for reproducibility. This allows bug fixes to tasks while also ensuring that previously reported scores are reproducible. See the [Task Versioning](#task-versioning) section for more info.
### Hugging Face `transformers`
To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. GPT-J-6B) on `hellaswag`, you can use the following command:
```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks hellaswag \
    --device cuda:0
```
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints:
```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/pythia-160m,revision=step100000 \
    --tasks lambada_openai,hellaswag \
    --device cuda:0
```
To evaluate models that are loaded via `AutoModelForSeq2SeqLM` in HuggingFace, you instead use `hf-seq2seq`. *To evaluate (causal) models across multiple GPUs, use `--model hf-causal-experimental`.*
> **Warning**: Choosing the wrong model may produce erroneous outputs without raising an error.
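For example, a seq2seq evaluation might look like the following (a minimal sketch; `google/flan-t5-small` is an illustrative model choice, not one prescribed by the harness):

```bash
python main.py \
    --model hf-seq2seq \
    --model_args pretrained=google/flan-t5-small \
    --tasks hellaswag \
    --device cuda:0
```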
### Commercial APIs
Our library also supports language models served via the OpenAI API:
```bash
export OPENAI_API_SECRET_KEY=YOUR_KEY_HERE
@@ -61,7 +72,9 @@ python main.py \
    --tasks lambada_openai,hellaswag
```
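Spelled out in full, the call presumably looks something like this (a sketch; the `gpt3` model name and `engine=davinci` argument are assumptions about the lines elided above):

```bash
export OPENAI_API_SECRET_KEY=YOUR_KEY_HERE
python main.py \
    --model gpt3 \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag
```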
While this functionality is only officially maintained for the official OpenAI API, it tends to also work for other hosting services that use the same API, such as [goose.ai](https://goose.ai), with minor modifications. We also have an implementation for the [TextSynth](https://textsynth.com/index.html) API, using `--model textsynth`.
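A TextSynth run might then look like the following (a sketch; the `TEXTSYNTH_API_SECRET_KEY` variable and `engine=gptj_6B` argument are assumptions, so consult the TextSynth documentation for valid values):

```bash
export TEXTSYNTH_API_SECRET_KEY=YOUR_KEY_HERE
python main.py \
    --model textsynth \
    --model_args engine=gptj_6B \
    --tasks lambada_openai
```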
To verify the data integrity of the tasks you're performing in addition to running the tasks themselves, you can use the `--check_integrity` flag:
```bash
python main.py \
@@ -71,7 +84,9 @@
    --check_integrity
```
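Filled in, such a command might read as follows (the model and task are illustrative choices, mirroring the decontamination example below):

```bash
python main.py \
    --model gpt2 \
    --tasks sciq \
    --check_integrity
```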
### Other Frameworks
Several other libraries include scripts for calling the eval harness from within them. These include [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py), [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/blob/main/examples/MoE/readme_evalharness.md), and [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/eval_harness.py).
💡 **Tip**: You can inspect what the LM inputs look like by running the following command:
@@ -85,6 +100,22 @@ python write_out.py \
This will write out one text file for each task.
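A plausible invocation, to sketch the idea (the flag names below are assumptions modeled on `main.py`'s conventions, since the excerpt above elides the full command):

```bash
python write_out.py \
    --tasks lambada_openai,hellaswag \
    --num_fewshot 5 \
    --output_base_path /tmp/prompts
```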
## Advanced Usage
For models loaded with the HuggingFace `transformers` library, any arguments provided via `--model_args` get passed to the relevant constructor directly. This means that anything you can do with `AutoModel` can be done with our library. For example, you can pass a local path via `pretrained=`, or use models finetuned with [PEFT](https://github.com/huggingface/peft) by taking the call you would run to evaluate the base model and adding `,peft=PATH` to the `model_args` argument:
```bash
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=EleutherAI/gpt-j-6b,peft=nomic-ai/gpt4all-j-lora \
    --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
    --device cuda:0
```
We support wildcards in task names; for example, you can run all of the machine-translated lambada tasks via `--tasks lambada_openai_mt_*`.
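For instance (quote the pattern so your shell does not expand it; the model choice here is illustrative):

```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=gpt2 \
    --tasks "lambada_openai_mt_*" \
    --device cuda:0
```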
We currently only support one prompt per task, which we strive to make the "standard" as defined by the benchmark's authors. If you would like to study how varying prompts causes changes in the evaluation score, check out the [BigScience fork](https://github.com/bigscience-workshop/lm-evaluation-harness) of this repo. We are currently working on upstreaming this capability to `main`.
## Implementing new tasks
To implement a new task in the eval harness, see [this guide](./docs/task_guide.md).
@@ -97,6 +128,8 @@ When reporting eval harness results, please also report the version of each task
## Test Set Decontamination
To address concerns about train/test contamination, we provide utilities for comparing results on a benchmark using only the data points not found in the model's training set. Unfortunately, outside of models trained on the Pile and C4, it is very rare that the people who train models disclose the contents of their training data. However, this utility can be useful for evaluating models you have trained on private data, provided you are willing to pre-compute the necessary indices. We provide computed indices for 13-gram exact-match deduplication against the Pile, and plan to add additional precomputed dataset indices in the future (including C4 and min-hash LSH deduplication).
For details on text decontamination, see the [decontamination guide](./docs/decontamination.md).
Note that the directory provided to the `--decontamination_ngrams_path` argument should contain the ngram files and `info.json`. The guide above covers ngram generation for the Pile; the process can be adapted for other training sets.
@@ -106,7 +139,7 @@ python main.py \
    --model gpt2 \
    --tasks sciq \
    --decontamination_ngrams_path path/containing/training/set/ngrams \
    --device cuda:0
```
## Cite as
@@ -271,6 +271,19 @@ python main.py \
    --num_fewshot K
```
### Checking the Model Outputs
The `write_out.py` script mentioned previously can be used to verify that the prompts look as intended. If you also want to save model outputs, you can use the `--write_out` parameter in `main.py` to dump JSON with prompts and completions. The output path can be chosen with `--output_base_path`. This is helpful for debugging and for exploring model outputs.
```sh
python main.py \
    --model gpt2 \
    --model_args device=<device-name> \
    --tasks <task-name> \
    --num_fewshot K \
    --write_out \
    --output_base_path <path>
```
### Running Unit Tests
To run the entire test suite, use:
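The CI workflow above runs the suite with `pytest`; assuming the `[dev]` extra is installed, the same command should work locally:

```bash
pytest -vv --cov=lm_eval/ tests/
```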
ROUGE
rouge
nin
maka
mor
te
@@ -11,6 +11,7 @@ from sqlitedict import SqliteDict
from tqdm import tqdm
import torch
import torch.nn.functional as F
from accelerate import find_executable_batch_size
from lm_eval.metrics import mean, weighted_perplexity, weighted_mean, bits_per_byte
from lm_eval import utils
@@ -186,7 +187,27 @@ class BaseLM(LM):
    def loglikelihood_rolling(self, requests):
        # TODO: Implement caching once we've confirmed the perplexity implementation

        # automatic batch size detection for vectorization
        adaptive_batch_size = None
        if self.batch_size == "auto":
            # using rolling window with maximum context
            print("Passed argument batch_size = auto. Detecting largest batch size")

            @find_executable_batch_size(
                starting_batch_size=512
            )  # if OOM, then halves batch_size and tries again
            def forward_batch(batch_size):
                test_batch = torch.ones(
                    (batch_size, self.max_length), device=self.device
                ).long()
                for _ in range(5):
                    _ = F.log_softmax(self._model_call(test_batch), dim=-1).cpu()
                return batch_size

            # the decorator injects the batch size, so call with no arguments
            batch_size = forward_batch()
            print(f"Determined largest batch size: {batch_size}")
            adaptive_batch_size = batch_size

        loglikelihoods = []
        for (string,) in tqdm(requests):
@@ -207,7 +228,9 @@ class BaseLM(LM):
            # TODO: extract out this call so it only gets called once and also somehow figure out partial caching for
            # that
            string_nll = self._loglikelihood_tokens(
                rolling_token_windows,
                disable_tqdm=True,
                override_bs=adaptive_batch_size,
            )

            # discard is_greedy
@@ -218,7 +241,7 @@
        return loglikelihoods

    def _loglikelihood_tokens(self, requests, disable_tqdm=False, override_bs=None):
        # TODO: implement some kind of efficient-request-middleware that lumps together requests with the same context
        res = []
@@ -233,10 +256,36 @@ class BaseLM(LM):
            toks = x[1] + x[2]
            return -len(toks), tuple(toks)

        re_ord = utils.Reorderer(requests, _collate)

        # automatic (variable) batch size detection for vectorization
        # pull longest context sample from request
        if len(re_ord.get_reordered()) > 0:
            _, context_enc, continuation_enc = re_ord.get_reordered()[0]
            max_context = len((context_enc + continuation_enc)[-(self.max_length + 1) :][:-1])
            if self.batch_size == "auto":
                if override_bs is None:
                    print("Passed argument batch_size = auto. Detecting largest batch size")

                    @find_executable_batch_size(
                        starting_batch_size=512
                    )  # if OOM, then halves batch_size and tries again
                    def forward_batch(batch_size):
                        test_batch = torch.ones(
                            (batch_size, max_context), device=self.device
                        ).long()
                        for _ in range(5):
                            _ = F.log_softmax(self._model_call(test_batch), dim=-1).cpu()
                        return batch_size

                    batch_size = forward_batch()
                    print(f"Determined largest batch size: {batch_size}")
                    adaptive_batch_size = batch_size
                else:
                    adaptive_batch_size = override_bs
        else:
            adaptive_batch_size = 0 if override_bs is None else override_bs

        for chunk in utils.chunks(
            tqdm(re_ord.get_reordered(), disable=disable_tqdm),
            self.batch_size if self.batch_size != "auto" else adaptive_batch_size,
        ):
            inps = []
            cont_toks_list = []
@@ -342,18 +391,25 @@ class BaseLM(LM):
        re_ord = utils.Reorderer(requests, _collate)

        for context, request_args in tqdm(re_ord.get_reordered()):
            until = request_args["until"]
            if isinstance(until, str):
                until = [until]

            if until:
                (primary_until,) = self.tok_encode(until[0])
            else:
                primary_until = None

            context_enc = torch.tensor(
                [self.tok_encode(context)[self.max_gen_toks - self.max_length :]]
            ).to(self.device)

            max_gen_tokens = min(
                self.max_gen_toks, request_args.get("max_length", self.max_gen_toks)
            )
            cont = self._model_generate(
                context_enc, context_enc.shape[1] + max_gen_tokens, primary_until
            )

            s = self.tok_decode(cont[0].tolist()[context_enc.shape[1] :])
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""GPT-3 Arithmetic Test Dataset."""
import json
import datasets
_CITATION = """\
@inproceedings{NEURIPS2020_1457c0d6,
author = {Brown, Tom and Mann, Benjamin and Ryder, Nick and Subbiah, Melanie and Kaplan, Jared D and Dhariwal, Prafulla and Neelakantan, Arvind and Shyam, Pranav and Sastry, Girish and Askell, Amanda and Agarwal, Sandhini and Herbert-Voss, Ariel and Krueger, Gretchen and Henighan, Tom and Child, Rewon and Ramesh, Aditya and Ziegler, Daniel and Wu, Jeffrey and Winter, Clemens and Hesse, Chris and Chen, Mark and Sigler, Eric and Litwin, Mateusz and Gray, Scott and Chess, Benjamin and Clark, Jack and Berner, Christopher and McCandlish, Sam and Radford, Alec and Sutskever, Ilya and Amodei, Dario},
booktitle = {Advances in Neural Information Processing Systems},
editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
pages = {1877--1901},
publisher = {Curran Associates, Inc.},
title = {Language Models are Few-Shot Learners},
url = {https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf},
volume = {33},
year = {2020}
}
"""
_DESCRIPTION = """\
A small battery of 10 tests that involve asking language models a simple arithmetic
problem in natural language.
"""
_HOMEPAGE = "https://github.com/openai/gpt-3/tree/master/data"
# TODO: Add the license for the dataset here if you can find it
_LICENSE = ""
class ArithmeticConfig(datasets.BuilderConfig):
    """BuilderConfig for GPT3 Arithmetic Test Dataset."""

    def __init__(self, url, features, **kwargs):
        """BuilderConfig for GPT3 Arithmetic dataset.

        Args:
            url: *string*, the url to the specific subset of the GPT3 Arithmetic dataset.
            features: *list[string]*, list of the features that will appear in the
                feature dict.
        """
        # Version history:
        super().__init__(version=datasets.Version("0.0.1"), **kwargs)
        self.url = url
        self.features = features
class Arithmetic(datasets.GeneratorBasedBuilder):
    """A small battery of 10 tests involving simple arithmetic problems."""

    BUILDER_CONFIGS = [
        ArithmeticConfig(
            name="arithmetic_2da",
            url="https://raw.githubusercontent.com/openai/gpt-3/master/data/two_digit_addition.jsonl",
            features=datasets.Features(
                {
                    "context": datasets.Value("string"),
                    "completion": datasets.Value("string"),
                }
            ),
            description="2-digit addition",
        ),
        ArithmeticConfig(
            name="arithmetic_2ds",
            url="https://raw.githubusercontent.com/openai/gpt-3/master/data/two_digit_subtraction.jsonl",
            features=datasets.Features(
                {
                    "context": datasets.Value("string"),
                    "completion": datasets.Value("string"),
                }
            ),
            description="2-digit subtraction",
        ),
        ArithmeticConfig(
            name="arithmetic_3da",
            url="https://raw.githubusercontent.com/openai/gpt-3/master/data/three_digit_addition.jsonl",
            features=datasets.Features(
                {
                    "context": datasets.Value("string"),
                    "completion": datasets.Value("string"),
                }
            ),
            description="3-digit addition",
        ),
        ArithmeticConfig(
            name="arithmetic_3ds",
            url="https://raw.githubusercontent.com/openai/gpt-3/master/data/three_digit_subtraction.jsonl",
            features=datasets.Features(
                {
                    "context": datasets.Value("string"),
                    "completion": datasets.Value("string"),
                }
            ),
            description="3-digit subtraction",
        ),
        ArithmeticConfig(
            name="arithmetic_4da",
            url="https://raw.githubusercontent.com/openai/gpt-3/master/data/four_digit_addition.jsonl",
            features=datasets.Features(
                {
                    "context": datasets.Value("string"),
                    "completion": datasets.Value("string"),
                }
            ),
            description="4-digit addition",
        ),
        ArithmeticConfig(
            name="arithmetic_4ds",
            url="https://raw.githubusercontent.com/openai/gpt-3/master/data/four_digit_subtraction.jsonl",
            features=datasets.Features(
                {
                    "context": datasets.Value("string"),
                    "completion": datasets.Value("string"),
                }
            ),
            description="4-digit subtraction",
        ),
        ArithmeticConfig(
            name="arithmetic_5da",
            url="https://raw.githubusercontent.com/openai/gpt-3/master/data/five_digit_addition.jsonl",
            features=datasets.Features(
                {
                    "context": datasets.Value("string"),
                    "completion": datasets.Value("string"),
                }
            ),
            description="5-digit addition",
        ),
        ArithmeticConfig(
            name="arithmetic_5ds",
            url="https://raw.githubusercontent.com/openai/gpt-3/master/data/five_digit_subtraction.jsonl",
            features=datasets.Features(
                {
                    "context": datasets.Value("string"),
                    "completion": datasets.Value("string"),
                }
            ),
            description="5-digit subtraction",
        ),
        ArithmeticConfig(
            name="arithmetic_2dm",
            url="https://raw.githubusercontent.com/openai/gpt-3/master/data/two_digit_multiplication.jsonl",
            features=datasets.Features(
                {
                    "context": datasets.Value("string"),
                    "completion": datasets.Value("string"),
                }
            ),
            description="2-digit multiplication",
        ),
        ArithmeticConfig(
            name="arithmetic_1dc",
            url="https://raw.githubusercontent.com/openai/gpt-3/master/data/single_digit_three_ops.jsonl",
            features=datasets.Features(
                {
                    "context": datasets.Value("string"),
                    "completion": datasets.Value("string"),
                }
            ),
            description="Single digit 3 operations",
        ),
    ]
    def _info(self):
        return datasets.DatasetInfo(
            description=f"{_DESCRIPTION}\n{self.config.description}",
            features=self.config.features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        urls = self.config.url
        data_dir = dl_manager.download_and_extract(urls)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir,
                    "split": datasets.Split.VALIDATION,
                },
            ),
        ]
    # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
    def _generate_examples(self, filepath, split):
        with open(filepath, encoding="utf-8") as f:
            for key, row in enumerate(f):
                data = json.loads(row)
                context = (
                    data["context"]
                    .strip()
                    .replace("\n\n", "\n")
                    .replace("Q:", "Question:")
                    .replace("A:", "Answer:")
                )
                completion = data["completion"]
                yield key, {"context": context, "completion": completion}