Unverified Commit a4752ccd authored by its-alpesh, committed by GitHub

Add humaneval_infilling task (#3299)



* Add humaneval_infilling task

* pacify pre-commit

---------
Co-authored-by: Baber Abbasi <92168766+baberabb@users.noreply.github.com>
parent 6b8ec144
@@ -79,6 +79,7 @@ provided to the individual README.md files for each subfolder.
| [histoires_morales](histoires_morales/README.md) | A dataset of structured narratives that describe normative and norm-divergent actions taken by individuals to accomplish certain intentions in concrete situations. | French (Some MT) |
| [hrm8k](hrm8k/README.md) | A challenging bilingual math reasoning benchmark for Korean and English. | Korean (Some MT), English (Some MT) |
| [humaneval](humaneval/README.md) | Code generation task that measures functional correctness for synthesizing programs from docstrings. | Python |
| [humaneval_infilling](humaneval_infilling/README.md) | Code generation task that measures fill-in-the-middle (infilling) capability for synthesizing programs from docstrings. | Python |
| [icelandic_winogrande](icelandic_winogrande/README.md) | Manually translated and localized version of the [WinoGrande](winogrande/README.md) commonsense reasoning benchmark for Icelandic. | Icelandic |
| [ifeval](ifeval/README.md) | Instruction-following evaluation using verifiable instructions. | English |
| [inverse_scaling](inverse_scaling/README.md) | Multiple-choice tasks from the Inverse Scaling Prize, designed to find settings where larger language models perform worse. | English |
@@ -86,7 +87,7 @@ provided to the individual README.md files for each subfolder.
| [jsonschema_bench](jsonschema_bench/README.md) | Evaluate the ability of LLMs to generate JSON objects that conform to a given JSON schema, including API, configuration files, and other structured data formats. | JSON |
| [kbl](kbl/README.md) | Korean Benchmark for Legal Language Understanding. | Korean |
| [kmmlu](kmmlu/README.md) | Knowledge-based multi-subject multiple choice questions for academic evaluation. | Korean |
| [kobest](kobest/README.md) | A collection of tasks designed to evaluate understanding in the Korean language. | Korean |
| [kormedmcqa](kormedmcqa/README.md) | Medical question answering tasks in Korean to test specialized domain knowledge. | Korean |
| [lambada](lambada/README.md) | Tasks designed to predict the endings of text passages, testing language prediction skills. | English |
| [lambada_cloze](lambada_cloze/README.md) | Cloze-style LAMBADA dataset. | English |
# Humaneval-Infilling
### Paper
Title: Efficient Training of Language Models to Fill in the Middle
Abstract: https://arxiv.org/pdf/2207.14255
We show that autoregressive language models can learn to infill text after we apply a straightforward transformation to the dataset, which simply moves a span of text from the middle of a document to its end. While this data augmentation has garnered much interest in recent years, we provide extensive evidence that training models with a large fraction of data transformed in this way does not harm the original left-to-right generative capability, as measured by perplexity and sampling evaluations across a wide range of scales. Given the usefulness, simplicity, and efficiency of training models to fill-in-the-middle (FIM), we suggest that future autoregressive language models be trained with FIM by default. To this end, we run a series of ablations on key hyperparameters, such as the data transformation frequency, the structure of the transformation, and the method of selecting the infill span. We use these ablations to prescribe strong default settings and best practices to train FIM models. We have released our best infilling model trained with best practices in our API, and release our infilling benchmarks to aid future research.
Homepage: https://github.com/openai/human-eval-infilling
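To make the setup concrete: each HumanEval problem is split into a prefix, a hidden middle span, and a suffix, and the model must regenerate the middle so that the reassembled program still passes the original unit tests. A minimal illustrative sketch in Python (toy strings, not a real dataset row; the suffix-then-prefix ordering mirrors the `doc_to_text` template in the configs below):

```python
# Toy fill-in-the-middle (FIM) example; all strings are made up for illustration.
prefix = "def add(a, b):\n"
middle = "    result = a + b\n"   # span hidden from the model
suffix = "    return result\n"

# The task shows the model the surrounding context (suffix first, then prefix,
# matching doc_to_text: "{{suffix}}\n\n{{prompt}}") and asks it to generate the
# missing middle.
fim_context = f"{suffix}\n\n{prefix}"

# A candidate is correct if prefix + generated_middle + suffix passes the tests.
candidate_program = prefix + middle + suffix
```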
### Citation
```
@article{bavarian2022efficient,
title={Efficient Training of Language Models to Fill in the Middle},
author={Bavarian, Mohammad and Jun, Heewoo and Tezak, Nikolas and Schulman, John and McLeavey, Christine and Tworek, Jerry and Chen, Mark},
journal={arXiv preprint arXiv:2207.14255},
year={2022}
}
```
### Groups and Tasks
#### Groups
- `humaneval_infilling`
This dataset has 4 subsets: HumanEval-MultiLineInfilling, HumanEval-SingleLineInfilling, HumanEval-RandomSpanInfilling, and HumanEval-RandomSpanInfillingLight. The single-line, multi-line, random-span, and random-span-light subsets contain 1033, 5815, 1640, and 164 examples, respectively.
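The subsets can be inspected directly from the Hugging Face Hub. A small sketch, assuming the dataset path and subset names used by the task configs below:

```python
from datasets import load_dataset

# Subset names and expected sizes, as used by the task configs (dataset_name fields).
subsets = {
    "HumanEval-SingleLineInfilling": 1033,
    "HumanEval-MultiLineInfilling": 5815,
    "HumanEval-RandomSpanInfilling": 1640,
    "HumanEval-RandomSpanInfillingLight": 164,
}

for name, expected_size in subsets.items():
    ds = load_dataset("loubnabnl/humaneval_infilling", name, split="test")
    assert len(ds) == expected_size
    # Fields referenced by the configs: "prompt" (prefix), "suffix", "test", "entry_point".
```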
#### Tasks
- `humaneval_single_line_infilling`
- `humaneval_multi_line_infilling`
- `humaneval_random_span_infilling`
- `humaneval_random_span_infilling_light`
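These tasks run like any other harness task. A rough sketch via the Python API (the model checkpoint is a placeholder; because generated code is executed, the `code_eval` metric must be explicitly allowed, and tasks marked `unsafe_code: true` are gated behind an explicit confirmation in recent harness versions):

```python
import os

import lm_eval

# The code_eval metric refuses to execute model-generated code unless this is set.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=bigcode/santacoder",  # placeholder FIM-capable checkpoint
    tasks=["humaneval_single_line_infilling"],
    confirm_run_unsafe_code=True,  # required because the configs set unsafe_code: true
)
print(results["results"]["humaneval_single_line_infilling"])
```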
### Checklist
For adding novel benchmarks/datasets to the library:
- [ ] Is the task an existing benchmark in the literature?
- [ ] Have you referenced the original paper that introduced the task?
- [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
- [ ] Is the "Main" variant of this task clearly denoted?
- [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
- [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
group: humaneval_infilling
task:
  - humaneval_multi_line_infilling
  - humaneval_single_line_infilling
  - humaneval_random_span_infilling
  - humaneval_random_span_infilling_light
aggregate_metric_list:
  - metric: pass@1
    aggregation: mean
    weight_by_size: false
metadata:
  version: 1.0
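Note that `weight_by_size: false` makes the group score an unweighted mean of the four subtask pass@1 values, even though the subsets differ greatly in size. Roughly, with hypothetical per-subtask scores:

```python
# Hypothetical per-subtask pass@1 scores; the group metric is their plain mean.
subtask_pass_at_1 = {
    "humaneval_single_line_infilling": 0.70,
    "humaneval_multi_line_infilling": 0.40,
    "humaneval_random_span_infilling": 0.55,
    "humaneval_random_span_infilling_light": 0.60,
}
group_score = sum(subtask_pass_at_1.values()) / len(subtask_pass_at_1)  # 0.5625
```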
task: humaneval_multi_line_infilling
dataset_path: loubnabnl/humaneval_infilling
dataset_name: HumanEval-MultiLineInfilling
unsafe_code: true
output_type: generate_until
test_split: test
doc_to_text: "{{suffix}}\n\n{{prompt}}"
doc_to_target: "{{test}}\ncheck({{entry_point}})"
metric_list:
  - metric: !function utils.pass_at_k
    aggregation: mean
    higher_is_better: true
    k: [1]
generation_kwargs:
  max_gen_toks: 1024
  do_sample: false
repeats: 1
num_fewshot: 0
filter_list:
  - name: "create_test"
    filter:
      - function: "custom"
        filter_fn: !function utils.build_predictions
metadata:
  version: 1.0
include: multi_line_infilling.yaml
task: humaneval_random_span_infilling
dataset_name: HumanEval-RandomSpanInfilling
include: multi_line_infilling.yaml
task: humaneval_random_span_infilling_light
dataset_name: HumanEval-RandomSpanInfillingLight
include: multi_line_infilling.yaml
task: humaneval_single_line_infilling
dataset_name: HumanEval-SingleLineInfilling
generation_kwargs:
  until:
    - "\n"
  max_gen_toks: 1024
  do_sample: false
import evaluate as hf_evaluate


# Load the code_eval metric once at import time and run a tiny self-test so that
# misconfiguration (e.g. HF_ALLOW_CODE_EVAL not being set) fails early.
try:
    compute_ = hf_evaluate.load("code_eval")
    test_cases = ["assert add(2, 3)==5"]
    candidates = [["def add(a,b): return a*b"]]
    results = compute_.compute(references=test_cases, predictions=candidates, k=[1])
except Exception as e:
    raise e


def pass_at_k(references: list[str], predictions: list[list[str]], k: list[int] = None):
    """Compute pass@k over the stitched candidate programs using the code_eval metric."""
    global compute_
    assert k is not None
    if isinstance(k, int):
        k = [k]
    res = compute_.compute(
        references=references,
        predictions=predictions,
        k=k,
    )
    return res[0]


def build_predictions(resps: list[list[str]], docs: list[dict]) -> list[list[str]]:
    """Stitch each generated middle back between the document's prefix and suffix."""
    return [
        [doc["prompt"] + r + doc["suffix"] for r in resp]
        for resp, doc in zip(resps, docs)
    ]
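As a sanity check of the pipeline above, here is a toy end-to-end sketch (hypothetical document and response, not a real dataset row; running it requires `HF_ALLOW_CODE_EVAL=1` since `code_eval` executes the stitched program):

```python
# Toy walk-through of the filter + metric defined in utils.py above.
doc = {
    "prompt": "def add(a, b):\n",             # prefix shown to the model
    "suffix": "    return result\n",           # suffix shown to the model
    "test": "def check(candidate):\n    assert candidate(2, 3) == 5\n",
    "entry_point": "add",
}
model_response = "    result = a + b\n"        # hypothetical generated middle

# build_predictions stitches prefix + completion + suffix into full programs.
candidates = build_predictions([[model_response]], [doc])

# doc_to_target ("{{test}}\ncheck({{entry_point}})") supplies the test harness.
reference = doc["test"] + "\ncheck(" + doc["entry_point"] + ")"

# code_eval runs each candidate against the tests and reports pass@1.
print(pass_at_k(references=[reference], predictions=candidates, k=[1]))  # {'pass@1': 1.0}
```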