This project provides a unified framework to test autoregressive language models (GPT-2, GPT-3, GPTNeo, etc.) on a large number of different evaluation tasks.
...
...
The [GPT-3 Evaluations Project](https://github.com/EleutherAI/lm_evaluation_harness/projects/1) tracks our progress implementing new tasks. Right now, we are focused on getting all the datasets loaded so that we can dedupe against the training data. Implementing the actual evaluations is nice but not necessary at the current moment.
### Task Versioning
To help improve reproducibility, all tasks have a `VERSION` field. When run from the command line, this is reported in a column of the results table and in the "version" field of the evaluator return dict. The purpose of the version is so that if a task definition changes (e.g. to fix a bug), we know exactly which metrics were computed using the old, buggy implementation and can avoid unfair comparisons. To enforce this, there are unit tests that make sure the behavior of all tasks remains the same as when they were first implemented. Task versions start at 0, and each time a breaking change is made, the version is incremented by one.
When reporting eval harness results, please also report the version of each task. This can be done either with a separate column in the table, or by reporting the task name with the version appended, e.g. `taskname-v0`.
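In code, the version is just a class attribute on the task. A minimal sketch (the class name is assumed, and `Task` is assumed to live in `lm_eval.base`):

```python
from lm_eval.base import Task

class MyTask(Task):
    VERSION = 0  # increment whenever a breaking change alters the task's behavior
```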
The basis for our decontamination procedure can be found in Appendix C of "Language Models are Few-Shot Learners".
## Implementation
Contamination detection can be found in "lm_eval/decontaminate.py" with supporting code in "lm_eval/decontamination/".
decontaminate.py does the following (a simplified sketch follows the list):
1. Build dictionaries of all ngrams and their corresponding evaluation/document ids.
2. Scan through sorted files containing training set n-grams.
3. If a match is found, the corresponding evaluation/document combinations are marked as contaminated.
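To make the matching step concrete, here is a minimal sketch of the idea (not the actual `decontaminate.py` implementation; the helper names and the 13-gram size are illustrative assumptions):

```python
from collections import defaultdict

N = 13  # illustrative n-gram size; the real pipeline configures this elsewhere

def ngrams(tokens, n=N):
    """Yield successive n-grams from a token list."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

def build_lookup(eval_docs):
    """Map each evaluation n-gram to the (task, doc_id) pairs it appears in."""
    lookup = defaultdict(set)
    for task_name, doc_id, tokens in eval_docs:
        for gram in ngrams(tokens):
            lookup[gram].add((task_name, doc_id))
    return lookup

def find_contaminated(lookup, training_ngram_stream):
    """Scan training-set n-grams and mark any evaluation docs that share one."""
    contaminated = set()
    for gram in training_ngram_stream:
        if gram in lookup:
            contaminated.update(lookup[gram])
    return contaminated
```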
"lm_eval/evaluator.py" can then produce a clean version of the benchmark by excluding the results of contaminated documents. For each metric, a clean version will be shown in the results with a "decontaminate" suffix.
"lm_eval/evaluator.py" can then produce a clean version of the benchmark by excluding the results of contaminated documents. For each metric, a clean version will be shown in the results with a "decontaminate" suffix.
This is disabled by default for new tasks. To support decontamination on a task, override the "should_decontaminate" and "doc_to_decontamination_query" methods. For more details see the [task guide](task_guide.md).
For example, take the QuAC dataset. We have:
QuAC: Question Answering in Context
https://arxiv.org/abs/1808.07036
Question Answering in Context (QuAC) is a dataset for modeling, understanding, and
participating in information seeking dialog. Data instances consist of an interactive
dialog between two crowd workers: (1) a student who poses a sequence of freeform
questions to learn as much as possible about a hidden Wikipedia text, and (2)
...
...
Now let's walk through the actual implementation - from data handling to evaluation.
### Downloading your Data
All data downloading and management is handled through the HuggingFace (**HF**) [`datasets`](https://github.com/huggingface/datasets) API. So, the first thing you should do is check to see if your task's dataset is already provided in their catalog [here](https://huggingface.co/datasets). If it's not in there, please consider adding it to their Hub to make it accessible to a wider user base by following their [new dataset guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Now that you have your HF dataset, you need to assign its path and name to your `Task` in the following fields:
```python
...
...
These should return a Python iterable (`list` or `generator`) of `dict`s that can be queried for individual document examples.
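A rough sketch of what these methods often look like when backed by an HF dataset (the split names and the `self.dataset` attribute are assumptions here; `_process_doc` is the per-document hook described next):

```python
def has_training_docs(self):
    return True

def training_docs(self):
    # self.dataset is the HF dataset loaded from the path/name assigned above.
    return map(self._process_doc, self.dataset["train"])

def validation_docs(self):
    return map(self._process_doc, self.dataset["validation"])
```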
#### Processing Documents
At this point, you can also process each individual document to, for example, strip whitespace or "detokenize" its fields. Put the processing logic into `_process_doc` and map the functions across training/validation/test docs inside of the respective functions.
🔠 If your task is **multiple-choice**, we require you to format your documents such that they contain `gold` and `choices` fields. They can also have other fields, but those will be ignored by `MultipleChoiceTask`. `choices` should be a list of possible continuations, and `gold` should be an integer specifying the index of the correct completion.
See [this task](https://github.com/EleutherAI/lm-evaluation-harness/blob/6caa0afd96a7a7efb2ec4c1f24ad1756e48f3aa7/lm_eval/tasks/sat.py#L60) for an example. 🔠
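As an additional illustration (the raw field names below are assumptions, not taken from the linked task), a `_process_doc` for a multiple-choice dataset might look like:

```python
def _process_doc(self, doc):
    # Assumed raw fields: "question", "options" (list of strings), "label" (a letter).
    # MultipleChoiceTask only cares about "choices" and "gold"; "query" is kept
    # around for use in doc_to_text.
    return {
        "query": doc["question"].strip(),
        "choices": doc["options"],
        "gold": ["A", "B", "C", "D"].index(doc["label"]),
    }
```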
...
...
Finally, be aware that the strings from `doc_to_text` and `doc_to_target` will be concatenated together when the harness builds prompts and few-shot examples.
### Decontamination
For background on decontamination please see [this](./decontamination.md).
If you wish to support decontamination studies for your task, simply override the `should_decontaminate` method and return `True`.
You also need to override `doc_to_decontamination_query` and return the data you wish to compare against the training set. This doesn't necessarily need to be the full document or request; we leave this up to the implementor. For a multiple-choice evaluation, for example, you could just return the question.
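For instance, a minimal sketch of the two overrides for a multiple-choice task (assuming each `doc` has a `question` field):

```python
def should_decontaminate(self):
    return True

def doc_to_decontamination_query(self, doc):
    # Only the question text is compared against the training set.
    return doc["question"]
```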
To reiterate, a `doc` is just a `dict` that contains information about a document from your corpus. It can contain things like a prompt, question-type information, answers, and anything else you think will be needed to assess your model on a given task. Keep in mind that its fields can be basically whatever you want (you can sort this out in `training_docs`/`validation_docs`/`test_docs` if you need to customise things - see above); just remember to be consistent with them throughout the rest of the `Task` you write up.
A `Request` is an object that takes the text prompt you want to present to a model and computes one of a few different types of response. These are evaluated lazily (meaning, only when the result is actually needed). If your task requires generating text, you'll need to return an `rf.greedy_until` request; otherwise, an `rf.loglikelihood` across all labels will do for a classification task.
The function `construct_requests` can return a list of `Request`s or an iterable; it's perfectly fine to `yield` them from a generator. This is particularly handy if you are creating more than one request per `doc` (usually because you're up to something like multi-task learning). The objects this function returns then get consumed one by one and turned into result objects.
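As a sketch, `construct_requests` for a binary classification task might yield one `rf.loglikelihood` request per label (the label strings below are assumptions):

```python
def construct_requests(self, doc, ctx):
    # ctx is the fully formatted prompt for this doc (description, few-shot
    # examples, and the doc_to_text output). One request per candidate label;
    # process_results receives the results in the same order they are yielded.
    for label in (" yes", " no"):  # assumed label strings; note the leading space
        yield rf.loglikelihood(ctx, label)
```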
...
...
```python
def aggregation(self):
```
In `process_results`, model outputs are converted into metrics. These are per-document metrics, however; the `aggregation` function works out how to combine them into a corpus-level metric. Imagine you have a bunch of documents, for each of which you have calculated an F1 score. What should that mean overall? Should they be summed, averaged, or should the min/max be taken? This function handles that problem.
The contents of the function itself are pretty straightforward; it should simply return a dict that maps from each metric label that could be returned by `process_results` to a function that can be used to aggregate that metric. That is to say, if the metrics that `process_results` could return are given by `{'a', 'b', 'c'}`, then all of these keys should be present in the dict returned by `aggregation`.
__NOTE__: See `lm_eval/metrics.py` for a few "built-in" aggregate metrics you can easily import. The standard metrics available in this package are generally based on `sklearn` functions, so if you are in any doubt about how to set things up, the documentation over there can be of assistance. If you need to write a custom metric for some reason, start by looking at the existing ones in `lm_eval/metrics.py` to get an idea of what the function signature needs to be.
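Putting the pieces together, here is a sketch of matching `process_results` and `aggregation` methods for a multiple-choice task that reports accuracy, assuming `construct_requests` emitted one `rf.loglikelihood` request per choice, in order. `mean` is one of the built-in helpers in `lm_eval/metrics.py`; the rest is illustrative:

```python
from lm_eval.metrics import mean

def process_results(self, doc, results):
    # results holds one (loglikelihood, is_greedy) pair per request emitted
    # by construct_requests, in the same order.
    lls = [ll for ll, _ in results]
    pred = lls.index(max(lls))  # the highest-likelihood choice is the prediction
    return {"acc": float(pred == doc["gold"])}

def aggregation(self):
    # Every metric key returned by process_results must appear here.
    return {"acc": mean}
```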
{"asdiv":{"description":"ASDiv (Academia Sinica Diverse MWP Dataset) is a diverse (in terms of both language\npatterns and problem types) English math word problem (MWP) corpus for evaluating\nthe capability of various MWP solvers. Existing MWP corpora for studying AI progress\nremain limited either in language usage patterns or in problem types. We thus present\na new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem\ntypes taught in elementary school. Each MWP is annotated with its problem type and grade\nlevel (for indicating the level of difficulty).\n","citation":"@misc{miao2021diverse,\n title={A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers},\n author={Shen-Yun Miao and Chao-Chun Liang and Keh-Yih Su},\n year={2021},\n eprint={2106.15772},\n archivePrefix={arXiv},\n primaryClass={cs.AI}\n}\n","homepage":"https://github.com/chaochun/nlu-asdiv-dataset","license":"","features":{"body":{"dtype":"string","id":null,"_type":"Value"},"question":{"dtype":"string","id":null,"_type":"Value"},"solution_type":{"dtype":"string","id":null,"_type":"Value"},"answer":{"dtype":"string","id":null,"_type":"Value"},"formula":{"dtype":"string","id":null,"_type":"Value"}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"as_div","config_name":"asdiv","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"validation":{"name":"validation","num_bytes":501489,"num_examples":2305,"dataset_name":"as_div"}},"download_checksums":{"https://github.com/chaochun/nlu-asdiv-dataset/archive/55790e5270bb91ccfa5053194b25732534696b50.zip":{"num_bytes":440966,"checksum":"8f1fe4f6d5f170ec1e24ab78c244153c14c568b1bb2b1dad0324e71f37939a2d"}},"download_size":440966,"post_processing_size":null,"dataset_size":501489,"size_in_bytes":942455}}
{"coqa":{"description":"CoQA is a large-scale dataset for building Conversational Question Answering\nsystems. The goal of the CoQA challenge is to measure the ability of machines to\nunderstand a text passage and answer a series of interconnected questions that\nappear in a conversation.\n","citation":"@misc{reddy2018coqa,\n title={CoQA: A Conversational Question Answering Challenge},\n author={Siva Reddy and Danqi Chen and Christopher D. Manning},\n year={2018},\n eprint={1808.07042},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n","homepage":"https://stanfordnlp.github.io/coqa/","license":"","features":{"id":{"dtype":"string","id":null,"_type":"Value"},"source":{"dtype":"string","id":null,"_type":"Value"},"story":{"dtype":"string","id":null,"_type":"Value"},"questions":{"feature":{"input_text":{"dtype":"string","id":null,"_type":"Value"},"turn_id":{"dtype":"int32","id":null,"_type":"Value"}},"length":-1,"id":null,"_type":"Sequence"},"answers":{"feature":{"span_start":{"dtype":"int32","id":null,"_type":"Value"},"span_end":{"dtype":"int32","id":null,"_type":"Value"},"span_text":{"dtype":"string","id":null,"_type":"Value"},"input_text":{"dtype":"string","id":null,"_type":"Value"},"turn_id":{"dtype":"int32","id":null,"_type":"Value"}},"length":-1,"id":null,"_type":"Sequence"},"additional_answers":{"0":{"feature":{"span_start":{"dtype":"int32","id":null,"_type":"Value"},"span_end":{"dtype":"int32","id":null,"_type":"Value"},"span_text":{"dtype":"string","id":null,"_type":"Value"},"input_text":{"dtype":"string","id":null,"_type":"Value"},"turn_id":{"dtype":"int32","id":null,"_type":"Value"}},"length":-1,"id":null,"_type":"Sequence"},"1":{"feature":{"span_start":{"dtype":"int32","id":null,"_type":"Value"},"span_end":{"dtype":"int32","id":null,"_type":"Value"},"span_text":{"dtype":"string","id":null,"_type":"Value"},"input_text":{"dtype":"string","id":null,"_type":"Value"},"turn_id":{"dtype":"int32","id":null,"_type":"Value"}},"length":-1,"id":null,"_type":"Sequence"},"2":{"feature":{"span_start":{"dtype":"int32","id":null,"_type":"Value"},"span_end":{"dtype":"int32","id":null,"_type":"Value"},"span_text":{"dtype":"string","id":null,"_type":"Value"},"input_text":{"dtype":"string","id":null,"_type":"Value"},"turn_id":{"dtype":"int32","id":null,"_type":"Value"}},"length":-1,"id":null,"_type":"Sequence"}}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"coqa","config_name":"coqa","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"train":{"name":"train","num_bytes":26250528,"num_examples":7199,"dataset_name":"coqa"},"validation":{"name":"validation","num_bytes":3765933,"num_examples":500,"dataset_name":"coqa"}},"download_checksums":{"https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json":{"num_bytes":49001836,"checksum":"b0fdb2bc1bd38dd3ca2ce5fa2ac3e02c6288ac914f241ac409a655ffb6619fa6"},"https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json":{"num_bytes":9090845,"checksum":"dfa367a9733ce53222918d0231d9b3bedc2b8ee831a2845f62dfc70701f2540a"}},"download_size":58092681,"post_processing_size":null,"dataset_size":30016461,"size_in_bytes":88109142}}
{"drop":{"description":"DROP is a QA dataset which tests comprehensive understanding of paragraphs. In \nthis crowdsourced, adversarially-created, 96k question-answering benchmark, a \nsystem must resolve multiple references in a question, map them onto a paragraph,\nand perform discrete operations over them (such as addition, counting, or sorting).\n","citation":"@misc{dua2019drop,\n title={DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs}, \n author={Dheeru Dua and Yizhong Wang and Pradeep Dasigi and Gabriel Stanovsky and Sameer Singh and Matt Gardner},\n year={2019},\n eprint={1903.00161},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n","homepage":"https://allenai.org/data/drop","license":"","features":{"section_id":{"dtype":"string","id":null,"_type":"Value"},"passage":{"dtype":"string","id":null,"_type":"Value"},"question":{"dtype":"string","id":null,"_type":"Value"},"query_id":{"dtype":"string","id":null,"_type":"Value"},"answer":{"number":{"dtype":"string","id":null,"_type":"Value"},"date":{"day":{"dtype":"string","id":null,"_type":"Value"},"month":{"dtype":"string","id":null,"_type":"Value"},"year":{"dtype":"string","id":null,"_type":"Value"}},"spans":{"feature":{"dtype":"string","id":null,"_type":"Value"},"length":-1,"id":null,"_type":"Sequence"},"worker_id":{"dtype":"string","id":null,"_type":"Value"},"hit_id":{"dtype":"string","id":null,"_type":"Value"}},"validated_answers":{"feature":{"number":{"dtype":"string","id":null,"_type":"Value"},"date":{"day":{"dtype":"string","id":null,"_type":"Value"},"month":{"dtype":"string","id":null,"_type":"Value"},"year":{"dtype":"string","id":null,"_type":"Value"}},"spans":{"feature":{"dtype":"string","id":null,"_type":"Value"},"length":-1,"id":null,"_type":"Sequence"},"worker_id":{"dtype":"string","id":null,"_type":"Value"},"hit_id":{"dtype":"string","id":null,"_type":"Value"}},"length":-1,"id":null,"_type":"Sequence"}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"drop","config_name":"drop","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"train":{"name":"train","num_bytes":108858121,"num_examples":77409,"dataset_name":"drop"},"validation":{"name":"validation","num_bytes":12560739,"num_examples":9536,"dataset_name":"drop"}},"download_checksums":{"https://s3-us-west-2.amazonaws.com/allennlp/datasets/drop/drop_dataset.zip":{"num_bytes":8308692,"checksum":"39d2278a29fd729de301b111a45f434c24834f40df8f4ff116d864589e3249d6"}},"download_size":8308692,"post_processing_size":null,"dataset_size":121418860,"size_in_bytes":129727552}}
{"gsm8k":{"description":"State-of-the-art language models can match human performance on many tasks, but \nthey still struggle to robustly perform multi-step mathematical reasoning. To \ndiagnose the failures of current models and support research, we introduce GSM8K,\na dataset of 8.5K high quality linguistically diverse grade school math word problems.\nWe find that even the largest transformer models fail to achieve high test performance, \ndespite the conceptual simplicity of this problem distribution.\n","citation":"@misc{cobbe2021training,\n title={Training Verifiers to Solve Math Word Problems},\n author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},\n year={2021},\n eprint={2110.14168},\n archivePrefix={arXiv},\n primaryClass={cs.LG}\n}\n","homepage":"https://github.com/openai/grade-school-math","license":"","features":{"question":{"dtype":"string","id":null,"_type":"Value"},"answer":{"dtype":"string","id":null,"_type":"Value"}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"gsm8_k","config_name":"gsm8k","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"train":{"name":"train","num_bytes":3963202,"num_examples":7473,"dataset_name":"gsm8_k"},"test":{"name":"test","num_bytes":713732,"num_examples":1319,"dataset_name":"gsm8_k"}},"download_checksums":{"https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/train.jsonl":{"num_bytes":4166206,"checksum":"17f347dc51477c50d4efb83959dbb7c56297aba886e5544ee2aaed3024813465"},"https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl":{"num_bytes":749738,"checksum":"3730d312f6e3440559ace48831e51066acaca737f6eabec99bccb9e4b3c39d14"}},"download_size":4915944,"post_processing_size":null,"dataset_size":4676934,"size_in_bytes":9592878}}