{"gsm8k":{"description":"State-of-the-art language models can match human performance on many tasks, but \nthey still struggle to robustly perform multi-step mathematical reasoning. To \ndiagnose the failures of current models and support research, we introduce GSM8K,\na dataset of 8.5K high quality linguistically diverse grade school math word problems.\nWe find that even the largest transformer models fail to achieve high test performance, \ndespite the conceptual simplicity of this problem distribution.\n","citation":"@misc{cobbe2021training,\n title={Training Verifiers to Solve Math Word Problems},\n author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},\n year={2021},\n eprint={2110.14168},\n archivePrefix={arXiv},\n primaryClass={cs.LG}\n}\n","homepage":"https://github.com/openai/grade-school-math","license":"","features":{"question":{"dtype":"string","id":null,"_type":"Value"},"answer":{"dtype":"string","id":null,"_type":"Value"}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"gsm8_k","config_name":"gsm8k","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"train":{"name":"train","num_bytes":3963202,"num_examples":7473,"dataset_name":"gsm8_k"},"test":{"name":"test","num_bytes":713732,"num_examples":1319,"dataset_name":"gsm8_k"}},"download_checksums":{"https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/train.jsonl":{"num_bytes":4166206,"checksum":"17f347dc51477c50d4efb83959dbb7c56297aba886e5544ee2aaed3024813465"},"https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl":{"num_bytes":749738,"checksum":"3730d312f6e3440559ace48831e51066acaca737f6eabec99bccb9e4b3c39d14"}},"download_size":4915944,"post_processing_size":null,"dataset_size":4676934,"size_in_bytes":9592878}}
@@ -71,54 +71,64 @@ class HendrycksEthics(datasets.GeneratorBasedBuilder):
         EthicsConfig(
             name="commonsense",
             prefix="cm",
-            features=datasets.Features({
-                "label": datasets.Value("int32"),
-                "input": datasets.Value("string"),
-                "is_short": datasets.Value("bool"),
-                "edited": datasets.Value("bool"),
-            }),
-            description="The Commonsense subset contains examples focusing on moral standards and principles that most people intuitively accept."
+            features=datasets.Features(
+                {
+                    "label": datasets.Value("int32"),
+                    "input": datasets.Value("string"),
+                    "is_short": datasets.Value("bool"),
+                    "edited": datasets.Value("bool"),
+                }
+            ),
+            description="The Commonsense subset contains examples focusing on moral standards and principles that most people intuitively accept.",
         ),
         EthicsConfig(
             name="deontology",
             prefix="deontology",
-            features=datasets.Features({
-                "group_id": datasets.Value("int32"),
-                "label": datasets.Value("int32"),
-                "scenario": datasets.Value("string"),
-                "excuse": datasets.Value("string"),
-            }),
+            features=datasets.Features(
+                {
+                    "group_id": datasets.Value("int32"),
+                    "label": datasets.Value("int32"),
+                    "scenario": datasets.Value("string"),
+                    "excuse": datasets.Value("string"),
+                }
+            ),
             description="The Deontology subset contains examples focusing on whether an act is required, permitted, or forbidden according to a set of rules or constraints",
         ),
         EthicsConfig(
             name="justice",
             prefix="justice",
-            features=datasets.Features({
-                "group_id": datasets.Value("int32"),
-                "label": datasets.Value("int32"),
-                "scenario": datasets.Value("string"),
-            }),
+            features=datasets.Features(
+                {
+                    "group_id": datasets.Value("int32"),
+                    "label": datasets.Value("int32"),
+                    "scenario": datasets.Value("string"),
+                }
+            ),
             description="The Justice subset contains examples focusing on how a character treats another person",
         ),
         EthicsConfig(
             name="utilitarianism",
             prefix="util",
-            features=datasets.Features({
-                "activity": datasets.Value("string"),
-                "baseline": datasets.Value("string"),
-                "rating": datasets.Value("string"),  # Empty rating.
-            }),
+            features=datasets.Features(
+                {
+                    "activity": datasets.Value("string"),
+                    "baseline": datasets.Value("string"),
+                    "rating": datasets.Value("string"),  # Empty rating.
+                }
+            ),
             description="The Utilitarianism subset contains scenarios that should be ranked from most pleasant to least pleasant for the person in the scenario",
         ),
         EthicsConfig(
             name="virtue",
             prefix="virtue",
-            features=datasets.Features({
-                "group_id": datasets.Value("int32"),
-                "label": datasets.Value("int32"),
-                "scenario": datasets.Value("string"),
-                "trait": datasets.Value("string"),
-            }),
+            features=datasets.Features(
+                {
+                    "group_id": datasets.Value("int32"),
+                    "label": datasets.Value("int32"),
+                    "scenario": datasets.Value("string"),
+                    "trait": datasets.Value("string"),
+                }
+            ),
             description="The Virtue subset contains scenarios focusing on whether virtues or vices are being exemplified",
         ),
     ]
...
@@ -140,7 +150,12 @@ class HendrycksEthics(datasets.GeneratorBasedBuilder):
                 name=datasets.Split.TRAIN,
                 # These kwargs will be passed to _generate_examples
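Once the script above is importable, the five configs load like any multi-config dataset. Below is a sketch that enumerates them and prints each subset's schema; "hendrycks_ethics.py" is a hypothetical local path to the file being diffed.

# Sketch: load each ETHICS subset from the builder above and print its schema.
# "hendrycks_ethics.py" is a hypothetical local path to the script.
import datasets

SUBSETS = ["commonsense", "deontology", "justice", "utilitarianism", "virtue"]

for name in SUBSETS:
    ds = datasets.load_dataset("hendrycks_ethics.py", name)
    # e.g. "commonsense" has label/input/is_short/edited, per its EthicsConfig.
    print(name, "->", dict(ds["train"].features))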
{"original":{"description":"LAMBADA is a dataset to evaluate the capabilities of computational models for text\nunderstanding by means of a word prediction task. LAMBADA is a collection of narrative\ntexts sharing the characteristic that human subjects are able to guess their last\nword if they are exposed to the whole text, but not if they only see the last\nsentence preceding the target word. To succeed on LAMBADA, computational models\ncannot simply rely on local context, but must be able to keep track of information\nin the broader discourse.\n\nThe LAMBADA dataset","citation":"@misc{\n author={Paperno, Denis and Kruszewski, Germ\u00e1n and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fern\u00e1ndez, Raquel}, \n title={The LAMBADA dataset},\n DOI={10.5281/zenodo.2630551},\n publisher={Zenodo},\n year={2016},\n month={Aug}\n}\n","homepage":"https://zenodo.org/record/2630551#.X4Xzn5NKjUI","license":"","features":{"text":{"dtype":"string","id":null,"_type":"Value"}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"lambada","config_name":"original","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"validation":{"name":"validation","num_bytes":1709449,"num_examples":5153,"dataset_name":"lambada"}},"download_checksums":{"http://eaidata.bmk.sh/data/lambada_test.jsonl":{"num_bytes":1819752,"checksum":"4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226"}},"download_size":1819752,"post_processing_size":null,"dataset_size":1709449,"size_in_bytes":3529201},"en":{"description":"LAMBADA is a dataset to evaluate the capabilities of computational models for text\nunderstanding by means of a word prediction task. LAMBADA is a collection of narrative\ntexts sharing the characteristic that human subjects are able to guess their last\nword if they are exposed to the whole text, but not if they only see the last\nsentence preceding the target word. To succeed on LAMBADA, computational models\ncannot simply rely on local context, but must be able to keep track of information\nin the broader discourse.\n\nThe English translated LAMBADA dataset","citation":"@misc{\n author={Paperno, Denis and Kruszewski, Germ\u00e1n and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fern\u00e1ndez, Raquel}, \n title={The LAMBADA dataset},\n DOI={10.5281/zenodo.2630551},\n publisher={Zenodo},\n year={2016},\n month={Aug}\n}\n","homepage":"https://zenodo.org/record/2630551#.X4Xzn5NKjUI","license":"","features":{"text":{"dtype":"string","id":null,"_type":"Value"}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"lambada","config_name":"en","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"validation":{"name":"validation","num_bytes":1709449,"num_examples":5153,"dataset_name":"lambada"}},"download_checksums":{"http://eaidata.bmk.sh/data/lambada_test_en.jsonl":{"num_bytes":1819752,"checksum":"4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226"}},"download_size":1819752,"post_processing_size":null,"dataset_size":1709449,"size_in_bytes":3529201},"fr":{"description":"LAMBADA is a dataset to evaluate the capabilities of computational models for text\nunderstanding by means of a word prediction task. 
LAMBADA is a collection of narrative\ntexts sharing the characteristic that human subjects are able to guess their last\nword if they are exposed to the whole text, but not if they only see the last\nsentence preceding the target word. To succeed on LAMBADA, computational models\ncannot simply rely on local context, but must be able to keep track of information\nin the broader discourse.\n\nThe French translated LAMBADA dataset","citation":"@misc{\n author={Paperno, Denis and Kruszewski, Germ\u00e1n and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fern\u00e1ndez, Raquel}, \n title={The LAMBADA dataset},\n DOI={10.5281/zenodo.2630551},\n publisher={Zenodo},\n year={2016},\n month={Aug}\n}\n","homepage":"https://zenodo.org/record/2630551#.X4Xzn5NKjUI","license":"","features":{"text":{"dtype":"string","id":null,"_type":"Value"}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"lambada","config_name":"fr","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"validation":{"name":"validation","num_bytes":1948795,"num_examples":5153,"dataset_name":"lambada"}},"download_checksums":{"http://eaidata.bmk.sh/data/lambada_test_fr.jsonl":{"num_bytes":2028703,"checksum":"941ec6a73dba7dc91c860bf493eb66a527cd430148827a4753a4535a046bf362"}},"download_size":2028703,"post_processing_size":null,"dataset_size":1948795,"size_in_bytes":3977498},"de":{"description":"LAMBADA is a dataset to evaluate the capabilities of computational models for text\nunderstanding by means of a word prediction task. LAMBADA is a collection of narrative\ntexts sharing the characteristic that human subjects are able to guess their last\nword if they are exposed to the whole text, but not if they only see the last\nsentence preceding the target word. To succeed on LAMBADA, computational models\ncannot simply rely on local context, but must be able to keep track of information\nin the broader discourse.\n\nThe German translated LAMBADA dataset","citation":"@misc{\n author={Paperno, Denis and Kruszewski, Germ\u00e1n and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fern\u00e1ndez, Raquel}, \n title={The LAMBADA dataset},\n DOI={10.5281/zenodo.2630551},\n publisher={Zenodo},\n year={2016},\n month={Aug}\n}\n","homepage":"https://zenodo.org/record/2630551#.X4Xzn5NKjUI","license":"","features":{"text":{"dtype":"string","id":null,"_type":"Value"}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"lambada","config_name":"de","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"validation":{"name":"validation","num_bytes":1904576,"num_examples":5153,"dataset_name":"lambada"}},"download_checksums":{"http://eaidata.bmk.sh/data/lambada_test_de.jsonl":{"num_bytes":1985231,"checksum":"51c6c1795894c46e88e4c104b5667f488efe79081fb34d746b82b8caa663865e"}},"download_size":1985231,"post_processing_size":null,"dataset_size":1904576,"size_in_bytes":3889807},"it":{"description":"LAMBADA is a dataset to evaluate the capabilities of computational models for text\nunderstanding by means of a word prediction task. LAMBADA is a collection of narrative\ntexts sharing the characteristic that human subjects are able to guess their last\nword if they are exposed to the whole text, but not if they only see the last\nsentence preceding the target word. 
To succeed on LAMBADA, computational models\ncannot simply rely on local context, but must be able to keep track of information\nin the broader discourse.\n\nThe Italian translated LAMBADA dataset","citation":"@misc{\n author={Paperno, Denis and Kruszewski, Germ\u00e1n and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fern\u00e1ndez, Raquel}, \n title={The LAMBADA dataset},\n DOI={10.5281/zenodo.2630551},\n publisher={Zenodo},\n year={2016},\n month={Aug}\n}\n","homepage":"https://zenodo.org/record/2630551#.X4Xzn5NKjUI","license":"","features":{"text":{"dtype":"string","id":null,"_type":"Value"}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"lambada","config_name":"it","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"validation":{"name":"validation","num_bytes":1813420,"num_examples":5153,"dataset_name":"lambada"}},"download_checksums":{"http://eaidata.bmk.sh/data/lambada_test_it.jsonl":{"num_bytes":1894613,"checksum":"86654237716702ab74f42855ae5a78455c1b0e50054a4593fb9c6fcf7fad0850"}},"download_size":1894613,"post_processing_size":null,"dataset_size":1813420,"size_in_bytes":3708033},"es":{"description":"LAMBADA is a dataset to evaluate the capabilities of computational models for text\nunderstanding by means of a word prediction task. LAMBADA is a collection of narrative\ntexts sharing the characteristic that human subjects are able to guess their last\nword if they are exposed to the whole text, but not if they only see the last\nsentence preceding the target word. To succeed on LAMBADA, computational models\ncannot simply rely on local context, but must be able to keep track of information\nin the broader discourse.\n\nThe Spanish translated LAMBADA dataset","citation":"@misc{\n author={Paperno, Denis and Kruszewski, Germ\u00e1n and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fern\u00e1ndez, Raquel}, \n title={The LAMBADA dataset},\n DOI={10.5281/zenodo.2630551},\n publisher={Zenodo},\n year={2016},\n month={Aug}\n}\n","homepage":"https://zenodo.org/record/2630551#.X4Xzn5NKjUI","license":"","features":{"text":{"dtype":"string","id":null,"_type":"Value"}},"post_processed":null,"supervised_keys":null,"task_templates":null,"builder_name":"lambada","config_name":"es","version":{"version_str":"0.0.1","description":null,"major":0,"minor":0,"patch":1},"splits":{"validation":{"name":"validation","num_bytes":1821735,"num_examples":5153,"dataset_name":"lambada"}},"download_checksums":{"http://eaidata.bmk.sh/data/lambada_test_es.jsonl":{"num_bytes":1902349,"checksum":"ffd760026c647fb43c67ce1bc56fd527937304b348712dce33190ea6caba6f9c"}},"download_size":1902349,"post_processing_size":null,"dataset_size":1821735,"size_in_bytes":3724084}}
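Each LAMBADA config above records one download URL together with its byte count and checksum; for Hugging Face datasets, the checksum field is a SHA-256 hex digest of the downloaded file. Below is a sketch that re-verifies one of the recorded files, using only the URL, size, and hash from the "en" entry above.

# Sketch: re-verify the English LAMBADA test file against the checksum and
# byte count recorded in dataset_infos.json above.
import hashlib
import urllib.request

URL = "http://eaidata.bmk.sh/data/lambada_test_en.jsonl"
EXPECTED_SHA256 = "4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226"
EXPECTED_BYTES = 1819752

data = urllib.request.urlopen(URL).read()
assert len(data) == EXPECTED_BYTES, f"unexpected size: {len(data)}"
assert hashlib.sha256(data).hexdigest() == EXPECTED_SHA256, "checksum mismatch"
print("ok:", len(data), "bytes")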