This documentation page provides a walkthrough for getting started creating your own task on the `big-refactor` branch of the repository (which will become v0.5.0 in the future).
A more interactive tutorial is available as a Jupyter notebook [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/examples/lm-eval-overview.ipynb).
## Setup
If you haven't already, go ahead and fork the main repo, clone it, create a branch with the name of your task, and install the project requirements in your environment.
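A minimal sketch of those steps is shown below. The fork URL, branch name, and install command are placeholders/assumptions; follow the install instructions in the repository's README if they differ.

```bash
# Clone your fork and switch to the refactor branch
git clone https://github.com/<your-username>/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout big-refactor

# Create a working branch named after your task
git checkout -b <your-task-name>

# Install the package and its requirements in editable mode
pip install -e .
```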
"With the vast amount of work done in the field today, it helps to have a tool that people can use easily to share their results and use to check others to ensure reported numbers are valid. The LM Evaluation Harness is one such tool the community has used extensively. We want to continue to support the community and with that in mind, we’re excited to announce a major update on the LM Evaluation Harness to further our goal for open and accessible AI research."
]
},
"language_info": {
"name": "python"
{
"cell_type": "markdown",
"metadata": {
"id": "0gDoM0AJAvEc"
},
"source": [
"Our refactor stems from our beliefs of the following that we think are best practices\n",
"\n",
"1. Never Copy Results from Other Papers\n",
"2. Always share your exact prompts\n",
"3. Always provide model outputs\n",
"4. Qualitatively review a small batch of outputs before running evaluation jobs at scale\n"
"With the vast amount of work done in the field today, it helps to have a tool that people can use easily to share their results and use to check others to ensure reported numbers are valid. The LM Evaluation Harness is one such tool the community has used extensively. We want to continue to support the community and with that in mind, we’re excited to announce a major update on the LM Evaluation Harness to further our goal for open and accessible AI research."
],
"metadata": {
"id": "Z7k2vq1iAdqr"
}
},
{
"cell_type": "markdown",
"source": [
"Our refactor stems from our beliefs of the following that we think are best practices\n",
"\n",
"1. Never Copy Results from Other Papers\n",
"2. Always share your exact prompts\n",
"3. Always provide model outputs\n",
"4. Qualitatively review a small batch of outputs before running evaluation jobs at scale\n"
],
"metadata": {
"id": "0gDoM0AJAvEc"
}
},
{
"cell_type": "markdown",
"source": [
"In this notebook we will be going through a short tutorial on how things work."
],
"metadata": {
"id": "nnwsOpjda_YW"
}
},
## Install LM-Eval
"## Make task evaluations with configurable tasks\n",
"\n",
"Even within the same task, many works have reported numbers based on different choices of evaluation. Some report on the test sets, validation sets, or even subset of the training sets. Others have specialized prompts and verbalizers. We introduce YAMLs to allow users to easily make different variations. By leveraging the YAML configs to configure evaluations, the refactored LM-Eval takes the methods of the `Tasks` object and makes them configurable by setting the appropriate attributes in the config file. There, users can set the tasks they want by setting the name of the HF dataset (local tasks are also possible) the dataset splits used and much more. Key configurations such as `doc_to_text`, previously implemented as a method of the same name, is now configurable with jinja2 to allow high-level scripting to transform a HF dataset to text string as input to the model.\n",
"\n"
],
"metadata": {
"id": "8rfUeX6n_wkK"
}
]
},
{
"cell_type": "markdown",
"source": [
"A core-feature to LM-Eval is to configure tasks with YAML configs. With configs, fill preset fields to easily setup a task."
],
"metadata": {
"id": "HYFUhhfOSJKe"
}
},
"source": [
"A core-feature to LM-Eval is to configure tasks with YAML configs. With configs, fill preset fields to easily setup a task."
"2023-11-27:08:14:53,517 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
"2023-11-27 08:14:54.499605: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"Oftenly, tasks are part of a larger group used to meausre different capabilities. The dynamism of the field today means new dimensions of evaluation can come about which would mix and match new and older tasks alike. In LM-Eval, We can also group tasks and call that the group name to evaluate on a set of tasks easily. In this instance, let's evaluate the group `yes_or_no_tasks` which comprise of the tasks `demo_boolq` and `demo_cola`; tasks which are multiple choice tasks with options `yes` and `no` as the name suggests.\n",
"\n",
...
...
@@ -714,13 +352,15 @@
"We also show the aggregate across samples besides only showing the aggregation between subtasks. This may come in handy when certain groups want to be aggregated as a single task. -->\n",
"2023-11-27:08:15:04,956 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
"2023-11-27 08:15:06.059715: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"The following is a yaml made to evaluate the specific subtask of `high_school_geography` from MMLU. It uses the standard prompt where the we choose the letters from the options with most likelihood as the model's prediction."
],
"metadata": {
"id": "XceRKCuuDtbn"
}
]
},
```python
# The full config body is elided here ("..."); see the sketch below for a plausible version.
YAML_mmlu_geo_string = '''
group: mmlu
...
'''

with open('mmlu_high_school_geography.yaml', 'w') as f:
    f.write(YAML_mmlu_geo_string)
```
"We could also evaluate this task in a different way. For example, instead of observing the loglikelihood of the letters, we can instead evaluate on the choices themselves as the continuation. This is done by simply changing `doc_to_choice` from a list of letters to the corresponding `choices` field from the HF dataset. We write `\"{{choices}}\"` so that the string field is interpreted as jinja string that acquires the list from the HF dataset directly.\n",
"\n",
"Another convinient feature here is since we're only modifying the `doc_to_choice` and the rest of config is the same as the task above, we can use the above configuration as a template by using `include: mmlu_high_school_geography.yaml` to load the config from that file. We'll need to add a unique task name as to not colide with the existing yaml config we're including. For this case we'll simply name this one `mmlu_high_school_geography_continuation`. `doc_to_text` is added here just for sake of clarity."
],
"metadata": {
"id": "jyKOfCsKb-xy"
}
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"id": "lqElwU54TaK-"
},
"outputs": [],
"source": [
"YAML_mmlu_geo_string = '''\n",
"include: mmlu_high_school_geography.yaml\n",
...
...
@@ -951,15 +591,46 @@
"'''\n",
"with open('mmlu_high_school_geography_continuation.yaml', 'w') as f:\n",
"2023-11-27:08:17:52,429 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
"2023-11-27 08:17:53.714003: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2023-11-27 08:17:53.714080: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2023-11-27 08:17:53.714119: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2023-11-27 08:17:55.953811: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
"2023-11-27:08:17:59,577 INFO [__main__.py:124] Verbosity set to INFO\n",
"2023-11-27:08:18:07,654 WARNING [__main__.py:130] --limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.\n",
"2023-11-27:08:18:07,654 INFO [__main__.py:135] Including path: ./\n",
"2023-11-27:08:18:07,686 INFO [__main__.py:197] Selected Tasks: ['demo_mmlu_high_school_geography_continuation']\n",
"2023-11-27:08:18:07,735 INFO [huggingface.py:119] Using device 'cuda'\n",
"2023-11-27:08:18:25,609 INFO [task.py:353] Building contexts for task on rank 0...\n",
"2023-11-27:08:18:25,628 INFO [evaluator.py:290] Running loglikelihood requests\n",
"100% 40/40 [00:05<00:00, 7.68it/s]\n",
"fatal: not a git repository (or any of the parent directories): .git\n",
"To prepare a task we can simply fill in a YAML config with the relevant information.\n",
"\n",
"`output_type`\n",
"The current provided evaluation types comprise of the following:\n",
"1. `loglikelihood`: Evaluates the loglikeihood of a continuation\n",
"2. `loglikelihood_rolling`\n",
"3. `multiple_choice`: Evaluates loglikelihood among the a number of choices predicted by the model.\n",
"4. `greedy_until`: Model outputs greedy generation (can be configured to to use beam search and other generation-related paramaters)\n",
"\n",
"The core prompt revolves around 3 fields.\n",
"1. `doc_to_text`: Denotes the prompt template that will be used as input to the model.\n",
"2. `doc_to_choice`: Available choices that will be used as continuation for the model. This is used when the `output_type` is `multiple_choice`.\n",
"3. `doc_to_target`: When `output_type` is `multiple_choice` this can be an index that corresponds to the correct answer or the answer string itself (must be a subset of `doc_to_choice`). For other tasks, this is expected to be a string. You can fill this field with a feature name from the HF dataset so long as the resulting feature follows the conditioned described.\n",
"\n",
"<!-- Advanced notes:\n",
"In some cases, like Winograd, we want to alternate the left-hand side of a prompt and keep the answer the same -->\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6p0-KPwAgK5j"
},
"source": [
"## What if Jinja is not Sufficient?\n",
"\n",
"There can be times where Jinja is not enough to make the prompt we had in mind.\n",
"\n",
"1. Use `!function` operator for the prompt-related fields to pass a python function to build the prompt template.\n",
"2023-11-27:08:17:52,429 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
"2023-11-27 08:17:53.714003: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2023-11-27 08:17:53.714080: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2023-11-27 08:17:53.714119: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2023-11-27 08:17:55.953811: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
"2023-11-27:08:17:59,577 INFO [__main__.py:124] Verbosity set to INFO\n",
"2023-11-27:08:18:07,654 WARNING [__main__.py:130] --limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.\n",
"2023-11-27:08:18:07,654 INFO [__main__.py:135] Including path: ./\n",
"2023-11-27:08:18:07,686 INFO [__main__.py:197] Selected Tasks: ['demo_mmlu_high_school_geography_continuation']\n",
"2023-11-27:08:18:07,735 INFO [huggingface.py:119] Using device 'cuda'\n",
"2023-11-27:08:18:25,609 INFO [task.py:353] Building contexts for task on rank 0...\n",
"2023-11-27:08:18:25,628 INFO [evaluator.py:290] Running loglikelihood requests\n",
"100% 40/40 [00:05<00:00, 7.68it/s]\n",
"fatal: not a git repository (or any of the parent directories): .git\n",
"2023-11-27:08:18:37,991 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
"2023-11-27 08:18:39.666844: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2023-11-27 08:18:39.666894: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2023-11-27 08:18:39.666949: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2023-11-27 08:18:41.101009: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
"2023-11-27:08:18:44,452 INFO [__main__.py:124] Verbosity set to INFO\n",
"2023-11-27:08:18:54,278 WARNING [__main__.py:130] --limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.\n",
"2023-11-27:08:18:54,279 INFO [__main__.py:135] Including path: ./\n",
"2023-11-27:08:18:54,303 INFO [__main__.py:197] Selected Tasks: ['demo_mmlu_high_school_geography_function_prompt']\n",
"2023-11-27:08:18:54,333 INFO [huggingface.py:119] Using device 'cuda'\n",
"2023-11-27:08:19:12,078 INFO [task.py:353] Building contexts for task on rank 0...\n",
"2023-11-27:08:19:12,084 INFO [evaluator.py:290] Running loglikelihood requests\n",
"100% 40/40 [00:02<00:00, 16.47it/s]\n",
"fatal: not a git repository (or any of the parent directories): .git\n",
"To prepare a task we can simply fill in a YAML config with the relevant information.\n",
"\n",
"`output_type`\n",
"The current provided evaluation types comprise of the following:\n",
"1. `loglikelihood`: Evaluates the loglikeihood of a continuation\n",
"2. `loglikelihood_rolling`\n",
"3. `multiple_choice`: Evaluates loglikelihood among the a number of choices predicted by the model.\n",
"4. `greedy_until`: Model outputs greedy generation (can be configured to to use beam search and other generation-related paramaters)\n",
"\n",
"The core prompt revolves around 3 fields.\n",
"1. `doc_to_text`: Denotes the prompt template that will be used as input to the model.\n",
"2. `doc_to_choice`: Available choices that will be used as continuation for the model. This is used when the `output_type` is `multiple_choice`.\n",
"3. `doc_to_target`: When `output_type` is `multiple_choice` this can be an index that corresponds to the correct answer or the answer string itself (must be a subset of `doc_to_choice`). For other tasks, this is expected to be a string. You can fill this field with a feature name from the HF dataset so long as the resulting feature follows the conditioned described.\n",
"\n",
"<!-- Advanced notes:\n",
"In some cases, like Winograd, we want to alternate the left-hand side of a prompt and keep the answer the same -->\n"
],
"metadata": {
"id": "duBDqC6PAdjL"
}
},
{
"cell_type": "markdown",
"source": [
"## What if Jinja is not Sufficient?\n",
"\n",
"There can be times where Jinja is not enough to make the prompt we had in mind.\n",
"\n",
"1. Use `!function` operator for the prompt-related fields to pass a python function to build the prompt template.\n",
"2023-11-27:08:18:37,991 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
"2023-11-27 08:18:39.666844: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2023-11-27 08:18:39.666894: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2023-11-27 08:18:39.666949: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2023-11-27 08:18:41.101009: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
"2023-11-27:08:18:44,452 INFO [__main__.py:124] Verbosity set to INFO\n",
"2023-11-27:08:18:54,278 WARNING [__main__.py:130] --limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.\n",
"2023-11-27:08:18:54,279 INFO [__main__.py:135] Including path: ./\n",
"2023-11-27:08:18:54,303 INFO [__main__.py:197] Selected Tasks: ['demo_mmlu_high_school_geography_function_prompt']\n",
"2023-11-27:08:18:54,333 INFO [huggingface.py:119] Using device 'cuda'\n",
"2023-11-27:08:19:12,078 INFO [task.py:353] Building contexts for task on rank 0...\n",
"2023-11-27:08:19:12,084 INFO [evaluator.py:290] Running loglikelihood requests\n",
"100% 40/40 [00:02<00:00, 16.47it/s]\n",
"fatal: not a git repository (or any of the parent directories): .git\n",