Unverified Commit 8042479d authored by Stella Biderman, committed by GitHub

Update README.md

parent a2992d28
@@ -234,7 +234,7 @@ You can also ask for help, or discuss new features with the maintainers in the #
 To implement a new task in the eval harness, see [this guide](./docs/new_task_guide.md).
-As a start, we currently only support one prompt per task, which we strive to make the "standard" as defined by the benchmark's authors. If you would like to study how varying prompts causes changes in the evaluation score, we support prompts authored in the [Promptsource Library](https://github.com/bigscience-workshop/promptsource/tree/main) as described further in [the task guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/lm_eval/docs/new_task_guide.md) and [the advanced task guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/lm_eval/docs/advanced_task_guide.md) and welcome contributions of novel task templates and task variants.
+As a start, we currently only support one prompt per task, which we strive to make the "standard" as defined by the benchmark's authors. If you would like to study how varying prompts causes changes in the evaluation score, we support prompts authored in the [Promptsource Library](https://github.com/bigscience-workshop/promptsource/tree/main) as described further in [the task guide](./docs/new_task_guide.md) and [the advanced task guide](./docs/advanced_task_guide.md) and welcome contributions of novel task templates and task variants.
 ## Cite as
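For orientation, the new_task_guide referenced above describes tasks as YAML configs on the big-refactor branch. The sketch below shows roughly what a minimal config might look like; the task name, dataset choice, and field values here are illustrative assumptions, and the authoritative key names and the exact Promptsource syntax are documented in [the task guide](./docs/new_task_guide.md):

```yaml
# Illustrative sketch only: key names follow the big-refactor new_task_guide,
# but this particular task/dataset pairing is a made-up example.
task: demo_boolq                # hypothetical task name
dataset_path: super_glue        # HuggingFace datasets path
dataset_name: boolq
output_type: multiple_choice
doc_to_text: "{{passage}}\nQuestion: {{question}}?\nAnswer:"
doc_to_target: label
doc_to_choice: ["no", "yes"]
metric_list:
  - metric: acc
# To study prompt variants instead, the guide describes swapping in a
# Promptsource template, e.g. (assumed syntax):
# use_prompt: "promptsource:GPT-3 Style"
```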