@@ -59,6 +59,8 @@ Both LMs (`lm_eval.models`) and Tasks (`lm_eval.tasks`) are kept in a registry d
**If you want to extend either models or tasks, simply add a new LM or Task subclass and decorate it with the registry decorator**.
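As an illustration only, registering a new task might look like the sketch below. The decorator name `register_task`, the import paths, and the method names are assumptions for the purpose of this example; check `lm_eval.tasks` and `lm_eval.models` for the actual registry decorator and base-class interface.

```python
# Hypothetical sketch -- the decorator and base-class names here are
# assumptions, not the harness's confirmed API.
from lm_eval.base import Task             # assumed base-class location
from lm_eval.tasks import register_task   # assumed registry decorator


@register_task("my_new_task")  # registers the subclass under this name
class MyNewTask(Task):
    def download(self):
        # Fetch and cache the dataset for this task.
        ...

    def validation_docs(self):
        # Yield validation-split documents to evaluate (or dedupe) against.
        ...
```

A new LM subclass would be registered the same way, using the corresponding decorator in `lm_eval.models`.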
The [GPT-3 Evaluations Project](https://github.com/EleutherAI/lm_evaluation_harness/projects/1) tracks our progress on implementing new tasks. Right now, we are focused on getting all the datasets loaded so that we can deduplicate them against the training data; implementing the actual evaluations is nice to have but not a priority at the moment.