gaoqiong / lm-evaluation-harness

Commit bc50a9aa (unverified), authored Sep 24, 2024 by Eldar Kurtic, committed by GitHub on Sep 24, 2024
add a note for missing dependencies (#2336)
parent d7734d19
Showing 1 changed file with 9 additions and 0 deletions:

lm_eval/tasks/leaderboard/README.md (+9, -0)
@@ -13,6 +13,15 @@ As we want to evaluate models across capabilities, the list currently contains:
Details on the choice of those evals can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/blog)!
## Install

To install the `lm-eval` package with support for leaderboard evaluations, run:

```bash
git clone --depth 1 https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e ".[math,ifeval,sentencepiece]"
```
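Once installed, the full leaderboard suite can be run through the `lm_eval` CLI. Below is a minimal sketch using standard harness flags; the pythia-160m checkpoint is only a placeholder for whatever model you want to evaluate:

```bash
# Run the full leaderboard task group on a Hugging Face model
# (EleutherAI/pythia-160m is a placeholder; substitute your own model)
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks leaderboard \
    --batch_size auto \
    --output_path results
```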
## BigBenchHard (BBH)
A suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH).
...
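If you only want the BBH portion described above, the harness exposes the leaderboard tasks as smaller groups; `leaderboard_bbh` is assumed here based on the harness's leaderboard task naming, and `lm_eval --tasks list` will print the exact identifiers available:

```bash
# Evaluate only the BBH subset of the leaderboard suite
# (task group name leaderboard_bbh is an assumption; verify with `lm_eval --tasks list`)
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks leaderboard_bbh \
    --batch_size auto
```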