gaoqiong / lm-evaluation-harness
Commit 69826e2c (unverified), authored Apr 17, 2023 by Zach Nussbaum, committed by GitHub on Apr 17, 2023
docs: update readme
parent 94ce276c
Showing 1 changed file, README.md, with 10 additions and 0 deletions (+10 -0).
...
@@ -11,6 +11,7 @@ Features:
- 200+ tasks implemented. See the [task-table](./docs/task_table.md) for a complete list.
- Support for GPT-2, GPT-3, GPT-Neo, GPT-NeoX, and GPT-J, with a flexible tokenization-agnostic interface.
- Support for evaluation on adapters (e.g. LoRA) supported in [HuggingFace's PEFT library](https://github.com/huggingface/peft).
- Task versioning to ensure reproducibility.
## Install
...
@@ -58,6 +59,15 @@ To evaluate models that are called via `AutoSeq2SeqLM`, you instead use `hf-seq2seq`.
> **Warning**: Choosing the wrong model type may produce erroneous outputs without raising an error.
To use with [PEFT](https://github.com/huggingface/peft), you can use the following command:

```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/gpt-j-6b,peft=nomic-ai/gpt4all-j-lora \
    --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
    --device cuda:0
```
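The `--model_args` flag packs multiple backend options (here the base checkpoint and the PEFT adapter) into a single comma-separated string of `key=value` pairs. As an illustrative sketch only, and not the harness's actual argument parser, splitting such a string into keyword arguments can look like this:

```python
def parse_model_args(arg_string):
    """Split a comma-separated key=value string (as passed to --model_args)
    into a dict of keyword arguments.

    Illustrative sketch only; not the harness's real parser."""
    return dict(pair.split("=", 1) for pair in arg_string.split(",") if pair)


args = parse_model_args("pretrained=EleutherAI/gpt-j-6b,peft=nomic-ai/gpt4all-j-lora")
print(args["pretrained"])  # EleutherAI/gpt-j-6b
print(args["peft"])        # nomic-ai/gpt4all-j-lora
```

With a `peft=` key present, the harness loads the named LoRA adapter on top of the pretrained base model via the PEFT library before evaluation.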
Our library also supports the OpenAI API:
```bash
...
```