gaoqiong / lm-evaluation-harness · Commits

Commit 901ad392
Authored May 09, 2024 by Israel Abebe Azime
Parent: 343880ab

remove print
Showing 2 changed files with 3 additions and 3 deletions:

lm_eval/tasks/afrimmlu/fewshot.sh  (+2, -1)
lm_eval/tasks/afrimmlu/utils.py    (+1, -2)
lm_eval/tasks/afrimmlu/fewshot.sh

@@ -4,4 +4,5 @@ lm_eval --model hf \
     --device cuda:0 \
     --batch_size 1 \
     --num_fewshot 0 \
-    --verbosity DEBUG
\ No newline at end of file
+    --verbosity DEBUG \
+    --wandb_args project=afrimmlu
\ No newline at end of file
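
The change above appends a Weights & Biases logging flag to the evaluation command: --wandb_args takes comma-separated key=value pairs that are forwarded as keyword arguments to wandb.init. The sketch below only illustrates that key=value parsing pattern; parse_kv_string is a hypothetical helper written for this note, not the harness's internal parser.

def parse_kv_string(arg_string: str) -> dict:
    """Turn a string such as "project=afrimmlu,entity=my-team" into a dict."""
    if not arg_string:
        return {}
    pairs = (item.split("=", 1) for item in arg_string.split(","))
    return {key.strip(): value.strip() for key, value in pairs}

if __name__ == "__main__":
    kwargs = parse_kv_string("project=afrimmlu")
    print(kwargs)  # {'project': 'afrimmlu'}
    # wandb.init(**kwargs) would then log runs under the "afrimmlu" project
    # (requires the wandb package to be installed and authenticated).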
lm_eval/tasks/afrimmlu/utils.py

@@ -16,8 +16,8 @@ def doc_to_text(doc):
    C: ''{choice3}'''
    D: ''{choice4}'''
    Answer: """
    choices = eval(doc["choices"])
    text = output.format(subject=doc['subject'],
                         question=doc['question'],
...
@@ -25,7 +25,6 @@ def doc_to_text(doc):
                         choice2=choices[1],
                         choice3=choices[2],
                         choice4=choices[3])
-   print(text)
    return text

def weighted_f1_score(items):
...
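
The removed print(text) was a leftover debug statement; doc_to_text still builds the multiple-choice prompt and returns it. As a self-contained illustration of that prompt-building pattern (doc["choices"] arrives as a string encoding a list, which is why the original code calls eval on it), here is a minimal sketch. The template text, the build_prompt name, and the use of ast.literal_eval in place of eval are illustrative assumptions, not the code in this repository.

import ast

def build_prompt(doc):
    # Illustrative template; the real template in utils.py is longer and
    # only partially visible in the diff above.
    output = """Subject: {subject}
Question: {question}
A: {choice1}
B: {choice2}
C: {choice3}
D: {choice4}
Answer: """
    # doc["choices"] is a string such as "['3', '4', '5', '6']";
    # ast.literal_eval parses it safely (the repository code uses eval).
    choices = ast.literal_eval(doc["choices"])
    return output.format(
        subject=doc["subject"],
        question=doc["question"],
        choice1=choices[0],
        choice2=choices[1],
        choice3=choices[2],
        choice4=choices[3],
    )

if __name__ == "__main__":
    example = {
        "subject": "elementary_mathematics",
        "question": "What is 2 + 2?",
        "choices": "['3', '4', '5', '6']",
    }
    print(build_prompt(example))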