gaoqiong / lm-evaluation-harness

Commit 57c751fa, authored Jan 05, 2021 by nicholaskross
"got a start on the SAT eval"
Parent: 6803e647

Showing 1 changed file (lm_eval/tasks/sat.py) with 20 additions and 2 deletions.
@@ -4,6 +4,9 @@ import json
 import random
 import os
 from lm_eval.base import Dataset
+from tqdm import auto as tqdm_lib
+from .common import simple_accuracy_metric
+import numpy as np
 from ..utils import sh
@@ -94,5 +97,20 @@ class SATAnalogies(Dataset):
         return text

     def evaluate(self, docs, lm):
-        # TODO: Write evaluation function
-        raise NotImplementedError()
+        golds = [doc["answer_key"] for doc in docs]
+
+        preds = []
+        for doc in tqdm_lib.tqdm(docs):
+            ctx = self.fewshot_context(
+                doc=doc,
+                num_fewshot=1,
+                provide_description=None,  # unless Dataset evaluate()s should get num_fewshot/provide_description
+            )
+            probs_before_numpy = []
+            for choice in doc["choices"]:
+                this_choice = " " + choice
+                probs_before_numpy.append(lm.loglikelihood(ctx, this_choice))
+            probs = np.array(probs_before_numpy)
+            preds.append(np.argmax(probs))
+
+        return simple_accuracy_metric(preds=preds, golds=golds)
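The core pattern in the evaluate() change above is argmax-over-loglikelihood multiple-choice scoring: each answer choice (with a leading space) is scored as a continuation of the few-shot context, and the highest-scoring choice is the prediction. A minimal, self-contained sketch of that pattern, using a stub in place of the real `lm` object (assumption: `loglikelihood(ctx, continuation)` returns a single float; `StubLM` and `pick_choice` are illustrative names, not part of the harness):

```python
import numpy as np

class StubLM:
    """Toy stand-in for a language model: scores a continuation by its
    character overlap with a hard-coded 'preferred' answer. Assumption for
    illustration only; the real harness LM scores with actual log-probs."""
    def __init__(self, preferred):
        self.preferred = preferred

    def loglikelihood(self, ctx, continuation):
        # Higher (less negative) score for continuations closer to `preferred`.
        overlap = len(set(continuation) & set(self.preferred))
        return float(overlap - len(self.preferred))

def pick_choice(lm, ctx, choices):
    # Mirror the diff: prepend a space to each choice, score each as a
    # continuation of the context, then take the argmax index.
    scores = [lm.loglikelihood(ctx, " " + c) for c in choices]
    return int(np.argmax(np.array(scores)))

lm = StubLM(" ocean")
print(pick_choice(lm, "water is to ice as", ["ocean", "desk", "fire"]))  # → 0
```

The leading space matters for real tokenizers: " ocean" and "ocean" typically tokenize differently, and the choice is scored as a continuation of a prompt that does not end in whitespace.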