gaoqiong / lm-evaluation-harness · Commits

Commit aea63162 ("Fix SAT")
Authored Feb 04, 2021 by Leo Gao
Parent: d7c08d5b

Showing 1 changed file with 3 additions and 3 deletions: lm_eval/tasks/sat.py (+3, -3)
lm_eval/tasks/sat.py (view file @ aea63162)

 import json
 import random
 import os
-from lm_eval.base import Task, rf, mean
+from lm_eval.base import MultipleChoiceTask, rf, mean
 from tqdm import auto as tqdm_lib
 from .common import simple_accuracy_metric
 import numpy as np
 from ..utils import sh

-class SATAnalogies(Task):
+class SATAnalogies(MultipleChoiceTask):
     NEEDS_MANUAL_DL = True

     def __init__(self):
...
@@ -61,7 +61,7 @@ class SATAnalogies(Task):
             doc = {
                 'source': source,
                 'query': query.split(' ')[:2],
-                'choices': ["{} is to {}".format(*c.split(' ')[:2]) for c in choices],
+                'choices': ["{} is to {}".format(*c.split(' ')[:2]) for c in choices],
                 'gold': ['a', 'b', 'c', 'd', 'e'].index(answer_key.strip()),
             }
             yield doc
...
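The commit converts `SATAnalogies` to the shared `MultipleChoiceTask` interface, where each yielded doc carries a list of formatted choice strings and a `gold` index into that list. As a minimal sketch of the per-example doc construction shown in the diff, here is how one SAT analogy record would be transformed; the query, choice strings, and answer key below are made-up sample data, not taken from the actual dataset files:

```python
# Hypothetical raw record in the SAT analogies word-pair format:
# each line is "word1 word2 <pos-tags>", and the answer key is a letter a-e.
query = "lull trust v:n"
choices = [
    "balk fortitude v:n",
    "betray loyalty v:n",
    "cajole compliance v:n",
    "hinder destination v:n",
    "soothe passion v:n",
]
answer_key = " c "

# Doc construction mirroring the diff: keep the first two tokens of each
# pair, render choices as "X is to Y", and map the answer letter to an index.
doc = {
    'source': "sample",
    'query': query.split(' ')[:2],
    'choices': ["{} is to {}".format(*c.split(' ')[:2]) for c in choices],
    'gold': ['a', 'b', 'c', 'd', 'e'].index(answer_key.strip()),
}

print(doc['choices'][doc['gold']])  # the gold choice, "cajole is to compliance"
```

Because `gold` is an integer index rather than a letter, a `MultipleChoiceTask` consumer can score the example by comparing per-choice likelihoods against position `doc['gold']` directly.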