gaoqiong / lm-evaluation-harness / Commits

Commit 81e42932, authored Feb 12, 2021 by Anthony DiPofi
Parent: c6c67272

    change mathqa to use numeric answer strings instead of a,b,c,d,e as choices

Showing 1 changed file with 7 additions and 4 deletions (+7 / -4)
lm_eval/tasks/mathqa.py (+7 / -4) @ 81e42932

 from . common import HFTask
 from lm_eval.base import mean, rf, MultipleChoiceTask
+import re


 class MathQA(HFTask, MultipleChoiceTask):
     DATASET_PATH = "math_qa"
...
@@ -17,10 +17,13 @@ class MathQA(HFTask, MultipleChoiceTask):
     def _convert_standard(self, doc):
+        answer_idx = ['a', 'b', 'c', 'd', 'e'].index(doc['correct'])
+        choices = [c[4:].rstrip(" ,") for c in re.findall(r"[abcd] \) .*?, |e .*?$", doc['options'])]
+
         out_doc = {
-            "query": "Question: " + doc['Problem'] + " " + doc["options"] + "\nAnswer:",
-            "choices": ['a', 'b', 'c', 'd', 'e'],
-            "gold": ['a', 'b', 'c', 'd', 'e'].index(doc['correct']),
+            "query": "Question: " + doc['Problem'] + "\nAnswer:",
+            "choices": choices,
+            "gold": answer_idx,
         }
         return out_doc
...
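
As a quick illustration of what the new parsing does, here is a minimal sketch run against a made-up options string shaped like the MathQA dataset's "options" field; the sample string and the 'b' correct label are assumptions for illustration, not values from this commit:

    import re

    # Hypothetical options string, formatted like the MathQA "options" field (assumption for illustration).
    options = "a ) 38 , b ) 27.675 , c ) 30 , d ) data inadequate , e ) none of these"

    # Same list comprehension as in the commit: pull the answer text out of each lettered option.
    choices = [c[4:].rstrip(" ,") for c in re.findall(r"[abcd] \) .*?, |e .*?$", options)]
    print(choices)   # ['38', '27.675', '30', 'data inadequate', 'none of these']

    # "gold" is still the index of doc['correct'] among the letters, so it points at the
    # matching entry in choices (e.g. 'b' -> 1 -> "27.675" in this example).
    print(['a', 'b', 'c', 'd', 'e'].index('b'))   # 1

With the options text dropped from the query, the candidate answers handled by MultipleChoiceTask are now the parsed numeric answer strings rather than the single letters a-e, which is what the commit message describes.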