gaoqiong / lm-evaluation-harness · Commits

Commit 232c9ab6
Authored Feb 13, 2021 by Charles Foster
Parent: 4a64031c

Fixes SQuAD v2 metric computation.
Changes: 1 changed file with 9 additions and 4 deletions

lm_eval/tasks/squad.py  +9 -4
lm_eval/tasks/squad.py @ 232c9ab6

@@ -64,15 +64,20 @@ class SQuAD(HFTask):
         continuation, = results
 
-        predictions = {
+        no_answer_probability = 0.0
+        if continuation.startswith(' unanswerable'):
+            no_answer_probability = 1.0
+
+        predictions = [{
             'id': doc['id'],
             'prediction_text': continuation,
-        }
+            'no_answer_probability': no_answer_probability,
+        }]
 
-        references = {
+        references = [
+            {
             'id': doc['id'],
             'answers': doc['answers'],
-        }
+            }
+        ]
 
         metrics = squad_metric.compute(predictions=predictions, references=references)
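For context on why the change is needed: the Hugging Face squad_v2 metric expects `predictions` and `references` to be lists of dicts, and each prediction must carry a `no_answer_probability` field; the old code passed bare dicts without that field, which does not match the metric's expected input. Below is a minimal, self-contained sketch (not part of this commit) of those input shapes. It assumes `squad_metric` is the `datasets` library's `squad_v2` metric, as the commit title suggests; the example id, model output, and `load_metric` call are illustrative assumptions, not code from this repository.

# Minimal sketch of the squad_v2 metric's expected inputs (hypothetical example data).
from datasets import load_metric

squad_metric = load_metric("squad_v2")  # assumed to be the metric this task uses

# Pretend the model produced this continuation for one SQuAD v2 question.
continuation = " unanswerable"
no_answer_probability = 1.0 if continuation.startswith(" unanswerable") else 0.0

# squad_v2 wants *lists* of dicts, and each prediction needs a
# 'no_answer_probability' entry -- the two things this commit adds.
predictions = [{
    "id": "hypothetical-id-0001",        # made-up id for illustration
    "prediction_text": continuation,
    "no_answer_probability": no_answer_probability,
}]

references = [{
    "id": "hypothetical-id-0001",
    # Empty answer lists mark the question as unanswerable in SQuAD v2.
    "answers": {"text": [], "answer_start": []},
}]

metrics = squad_metric.compute(predictions=predictions, references=references)
print(metrics["exact"], metrics["f1"])  # squad_v2 reports 'exact' and 'f1' among other keys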