gaoqiong / lm-evaluation-harness · Commits

Commit c4632655 (Unverified)
Authored Mar 29, 2021 by Leo Gao; committed by GitHub, Mar 29, 2021

Merge pull request #168 from EleutherAI/translation-fix

fix translation scoring format

Parents: 5aa601f3, 3a2b7df4
Showing 2 changed files with 3 additions and 1 deletion (+3, -1):
- .gitignore (+1, -0)
- lm_eval/metrics.py (+2, -1)
.gitignore

@@ -2,3 +2,4 @@ env
 *.pyc
 data/
 lm_cache
+.idea
\ No newline at end of file
lm_eval/metrics.py

import math
from collections import Iterable
from pprint import pprint
import numpy as np
import sacrebleu
@@ -124,7 +125,7 @@ def _sacreformat(refs, preds):
     # Must become List[List[str]] with the inner list corresponding to preds
     if not is_non_str_iterable(refs):
         refs = list(refs)
-    if not is_non_str_iterable(refs):
+    if not is_non_str_iterable(refs[0]):
         refs = [[ref] for ref in refs]
     refs = list(zip(*refs))
     # Note the number of refs in each ref list much match the number of preds
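The one-line fix changes the second guard to inspect `refs[0]` rather than `refs`: after the first normalization, `refs` is always a non-string iterable, so the old check could never wrap a flat `List[str]`, and `zip(*refs)` would then iterate over the characters of each reference string. A minimal sketch of the corrected logic, with a stand-in `is_non_str_iterable` (the real helper is defined elsewhere in `lm_eval/metrics.py`):

```python
def is_non_str_iterable(obj):
    # Stand-in for the helper used in lm_eval/metrics.py:
    # True for iterables that are not themselves strings.
    return hasattr(obj, "__iter__") and not isinstance(obj, str)

def sacreformat_refs(refs):
    # sacrebleu expects List[List[str]]: one inner list per reference
    # stream, each stream holding one reference for every prediction.
    if not is_non_str_iterable(refs):
        refs = list(refs)
    if not is_non_str_iterable(refs[0]):  # the fixed check: look at the elements
        refs = [[ref] for ref in refs]    # one ref per pred -> wrap each
    refs = list(zip(*refs))               # transpose (preds, M) -> (M, preds)
    return refs

# One reference per prediction (2 preds, 1 ref each) yields a single
# stream containing both references, in prediction order:
streams = sacreformat_refs(["ref for pred 1", "ref for pred 2"])
```

With the old check (`refs` instead of `refs[0]`), the wrap is skipped for flat inputs and `zip(*refs)` silently produces per-character tuples — presumably the scoring-format bug this PR fixes.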