gaoqiong / lm-evaluation-harness · Commits

Commit c1822c02 (unverified)
Update metrics.py
Authored May 03, 2021 by Leo Gao; committed by GitHub, May 03, 2021.
Parent: 0d9cc5a8
Showing 1 changed file with 2 additions and 2 deletions.
lm_eval/metrics.py (+2, -2)

```diff
@@ -170,7 +170,7 @@ def _sacreformat(refs, preds):
 
 ## stderr stuff
-def bootstrap_stddev(f, xs, iters=10000):
+def bootstrap_stderr(f, xs, iters=10000):
     rnd = random.Random()
     rnd.seed(42)
     res = []
```
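The renamed function estimates a metric's standard error by bootstrap resampling. The hunk only shows the setup (a seeded RNG and an empty result list); a minimal self-contained sketch of how such a function plausibly continues — the resampling loop below is my assumption, not part of the commit:

```python
import math
import random

def bootstrap_stderr(f, xs, iters=10000):
    # Seeded RNG, as in the hunk above, so estimates are reproducible.
    rnd = random.Random()
    rnd.seed(42)
    res = []
    for _ in range(iters):
        # Resample xs with replacement and evaluate the metric on the resample.
        # (Assumed step -- the commit does not show the loop body.)
        res.append(f(rnd.choices(xs, k=len(xs))))
    # The spread of the resampled metric values estimates the metric's
    # standard error directly.
    avg = sum(res) / len(res)
    return math.sqrt(sum((r - avg) ** 2 for r in res) / len(res))
```

For the mean of `[1, 2, 3, 4, 5]`, this returns roughly the analytic standard error of the mean, about 0.63.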
```diff
@@ -196,7 +196,7 @@ def stderr_for_metric(metric):
     ]
     if metric in bootstrappable:
-        return lambda x: bootstrap_stddev(metric, x) / math.sqrt(len(x))
+        return lambda x: bootstrap_stderr(metric, x)
 
 stderr = {
     mean: mean_stderr,
```
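Beyond the rename, the call site drops the `/ math.sqrt(len(x))` factor. A quick numeric check (my own illustration, not from the commit) of why that factor is redundant: the standard deviation of bootstrap resample means already shrinks like σ/√n, so it is itself the standard error — dividing by √n again would underestimate it:

```python
import math
import random

rnd = random.Random(0)
n = 400
xs = [rnd.gauss(0.0, 1.0) for _ in range(n)]  # sigma = 1

# Standard deviation of the sample mean across bootstrap resamples.
means = []
for _ in range(5000):
    sample = rnd.choices(xs, k=n)
    means.append(sum(sample) / n)
m = sum(means) / len(means)
boot_sd = math.sqrt(sum((v - m) ** 2 for v in means) / len(means))

analytic_se = 1.0 / math.sqrt(n)  # sigma / sqrt(n) = 0.05
# boot_sd already matches analytic_se; dividing by sqrt(n) a second
# time would shrink the estimate by a further factor of 20 here.
```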