gaoqiong / lm-evaluation-harness · Commits

Commit 8d4d1fa9
authored Oct 19, 2023 by lintangsutawika

fixed registered metric

Parent: 9fbe6eef
Showing 2 changed files with 4 additions and 2 deletions:

  lm_eval/api/metrics.py  +3 -2
  lm_eval/api/task.py     +1 -0
lm_eval/api/metrics.py:

@@ -5,6 +5,7 @@ import numpy as np
 import sacrebleu
 import sklearn.metrics
 import random
+import evaluate
 from lm_eval.api.registry import register_metric, register_aggregation

@@ -141,8 +142,8 @@ def acc_mutual_info_fn(items):  # This is a passthrough function
     output_type="generate_until",
     aggregation="mean",
 )
-def exact_match_fn(items):  # This is a passthrough function
-    return items
+def exact_match_fn(**kwargs):  # This is a passthrough function
+    return evaluate.load("exact_match").compute(**kwargs)

 @register_metric(
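For context, evaluate here is the Hugging Face evaluate library; its exact_match metric returns the fraction of predictions that exactly equal their references. A minimal standalone sketch of what the rewritten exact_match_fn(**kwargs) now computes (the sample predictions/references are illustrative, not from the repo):

import evaluate

# Load the Hugging Face "exact_match" metric and score predictions
# against references, as the rewritten exact_match_fn does.
exact_match = evaluate.load("exact_match")
result = exact_match.compute(
    predictions=["Paris", "4", "blue"],
    references=["Paris", "5", "blue"],
)
print(result)  # e.g. {'exact_match': 0.666...}: 2 of 3 predictions match exactly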
lm_eval/api/task.py:

@@ -544,6 +544,7 @@ class ConfigurableTask(Task):
         for metric_name in _metric_list:
             self._metric_fn_list[metric_name] = get_metric(metric_name)
+            self._metric_fn_kwargs[metric_name] = {}
             self._aggregation_list[metric_name] = get_metric_aggregation(
                 metric_name
             )
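The added _metric_fn_kwargs entry gives every metric registered by bare name an (initially empty) keyword-argument dict, so each metric function can later be invoked uniformly with per-metric options. A hedged sketch of the dispatch pattern this enables; score_metric, golds, and preds are hypothetical names, and only _metric_fn_list and _metric_fn_kwargs come from the diff:

# Illustrative only: uniform per-metric dispatch with an options dict.
def score_metric(self, metric_name, golds, preds):
    metric_fn = self._metric_fn_list[metric_name]
    metric_kwargs = self._metric_fn_kwargs[metric_name]  # {} by default
    # Metric-specific options ride along without special-casing the call
    # site; e.g. exact_match_fn(**kwargs) forwards them on to the
    # evaluate metric's compute().
    return metric_fn(references=golds, predictions=preds, **metric_kwargs)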