gaoqiong / lm-evaluation-harness · Commits

Commit 9551bbf2, authored Apr 25, 2024 by lintangsutawika
Parent: 4dd69062

fixed args input in aggregate_subtask_metrics
Showing 1 changed file with 2 additions and 1 deletion.

lm_eval/evaluator.py (+2, −1)
@@ -535,7 +535,8 @@ def evaluate(
                         metric
                     ] = lm_eval.api.metrics.aggregate_subtask_metrics(
                         metrics,
-                        sizes if group_config["weight_by_size"] else [1] * len(sizes),
+                        sizes,
+                        group_config["weight_by_size"],
                     )
                     # TODO: calculate grouped metric using aggregation fn
                     if "N/A" in stderrs:
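The change moves the weight_by_size handling from the call site into the aggregation helper: instead of the caller substituting uniform sizes with `sizes if group_config["weight_by_size"] else [1] * len(sizes)`, the flag is now passed through as a third argument. A minimal sketch of the equivalence, assuming `aggregate_subtask_metrics` computes a size-weighted mean (the helper name and signature mirror the call site; the body here is an assumption, not the library's exact code):

```python
def aggregate_subtask_metrics(metrics, sizes, weight_by_size=True):
    """Weighted mean of per-subtask metrics; uniform weights when
    weight_by_size is False. Sketch only -- mirrors the signature used
    at the call site in lm_eval/evaluator.py."""
    if not weight_by_size:
        # Same effect as the old call-site conditional:
        # sizes if weight_by_size else [1] * len(sizes)
        sizes = [1] * len(sizes)
    assert len(metrics) == len(sizes)
    return sum(m * s for m, s in zip(metrics, sizes)) / sum(sizes)


# Both call styles from the diff give the same result for the
# unweighted case (example values are illustrative):
metrics, sizes = [0.5, 0.75], [1, 3]
weight_by_size = False

old_style = aggregate_subtask_metrics(
    metrics, sizes if weight_by_size else [1] * len(sizes)
)
new_style = aggregate_subtask_metrics(metrics, sizes, weight_by_size)
print(old_style == new_style)  # → True
```

Passing the flag through keeps the call site simpler and lets the helper own the weighting policy, which is presumably why the commit describes it as fixing the args input.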