gaoqiong / lm-evaluation-harness

Commit 4ae2ab37, authored Apr 25, 2022 by jon-tow

Add `higher_is_better` & `aggregation` defaults to `PromptSourceTask`

parent 6ec93da2
Showing 1 changed file, lm_eval/base.py, with 10 additions and 0 deletions (+10 −0).
lm_eval/base.py
@@ -701,6 +701,16 @@ class PromptSourceTask(Task):
         # Map metric name to HF metric.
         # TODO(Albert): What is Other?
         # metric_names = prompt.metadata.metrics

+    def higher_is_better(self):
+        return {"acc": True}
+
+    def aggregation(self):
+        return {
+            "acc": mean,
+        }
+

 class MultipleChoiceTask(Task):
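The two new defaults work as a pair: `higher_is_better` records the direction of improvement for each metric name, and `aggregation` maps each metric name to a reducer applied over per-example scores. A minimal sketch of how such defaults might be consumed downstream (the `mean` helper and the `aggregate_results` driver below are simplified stand-ins, not the harness's actual evaluation loop):

```python
def mean(arr):
    # Simplified stand-in for the harness's mean helper.
    return sum(arr) / len(arr)

class PromptSourceTask:
    def higher_is_better(self):
        # Direction of improvement for each metric name.
        return {"acc": True}

    def aggregation(self):
        # Metric name -> callable that reduces a list of per-example scores.
        return {"acc": mean}

def aggregate_results(task, per_example):
    """Reduce {metric: [per-example scores]} using the task's aggregation map."""
    aggs = task.aggregation()
    return {metric: aggs[metric](scores) for metric, scores in per_example.items()}

task = PromptSourceTask()
scores = aggregate_results(task, {"acc": [1, 0, 1, 1]})
# scores['acc'] == 0.75; since higher_is_better()['acc'] is True,
# a larger value means a better result for this task.
```

Because the defaults are dicts keyed by metric name, a subclass can extend them (e.g. add `"f1"`) without changing the aggregation driver.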