gaoqiong / lm-evaluation-harness

Commit 59a0104d, authored Mar 30, 2021 by Leo Gao

Remove unused simple_accuracy_metric

parent 2ced79f5
Showing 1 changed file with 0 additions and 10 deletions

lm_eval/tasks/common.py (+0 −10)
 import datasets
 import lm_eval.metrics
 from ..base import Task
 ...
@@ -43,15 +42,6 @@ class HFTask(Task):
         return self.data["test"]

-def simple_accuracy_metric(preds, golds):
-    acc = float(lm_eval.metrics.mean())
-    return {
-        "major": acc,
-        "minor": {"acc": acc},
-        "higher_is_better": True,
-    }

 def yesno(x):
     if x:
         return 'yes'
 ...
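The deleted helper looks broken as written: `lm_eval.metrics.mean()` is called with no arguments, so the function could never have computed a real accuracy from `preds` and `golds`, which is consistent with it being unused. For reference only, a minimal working sketch with the same return shape (hypothetical code, not part of the harness) might look like:

```python
def simple_accuracy_metric(preds, golds):
    # Fraction of predictions that exactly match the gold labels.
    # (The removed version called lm_eval.metrics.mean() with no
    # arguments, which cannot produce this value.)
    acc = float(sum(p == g for p, g in zip(preds, golds)) / len(golds))
    return {
        "major": acc,
        "minor": {"acc": acc},
        "higher_is_better": True,
    }
```

The "major"/"minor"/"higher_is_better" keys mirror the dict shape visible in the deleted lines above.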