GitLab · gaoqiong / lm-evaluation-harness · Commits

Commit 03b9db6b, authored Jun 28, 2023 by haileyschoelkopf

    revert mutual info whitespace change

parent b250b001

Changes: 1 changed file, lm_eval/api/task.py, with 6 additions and 4 deletions (+6 −4)
```
@@ -718,12 +718,14 @@ class ConfigurableTask(Task):
             raise TypeError

     def gold_alias(self, doc):
         # TODO: reevaluate if we need this. implemented to have a
         # processed version of answer to put into gsm8k exact_match scoring as ref.
         # returns a version of the gold target answer to a document,
         # which should be passed into metric for scoring as the ground truth.
         # in multiple_choice tasks, this should be castable to an int corresponding to the index
         # within the answer choices, while doc_to_target is the string version of {{answer_choices[gold]}}.
         if self._config.gold_alias is not None:
             doc_to_target = self._config.gold_alias
         else:
             # doc_to_target = self._config.doc_to_target
             return self.doc_to_target(doc)

         if type(doc_to_target) == str:
```
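For context, a `gold_alias` in a task config is meant to supply a processed version of the gold answer for scoring. A minimal sketch of such a callable for a GSM8K-style task (the function name and the `doc["answer"]` / `#### <number>` format are assumptions for illustration, not the harness's own implementation):

```python
import re

def gsm8k_gold_alias(doc):
    # Hypothetical gold_alias callable: extract the final numeric answer
    # that GSM8K-style docs place after "####", so exact_match compares
    # the number rather than the full rationale text.
    match = re.search(r"####\s*([\-0-9.,]+)", doc["answer"])
    if match is None:
        return doc["answer"]
    return match.group(1).replace(",", "")

doc = {"answer": "She pays 5 * 3 = 15 dollars.\n#### 15"}
print(gsm8k_gold_alias(doc))  # -> 15
```

When no alias is configured, the method above simply falls back to `self.doc_to_target(doc)`, so existing tasks are unaffected.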
```
@@ -772,7 +774,7 @@ class ConfigurableTask(Task):
                 Instance(
                     request_type="loglikelihood",
                     doc=doc,
-                    arguments=("", " {}".format(choice)),
+                    arguments=("", "{}".format(choice)),
                     idx=i,
                     **kwargs,
                 )
```
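The request in this hunk scores each answer choice with an empty context, i.e. unconditionally; mutual-information scoring then subtracts that unconditional loglikelihood from the conditional one. A minimal sketch of the arithmetic (the function name is illustrative, not the harness API; the whitespace being reverted matters because `" choice"` and `"choice"` tokenize differently):

```python
def mutual_info_score(ll_conditional, ll_unconditional):
    # Pointwise mutual information of a choice:
    #   log P(choice | context) - log P(choice)
    # Both inputs are loglikelihoods, so the PMI is a simple difference.
    return ll_conditional - ll_unconditional

# Example: a choice that is likelier given the context than on its own
# gets a positive score.
print(mutual_info_score(-2.0, -5.0))  # -> 3.0
```

Because the unconditional string is fed to the tokenizer verbatim, adding or removing a leading space changes the token ids and therefore the loglikelihood, which is why this one-character whitespace change was worth reverting.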