gaoqiong / lm-evaluation-harness

Commit 993baaa6, authored Sep 12, 2023 by lintangsutawika

remove comments

parent 973d563a
Showing 1 changed file with 0 additions and 17 deletions.
lm_eval/evaluator.py
@@ -115,23 +115,6 @@ def simple_evaluate(
             + "_rank" + str(lm.rank) + ".db",
         )
-    # def _change_fewshot(task_dict):
-    #     for task_name in task_dict.keys():
-    #         task_obj = task_dict[task_name]
-    #         if type(task_obj) == tuple:
-    #             group, task_obj = task_obj
-    #             if task_obj
-    #         config = task_obj._config
-    #         if num_fewshot is not None:
-    #             if config["num_fewshot"] > 0:
-    #                 default_num_fewshot = config["num_fewshot"]
-    #                 eval_logger.warning(
-    #                     f"Overwriting default num_fewshot of {task_name} from {default_num_fewshot} to {num_fewshot}"
-    #                 )
-    #         task_obj._config["num_fewshot"] = num_fewshot
     task_dict = lm_eval.tasks.get_task_dict(tasks)
     for task_name in task_dict.keys():
         task_obj = task_dict[task_name]
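For context, the deleted comment block described overriding each task's default `num_fewshot` with a user-supplied value, warning when a task-specific default is replaced. Below is a minimal sketch of that logic; the helper name `override_num_fewshot` and the plain-dict task configs are illustrative stand-ins, not the harness's actual task objects or API.

```python
import logging

logging.basicConfig(level=logging.WARNING)
eval_logger = logging.getLogger("lm-eval")


def override_num_fewshot(task_configs, num_fewshot):
    """Override each task's num_fewshot setting, warning when a
    nonzero task default is replaced (mirrors the deleted comment).

    task_configs: mapping of task name -> config dict (illustrative).
    """
    if num_fewshot is None:
        # No override requested; leave task defaults untouched.
        return task_configs
    for task_name, config in task_configs.items():
        default = config.get("num_fewshot", 0)
        if default > 0:
            eval_logger.warning(
                f"Overwriting default num_fewshot of {task_name} "
                f"from {default} to {num_fewshot}"
            )
        config["num_fewshot"] = num_fewshot
    return task_configs


configs = {"hellaswag": {"num_fewshot": 10}, "lambada": {}}
override_num_fewshot(configs, 5)
print(configs["hellaswag"]["num_fewshot"])  # 5
```

Passing `num_fewshot=None` leaves every task at its own default, which is why the check happens before the loop rather than per task.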