gaoqiong / lm-evaluation-harness · Commits

Commit b11f7f37, authored Sep 12, 2023 by lintangsutawika

    add TODO

Parent: d29c0940
Changes: 1 changed file with 9 additions and 1 deletion.
lm_eval/prompts/__init__.py (+9, -1)

import ast

from lm_eval import utils
from lm_eval.logger import eval_logger

...
@@ -63,6 +65,12 @@ def load_prompt_list(use_prompt: str, dataset_name=None, subset_name=None, **kwa
    else:
        prompts = DatasetTemplates(dataset_name=dataset_name, subset_name=subset_name)

    category_name, prompt_name = use_prompt.split(":")
    # TODO allow to multiple prompt naming
    # category_name, *prompt_name = use_prompt.split(":")
    # if len(prompt_name) > 1:
    #     prompt_list = []
    #     for prompt in prompt_name:
    #         prompt_list.append(utils.pattern_match(prompt_name, prompts.all_template_names))
    # else:
    prompt_list = utils.pattern_match(prompt_name, prompts.all_template_names)

    return [":".join([category_name, prompt]) for prompt in prompt_list]
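Note on the TODO: the commented-out sketch in this commit has a small bug if it is ever enabled as written, since the loop body matches the whole `prompt_name` list rather than the current `prompt`. Below is a minimal, hypothetical sketch (not part of this commit) of how the TODO could be completed, assuming `use_prompt` takes the form "category:name1:name2:..." and that `utils.pattern_match(pattern, names)` returns a list of matching template names:

# Hypothetical completion of the TODO above -- not part of this commit.
# Assumes use_prompt looks like "category:name1:name2:..." and that
# utils.pattern_match(pattern, names) returns a list of matching names.
category_name, *prompt_names = use_prompt.split(":")

prompt_list = []
for prompt in prompt_names:
    # Match each requested prompt individually; extend (not append)
    # keeps prompt_list a flat list of template name strings.
    prompt_list.extend(utils.pattern_match(prompt, prompts.all_template_names))

return [":".join([category_name, prompt]) for prompt in prompt_list]

Using extend also means a single-prompt request ("category:name") falls out of the same code path, so the len check in the commented-out version would no longer be needed.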