gaoqiong / lm-evaluation-harness

Commit 5978dc4e, authored Jul 12, 2023 by lintangsutawika
Parent: 622bdda1

    superglue add yamls

Showing 10 changed files with 62 additions and 23 deletions.
lm_eval/tasks/super_glue/copa/default.yaml            +2 −3
lm_eval/tasks/super_glue/copa/utils.py                +4 −0
lm_eval/tasks/super_glue/multirc/default.yaml         +13 −0
lm_eval/tasks/super_glue/record/default.yaml          +14 −0
lm_eval/tasks/super_glue/record/promptsource-01.yaml  +0 −5
lm_eval/tasks/super_glue/record/promptsource-02.yaml  +0 −5
lm_eval/tasks/super_glue/record/util.py               +15 −0
lm_eval/tasks/super_glue/wic/default.yaml             +1 −1
lm_eval/tasks/super_glue/wic/utils.py                 +0 −9
lm_eval/tasks/super_glue/wsc/default.yaml             +13 −0
lm_eval/tasks/super_glue/copa/default.yaml

 group:
 - super-glue-lm-eval-v1
-task: "copa"
+task: copa
 dataset_path: super_glue
 dataset_name: copa
 output_type: multiple_choice
@@ -8,7 +8,6 @@ training_split: train
 validation_split: validation
 doc_to_text: !function utils.doc_to_text
 doc_to_target: !function utils.doc_to_target
-gold_alias: "{{label}}" # this will be cast to an int.
-template_aliases: "{% set answer_choices = [{{doc.choice1}}, 'b'] %} {{answer_choices}}"
+doc_to_choice: !function utils.doc_to_choice
 metric_list:
 - metric: acc
lm_eval/tasks/super_glue/copa/utils.py

@@ -15,3 +15,7 @@ def doc_to_target(doc):
     correct_choice = doc["choice1"] if doc["label"] == 0 else doc["choice2"]
     # Connect the sentences
     return " " + convert_choice(correct_choice)
+
+
+def doc_to_choice(doc):
+    return [" " + convert_choice(doc["choice1"]), " " + convert_choice(doc["choice2"])]
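The new doc_to_choice mirrors doc_to_target: both prepend a space and pass each candidate through convert_choice, so the gold continuation and the scored choices are formatted identically. A standalone sketch of the pair on a fabricated COPA doc; convert_choice is defined earlier in utils.py and is not shown in this diff, so the lowercase-first-character version below is only an assumed stand-in:

```python
# Sketch of the new COPA helpers. convert_choice lives earlier in utils.py;
# lowercasing the first character is an assumed stand-in for illustration.
def convert_choice(choice):
    return choice[0].lower() + choice[1:]

def doc_to_choice(doc):
    # Leading space so each choice tokenizes as a continuation of the premise.
    return [" " + convert_choice(doc["choice1"]), " " + convert_choice(doc["choice2"])]

doc = {"choice1": "The sun rose.", "choice2": "It started to rain."}
print(doc_to_choice(doc))  # [' the sun rose.', ' it started to rain.']
```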
lm_eval/tasks/super_glue/multirc/default.yaml  (new file)

group:
- super-glue-lm-eval-v1
task: multirc
dataset_path: super_glue
dataset_name: multirc
output_type: multiple_choice
training_split: train
validation_split: validation
doc_to_text: "{{paragraph}}\nQuestion: {{question}}\nAnswer:"
doc_to_target: label
doc_to_choice: "[\"{{answer}}\\nIs the answer correct? yes\", \"{{answer}}\\nIs the answer correct? no\"]"
metric_list:
- metric: acc
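Here doc_to_choice is a string template that renders to a Python list literal, yielding two per-document choices that each repeat the candidate answer. A rough sketch of that two-step evaluation, with the Jinja substitution approximated by str.replace (the harness's actual rendering machinery may differ):

```python
import ast

# The YAML-parsed template string: YAML's \\n leaves a literal backslash-n
# here, which only becomes a real newline once the list literal is parsed.
raw = '["{{answer}}\\nIs the answer correct? yes", "{{answer}}\\nIs the answer correct? no"]'

doc = {"answer": "The answer is Paris"}
rendered = raw.replace("{{answer}}", doc["answer"])  # stand-in for Jinja rendering
choices = ast.literal_eval(rendered)
# choices[0] == "The answer is Paris\nIs the answer correct? yes"
```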
lm_eval/tasks/super_glue/record/default.yaml  (new file)

group:
- super-glue-lm-eval-v1
task: record
dataset_path: super_glue
dataset_name: record
output_type: multiple_choice
training_split: train
validation_split: validation
doc_to_text: !function util.doc_to_text
doc_to_target: "{{answers}}"
doc_to_choice: "{{entities}}"
metric_list:
- metric: f1
- metric: em
lm_eval/tasks/super_glue/record/promptsource-01.yaml  (deleted)

include: promptsource-00.yaml
group:
- super-glue-promptsource
task: "Add sentence after after (continuation choices)"
use_prompt: "promptsource:Add sentence after after (continuation choices)"
lm_eval/tasks/super_glue/record/promptsource-02.yaml  (deleted)

include: promptsource-00.yaml
group:
- super-glue-promptsource
task: "Can you figure out…"
use_prompt: "promptsource:Can you figure out…"
lm_eval/tasks/super_glue/record/util.py  (new file)

def doc_to_text(doc):
    initial_text, *highlights = doc["passage"].strip().split("\n@highlight\n")
    text = initial_text + "\n\n"
    for highlight in highlights:
        text += f" - {highlight}.\n"
    return text


def format_answer(query, entity):
    return f" - {query}".replace("@placeholder", entity)


def doc_to_target(doc):
    # We only output the first correct entity in a doc
    return format_answer(query=doc["query"], entity=doc["answers"][0])
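ReCoRD passages carry their bullet-point summaries inline, separated by @highlight markers, and doc_to_text splits them back out into a readable prompt. Repeating that function here for a self-contained check on a fabricated passage:

```python
def doc_to_text(doc):
    # Split the article body from its "@highlight" bullet summaries.
    initial_text, *highlights = doc["passage"].strip().split("\n@highlight\n")
    text = initial_text + "\n\n"
    for highlight in highlights:
        text += f" - {highlight}.\n"
    return text

doc = {"passage": "Some article text.\n@highlight\nFirst point\n@highlight\nSecond point"}
print(doc_to_text(doc))
# Some article text.
#
#  - First point.
#  - Second point.
```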
lm_eval/tasks/super_glue/wic/default.yaml

@@ -6,7 +6,7 @@ dataset_name: wic
 output_type: multiple_choice
 training_split: train
 validation_split: validation
-doc_to_text: !function utils.doc_to_text
+doc_to_text: "Sentence 1: {{sentence1}}\nSentence 2: {{sentence2}}\nQuestion: Is the word '{{sentence1[start1:end1]}}' used in the same way in the two sentences above?\nAnswer:"
 doc_to_target: label
 doc_to_choice: ['no', 'yes']
 metric_list:
...
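The replacement template relies on Jinja slice syntax, {{sentence1[start1:end1]}}, to pull the target word out of the sentence by character offsets, which lets the Python helper below be deleted. The equivalent Python slice, on a made-up doc (field names follow the SuperGLUE wic schema; the sample values are fabricated):

```python
# Character-offset slicing as used by the new wic prompt template.
doc = {"sentence1": "The bank raised its rates.", "start1": 4, "end1": 8}
word = doc["sentence1"][doc["start1"]:doc["end1"]]
print(word)  # bank
```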
lm_eval/tasks/super_glue/wic/utils.py  (deleted)

def doc_to_text(doc):
    return (
        "Sentence 1: {}\nSentence 2: {}\nQuestion: Is the word '{}' used in the same way in the"
        " two sentences above?\nAnswer:".format(
            doc["sentence1"],
            doc["sentence2"],
            doc["sentence1"][doc["start1"] : doc["end1"]],
        )
    )
lm_eval/tasks/super_glue/wsc/default.yaml  (new file)

group:
- super-glue-lm-eval-v1
task: wsc
dataset_path: super_glue
dataset_name: wsc
output_type: multiple_choice
training_split: train
validation_split: validation
doc_to_text: !function preprocess_wsc.default_doc_to_text
doc_to_target: label
doc_to_choice: ['no', 'yes']
metric_list:
- metric: acc
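With output_type: multiple_choice and a bare field name as doc_to_target, the integer label is treated as an index into the doc_to_choice list, so for wsc a label of 0 resolves to 'no' and 1 to 'yes'. In miniature:

```python
# Gold-answer resolution for wsc: the integer "label" field indexes into
# the doc_to_choice list (the multiple_choice convention in the harness).
choices = ["no", "yes"]
doc = {"label": 1}
gold = choices[doc["label"]]
print(gold)  # yes
```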