gaoqiong / lm-evaluation-harness

Commit cbbe3b23, authored May 15, 2023 by lintangsutawika
Parent: cfbfa71c

Commit message: added more samples

Showing 5 changed files with 81 additions and 0 deletions:

    lm_eval/tasks/super_glue/cb/t5-prompt.yaml      +16 -0
    lm_eval/tasks/super_glue/copa/t5-prompt.yaml    +16 -0
    lm_eval/tasks/super_glue/record/t5-prompt.yaml  +16 -0
    lm_eval/tasks/super_glue/wsc/preprocess_wsc.py  +17 -0
    lm_eval/tasks/super_glue/wsc/t5-prompt.yaml     +16 -0
lm_eval/tasks/super_glue/cb/t5-prompt.yaml (new file, 0 → 100644)

group:
  - super-glue-t5-prompt
task: t5-prompt
reference: "From Raffel et al. 2019"
dataset_path: super_glue
dataset_name: cb
training_split: train
validation_split: validation
doc_to_text: "cb hypothesis: {{hypothesis}} premise: {{premise}}"
doc_to_target: "{% set answer_choices = ['entailment', 'contradiction', 'neutral'] %}{{answer_choices[label]}}"
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: true
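As a sanity check (not part of this commit), here is a minimal sketch of how a doc_to_text / doc_to_target pair like the one above renders for a hypothetical CB record, assuming the templates behave like plain Jinja2; the sample document and its label are made up for illustration.

# Minimal rendering sketch, assuming plain Jinja2 semantics and a hypothetical CB record.
from jinja2 import Template

doc = {
    "premise": "The cat sat on the mat all afternoon.",
    "hypothesis": "The cat was on the mat.",
    "label": 0,  # index into the answer_choices list above: 0 = entailment
}

doc_to_text = Template("cb hypothesis: {{hypothesis}} premise: {{premise}}")
doc_to_target = Template(
    "{% set answer_choices = ['entailment', 'contradiction', 'neutral'] %}"
    "{{answer_choices[label]}}"
)

print(doc_to_text.render(**doc))
# cb hypothesis: The cat was on the mat. premise: The cat sat on the mat all afternoon.
print(doc_to_target.render(**doc))
# entailment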
lm_eval/tasks/super_glue/copa/t5-prompt.yaml (new file, 0 → 100644)

group:
  - super-glue-t5-prompt
task: t5-prompt
reference: "From Raffel et al. 2019"
dataset_path: super_glue
dataset_name: copa
training_split: train
validation_split: validation
doc_to_text: "copa choice1: {{choice1}} choice2: {{choice2}} question: {{question}}"
doc_to_target: "{% set answer_choices = ['False', 'True'] %}{{answer_choices[label]}}"
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: true
lm_eval/tasks/super_glue/record/t5-prompt.yaml (new file, 0 → 100644)

group:
  - super-glue-t5-prompt
task: t5-prompt
reference: "From Raffel et al. 2019"
dataset_path: super_glue
dataset_name: record
training_split: train
validation_split: validation
doc_to_text: "record query: {{query}} entities: {{entities}} passage: {{passage}}"
doc_to_target: "{{answers}}"
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: true
lm_eval/tasks/super_glue/wsc/preprocess_wsc.py (new file, 0 → 100644)

import re


def doc_to_text(x):
    # Mark span1 with '*' and span2 with '#' in the WSC passage,
    # following the T5 (Raffel et al. 2019) preprocessing.
    def _mark_span(text, span_str, span_idx, mark):
        # Build a regex that matches `span_str` when it is the word at position `span_idx`.
        pattern_tmpl = r'^((?:\S+\s){N})(W)'
        pattern = re.sub('N', str(span_idx), pattern_tmpl)
        pattern = re.sub('W', span_str, pattern)
        # Wrap the matched span in the marker character.
        return re.sub(pattern, r'\1{0} \2 {0}'.format(mark), text)

    text = x['text']
    text = _mark_span(text, x['span1_text'], x['span1_index'], '*')
    # Compensate for the two "words" added by the previous step.
    span2_index = x['span2_index'] + 2 * (x['span1_index'] < x['span2_index'])
    text = _mark_span(text, x['span2_text'], span2_index, '#')
    return text
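For illustration only (not part of the commit), a quick check of doc_to_text on a made-up WSC-style record; the field names and word-level span indices follow the super_glue/wsc layout this code assumes.

# Usage sketch with a hypothetical record; span indices count words, as the regex expects.
example = {
    "text": "Mark told Pete many lies about himself, which Pete included in his book.",
    "span1_text": "Pete",
    "span1_index": 2,   # the first "Pete" (word 2), not the later occurrence (word 8)
    "span2_text": "his",
    "span2_index": 11,
}
print(doc_to_text(example))
# Mark told * Pete * many lies about himself, which Pete included in # his # book.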
lm_eval/tasks/super_glue/wsc/t5-prompt.yaml (new file, 0 → 100644)

group:
  - super-glue-t5-prompt
task: t5-prompt
reference: "From Raffel et al. 2019"
dataset_path: super_glue
dataset_name: wsc
training_split: train
validation_split: validation
doc_to_text: !function "preprocess_wsc.doc_to_text"
doc_to_target: "{{candidates[label]}}"
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: true