OpenDAS / Pai-Megatron-Patch · Commits · 5add46aa

Commit 5add46aa authored Jan 09, 2025 by hepj

Add Megatron project
parent deb8370c

Pipeline #2199 failed in 0 seconds
Showing 20 changed files with 461 additions and 0 deletions (+461 -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/salient_translation_error_detection.yaml  +19 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/snarks.yaml  +19 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/sports_understanding.yaml  +21 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/temporal_sequences.yaml  +19 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/tracking_shuffled_objects_five_objects.yaml  +19 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/tracking_shuffled_objects_seven_objects.yaml  +19 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/tracking_shuffled_objects_three_objects.yaml  +19 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/utils.py  +224 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/web_of_lies.yaml  +20 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/word_sorting.yaml  +15 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/_fewshot_template_yaml  +22 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/boolean_expressions.yaml  +5 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/causal_judgement.yaml  +5 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/date_understanding.yaml  +5 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/disambiguation_qa.yaml  +5 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/dyck_languages.yaml  +5 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/formal_fallacies.yaml  +5 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/geometric_shapes.yaml  +5 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/hyperbaton.yaml  +5 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/logical_deduction_five_objects.yaml  +5 -0
Too many changes to show. To preserve performance only 1000 of 1000+ files are displayed.
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/salient_translation_error_detection.yaml (new file, 0 → 100644)

"dataset_name": "salient_translation_error_detection"
"description": "Detect the type of error in an English translation of a German source sentence.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step."
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_salient_translation_error_detection"
filter_list:
  - name: "flexible-extract"
    filter:
      - function: !function utils.MultiChoiceRegexFilter
        group_select: -1
        ignore_case: true
        ignore_punctuation: true
        regex_pattern: "(\\([A-Z]\\))"
      - function: "take_first"
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "((?<=The answer is )(.*)(?=.)|(?<=the answer is )(.*)(?=.)|(?<=The answer: )(.*)(?=.)|(?<=The final answer: )(.*)(?=.))"
      - function: "take_first"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/snarks.yaml (new file, 0 → 100644)

"dataset_name": "snarks"
"description": "Determine which of two sentences is sarcastic.\n\nAccording to Cambridge University Dictionary, sarcasm is \"the use of remarks that clearly mean the opposite of what they say, made in order to hurt someone's feelings or to criticize something in a humorous way.\" Sarcastic sentences often contain satirical or ironic utterances, hyperboles, ambivalent or witty remarks.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step."
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_snarks"
filter_list:
  - name: "flexible-extract"
    filter:
      - function: !function utils.MultiChoiceRegexFilter
        group_select: -1
        ignore_case: true
        ignore_punctuation: true
        regex_pattern: "(\\([A-Z]\\))"
      - function: "take_first"
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "((?<=The answer is )(.*)(?=.)|(?<=the answer is )(.*)(?=.)|(?<=The answer: )(.*)(?=.)|(?<=The final answer: )(.*)(?=.))"
      - function: "take_first"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/sports_understanding.yaml (new file, 0 → 100644)

"dataset_name": "sports_understanding"
"description": "Determine whether an artificially constructed sentence relating to sports is plausible or not.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step."
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_sports_understanding"
filter_list:
  - name: "flexible-extract"
    filter:
      - function: !function utils.MapRegexFilter
        group_select: -1
        ignore_case: true
        regex_pattern_to_value:
          \b(no|not plausible)\b: "no"
          \b(yes|plausible)\b: "yes"
      - function: "take_first"
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "((?<=The answer is )(.*)(?=.)|(?<=the answer is )(.*)(?=.)|(?<=The answer: )(.*)(?=.)|(?<=The final answer: )(.*)(?=.))"
      - function: "take_first"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/temporal_sequences.yaml (new file, 0 → 100644)

"dataset_name": "temporal_sequences"
"description": "Task description: Answer questions about which times certain events could have occurred.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step."
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_temporal_sequences"
filter_list:
  - name: "flexible-extract"
    filter:
      - function: !function utils.MultiChoiceRegexFilter
        group_select: -1
        ignore_case: true
        ignore_punctuation: true
        regex_pattern: "(\\([A-Z]\\))"
      - function: "take_first"
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "((?<=The answer is )(.*)(?=.)|(?<=the answer is )(.*)(?=.)|(?<=The answer: )(.*)(?=.)|(?<=The final answer: )(.*)(?=.))"
      - function: "take_first"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/tracking_shuffled_objects_five_objects.yaml (new file, 0 → 100644)

"dataset_name": "tracking_shuffled_objects_five_objects"
"description": "A task requiring determining the final positions of a set of objects given their initial positions and a description of a sequence of swaps.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step."
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_tracking_shuffled_objects_five_objects"
filter_list:
  - name: "flexible-extract"
    filter:
      - function: !function utils.MultiChoiceRegexFilter
        group_select: -1
        ignore_case: true
        ignore_punctuation: true
        regex_pattern: "(\\([A-Z]\\))"
      - function: "take_first"
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "((?<=The answer is )(.*)(?=.)|(?<=the answer is )(.*)(?=.)|(?<=The answer: )(.*)(?=.)|(?<=The final answer: )(.*)(?=.))"
      - function: "take_first"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/tracking_shuffled_objects_seven_objects.yaml (new file, 0 → 100644)

"dataset_name": "tracking_shuffled_objects_seven_objects"
"description": "A task requiring determining the final positions of a set of objects given their initial positions and a description of a sequence of swaps.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step."
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_tracking_shuffled_objects_seven_objects"
filter_list:
  - name: "flexible-extract"
    filter:
      - function: !function utils.MultiChoiceRegexFilter
        group_select: -1
        ignore_case: true
        ignore_punctuation: true
        regex_pattern: "(\\([A-Z]\\))"
      - function: "take_first"
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "((?<=The answer is )(.*)(?=.)|(?<=the answer is )(.*)(?=.)|(?<=The answer: )(.*)(?=.)|(?<=The final answer: )(.*)(?=.))"
      - function: "take_first"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/tracking_shuffled_objects_three_objects.yaml (new file, 0 → 100644)

"dataset_name": "tracking_shuffled_objects_three_objects"
"description": "A task requiring determining the final positions of a set of objects given their initial positions and a description of a sequence of swaps.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step."
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_tracking_shuffled_objects_three_objects"
filter_list:
  - name: "flexible-extract"
    filter:
      - function: !function utils.MultiChoiceRegexFilter
        group_select: -1
        ignore_case: true
        ignore_punctuation: true
        regex_pattern: "(\\([A-Z]\\))"
      - function: "take_first"
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "((?<=The answer is )(.*)(?=.)|(?<=the answer is )(.*)(?=.)|(?<=The answer: )(.*)(?=.)|(?<=The final answer: )(.*)(?=.))"
      - function: "take_first"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/utils.py (new file, 0 → 100644)

import collections
import re
import sys
import unicodedata

from lm_eval.filters.extraction import Filter, RegexFilter


class ExtendedRegexFilter(RegexFilter):
    punct_tbl = dict.fromkeys(
        i for i in range(sys.maxunicode) if unicodedata.category(chr(i)).startswith("P")
    )

    def __init__(
        self,
        regex_pattern: str = r"#### (\-?[0-9\.\,]+)",
        group_select=0,
        fallback: str = "[invalid]",
        ignore_case=False,
        ignore_punctuation=False,
        regexes_to_ignore=None,
    ) -> None:
        super().__init__(regex_pattern, group_select, fallback)
        self.ignore_case = ignore_case
        self.ignore_punctuation = ignore_punctuation
        self.regexes_to_ignore = regexes_to_ignore

    def filter_ignores(self, st):
        if self.regexes_to_ignore is not None:
            for s in self.regexes_to_ignore:
                st = re.sub(s, "", st)

        if self.ignore_case:
            st = st.lower()

        if self.ignore_punctuation:
            # https://stackoverflow.com/a/266162
            st = st.translate(self.punct_tbl)
        return st

    def find_match(self, regex, resp, convert_dict={}):
        match = regex.findall(resp)
        if match:
            match = match[self.group_select]
            if isinstance(match, tuple):
                match = [m for m in match if m][0]
            match = match.strip()
            if match and match in convert_dict:
                match = convert_dict[match]
        return match


class MapRegexFilter(ExtendedRegexFilter):
    def __init__(
        self,
        regex_pattern_to_value: dict = {},
        group_select=0,
        fallback: str = "[invalid]",
        ignore_case=False,
        ignore_punctuation=False,
        regexes_to_ignore=None,
    ) -> None:
        """
        regex_pattern_to_value: Match the regex pattern and change the result into the value
        group_select: Selects the (group_select)th match from the findall result. We use the whole regex_patterns, concatenated by |
        ignore_case: Lowers the case of response before matching with the given regex
        ignore_punctuation: Remove the punctuation before matching with the given regex
        regexes_to_ignore: Remove these regexes before matching with the given regex
        """
        super().__init__(
            "|".join(list(regex_pattern_to_value.keys())),
            group_select,
            fallback,
            ignore_case,
            ignore_punctuation,
            regexes_to_ignore,
        )
        self.regex_to_value = {
            re.compile(r): v for r, v in regex_pattern_to_value.items()
        }

    def apply(self, resps, docs):
        filtered_resps = []

        for r in resps:
            filtered = []
            for resp in r:
                whole_match_considering_group_select = self.find_match(
                    self.regex, self.filter_ignores(resp)
                )
                if whole_match_considering_group_select:
                    for regex, mapped_value in self.regex_to_value.items():
                        match = self.find_match(
                            regex,
                            self.filter_ignores(whole_match_considering_group_select),
                        )
                        if match:
                            match = mapped_value
                            break

                if not whole_match_considering_group_select or not match:
                    match = self.fallback

                filtered.append(match)
            filtered_resps.append(filtered)

        return filtered_resps


class NumberParseRegexFilter(ExtendedRegexFilter):
    def apply(self, resps, docs):
        # here, we assume we have a list, in which each element is
        # a list of model responses for some particular input/target pair.
        # so we process each of these (same input/target response sets)
        # independently (and keep them a list.)
        filtered_resps = []

        import regex
        from word2number import w2n

        # https://www.reddit.com/r/regex/comments/11a38uk/parsing_numbers_written_out_as_english_words
        english_number_regex = regex.compile(
            "((?:(?:zero|one|two|three|four|five|(?:twen|thir|for|fif|six|seven|nine)(?|teen|ty)|eight(?:|een|y)|ten|eleven|twelve|fourteen|hundred|thousand|(?:m|b|tr)illion)(?:zero|one|two|three|four|five|(?:twen|thir|for|fif|six|seven|nine)(?:|teen|ty)|eight(?|een|y)|ten|eleven|twelve|fourteen|hundred|thousand|(?:m|b|tr)illion|[^\S\r\n]|,|and|&)+)?(?:zero|one|two|three|four|five|(?:twen|thir|for|fif|six|seven|nine)(?|teen|ty)|eight(?|een|y)|ten|eleven|twelve|fourteen|hundred|thousand|(?:m|b|tr)illion))"
        )

        for r in resps:
            filtered = []
            for resp in r:
                match = self.find_match(self.regex, resp)
                if not match:
                    match = self.find_match(english_number_regex, resp.lower())
                    if match:
                        match = str(w2n.word_to_num(match))
                if not match:
                    match = self.fallback
                filtered.append(match)
            filtered_resps.append(filtered)

        return filtered_resps


class WordSortFilter(Filter):
    """ """

    def apply(self, resps, docs):
        filtered_resps = []

        for r, doc in zip(resps, docs):
            words = doc["input"].split("List:")[1].strip().split()
            regex = re.compile("|".join([f"\\b{w}\\b" for w in words]))
            filtered = []
            for resp in r:
                match = regex.findall(resp)
                match.reverse()
                ordered_words = reversed(
                    collections.OrderedDict(zip(match, [None] * len(match)))
                )
                filtered.append(" ".join(ordered_words))
            filtered_resps.append(filtered)

        return filtered_resps


class MultiChoiceRegexFilter(ExtendedRegexFilter):
    def __init__(self, *args, **kwargs):
        """
        regex_pattern: The basic regex pattern to use. If fails to match, we will use the customized match procedure
            - step 1 : We parse the choices between ([A-Z])s then try to find these choices in the response.
            - step 2 : We parse the choice with regex :[\s]*([A-?]), where ? varies by number of choices.
        group_select: Selects the (group_select)th match from the findall result.
        ignore_case: Ignores the case during step 1 matching
        ignore_punctuation: Remove the punctuation during step 1 matching
        regexes_to_ignore: Remove these regexes during step 1 matching
        """
        super().__init__(*args, **kwargs)

    def apply(self, resps, docs):
        # here, we assume we have a list, in which each element is
        # a list of model responses for some particular input/target pair.
        # so we process each of these (same input/target response sets)
        # independently (and keep them a list.)
        filtered_resps = []

        for r, doc in zip(resps, docs):
            fallback_regexes = []
            choice_to_alpha = {}
            next_alpha = "A"

            without_paren_fallback_regexes = []
            without_paren_to_target = {}

            multiple_choices_regex = re.compile(r"\([A-Z]\)([^\n^(]*)")
            match = multiple_choices_regex.findall(doc["input"])
            for m in match:
                m = self.filter_ignores(m.strip())
                fallback_regexes.append(f"{re.escape(m)}")
                choice_to_alpha[m] = f"({next_alpha})"

                without_paren_fallback_regexes.append(next_alpha)
                without_paren_to_target[next_alpha] = f"({next_alpha})"

                next_alpha = chr(ord(next_alpha) + 1)

            fallback_regex = re.compile("|".join(fallback_regexes))
            without_paren_fallback_regex = "|".join(without_paren_fallback_regexes)
            without_paren_fallback_regex = re.compile(
                f":[\s]*({without_paren_fallback_regex})"
            )

            filtered = []
            for resp in r:
                match = self.find_match(self.regex, resp)
                if not match:
                    match = self.find_match(
                        fallback_regex, self.filter_ignores(resp), choice_to_alpha
                    )
                    if not match:
                        match = self.find_match(
                            without_paren_fallback_regex, resp, without_paren_to_target
                        )
                if not match:
                    match = self.fallback
                filtered.append(match)
            filtered_resps.append(filtered)

        return filtered_resps
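For orientation (not part of the commit): a sketch of MultiChoiceRegexFilter's step-1 fallback described in the docstring above, using an invented document. The choices between ([A-Z])s are parsed out of the prompt, then a verbatim choice found in the response is mapped back to its letter.

import re

doc_input = "Which is a fruit?\nOptions:\n(A) apple\n(B) brick"

choice_to_alpha = {}
next_alpha = "A"
for text in re.findall(r"\([A-Z]\)([^\n^(]*)", doc_input):
    choice_to_alpha[text.strip()] = f"({next_alpha})"
    next_alpha = chr(ord(next_alpha) + 1)

fallback_regex = re.compile("|".join(re.escape(c) for c in choice_to_alpha))
matches = fallback_regex.findall("It has to be apple.")
print(choice_to_alpha[matches[-1]])  # -> "(A)"  (group_select: -1 keeps the last hit)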
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/web_of_lies.yaml (new file, 0 → 100644)

"dataset_name": "web_of_lies"
"description": "Evaluate a random boolean function expressed as a word problem.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step."
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_web_of_lies"
filter_list:
  - name: "flexible-extract"
    filter:
      - function: !function utils.MapRegexFilter
        group_select: -1
        ignore_case: true
        regex_pattern_to_value:
          \b(no|does not tell the truth|is not telling the truth)\b: "no"
          \b(yes|tells the truth|is telling the truth)\b: "yes"
      - function: "take_first"
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "((?<=The answer is )(.*)(?=.)|(?<=the answer is )(.*)(?=.)|(?<=The answer: )(.*)(?=.)|(?<=The final answer: )(.*)(?=.))"
      - function: "take_first"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/cot_zeroshot/word_sorting.yaml (new file, 0 → 100644)

"dataset_name": "word_sorting"
"description": "Sort a list of words.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step."
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_word_sorting"
filter_list:
  - name: "flexible-extract"
    filter:
      - function: !function utils.WordSortFilter
      - function: "take_first"
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "((?<=The answer is )(.*)(?=.)|(?<=the answer is )(.*)(?=.)|(?<=The answer: )(.*)(?=.)|(?<=The final answer: )(.*)(?=.))"
      - function: "take_first"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/_fewshot_template_yaml (new file, 0 → 100644)

group: bbh_fewshot
dataset_path: lukaemon/bbh
output_type: generate_until
test_split: test
doc_to_target: "{{target}}"
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    # ignore_case: true
    # ignore_punctuation: true
generation_kwargs:
  until:
    - "</s>"
    - "Q"
    - "\n\n"
  do_sample: false
  temperature: 0.0
num_fewshot: 0
metadata:
  version: 1.0
  num_fewshot: 3 # will be printed in results table
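For orientation (not part of the commit): a minimal sketch of the exact_match metric with mean aggregation configured above, independent of lm_eval's actual implementation; the data is invented.

preds = ["(B)", "(A)", "valid"]
golds = ["(B)", "(C)", "valid"]
scores = [1.0 if p == g else 0.0 for p, g in zip(preds, golds)]
print(sum(scores) / len(scores))  # -> 0.666...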
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/boolean_expressions.yaml (new file, 0 → 100644)

"dataset_name": "boolean_expressions"
"description": "Evaluate the result of a random Boolean expression.\n\n"
"doc_to_text": "Q: not ( ( not not True ) ) is\nA: False\n\nQ: True and False and not True and True is\nA: False\n\nQ: not not ( not ( False ) ) is\nA: True\n\nQ: {{input}}\nA:"
"include": "_fewshot_template_yaml"
"task": "bbh_fewshot_boolean_expressions"
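For orientation (not part of the commit): each few-shot question above is a literal Boolean expression, so the gold answers can be sanity-checked with Python itself.

print(not ((not not True)))                  # -> False
print(True and False and not True and True)  # -> False
print(not not (not (False)))                 # -> True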
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/causal_judgement.yaml (new file, 0 → 100644)

"dataset_name": "causal_judgement"
"description": "Answer questions about causal attribution.\n\n"
"doc_to_text": "Q: How would a typical person answer each of the following questions about causation?\nFrank T., had an ongoing dispute with his neighbor over a stretch of land and one day decided to shoot his neighbor in the body. Frank T. had no experience with guns, his hand slipped on the barrel of the gun, and the shot went wild. Nonetheless, the bullet bounced off a large boulder several feet away and hit the neighbor's body, causing significant injury. Did Frank T. intentionally shoot his neighbor in the body?\nOptions:\n- Yes\n- No\nA: No\n\nQ: How would a typical person answer each of the following questions about causation?\nSuzy and Billy are working on a project that is very important for our nation's security. The boss tells them both: \"Be sure that you are here at exactly 9 am. It is absolutely essential that you arrive at that time.\" Both Billy and Suzy arrive at 9 am. As it happens, there was a motion detector installed in the room where they arrived. The motion detector was set up to be triggered if at least one person appeared in the room at the same time. So the motion detector went off. Did Billy cause the motion detector to go off?\nOptions:\n- Yes\n- No\nA: Yes\n\nQ: How would a typical person answer each of the following questions about causation?\nGeorge and his sister Lena reunite at their parents' house for Thanksgiving. Whereas George just got into medical school, Lena is unhappy in her marriage and recently lost her job. Over the course of the day, George and Lena get into a number of heated arguments. Later in the afternoon they play a game of darts. They split the first two games, and the third game is close until the end. Who will win comes down to George's last shot. If he hits a high point region, he wins; if he hits a low point region, Lena wins. George thinks of the difficult time Lena is having, and he really wants to let her win. He aims the dart at the low point region. He sets up his shot and the dart lands in the low point region. After his shot, Lena wins the game and is very happy. Did George hit the low point region intentionally?\nOptions:\n- Yes\n- No\nA: Yes\n\nQ: {{input}}\nA:"
"include": "_fewshot_template_yaml"
"task": "bbh_fewshot_causal_judgement"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/date_understanding.yaml (new file, 0 → 100644)

"dataset_name": "date_understanding"
"description": "Infer the date from context.\n\n"
"doc_to_text": "Q: Today is Christmas Eve of 1937. What is the date 10 days ago in MM/DD/YYYY?\nOptions:\n(A) 12/14/2026\n(B) 12/14/1950\n(C) 12/14/2007\n(D) 12/14/1937\n(E) 07/14/1938\n(F) 12/14/1988\nA: (D)\n\nQ: Tomorrow is 11/12/2019. What is the date one year ago from today in MM/DD/YYYY?\nOptions:\n(A) 09/04/2018\n(B) 11/11/2018\n(C) 08/25/2018\n(D) 11/02/2018\n(E) 11/04/2018\nA: (B)\n\nQ: Jane and John married on Jan 2, 1958. It is their 5-year anniversary today. What is the date tomorrow in MM/DD/YYYY?\nOptions:\n(A) 01/11/1961\n(B) 01/03/1963\n(C) 01/18/1961\n(D) 10/14/1960\n(E) 01/03/1982\n(F) 12/03/1960\nA: (B)\n\nQ: {{input}}\nA:"
"include": "_fewshot_template_yaml"
"task": "bbh_fewshot_date_understanding"
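For orientation (not part of the commit): the gold answers in the few-shot prompt above can be reproduced with datetime.

from datetime import date, timedelta

print(date(1937, 12, 24) - timedelta(days=10))   # 1937-12-14 -> (D)
print(date(2019, 11, 12) - timedelta(days=366))  # 2018-11-11 -> (B)
print(date(1958 + 5, 1, 2) + timedelta(days=1))  # 1963-01-03 -> (B)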
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/disambiguation_qa.yaml (new file, 0 → 100644)

"dataset_name": "disambiguation_qa"
"description": "Clarify the meaning of sentences with ambiguous pronouns.\n\n"
"doc_to_text": "Q: In the following sentences, explain the antecedent of the pronoun (which thing the pronoun refers to), or state that it is ambiguous.\nSentence: The chief told the counselor that they took the day off.\nOptions:\n(A) The chief took the day off\n(B) The counselor took the day off\n(C) Ambiguous\nA: (A)\n\nQ: In the following sentences, explain the antecedent of the pronoun (which thing the pronoun refers to), or state that it is ambiguous.\nSentence: The manager sent a message to the secretary, but he didn't reply yet.\nOptions:\n(A) The secretary didn't reply yet\n(B) The manager didn't reply yet\n(C) Ambiguous\nA: (A)\n\nQ: In the following sentences, explain the antecedent of the pronoun (which thing the pronoun refers to), or state that it is ambiguous.\nSentence: Bailey will plan to meet the director at his office\nOptions:\n(A) It will be Bailey's office\n(B) It will be the director's office\n(C) Ambiguous\nA: (C)\n\nQ: {{input}}\nA:"
"include": "_fewshot_template_yaml"
"task": "bbh_fewshot_disambiguation_qa"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/dyck_languages.yaml (new file, 0 → 100644)

"dataset_name": "dyck_languages"
"description": "Correctly close a Dyck-n word.\n\n"
"doc_to_text": "Q: Complete the rest of the sequence, making sure that the parentheses are closed properly. Input: [ { [\nA: ] } ]\n\nQ: Complete the rest of the sequence, making sure that the parentheses are closed properly. Input: < > ( ( [ [ ( { } ) [ < > ] ]\nA: ] ) )\n\nQ: Complete the rest of the sequence, making sure that the parentheses are closed properly. Input: < [ < [ { < [ ] < { } > > } ] > { { ( ) } { < [ < > ] > }\nA: } ] >\n\nQ: {{input}}\nA:"
"include": "_fewshot_template_yaml"
"task": "bbh_fewshot_dyck_languages"
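For orientation (not part of the commit): closing a Dyck prefix is a stack walk, which reproduces the few-shot targets above.

PAIRS = {"(": ")", "[": "]", "{": "}", "<": ">"}

def complete(prefix: str) -> str:
    stack = []
    for tok in prefix.split():
        if tok in PAIRS:
            stack.append(tok)   # opener: push
        else:
            stack.pop()         # closer: pop its matching opener
    return " ".join(PAIRS[tok] for tok in reversed(stack))

print(complete("[ { ["))                          # -> "] } ]"
print(complete("< > ( ( [ [ ( { } ) [ < > ] ]"))  # -> "] ) )"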
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/formal_fallacies.yaml (new file, 0 → 100644)

"dataset_name": "formal_fallacies"
"description": "Distinguish deductively valid arguments from formal fallacies.\n\n"
"doc_to_text": "Q: \"It is not always easy to see who is related to whom -- and in which ways. The following argument pertains to this question: To begin with, Lesley is a close friend of Fernando. Moreover, being a close friend of Fernando or a schoolmate of Lowell is sufficient for being a great-grandfather of Leroy. It follows that Lesley is a great-grandfather of Leroy.\"\nIs the argument, given the explicitly stated premises, deductively valid or invalid?\nOptions:\n- valid\n- invalid\nA: valid\n\nQ: \"It is not always easy to see who is related to whom -- and in which ways. The following argument pertains to this question: Whoever is not a great-grandfather of Clyde is a stepbrother of Brian. Being an ancestor of Dana is sufficient for not being a great-grandfather of Clyde. We may conclude: Everyone who is an ancestor of Dana is a stepbrother of Brian, too.\"\nIs the argument, given the explicitly stated premises, deductively valid or invalid?\nOptions:\n- valid\n- invalid\nA: valid\n\nQ: \"It is not always easy to grasp who is consuming which products. The following argument pertains to this question: Every infrequent user of Paul Mitchell shampoo is either a rare consumer of Nioxin shampoo or a loyal buyer of Caress soap, or both. No regular consumer of Lush soap is a rare consumer of Nioxin shampoo and, in the same time, a loyal buyer of Caress soap. It follows that whoever is an infrequent user of Paul Mitchell shampoo is not a regular consumer of Lush soap.\"\nIs the argument, given the explicitly stated premises, deductively valid or invalid?\nOptions:\n- valid\n- invalid\nA: invalid\n\nQ: {{input}}\nA:"
"include": "_fewshot_template_yaml"
"task": "bbh_fewshot_formal_fallacies"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/geometric_shapes.yaml (new file, 0 → 100644)

"dataset_name": "geometric_shapes"
"description": "Name geometric shapes from their SVG paths.\n\n"
"doc_to_text": "Q: This SVG path element <path d=\"M 31.00,73.00 L 32.00,59.00 L 44.00,50.00 L 49.00,41.00 L 64.00,37.00 L 71.00,55.00 L 64.00,76.00 L 52.00,61.00 L 31.00,73.00\"/> draws a\nOptions:\n(A) circle\n(B) heptagon\n(C) hexagon\n(D) kite\n(E) line\n(F) octagon\n(G) pentagon\n(H) rectangle\n(I) sector\n(J) triangle\nA: (F)\n\nQ: This SVG path element <path d=\"M 14.19,26.04 L 51.43,39.21 L 58.44,36.69 L 56.63,30.17 L 48.53,26.66 L 14.19,26.04\"/> draws a\nOptions:\n(A) circle\n(B) heptagon\n(C) hexagon\n(D) kite\n(E) line\n(F) octagon\n(G) pentagon\n(H) rectangle\n(I) sector\n(J) triangle\nA: (G)\n\nQ: This SVG path element <path d=\"M 41.00,43.00 L 37.00,34.00 L 41.00,33.00 L 45.00,34.00 L 41.00,43.00\"/> draws a\nOptions:\n(A) circle\n(B) heptagon\n(C) hexagon\n(D) kite\n(E) line\n(F) octagon\n(G) pentagon\n(H) rectangle\n(I) sector\n(J) triangle\nA: (D)\n\nQ: {{input}}\nA:"
"include": "_fewshot_template_yaml"
"task": "bbh_fewshot_geometric_shapes"
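For orientation (not part of the commit): the gold labels above follow from counting distinct vertices in each path (the last point repeats the first, closing the polygon).

import re

path = ("M 31.00,73.00 L 32.00,59.00 L 44.00,50.00 L 49.00,41.00 "
        "L 64.00,37.00 L 71.00,55.00 L 64.00,76.00 L 52.00,61.00 L 31.00,73.00")
points = re.findall(r"([\d.]+),([\d.]+)", path)
print(len(set(points)))  # -> 8 distinct vertices, i.e. an octagon: (F)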
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/hyperbaton.yaml (new file, 0 → 100644)

"dataset_name": "hyperbaton"
"description": "Order adjectives correctly in English sentences.\n\n"
"doc_to_text": "Q: Which sentence has the correct adjective order:\nOptions:\n(A) rubber terrible ship\n(B) terrible rubber ship\nA: (B)\n\nQ: Which sentence has the correct adjective order:\nOptions:\n(A) repulsive small Brazilian exercise ship\n(B) Brazilian repulsive exercise small ship\nA: (A)\n\nQ: Which sentence has the correct adjective order:\nOptions:\n(A) blue gold wonderful square shoe\n(B) wonderful square blue gold shoe\nA: (B)\n\nQ: {{input}}\nA:"
"include": "_fewshot_template_yaml"
"task": "bbh_fewshot_hyperbaton"
LM-Evaluation-Harness-240310/lm_eval/tasks/bbh/fewshot/logical_deduction_five_objects.yaml (new file, 0 → 100644)

"dataset_name": "logical_deduction_five_objects"
"description": "A logical deduction task which requires deducing the order of a sequence of objects.\n\n"
"doc_to_text": "Q: The following paragraphs each describe a set of three objects arranged in a fixed order. The statements are logically consistent within each paragraph. In a golf tournament, there were three golfers: Amy, Eli, and Eve. Eve finished above Amy. Eli finished below Amy.\nOptions:\n(A) Amy finished last\n(B) Eli finished last\n(C) Eve finished last\nA: (B)\n\nQ: The following paragraphs each describe a set of three objects arranged in a fixed order. The statements are logically consistent within each paragraph. On a shelf, there are three books: a white book, a green book, and an orange book. The green book is to the right of the white book. The orange book is the rightmost.\nOptions:\n(A) The white book is the leftmost\n(B) The green book is the leftmost\n(C) The orange book is the leftmost\nA: (A)\n\nQ: The following paragraphs each describe a set of three objects arranged in a fixed order. The statements are logically consistent within each paragraph. On a shelf, there are three books: a red book, a gray book, and a white book. The white book is to the left of the gray book. The red book is the second from the left.\nOptions:\n(A) The red book is the leftmost\n(B) The gray book is the leftmost\n(C) The white book is the leftmost\nA: (C)\n\nQ: {{input}}\nA:"
"include": "_fewshot_template_yaml"
"task": "bbh_fewshot_logical_deduction_five_objects"