gaoqiong / lm-evaluation-harness

Commit 2041dc34, authored Oct 03, 2023 by haileyschoelkopf

Merge branch 'big-refactor' into bigbench

Parents: 67c0f73a 15f4a3ef

Showing 20 of 201 changed files, with 198 additions and 2 deletions (+198 −2).
lm_eval/tasks/cmmlu/cmmlu_default_world_history.yaml   +4 −0
lm_eval/tasks/cmmlu/cmmlu_default_world_religions.yaml +4 −0
lm_eval/tasks/crows_pairs/README.md                    +1 −2
lm_eval/tasks/csatqa/_default_csatqa_yaml              +15 −0
lm_eval/tasks/csatqa/_generate_configs.py              +51 −0
lm_eval/tasks/csatqa/csatqa_gr.yaml                    +3 −0
lm_eval/tasks/csatqa/csatqa_li.yaml                    +3 −0
lm_eval/tasks/csatqa/csatqa_rch.yaml                   +3 −0
lm_eval/tasks/csatqa/csatqa_rcs.yaml                   +3 −0
lm_eval/tasks/csatqa/csatqa_rcss.yaml                  +3 −0
lm_eval/tasks/csatqa/csatqa_wr.yaml                    +3 −0
lm_eval/tasks/csatqa/utils.py                          +20 −0
lm_eval/tasks/mgsm/native_cot/cot_yaml                 +29 −0
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_bn.yaml  +8 −0
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_de.yaml  +8 −0
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_en.yaml  +8 −0
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_es.yaml  +8 −0
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_fr.yaml  +8 −0
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_ja.yaml  +8 −0
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_ru.yaml  +8 −0
lm_eval/tasks/cmmlu/cmmlu_default_world_history.yaml (new file, mode 100644)

"dataset_name": "world_history"
"description": "以下是关于世界历史的单项选择题,请直接给出正确答案的选项。\n\n" # "The following are single-choice questions about world history; please give the option of the correct answer directly."
"include": "_default_template_yaml"
"task": "cmmlu_world_history"
lm_eval/tasks/cmmlu/cmmlu_default_world_religions.yaml (new file, mode 100644)

"dataset_name": "world_religions"
"description": "以下是关于世界宗教的单项选择题,请直接给出正确答案的选项。\n\n" # "The following are single-choice questions about world religions; please give the option of the correct answer directly."
"include": "_default_template_yaml"
"task": "cmmlu_world_religions"
lm_eval/tasks/crows_pairs/README.md

@@ -93,10 +93,9 @@ All tasks evaluate the percentage of more-stereotypical sentences that are rated

* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation?
* [x] The original paper does not for causal language models, so this is a novel formulation of the task for autoregressive LMs.

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
* [x] This matches the evaluations performed in the [Pythia paper](https://arxiv.org/abs/2304.01373)
lm_eval/tasks/csatqa/_default_csatqa_yaml (new file, mode 100644)
group: csatqa
dataset_path: EleutherAI/csatqa
test_split: test
output_type: multiple_choice
process_docs: !function utils.process_docs
doc_to_text: "{{question}}"
doc_to_choice: "{{choices}}"
doc_to_target: "{{gold}}"
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
lm_eval/tasks/csatqa/_generate_configs.py (new file, mode 100644)

"""
Take in a YAML, and output all other splits with this YAML
"""
import os
import yaml
import argparse

from tqdm import tqdm

from lm_eval.logger import eval_logger

SUBSETS = ["WR", "GR", "RCS", "RCSS", "RCH", "LI"]


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--base_yaml_path", required=True)
    parser.add_argument("--save_prefix_path", default="csatqa")
    parser.add_argument("--task_prefix", default="")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()

    # get filename of base_yaml so we can `"include": ` it in our other YAMLs.
    base_yaml_name = os.path.split(args.base_yaml_path)[-1]
    with open(args.base_yaml_path) as f:
        base_yaml = yaml.full_load(f)

    for name in tqdm(SUBSETS):
        yaml_dict = {
            "include": base_yaml_name,
            "task": f"csatqa_{args.task_prefix}_{name}"
            if args.task_prefix != ""
            else f"csatqa_{name.lower()}",
            "dataset_name": name,
        }

        file_save_path = args.save_prefix_path + f"_{name.lower()}.yaml"
        eval_logger.info(f"Saving yaml for subset {name} to {file_save_path}")
        with open(file_save_path, "w") as yaml_file:
            yaml.dump(
                yaml_dict,
                yaml_file,
                width=float("inf"),
                allow_unicode=True,
                default_style='"',
            )
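As a quick illustration of the naming rule in `_generate_configs.py`: a non-empty `--task_prefix` keeps the subset name verbatim, while the default path lowercases it. The helper below is a standalone sketch, not part of the commit:

```python
# Hypothetical helper mirroring the task-name expression in _generate_configs.py:
# f"csatqa_{args.task_prefix}_{name}" when a prefix is given,
# else f"csatqa_{name.lower()}".
def csatqa_task_name(name: str, task_prefix: str = "") -> str:
    if task_prefix != "":
        return f"csatqa_{task_prefix}_{name}"
    return f"csatqa_{name.lower()}"

print(csatqa_task_name("GR"))          # csatqa_gr
print(csatqa_task_name("RCSS", "ko"))  # csatqa_ko_RCSS
```

This is why the generated files below are named `csatqa_gr.yaml`, `csatqa_li.yaml`, and so on.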
lm_eval/tasks/csatqa/csatqa_gr.yaml (new file, mode 100644)

"dataset_name": "GR"
"include": "_default_csatqa_yaml"
"task": "csatqa_gr"
lm_eval/tasks/csatqa/csatqa_li.yaml (new file, mode 100644)

"dataset_name": "LI"
"include": "_default_csatqa_yaml"
"task": "csatqa_li"
lm_eval/tasks/csatqa/csatqa_rch.yaml (new file, mode 100644)

"dataset_name": "RCH"
"include": "_default_csatqa_yaml"
"task": "csatqa_rch"
lm_eval/tasks/csatqa/csatqa_rcs.yaml (new file, mode 100644)

"dataset_name": "RCS"
"include": "_default_csatqa_yaml"
"task": "csatqa_rcs"
lm_eval/tasks/csatqa/csatqa_rcss.yaml (new file, mode 100644)

"dataset_name": "RCSS"
"include": "_default_csatqa_yaml"
"task": "csatqa_rcss"
lm_eval/tasks/csatqa/csatqa_wr.yaml (new file, mode 100644)

"dataset_name": "WR"
"include": "_default_csatqa_yaml"
"task": "csatqa_wr"
lm_eval/tasks/csatqa/utils.py (new file, mode 100644)

import datasets


def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
    def _process_doc(doc):
        instruction = f"""다음을 읽고 정답으로 알맞은 것을 고르시요.
### Context: {doc["context"]}
### Question: {doc["question"]}
### Options:
(1) {doc['option#1']}\n(2) {doc["option#2"]}\n(3) {doc["option#3"]}\n(4) {doc['option#4']}\n(5) {doc['option#5']}
### Answer: 주어진 문제의 정답은"""
        out_doc = {
            "question": instruction,
            "choices": ["(1)", "(2)", "(3)", "(4)", "(5)"],
            "gold": int(doc["gold"]) - 1,
        }
        return out_doc

    return dataset.map(_process_doc)
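A minimal sketch of what `process_docs` produces per document, using a plain dict instead of a `datasets.Dataset` so it runs without the `datasets` dependency. The sample values are invented, and `build_doc` is an illustrative stand-in for `_process_doc`:

```python
def build_doc(doc: dict) -> dict:
    # Mirror _process_doc: fold context, question, and the five options into
    # one Korean instruction prompt, and shift the 1-based gold label to the
    # 0-based index that the multiple_choice output type expects.
    options = "\n".join(f"({i}) {doc[f'option#{i}']}" for i in range(1, 6))
    instruction = (
        "다음을 읽고 정답으로 알맞은 것을 고르시요.\n"
        f"### Context: {doc['context']}\n"
        f"### Question: {doc['question']}\n"
        f"### Options:\n{options}\n"
        "### Answer: 주어진 문제의 정답은"
    )
    return {
        "question": instruction,
        "choices": ["(1)", "(2)", "(3)", "(4)", "(5)"],
        "gold": int(doc["gold"]) - 1,
    }

sample = {
    "context": "지문",
    "question": "질문",
    **{f"option#{i}": f"보기 {i}" for i in range(1, 6)},
    "gold": "3",
}
print(build_doc(sample)["gold"])  # 2 (third choice, zero-indexed)
```

The model then scores the five choice strings "(1)" through "(5)" against the prompt, and the shifted `gold` index picks the correct one.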
lm_eval/tasks/mgsm/native_cot/cot_yaml (new file, mode 100644)

# This file will be included in the generated language-specific task configs.
# It doesn't have a yaml file extension as it is not meant to be imported directly
# by the harness.
group: mgsm_cot_native
dataset_path: juletxara/mgsm
dataset_name: null # Overridden by language-specific config.
output_type: greedy_until
training_split: train
test_split: test
target_delimiter: " "
generation_kwargs:
  until:
    - "\n\n"
    - "\n"
  do_sample: false
  temperature: 0.0
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: true
filter_list:
  - name: "get-answer"
    filter:
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]+)"
      - function: "take_first"
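The `get-answer` filter chain above (a `regex` pass followed by `take_first`) can be reproduced with the stdlib `re` module. This is a hedged sketch of the filter's behavior, not harness code, and the sample completions are invented:

```python
import re

# Same pattern as the regex_pattern in the filter_list entry above.
ANSWER_RE = re.compile(r"The answer is (\-?[0-9\.\,]+)")

def extract_answer(completion):
    # "regex" collects every candidate match; "take_first" keeps only the first.
    matches = ANSWER_RE.findall(completion)
    return matches[0] if matches else None

print(extract_answer("There are 6 cars. The answer is 6"))  # 6
```

Extracted strings are then compared under `exact_match` with case and punctuation ignored, per the `metric_list` settings.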
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_bn.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: bn
doc_to_target: '{% if answer is not none %}{{answer[16+1:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nধাপে ধাপে উত্তর:"}}{% else %}{{"প্রশ্ন: "+question+"\nধাপে ধাপে উত্তর:"}}{% endif %}'
include: cot_yaml
task: mgsm_bn_native_cot
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_de.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: de
doc_to_target: '{% if answer is not none %}{{answer[28+1:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nSchritt-für-Schritt-Antwort:"}}{% else %}{{"Frage: "+question+"\nSchritt-für-Schritt-Antwort:"}}{% endif %}'
include: cot_yaml
task: mgsm_de_native_cot
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_en.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: en
doc_to_target: '{% if answer is not none %}{{answer[20+1:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Question: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
include: cot_yaml
task: mgsm_en_native_cot
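The `answer[20+1:]` slice in the English `doc_to_target` drops the fixed "Step-by-Step Answer:" prefix (20 characters) plus the space after it; the per-language offsets (16 for bn, 28 for de, and so on) are just the lengths of the corresponding native prefixes. A small Python check of that arithmetic, with an invented sample answer string:

```python
# len("Step-by-Step Answer:") == 20; the +1 additionally skips the space
# after the colon, so the target is the bare reasoning-plus-answer text.
PREFIX = "Step-by-Step Answer:"
answer = "Step-by-Step Answer: There are 3 cars. The answer is 3."

assert len(PREFIX) == 20
target = answer[20 + 1:]
print(target)  # There are 3 cars. The answer is 3.
```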
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_es.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: es
doc_to_target: '{% if answer is not none %}{{answer[22+1:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nRespuesta paso a paso:"}}{% else %}{{"Pregunta: "+question+"\nRespuesta paso a paso:"}}{% endif %}'
include: cot_yaml
task: mgsm_es_native_cot
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_fr.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: fr
doc_to_target: '{% if answer is not none %}{{answer[25+1:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nRéponse étape par étape :"}}{% else %}{{"Question : "+question+"\nRéponse étape par étape :"}}{% endif %}'
include: cot_yaml
task: mgsm_fr_native_cot
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_ja.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: ja
doc_to_target: '{% if answer is not none %}{{answer[10+1:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nステップごとの答え:"}}{% else %}{{"問題: "+question+"\nステップごとの答え:"}}{% endif %}'
include: cot_yaml
task: mgsm_ja_native_cot
lm_eval/tasks/mgsm/native_cot/mgsm_cot_native_ru.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: ru
doc_to_target: '{% if answer is not none %}{{answer[17+1:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nПошаговоерешение:"}}{% else %}{{"Задача: "+question+"\nПошаговоерешение:"}}{% endif %}'
include: cot_yaml
task: mgsm_ru_native_cot