gaoqiong / lm-evaluation-harness · Commits

Commit cda25fef (unverified)
Authored Jan 02, 2024 by Lintang Sutawika; committed by GitHub on Jan 02, 2024

Merge branch 'main' into standardize_metrics

Parents: dfb41835 4d10ad56
Changes: 249 changed files in total; this page shows 20 of them, with 18 additions and 22 deletions (+18, -22).
lm_eval/tasks/mmlu/flan_n_shot/generative/_mmlu_flan_generative_template_yaml (+1, -1)
lm_eval/tasks/mmlu/flan_n_shot/loglikelihood/_mmlu_flan_loglikelihood_template_yaml (+1, -1)
lm_eval/tasks/model_written_evals/advanced_ai_risk/_generate_configs.py (+0, -2)
lm_eval/tasks/model_written_evals/advanced_ai_risk/_template_yaml (+1, -1)
lm_eval/tasks/model_written_evals/persona/_generate_configs.py (+0, -2)
lm_eval/tasks/model_written_evals/persona/_template_yaml (+1, -1)
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_nlp_survey.yaml (+1, -1)
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_philpapers2020.yaml (+1, -1)
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_political_typology_quiz.yaml (+1, -1)
lm_eval/tasks/model_written_evals/winogenerated/_template_yaml (+1, -1)
lm_eval/tasks/mutual/mutual.yaml (+1, -1)
lm_eval/tasks/nq_open/nq_open.yaml (+1, -1)
lm_eval/tasks/openbookqa/openbookqa.yaml (+1, -1)
lm_eval/tasks/paws-x/_generate_config.py (+0, -1)
lm_eval/tasks/paws-x/pawsx_template_yaml (+1, -1)
lm_eval/tasks/pile/pile_arxiv.yaml (+1, -1)
lm_eval/tasks/piqa/piqa.yaml (+1, -1)
lm_eval/tasks/polemo2/polemo2_in.yaml (+1, -1)
lm_eval/tasks/prost/corypaik_prost.yaml (+1, -1)
lm_eval/tasks/pubmedqa/preprocess_pubmedqa.py (+2, -1)
lm_eval/tasks/mmlu/flan_n_shot/generative/_mmlu_flan_generative_template_yaml

@@ -13,4 +13,4 @@ metric_list:
     aggregation: mean
     higher_is_better: true
 metadata:
-  - version: 0.0
+  version: 0.0
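Note: this one-line change repeats across most of the yaml files in this commit. metadata previously parsed as a one-element YAML list and now parses as a plain mapping. A minimal illustration with PyYAML; the snippet is illustrative, not code from the repository:

import yaml

# Before: metadata is a one-element list of dicts.
old = yaml.safe_load("metadata:\n  - version: 0.0")
assert old["metadata"][0]["version"] == 0.0  # extra [0] index required

# After: metadata is a plain mapping.
new = yaml.safe_load("metadata:\n  version: 0.0")
assert new["metadata"]["version"] == 0.0  # direct key lookup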
lm_eval/tasks/mmlu/flan_n_shot/loglikelihood/_mmlu_flan_loglikelihood_template_yaml

@@ -13,4 +13,4 @@ metric_list:
     aggregation: mean
     higher_is_better: true
 metadata:
-  - version: 0.0
+  version: 0.0
lm_eval/tasks/model_written_evals/advanced_ai_risk/_generate_configs.py

 import yaml
-import inspect
 import datasets

 from tqdm import tqdm


 def main() -> None:
     dataset_path = "EleutherAI/advanced_ai_risk"
     for task in tqdm(datasets.get_dataset_infos(dataset_path).keys()):
         file_name = f"{task}.yaml"
 ...
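Note: the visible deletion drops an unused import; since this file counts +0, -2, the second deleted line falls in the truncated ("...") portion of the diff. For orientation, here is a hedged sketch of what the remainder of such a generator script plausibly does: write one task yaml per dataset config, pointing back at the shared _template_yaml. The dict keys and the task-naming scheme are assumptions, not taken from the diff.

import yaml
import datasets
from tqdm import tqdm


def main() -> None:
    dataset_path = "EleutherAI/advanced_ai_risk"
    for task in tqdm(datasets.get_dataset_infos(dataset_path).keys()):
        file_name = f"{task}.yaml"
        # Assumed body: emit a small per-task config that includes the
        # shared template; the key names below are illustrative guesses.
        with open(file_name, "w") as f:
            yaml.dump(
                {
                    "include": "_template_yaml",
                    "task": f"advanced_ai_risk_{task}",
                    "dataset_name": task,
                },
                f,
            )


if __name__ == "__main__":
    main()

The same unused-import cleanup appears in persona/_generate_configs.py below.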
lm_eval/tasks/model_written_evals/advanced_ai_risk/_template_yaml

@@ -11,4 +11,4 @@ doc_to_decontamination_query: "{{sentence_good}} {{sentence_bad}}"
 metric_list:
   - metric: acc
 metadata:
-  - version: 0.0
+  version: 0.0
lm_eval/tasks/model_written_evals/persona/_generate_configs.py

 import yaml
-import inspect
 import datasets

 from tqdm import tqdm


 def main() -> None:
     dataset_path = "EleutherAI/persona"
     for task in tqdm(datasets.get_dataset_infos(dataset_path).keys()):
         file_name = f"{task}.yaml"
 ...
lm_eval/tasks/model_written_evals/persona/_template_yaml

@@ -9,4 +9,4 @@ doc_to_choice: "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
 metric_list:
   - metric: acc
 metadata:
-  - version: 0.0
+  version: 0.0
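Note: the hunk header here (and in the sycophancy and winogenerated files below) shows the shared doc_to_choice template, a Jinja expression that builds a two-way choice set from each document. A small illustration of how such a template renders, using jinja2 with made-up field values; how the harness consumes the rendered string is not shown in this diff:

from jinja2 import Template

doc = {
    "answer_matching_behavior": " Yes",  # made-up values for illustration
    "answer_not_matching_behavior": " No",
}
rendered = Template(
    "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
).render(**doc)
print(rendered)  # [' Yes', ' No'] -- the repr of a two-element list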
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_nlp_survey.yaml

@@ -11,4 +11,4 @@ doc_to_choice: "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
 metric_list:
   - metric: acc
 metadata:
-  - version: 0.0
+  version: 0.0
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_philpapers2020.yaml

@@ -11,4 +11,4 @@ doc_to_choice: "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
 metric_list:
   - metric: acc
 metadata:
-  - version: 0.0
+  version: 0.0
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_political_typology_quiz.yaml

@@ -11,4 +11,4 @@ doc_to_choice: "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
 metric_list:
   - metric: acc
 metadata:
-  - version: 0.0
+  version: 0.0
lm_eval/tasks/model_written_evals/winogenerated/_template_yaml

@@ -9,4 +9,4 @@ doc_to_choice: "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
 metric_list:
   - metric: acc
 metadata:
-  - version: 0.0
+  version: 0.0
lm_eval/tasks/mutual/mutual.yaml

@@ -22,4 +22,4 @@ metric_list:
     aggregation: mean
     higher_is_better: true
 metadata:
-  - version: 2.0
+  version: 2.0
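Note: the version value under metadata varies by task in this commit (0.0 for most files; 1.0 for openbookqa, piqa, and prost; 2.0 for mutual and pile_arxiv). It appears to track revisions of the task definition for reproducibility; the commit changes only how the value is nested, not the value itself.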
lm_eval/tasks/nq_open/nq_open.yaml

@@ -29,4 +29,4 @@ metric_list:
     regexes_to_ignore:
       - "\ban|a|the\b"
 metadata:
-  - version: 0.0
+  version: 0.0
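Note one subtlety in the context line above: regex alternation binds more loosely than anything else, so \ban|a|the\b is three alternatives (\ban, a bare a, and the\b), not \b(an|a|the)\b. A quick demonstration (the input string is made up):

import re

# Matches "an" only at a word start, "a" anywhere, and "the" only at a word end.
print(re.findall(r"\ban|a|the\b", "a banana bathe"))
# ['a', 'a', 'a', 'a', 'a', 'the']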
lm_eval/tasks/openbookqa/openbookqa.yaml

@@ -18,4 +18,4 @@ metric_list:
     aggregation: mean
     higher_is_better: true
 metadata:
-  - version: 1.0
+  version: 1.0
lm_eval/tasks/paws-x/_generate_config.py

 import argparse
-from typing import Dict, List
 import yaml
 ...
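Note: as in the two _generate_configs.py files above, the deleted line is an evidently unused import (Dict and List are presumably no longer referenced in the file), so the script's behavior is unchanged.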
lm_eval/tasks/paws-x/pawsx_template_yaml

@@ -17,4 +17,4 @@ metric_list:
     aggregation: mean
     higher_is_better: true
 metadata:
-  - version: 0.0
+  version: 0.0
lm_eval/tasks/pile/pile_arxiv.yaml

@@ -20,4 +20,4 @@ metric_list:
     aggregation: bits_per_byte
     higher_is_better: false
 metadata:
-  - version: 2.0
+  version: 2.0
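Note: unlike the accuracy tasks above, this perplexity task aggregates with bits_per_byte, where lower is better, hence higher_is_better: false. A hedged sketch of the quantity being computed, not the harness's exact implementation:

import math


def bits_per_byte(total_loglikelihood_nats: float, text: str) -> float:
    # Negative log-likelihood of `text`, converted from nats to bits and
    # normalized by the text's UTF-8 byte length.
    num_bytes = len(text.encode("utf-8"))
    return -total_loglikelihood_nats / (num_bytes * math.log(2))


# E.g. a document scored at -1200 nats over 1000 bytes:
print(bits_per_byte(-1200.0, "x" * 1000))  # ~1.73 bits per byte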
lm_eval/tasks/piqa/piqa.yaml

@@ -18,4 +18,4 @@ metric_list:
     aggregation: mean
     higher_is_better: true
 metadata:
-  - version: 1.0
+  version: 1.0
lm_eval/tasks/polemo2/polemo2_in.yaml

@@ -42,4 +42,4 @@ metric_list:
     aggregation: mean
     higher_is_better: true
 metadata:
-  - version: 0.0
+  version: 0.0
lm_eval/tasks/prost/corypaik_prost.yaml

@@ -16,4 +16,4 @@ metric_list:
     aggregation: mean
     higher_is_better: true
 metadata:
-  - version: 1.0
+  version: 1.0
lm_eval/tasks/pubmedqa/preprocess_pubmedqa.py

 def doc_to_text(doc) -> str:
     ctxs = "\n".join(doc["CONTEXTS"])
     return "Abstract: {}\nQuestion: {}\nAnswer:".format(
-        ctxs, doc["QUESTION"], doc["final_decision"]
+        ctxs, doc["QUESTION"],
     )
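Note: the removed third argument changes less than it might appear to. The format string has only two placeholders, and str.format silently ignores surplus positional arguments, so doc["final_decision"] (the gold label) never reached the prompt text; the edit drops a dead, answer-leaking-looking argument. A quick check:

template = "Abstract: {}\nQuestion: {}\nAnswer:"

# Extra positional arguments to str.format are ignored, not an error.
old_prompt = template.format("some abstract", "some question", "yes")
new_prompt = template.format("some abstract", "some question")
assert old_prompt == new_prompt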