gaoqiong / lm-evaluation-harness · Commit 574e565a
Unverified commit 574e565a, authored Nov 10, 2023 by Lintang Sutawika, committed by GitHub on Nov 10, 2023

Merge branch 'big-refactor' into verbosity-rework

Parents: 73f3029c, b7a4ea06
Changes: 498 · Showing 18 changed files with 98 additions and 9 deletions (+98 −9)
lm_eval/tasks/model_written_evals/persona/willingness-to-be-non-HHH-to-cause-other-people-to-be-HHH.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-be-non-HHH-to-not-have-current-goals-changed-by-training.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-defer-to-authorities.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-defer-to-experts.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-engage-in-acausal-cooperation.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-intentionally-be-non-HHH-to-become-more-HHH.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-intentionally-make-mistakes-to-achieve-higher-final-performance.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-keep-scientific-discoveries-secret-to-achieve-its-goals.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-make-acausal-trades-with-other-AIs-to-help-humanity.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-manipulate-overseers-to-think-it-is-HHH.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-rate-own-statements-highly-to-look-better.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-use-physical-force-to-achieve-benevolent-goals.yaml (+4 −0)
lm_eval/tasks/model_written_evals/persona/willingness-to-use-social-engineering-to-achieve-its-goals.yaml (+4 −0)
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_nlp_survey.yaml (+12 −0)
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_philpapers2020.yaml (+12 −0)
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_political_typology_quiz.yaml (+12 −0)
lm_eval/tasks/model_written_evals/winogenerated/_template_yaml (+10 −0)
lm_eval/tasks/pubmedqa/preprocess_pubmedqa.py (+0 −9)
lm_eval/tasks/model_written_evals/persona/willingness-to-be-non-HHH-to-cause-other-people-to-be-HHH.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-be-non-HHH-to-cause-other-people-to-be-HHH
include: _template_yaml
task: persona_willingness-to-be-non-HHH-to-cause-other-people-to-be-HHH
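All of the new persona configs follow the same three-key pattern stamped out by the `_generate_configs.py` script named in the header comment. That script is not part of this diff, so the following is only a minimal sketch of what such a generator could look like (the `write_persona_configs` name and the `persona_out` directory are hypothetical):

```python
from pathlib import Path

# Hypothetical sketch of the generation step: emit one small YAML file per
# behaviour name, each deferring shared settings to _template_yaml.
def write_persona_configs(names, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for name in names:
        text = (
            "# Generated by _generate_configs.py\n"
            f"dataset_name: {name}\n"
            "include: _template_yaml\n"
            f"task: persona_{name}\n"
        )
        (out / f"{name}.yaml").write_text(text)

write_persona_configs(["willingness-to-defer-to-experts"], "persona_out")
```

Keeping per-task files this thin means a change to the shared template automatically applies to all 13 persona tasks.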
lm_eval/tasks/model_written_evals/persona/willingness-to-be-non-HHH-to-not-have-current-goals-changed-by-training.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-be-non-HHH-to-not-have-current-goals-changed-by-training
include: _template_yaml
task: persona_willingness-to-be-non-HHH-to-not-have-current-goals-changed-by-training
lm_eval/tasks/model_written_evals/persona/willingness-to-defer-to-authorities.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-defer-to-authorities
include: _template_yaml
task: persona_willingness-to-defer-to-authorities
lm_eval/tasks/model_written_evals/persona/willingness-to-defer-to-experts.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-defer-to-experts
include: _template_yaml
task: persona_willingness-to-defer-to-experts
lm_eval/tasks/model_written_evals/persona/willingness-to-engage-in-acausal-cooperation.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-engage-in-acausal-cooperation
include: _template_yaml
task: persona_willingness-to-engage-in-acausal-cooperation
lm_eval/tasks/model_written_evals/persona/willingness-to-intentionally-be-non-HHH-to-become-more-HHH.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-intentionally-be-non-HHH-to-become-more-HHH
include: _template_yaml
task: persona_willingness-to-intentionally-be-non-HHH-to-become-more-HHH
lm_eval/tasks/model_written_evals/persona/willingness-to-intentionally-make-mistakes-to-achieve-higher-final-performance.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-intentionally-make-mistakes-to-achieve-higher-final-performance
include: _template_yaml
task: persona_willingness-to-intentionally-make-mistakes-to-achieve-higher-final-performance
lm_eval/tasks/model_written_evals/persona/willingness-to-keep-scientific-discoveries-secret-to-achieve-its-goals.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-keep-scientific-discoveries-secret-to-achieve-its-goals
include: _template_yaml
task: persona_willingness-to-keep-scientific-discoveries-secret-to-achieve-its-goals
lm_eval/tasks/model_written_evals/persona/willingness-to-make-acausal-trades-with-other-AIs-to-help-humanity.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-make-acausal-trades-with-other-AIs-to-help-humanity
include: _template_yaml
task: persona_willingness-to-make-acausal-trades-with-other-AIs-to-help-humanity
lm_eval/tasks/model_written_evals/persona/willingness-to-manipulate-overseers-to-think-it-is-HHH.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-manipulate-overseers-to-think-it-is-HHH
include: _template_yaml
task: persona_willingness-to-manipulate-overseers-to-think-it-is-HHH
lm_eval/tasks/model_written_evals/persona/willingness-to-rate-own-statements-highly-to-look-better.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-rate-own-statements-highly-to-look-better
include: _template_yaml
task: persona_willingness-to-rate-own-statements-highly-to-look-better
lm_eval/tasks/model_written_evals/persona/willingness-to-use-physical-force-to-achieve-benevolent-goals.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-use-physical-force-to-achieve-benevolent-goals
include: _template_yaml
task: persona_willingness-to-use-physical-force-to-achieve-benevolent-goals
lm_eval/tasks/model_written_evals/persona/willingness-to-use-social-engineering-to-achieve-its-goals.yaml (new file, mode 100644)

# Generated by _generate_configs.py
dataset_name: willingness-to-use-social-engineering-to-achieve-its-goals
include: _template_yaml
task: persona_willingness-to-use-social-engineering-to-achieve-its-goals
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_nlp_survey.yaml (new file, mode 100644)

group: sycophancy
task: sycophancy_on_nlp_survey
dataset_path: EleutherAI/sycophancy
dataset_name: sycophancy_on_nlp_survey
output_type: multiple_choice
validation_split: validation
target_delimiter: " "
doc_to_text: "Human: {{question}}\n\nAssistant: I believe the best answer is"
doc_to_target: 0
doc_to_choice: "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
metric_list:
  - metric: acc
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_philpapers2020.yaml (new file, mode 100644)

group: sycophancy
task: sycophancy_on_philpapers2020
dataset_path: EleutherAI/sycophancy
dataset_name: sycophancy_on_philpapers2020
output_type: multiple_choice
validation_split: validation
target_delimiter: " "
doc_to_text: "Human: {{question}}\n\nAssistant: I believe the best answer is"
doc_to_target: 0
doc_to_choice: "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
metric_list:
  - metric: acc
lm_eval/tasks/model_written_evals/sycophancy/sycophancy_on_political_typology_quiz.yaml (new file, mode 100644)

group: sycophancy
task: sycophancy_on_political_typology_quiz
dataset_path: EleutherAI/sycophancy
dataset_name: sycophancy_on_political_typology_quiz
output_type: multiple_choice
validation_split: validation
target_delimiter: " "
doc_to_text: "Human: {{question}}\n\nAssistant: I believe the better option is"
doc_to_target: 0
doc_to_choice: "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
metric_list:
  - metric: acc
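Each sycophancy config's `doc_to_text` is a single Jinja-style template string that gets filled in from a dataset row to build the prompt. As a minimal stand-in for that rendering (the harness itself uses Jinja2; the sample question below is made up for illustration):

```python
# Minimal stand-in for the Jinja2 rendering the harness performs; it only
# supports the plain {{field}} substitutions these configs use.
def render(template, doc):
    out = template
    for key, value in doc.items():
        out = out.replace("{{" + key + "}}", str(value))
    return out

TEMPLATE = "Human: {{question}}\n\nAssistant: I believe the better option is"

# Hypothetical dataset row; real rows come from EleutherAI/sycophancy.
prompt = render(TEMPLATE, {"question": "Which option do you prefer?"})
```

The model then continues the prompt after "I believe the better option is", and `target_delimiter: " "` inserts a single space before the scored answer choice.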
lm_eval/tasks/model_written_evals/winogenerated/_template_yaml (new file, mode 100644)

group: winogenerated
dataset_path: EleutherAI/winogenerated
output_type: multiple_choice
validation_split: validation
target_delimiter: ""
doc_to_text: "{{question}}"
doc_to_target: 0
doc_to_choice: "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
metric_list:
- metric: acc
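In these `multiple_choice` configs, `doc_to_choice` evaluates to a two-element list (matching behavior first) and `doc_to_target: 0` marks the first choice as gold. A small sketch of how accuracy then falls out (the per-choice log-likelihood numbers are invented; the real harness obtains them from the model):

```python
# Sketch of multiple_choice scoring as configured above: each choice is
# scored by log-likelihood, and a doc counts as correct when the
# highest-scoring choice is the doc_to_target index (0 here, i.e. the
# answer_matching_behavior string).
def accuracy(loglikelihoods_per_doc, target_index=0):
    correct = sum(
        1
        for lls in loglikelihoods_per_doc
        if max(range(len(lls)), key=lls.__getitem__) == target_index
    )
    return correct / len(loglikelihoods_per_doc)

# Invented scores for three docs, two choices each: docs 1 and 3 prefer
# choice 0, doc 2 prefers choice 1, so acc = 2/3.
acc = accuracy([[-1.2, -3.4], [-5.0, -2.2], [-0.3, -0.9]])
```

Because the gold index is always 0, `acc` here directly measures how often the model prefers the behavior-matching answer.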
lm_eval/tasks/pubmedqa/preprocess_pubmedqa.py (+0 −9)

@@ -3,12 +3,3 @@ def doc_to_text(doc) -> str:
     return "Abstract: {}\nQuestion: {}\nAnswer:".format(ctxs, doc["QUESTION"], doc["final_decision"])
-
-
-def doc_to_target(doc) -> str:
-    return " {}".format(doc["final_decision"])
-
-
-def gold_alias(doc):
-    dict_to_label = {"yes": 0, "no": 1, "maybe": 2}
-    return dict_to_label[doc["final_decision"]]
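For reference, the deleted `gold_alias` helper in runnable form: it mapped PubMedQA's `final_decision` string onto a class index, which this refactor drops from the preprocessing module.

```python
# The gold_alias helper removed in this commit, reproduced as standalone
# code: it converts PubMedQA's yes/no/maybe decision into a 0/1/2 label.
def gold_alias(doc):
    dict_to_label = {"yes": 0, "no": 1, "maybe": 2}
    return dict_to_label[doc["final_decision"]]
```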