Commit e4db76cb authored by haileyschoelkopf

Merge branch 'main' into multimodal-prototyping

parents 6cc6e9cd ad80f555
group:
tag:
- math_word_problems
task: gsm8k_cot_zeroshot
dataset_path: gsm8k
......
......@@ -61,7 +61,7 @@ generation_kwargs:
- 'Q:'
- </s>
- <|im_end|>
group:
tag:
- chain_of_thought
metadata:
version: 3.0
......
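# The hunk above edits the stop-sequence list under generation_kwargs. For
# context, a minimal sketch of how that block typically looks in a harness
# task YAML (the `until`/`do_sample` keys are an assumption for illustration,
# not copied from this diff):
# generation_kwargs:
#   until:
#     - 'Q:'
#     - </s>
#     - <|im_end|>
#   do_sample: false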
group:
tag:
- math_word_problems
task: gsm8k
dataset_path: gsm8k
......
group: haerae
dataset_path: HAERAE-HUB/HAE_RAE_BENCH
test_split: test
fewshot_split: test
......
group: haerae
task:
- haerae_gk
- haerae_hi
- haerae_lw
- haerae_rw
- haerae_sn
aggregate_metric_list:
- metric: acc
aggregation: mean
weight_by_size: true
- metric: acc_norm
aggregation: mean
weight_by_size: true
metadata:
version: 1.0
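# With weight_by_size: true, the group-level metric is a size-weighted
# (micro) average over the subtasks rather than a plain mean of subtask
# scores. A worked example with made-up numbers: if haerae_gk scored
# acc 0.60 on 200 documents and haerae_hi scored 0.80 on 100 documents,
# the weighted group acc is (0.60*200 + 0.80*100) / 300 = 0.6667, whereas
# an unweighted mean would give (0.60 + 0.80) / 2 = 0.70.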
group:
- headqa
tag: headqa
task: headqa_en
dataset_path: EleutherAI/headqa
dataset_name: en
......
group:
tag:
- multiple_choice
task: hellaswag
dataset_path: hellaswag
......@@ -20,3 +20,5 @@ metric_list:
higher_is_better: true
metadata:
version: 1.0
dataset_kwargs:
trust_remote_code: true
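# Note: dataset_kwargs entries are passed through to the Hugging Face
# datasets loader, so trust_remote_code: true opts in to running the
# dataset's custom loading script, which recent datasets versions require
# to be enabled explicitly.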
group:
tag:
- hendrycks_ethics
task: ethics_cm
dataset_path: EleutherAI/hendrycks_ethics
......
include: deontology.yaml
group:
tag:
- hendrycks_ethics
task: ethics_justice
dataset_name: justice
......
include: commonsense.yaml
group:
tag:
- hendrycks_ethics
task: ethics_utilitarianism
dataset_name: utilitarianism
......
include: commonsense.yaml
group:
tag:
- hendrycks_ethics
task: ethics_virtue
dataset_name: virtue
......
......@@ -7,3 +7,9 @@ task:
- hendrycks_math_num_theory
- hendrycks_math_prealgebra
- hendrycks_math_precalc
aggregate_metric_list:
- metric: exact_match
aggregation: mean
weight_by_size: true
metadata:
version: 1.0
group:
tag:
- math_word_problems
task: hendrycks_math_algebra
dataset_path: EleutherAI/hendrycks_math
......
# inverse_scaling
### Paper
Title: `Inverse Scaling: When Bigger Isn't Better`
Abstract: `Work on scaling laws has found that large language models (LMs) show predictable improvements to overall loss with increased scale (model size, training data, and compute). Here, we present evidence for the claim that LMs may show inverse scaling, or worse task performance with increased scale, e.g., due to flaws in the training objective and data. We present empirical evidence of inverse scaling on 11 datasets collected by running a public contest, the Inverse Scaling Prize, with a substantial prize pool. Through analysis of the datasets, along with other examples found in the literature, we identify four potential causes of inverse scaling: (i) preference to repeat memorized sequences over following in-context instructions, (ii) imitation of undesirable patterns in the training data, (iii) tasks containing an easy distractor task which LMs could focus on, rather than the harder real task, and (iv) correct but misleading few-shot demonstrations of the task. We release the winning datasets at this https URL to allow for further investigation of inverse scaling. Our tasks have helped drive the discovery of U-shaped and inverted-U scaling trends, where an initial trend reverses, suggesting that scaling trends are less reliable at predicting the behavior of larger-scale models than previously understood. Overall, our results suggest that there are tasks for which increased model scale alone may not lead to progress, and that more careful thought needs to go into the data and objectives for training language models.`
Note: This is not the official implementation of the Inverse Scaling Prize. It was implemented by h-albert-lee with permission from the authors of the paper.
Homepage: https://github.com/inverse-scaling/prize
### Citation
```
@article{mckenzie2023inverse,
    title={Inverse Scaling: When Bigger Isn't Better},
    author={Ian R. McKenzie and Alexander Lyzhov and Michael Pieler and Alicia Parrish and Aaron Mueller and Ameya Prabhu and Euan McLean and Aaron Kirtland and Alexis Ross and Alisa Liu and Andrew Gritsevskiy and Daniel Wurgaft and Derik Kauffman and Gabriel Recchia and Jiacheng Liu and Joe Cavanagh and Max Weiss and Sicong Huang and The Floating Droid and Tom Tseng and Tomasz Korbak and Xudong Shen and Yuhui Zhang and Zhengping Zhou and Najoung Kim and Samuel R. Bowman and Ethan Perez},
    journal={arXiv preprint arXiv:2306.09479},
    year={2023}
}
```
### Groups and Tasks
#### Groups
* `inverse_scaling_mc`: all tasks of the Inverse Scaling Prize (currently excluding Prompt Injection), matching their implementations on OPT for multiple-choice classification tasks. **These match the published dataset versions from the prize, which may differ slightly from the numbers in the paper (but have been tested for equivalence to the OPT numbers reported at https://huggingface.co/inverse-scaling/opt-1.3b_eval for multiple model sizes).**
#### Tasks
- `inverse_scaling_hindsight_neglect_10shot`
- `inverse_scaling_redefine_math`
- `inverse_scaling_quote_repetition`
- `inverse_scaling_neqa`
- `inverse_scaling_winobias_antistereotype`: not an official Inverse Scaling Prize winner, but evaluation results for it are reported at https://huggingface.co/inverse-scaling/opt-1.3b_eval.
- `inverse_scaling_into_the_unknown`
- `inverse_scaling_memo_trap`
- `inverse_scaling_modus_tollens`
- `inverse_scaling_pattern_matching_suppression`
- `inverse_scaling_repetitive_algebra`
- `inverse_scaling_sig_figs`
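Each task is a small YAML file that includes the shared `_inverse_scaling_mc_yaml` template and points it at the corresponding dataset. For example, the hindsight-neglect config included in this commit:

```yaml
include: _inverse_scaling_mc_yaml
task: inverse_scaling_hindsight_neglect_10shot
dataset_path: inverse-scaling/hindsight-neglect-10shot
```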
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
tag:
- inverse_scaling_mc
output_type: multiple_choice
test_split: train
doc_to_text: prompt
doc_to_choice: classes
doc_to_target: answer_index
target_delimiter: ""
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
metadata:
version: 0
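# The doc_to_* keys above read fields straight off each dataset row. A
# hypothetical row (illustrative values only, not taken from any of these
# datasets) would flow through as:
#   {"prompt": "Q: ...\nA:", "classes": [" yes", " no"], "answer_index": 0}
# doc_to_text   -> row["prompt"]        (the input string)
# doc_to_choice -> row["classes"]       (the candidate continuations)
# doc_to_target -> row["answer_index"]  (index of the gold choice)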
# | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
# |-------------------------------------------|-------|------|-----:|--------|-----:|---|-----:|
# | - inverse_scaling_hindsight_neglect_10shot| 0|none | 0|acc |0.4476|± |0.0281|
# | | |none | 0|acc_norm|0.4476|± |0.0281|
# |inverse_scaling_mc |N/A |none | 0|acc_norm|0.6273|± |0.0096|
# | | |none | 0|acc |0.6210|± |0.0095|
# | - inverse_scaling_neqa | 0|none | 0|acc |0.5300|± |0.0289|
# | | |none | 0|acc_norm|0.5300|± |0.0289|
# | - inverse_scaling_quote_repetition | 0|none | 0|acc |0.9367|± |0.0141|
# | | |none | 0|acc_norm|0.9367|± |0.0141|
# | - inverse_scaling_redefine_math | 0|none | 0|acc |0.7178|± |0.0150|
# | | |none | 0|acc_norm|0.7178|± |0.0150|
# | - inverse_scaling_winobias_antistereotype | 0|none | 0|acc |0.3786|± |0.0239|
# | | |none | 0|acc_norm|0.4126|± |0.0243|
# | Groups |Version|Filter|n-shot| Metric |Value | |Stderr|
# |------------------|-------|------|-----:|--------|-----:|---|-----:|
# |inverse_scaling_mc|N/A |none | 0|acc_norm|0.6273|± |0.0096|
# | | |none | 0|acc |0.6210|± |0.0095|
# hf (pretrained=facebook/opt-2.7b,add_bos_token=True,dtype=float32), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (32)
# | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
# |-------------------------------------------|-------|------|-----:|--------|-----:|---|-----:|
# | - inverse_scaling_hindsight_neglect_10shot| 0|none | 0|acc |0.4476|± |0.0281|
# | | |none | 0|acc_norm|0.4476|± |0.0281|
# |inverse_scaling_mc |N/A |none | 0|acc_norm|0.6291|± |0.0095|
# | | |none | 0|acc |0.6219|± |0.0095|
# | - inverse_scaling_neqa | 0|none | 0|acc |0.5267|± |0.0289|
# | | |none | 0|acc_norm|0.5267|± |0.0289|
# | - inverse_scaling_quote_repetition | 0|none | 0|acc |0.9433|± |0.0134|
# | | |none | 0|acc_norm|0.9433|± |0.0134|
# | - inverse_scaling_redefine_math | 0|none | 0|acc |0.7200|± |0.0150|
# | | |none | 0|acc_norm|0.7200|± |0.0150|
# | - inverse_scaling_winobias_antistereotype | 0|none | 0|acc |0.3762|± |0.0239|
# | | |none | 0|acc_norm|0.4150|± |0.0243|
# | Groups |Version|Filter|n-shot| Metric |Value | |Stderr|
# |------------------|-------|------|-----:|--------|-----:|---|-----:|
# |inverse_scaling_mc|N/A |none | 0|acc_norm|0.6291|± |0.0095|
# | | |none | 0|acc |0.6219|± |0.0095|
include: _inverse_scaling_mc_yaml
task: inverse_scaling_hindsight_neglect_10shot
dataset_path: inverse-scaling/hindsight-neglect-10shot
include: _inverse_scaling_mc_yaml
task: inverse_scaling_into_the_unknown
dataset_path: Albertmade/into-the-unknown
include: _inverse_scaling_mc_yaml
task: inverse_scaling_memo_trap
dataset_path: Albertmade/memo-trap
include: _inverse_scaling_mc_yaml
task: inverse_scaling_modus_tollens
dataset_path: Albertmade/modus-tollens
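# Adding another task in this family takes one more file of the same shape;
# the task name and dataset_path below are hypothetical placeholders, not
# part of this commit:
# include: _inverse_scaling_mc_yaml
# task: inverse_scaling_example_task
# dataset_path: example-org/example-dataset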