Commit 02e841ce authored by lintangsutawika

Merge branch 'main' of https://github.com/EleutherAI/lm-evaluation-harness into t5v2-alt-plus

parents 90ad5db7 e74ec966
include: arithmetic_1dc.yaml
task: arithmetic_3ds
dataset_name: arithmetic_3ds
dataset_kwargs:
  trust_remote_code: true

include: arithmetic_1dc.yaml
task: arithmetic_4da
dataset_name: arithmetic_4da
dataset_kwargs:
  trust_remote_code: true

include: arithmetic_1dc.yaml
task: arithmetic_4ds
dataset_name: arithmetic_4ds
dataset_kwargs:
  trust_remote_code: true

include: arithmetic_1dc.yaml
task: arithmetic_5da
dataset_name: arithmetic_5da
dataset_kwargs:
  trust_remote_code: true

include: arithmetic_1dc.yaml
task: arithmetic_5ds
dataset_name: arithmetic_5ds
dataset_kwargs:
  trust_remote_code: true
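The `dataset_kwargs` block is passed through to `datasets.load_dataset`, so `trust_remote_code: true` simply allows the Hugging Face `datasets` library to run the dataset's loading script. A rough sketch of the equivalent direct call; the dataset path is not shown in these configs (it lives in the included `arithmetic_1dc.yaml`), so `EleutherAI/arithmetic` below is an assumption:

```python
# Sketch of what dataset_kwargs maps to when the harness loads the data.
# The path "EleutherAI/arithmetic" is an assumption (not shown in these configs);
# the second argument is the dataset_name/subset from the YAML above.
from datasets import load_dataset

ds = load_dataset(
    "EleutherAI/arithmetic",   # assumed dataset_path from the included base config
    "arithmetic_3ds",          # dataset_name
    trust_remote_code=True,    # forwarded from dataset_kwargs
)
print(ds)
```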
@@ -12,3 +12,5 @@ metric_list:
     higher_is_better: true
 metadata:
   version: 1.0
+dataset_kwargs:
+  trust_remote_code: true
group: openllm
group_alias: Open LLM Leaderboard
task:
  - task: arc_challenge
    fewshot_split: validation
    num_fewshot: 25
  - task: hellaswag
    fewshot_split: train
    num_fewshot: 10
  - task: truthfulqa
    num_fewshot: 0
  - task: mmlu
    num_fewshot: 5
  - task: winogrande
    fewshot_split: train
    num_fewshot: 5
  - task: gsm8k
    num_fewshot: 5
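A minimal sketch of running the `openllm` group above programmatically, assuming the harness's `lm_eval.simple_evaluate` entry point is available; the model choice and evaluation cap are illustrative, not prescribed by this config:

```python
# Sketch: evaluate the "openllm" group defined above via the Python API.
# The model and the small `limit` are illustrative choices for a smoke test.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["openllm"],   # per-task num_fewshot comes from the group config
    batch_size=8,
    limit=10,            # cap examples per task; drop for a full run
)
print(results["results"])
```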
@@ -8,7 +8,7 @@ test_split: default
 doc_to_text: inputs
 doc_to_target: "{{targets[0]}}"
 generation_kwargs:
-  max_length: 128
+  max_gen_toks: 128
 metric_list:
   - metric: exact_match
     aggregation: mean
@@ -8,7 +8,7 @@ test_split: test
 output_type: generate_until
 generation_kwargs:
   num_beams: 10
-  max_length: 128
+  max_gen_toks: 128
   until:
     - "</s>"
 doc_to_text: !function utils.doc_to_text

@@ -8,7 +8,7 @@ test_split: test
 output_type: generate_until
 generation_kwargs:
   num_beams: 10
-  max_length: 128
+  max_gen_toks: 128
   until:
     - "</s>"
 doc_to_text: !function utils.doc_to_text

@@ -8,7 +8,7 @@ test_split: test
 output_type: generate_until
 generation_kwargs:
   num_beams: 10
-  max_length: 128
+  max_gen_toks: 128
   until:
     - "</s>"
 doc_to_text: !function utils.doc_to_text

@@ -8,7 +8,7 @@ test_split: test
 output_type: generate_until
 generation_kwargs:
   num_beams: 10
-  max_length: 128
+  max_gen_toks: 128
   until:
     - "</s>"
 doc_to_text: !function utils.doc_to_text

@@ -8,7 +8,7 @@ test_split: test
 output_type: generate_until
 generation_kwargs:
   num_beams: 10
-  max_length: 128
+  max_gen_toks: 128
   until:
     - "</s>"
 doc_to_text: !function utils.doc_to_text

@@ -8,7 +8,7 @@ test_split: test
 output_type: generate_until
 generation_kwargs:
   num_beams: 10
-  max_length: 128
+  max_gen_toks: 128
   until:
     - "</s>"
 doc_to_text: !function utils.doc_to_text
@@ -20,3 +20,5 @@ metric_list:
     higher_is_better: true
 metadata:
   version: 3.0
+dataset_kwargs:
+  trust_remote_code: true
@@ -22,3 +22,5 @@ metric_list:
     higher_is_better: true
 metadata:
   version: 3.0
+dataset_kwargs:
+  trust_remote_code: true
# EQ-Bench
Title: `EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models`
Abstract: https://arxiv.org/abs/2312.06281
EQ-Bench is a benchmark for language models designed to assess emotional intelligence.
Why emotional intelligence? One reason is that it represents a subset of abilities that are important for the user experience and that aren't explicitly tested by other benchmarks. Another reason is that it's not trivial to improve scores by fine-tuning for the benchmark, which makes it harder to "game" the leaderboard.
EQ-Bench is a little different from traditional psychometric tests. It uses a specific question format, in which the subject has to read a dialogue and then rate the intensity of possible emotional responses of one of the characters. Every question is interpretative and assesses the ability to predict the magnitude of the 4 presented emotions. The test is graded without the need for a judge (so there is no length bias). It's cheap to run (only 171 questions), and it produces results that correlate strongly with human preference (Arena ELO) and multi-domain benchmarks like MMLU.
Homepage: https://eqbench.com/
NOTE: There are some key differences between the lm-evaluation-harness version and the implementation described in the EQ-Bench paper (These have been OK'd by the author):
- The lm-eval version uses the EQ-Bench v2 test set (171 questions) and score calculation. It does not incorporate the revision part of the prompt, as per v2.1 (https://github.com/EQ-bench/EQ-Bench)
- No retries in lm-eval version (EQ-Bench pipeline retries with successively higher temps if it encounters unparseable answers)
- In the original implementation, unparseable answers are excluded from the final score, and at least 83% of answers must be parseable or a failure is returned. The lm-eval version instead assigns a score of 0 to unparseable answers and has no failure criterion, so results for lower-performing models may differ from the EQ-Bench leaderboard (see the parsing illustration below).
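For reference, the scorer used here (`utils.calculate_score_fullscale`, included further below) treats an answer as parseable only if it contains four `Emotion: score` pairs matching the reference emotions. A small illustration of that parsing step, with made-up emotion names and model output:

```python
# Illustration of the answer format expected by the lm-eval scorer shown later
# in this diff. The emotion names and completion text below are made up.
import re

completion = "Surprise: 7\nRelief: 1\nIrritation: 3\nAffection: 0"
# Same regex as utils.calculate_score_fullscale: "<word>: <integer>" pairs.
parsed = dict(re.findall(r"(\w+):\s+(\d+)", completion))
print(parsed)  # {'Surprise': '7', 'Relief': '1', 'Irritation': '3', 'Affection': '0'}
# Anything that does not yield exactly four matching pairs scores 0 here.
```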
### Citation
```bibtex
@misc{paech2023eqbench,
title={EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models},
author={Samuel J. Paech},
year={2023},
eprint={2312.06281},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet
#### Tasks
* `eq_bench`
### Checklist
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
task: eq_bench
dataset_path: pbevan11/EQ-Bench
output_type: generate_until
validation_split: validation
doc_to_text: prompt
doc_to_target: reference_answer_fullscale
process_results: !function utils.calculate_score_fullscale
generation_kwargs:
  do_sample: false
  temperature: 0.0
  max_gen_toks: 80
metric_list:
  - metric: eqbench
    aggregation: mean
    higher_is_better: true
  - metric: percent_parseable
    aggregation: mean
    higher_is_better: true
metadata:
  version: 2.1
import math
import re


def calculate_score_fullscale(docs, results):
    """Score a single EQ-Bench item: returns the scaled score (0-100) and a parseability flag."""
    reference = eval(docs["reference_answer_fullscale"])
    user = dict(re.findall(r"(\w+):\s+(\d+)", results[0]))
    # First check that the emotions specified in the answer match those in the reference
    if len(user.items()) != 4:
        # print('! Error: 4 emotions were not returned')
        # print(user)
        return {"eqbench": 0, "percent_parseable": 0}
    emotions_dict = {}
    for emotion, user_emotion_score in user.items():
        for i in range(1, 5):
            if emotion == reference[f"emotion{i}"]:
                emotions_dict[emotion] = True
    if len(emotions_dict) != 4:
        print("! Error: emotions did not match reference")
        print(user)
        return {"eqbench": 0, "percent_parseable": 0}

    # Tally of difference from reference answers for this question
    difference_tally = 0

    # Iterate over each emotion in the user's answers.
    for emotion, user_emotion_score in user.items():
        # If this emotion is in the reference, calculate the difference between
        # the user's score and the reference score.
        for i in range(1, 5):
            if emotion == reference[f"emotion{i}"]:
                d = abs(
                    float(user_emotion_score) - float(reference[f"emotion{i}_score"])
                )
                # this will be a value between 0 and 10
                if d == 0:
                    scaled_difference = 0
                elif d <= 5:
                    # S-shaped scaling function
                    # https://www.desmos.com/calculator
                    # 6.5\cdot\ \frac{1}{\left(1\ +\ e^{\left(-1.2\cdot\left(x-4\right)\right)}\right)}
                    scaled_difference = 6.5 * (1 / (1 + math.e ** (-1.2 * (d - 4))))
                else:
                    scaled_difference = d
                difference_tally += scaled_difference

    # Invert the difference tally so that the closer the answer is to the reference,
    # the higher the score. The adjustment constant is chosen such that answering
    # randomly produces a score of zero.
    adjust_const = 0.7477
    final_score = 10 - (difference_tally * adjust_const)
    final_score_percent = final_score * 10

    return {"eqbench": final_score_percent, "percent_parseable": 100}
# FrenchBench
### Paper
FrenchBench is a benchmark for evaluating French language models, introduced in the paper
[CroissantLLM: A Truly Bilingual French-English Language Model](https://arxiv.org/abs/2402.00786).
It is a collection of tasks that evaluate the ability of a language model to understand and generate French text.
This benchmark is constructed from both openly available datasets and newly released, manually annotated data.
### Citation
```bibtex
@misc{faysse2024croissantllm,
title={CroissantLLM: A Truly Bilingual French-English Language Model},
author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2402.00786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
- `french_bench`: All tasks (non-perplexity based)
- `french_bench_gen`: All official generative tasks
- `french_bench_mc`: All official multiple choice tasks
- `french_bench_perplexity`: All perplexity-based tasks (0-shot is recommended)
- `french_bench_extra`: All extra tasks
#### Tasks
The following tasks evaluate models on the FrenchBench dataset using various scoring methods.
- french_bench_boolqa
- french_bench_fquadv2
- french_bench_fquadv2_bool
- french_bench_fquadv2_genq
- french_bench_fquadv2_hasAns
- french_bench_topic_based_nli
- french_bench_multifquad
- french_bench_grammar
- french_bench_vocab
- french_bench_reading_comp
- french_bench_xnli (modified XNLI)
- french_bench_orangesum_abstract
- french_bench_orangesum_title
- french_bench_trivia
- french_bench_hellaswag
- french_bench_arc_challenge
FrenchBench also includes tasks from other benchmarks:
- `belebele_fra_Latn`: Belebele French
- `wmt14-en-fr`: WMT14 English-French
- `wmt14-fr-en`: WMT14 French-English
Not intended for few-shot use:
- `crows_pairs_french`: Crows Pairs French
- `french_bench_opus_perplexity`: Opus Perplexity
### Usage
```bash
# openai
lm_eval --model openai-completions --model_args engine=text-davinci-003 --tasks french_bench --limit 100 --num_fewshot 3 --batch_size auto --output_path data/french_bench/davinci-003/results_french_bench_3shot.json
lm_eval --model openai-completions --model_args engine=text-davinci-003 --tasks french_bench_opus_perplexity,crows_pairs_french --limit 100 --batch_size auto --output_path data/french_bench/davinci-003/results_french_bench2_0shot.json
lm_eval --model hf --model_args pretrained=gpt2 --tasks french_bench --device cuda:0 --limit 100 --num_fewshot 3 --batch_size 8 --output_path data/french_bench/gpt2/results_french_bench_3shot.json
lm_eval --model hf --model_args pretrained=gpt2 --tasks french_bench_opus_perplexity,crows_pairs_french --device cuda:0 --limit 100 --batch_size auto --output_path data/french_bench/gpt2/results_french_bench2_0shot.json
lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf --tasks french_bench --device cuda:0 --limit 100 --num_fewshot 3 --batch_size 4 --output_path data/french_bench/llama-2-7b-hf/results_french_bench_3shot.json
lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf --tasks french_bench_opus_perplexity,crows_pairs_french --device cuda:0 --limit 100 --batch_size auto --output_path data/french_bench/llama-2-7b-hf/results_french_bench2_0shot.json
```
HF and Accelerate options can be added when loading a model:
```bash
accelerate launch -m lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf,dtype="float16" --tasks french_bench
```
### Checklist
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation?
* [x] Yes, original implementation contributed by author of the benchmark
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?