Unverified commit a2af2101, authored by Yen-Ting Lin, committed by GitHub

Merge branch 'EleutherAI:main' into main

parents 82cb25c1 d5f39bf8
include: "_default_template_yaml"
tag:
- french_bench
- french_bench_gen
description: "Résume l'article en une phrase."
......
include: "_default_template_yaml"
tag:
- french_bench
- french_bench_extra
description: "Trouve le titre de l'article."
......
include: "_default_template_yaml"
tag:
- french_bench
- french_bench_extra
# description: "Répond au mieux en complétant la question avec une des réponses proposées."
......
include: "_default_template_yaml"
tag:
- french_bench
- french_bench_extra
description: "A propos du thème spécifié, l'avis client est il positif, négatif, ou neutre ?"
......
include: "_default_template_yaml"
tag:
- french_bench
- french_bench_gen
task: french_bench_trivia
......
include: "_default_template_yaml"
tag:
- french_bench
- french_bench_mc
# description: "Répond au mieux en complétant la question avec une des réponses proposées."
......
tag:
- french_bench_perplexity
task: french_bench_wikitext_fr
dataset_path: asi/wikitext_fr
......
include: "_default_template_yaml"
tag:
- french_bench
- french_bench_extra
description: "La prémisse et l'hypothèse sont elles en accord, neutres en elles, ou en contradiction ?"
......
# Glianorex
The goal of this benchmark is to isolate test-answering capabilities from content knowledge.
### Paper
Title: Multiple Choice Questions and Large Language Models: A Case Study with Fictional Medical Data
Abstract: https://arxiv.org/abs/2406.02394
To test the relevance of MCQs to assess LLM performance without prior data exposure, we created a fictional medical benchmark and knowledge base on a non-existent gland, the Glianorex. Using GPT-4 we generated a comprehensive textbook on the Glianorex in both English and French, and created multiple-choice questions in both English and French.
### Tasks
All tasks are multiple-choice questions with 4 options, exactly one of which is correct.
- `glianorex`: Evaluates all tasks listed below.
- `glianorex_en`: Evaluates the accuracy on 264 questions in English.
- `glianorex_fr`: Evaluates the accuracy on 264 questions in French.
task: glianorex
dataset_path: maximegmd/glianorex
output_type: multiple_choice
test_split: train
doc_to_text: !function preprocess_glianorex.doc_to_text
doc_to_target: !function preprocess_glianorex.doc_to_target
doc_to_choice: [ 'A', 'B', 'C', 'D' ]
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
task: glianorex_en
dataset_path: maximegmd/glianorex
output_type: multiple_choice
test_split: train
doc_to_text: !function preprocess_glianorex.doc_to_text
doc_to_target: !function preprocess_glianorex.doc_to_target
process_docs: !function preprocess_glianorex.filter_english
doc_to_choice: [ 'A', 'B', 'C', 'D' ]
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
task: glianorex_fr
dataset_path: maximegmd/glianorex
output_type: multiple_choice
test_split: train
doc_to_text: !function preprocess_glianorex.doc_to_text
doc_to_target: !function preprocess_glianorex.doc_to_target
process_docs: !function preprocess_glianorex.filter_french
doc_to_choice: [ 'A', 'B', 'C', 'D' ]
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
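Each config reports both `acc` and `acc_norm`. As a rough sketch (the numbers below are invented for illustration, not taken from the harness), the two metrics can disagree on a single question: `acc` picks the choice with the highest total log-likelihood, while `acc_norm` divides each score by the choice's length before taking the argmax, so longer answers are not penalized merely for containing more tokens:

```python
# Illustrative per-choice scores for one hypothetical question.
scores = {"A": -12.0, "B": -9.0, "C": -15.0, "D": -11.0}   # total log-likelihoods
lengths = {"A": 4, "B": 2, "C": 6, "D": 5}                  # e.g. choice lengths

# acc: argmax over raw log-likelihood.
acc_pick = max(scores, key=scores.get)                      # "B"

# acc_norm: argmax over length-normalized log-likelihood.
acc_norm_pick = max(scores, key=lambda c: scores[c] / lengths[c])  # "D"
```

Here the short choice "B" wins on raw likelihood, but after normalization the longer choice "D" has the best per-unit score, so the two metrics would grade this question differently.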
import datasets


def doc_to_text(doc) -> str:
    # Render the question and its lettered options as a single prompt.
    option_choices = doc["options"]
    answers = "".join(f"{k}. {v}\n" for k, v in option_choices.items())
    return f"Question: {doc['question']}\n{answers}Answer:"


def doc_to_target(doc) -> int:
    # Index of the correct option, used as the gold label.
    return doc["answer_idx"]


def filter_dataset(dataset: datasets.Dataset, lang: str) -> datasets.Dataset:
    # Keep only examples whose language code starts with `lang`.
    return dataset.filter(lambda example: example["language"].startswith(lang))


def filter_french(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "fr")


def filter_english(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "en")
......@@ -41,10 +41,14 @@ Homepage: https://gluebenchmark.com/
}
```
### Groups, Tags, and Tasks
#### Groups
None.
#### Tags
* `glue`: Run all Glue subtasks.
#### Tasks
......
tag: glue
task: cola
dataset_path: glue
dataset_name: cola
......
tag: glue
task: mnli
dataset_path: glue
dataset_name: mnli
......
tag: glue
task: mrpc
dataset_path: glue
dataset_name: mrpc
......
tag: glue
task: qnli
dataset_path: glue
dataset_name: qnli
......
tag: glue
task: qqp
dataset_path: glue
dataset_name: qqp
......
group: glue
tag: glue
task: rte
dataset_path: glue
dataset_name: rte
......