Unverified commit bb433af7 authored by Geun, Lim, committed by GitHub

feat: Add CLIcK task (#3173)

* feat: Add CLIcK task

* Fix formatting issues

* Add Click Task Description

* fix: lint

* fix
parent 18d2face
# Tasks
A list of supported tasks and task groupings can be viewed with `lm-eval --tasks list`.
For more information, including a full list of task names and their precise meanings or sources, follow the links provided to the individual README.md files for each subfolder.
| Task Family | Description | Language(s) |
|-------------|-------------|-------------|
...@@ -42,6 +42,7 @@
| [copal_id](copal_id/README.md) | Indonesian causal commonsense reasoning dataset that captures local nuances. | Indonesian |
| [coqa](coqa/README.md) | Conversational question answering tasks to test dialog understanding. | English |
| [crows_pairs](crows_pairs/README.md) | Tasks designed to test model biases in various sociodemographic groups. | English, French |
| [click](click/README.md) | A benchmark dataset of Cultural and Linguistic Intelligence in Korean (CLIcK), comprising 1,995 QA pairs sourced from official Korean exams and textbooks to test Korean cultural and linguistic knowledge. | Korean |
| csatqa | Tasks related to SAT and other standardized testing questions for academic assessment. | Korean |
| [darija_bench](darija_bench/README.md) | Traditional NLP tasks (translation, summarization, etc.) for Moroccan Darija. | Moroccan Darija (some MT) |
| [darijahellaswag](darijahellaswag/README.md) | Moroccan Darija version of HellaSwag. | Moroccan Darija (MT) |
...@@ -86,10 +87,12 @@
| [lambada_multilingual_stablelm](lambada_multilingual_stablelm/README.md) | Multilingual LAMBADA dataset. Users should prefer evaluating on this version of the multilingual dataset instead of on `lambada_multilingual`. | German, English, Spanish, French, Italian, Dutch, Portuguese |
| [leaderboard](leaderboard/README.md) | Task group used by Hugging Face's [Open LLM Leaderboard v2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard). These tasks are static and will not change over time. | English |
| [lingoly](lingoly/README.md) | Challenging logical reasoning benchmark in low-resource languages with controls for memorization. | English, Multilingual |
| [llama3](llama3/README.md) | Evals reproducing those provided by the LLAMA team in the Hugging Face repo (instruct) | English, Multilingual |
| [libra](libra/README.md) | Evaluates long-context understanding in Russian across four complexity levels. | Russian (MT) |
| [lm_syneval](lm_syneval/README.md) | Evaluates the syntactic capabilities of language models. | English |
| [logiqa](logiqa/README.md) | Logical reasoning tasks requiring advanced inference and deduction. | English, Chinese |
| [logiqa2](logiqa2/README.md) | Large-scale logical reasoning dataset adapted from the Chinese Civil Service Examination. | English, Chinese |
| [longbench](longbench/README.md) | LongBench evaluates language models' ability to understand lengthy texts across multiple tasks and languages. | English, Chinese |
| [mastermind](mastermind/README.md) | Reasoning benchmark based on the board game of Mastermind. | English |
| [mathqa](mathqa/README.md) | Question answering tasks involving mathematical reasoning and problem-solving. | English |
| [mbpp](mbpp/README.md) | A benchmark designed to measure the ability to synthesize short Python programs from natural language descriptions. | Python |
...@@ -177,6 +180,7 @@
| [zhoblimp](zhoblimp/README.md) | A benchmark evaluating language models' grammatical capabilities in Chinese based on comparing the probabilities of minimal pairs of grammatical and ungrammatical sentences. | Chinese |

## Multimodal Tasks
| Task Family | Description | Modality |
|-------------|-------------|----------|
| [chartqa](chartqa/README.md) | A benchmark for question answering about charts that requires both visual and logical reasoning. | Image, Text |
...
# click
### Paper
Title: `CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean`
Abstract: `Despite the rapid development of large language models (LLMs) for the Korean language, there remains an obvious lack of benchmark datasets that test the requisite Korean cultural and linguistic knowledge. Because many existing Korean benchmark datasets are derived from the English counterparts through translation, they often overlook the different cultural contexts. For the few benchmark datasets that are sourced from Korean data capturing cultural knowledge, only narrow tasks such as bias and hate speech detection are offered. To address this gap, we introduce a benchmark of Cultural and Linguistic Intelligence in Korean (CLIcK), a dataset comprising 1,995 QA pairs. CLIcK sources its data from official Korean exams and textbooks, partitioning the questions into eleven categories under the two main categories of language and culture. For each instance in CLIcK, we provide fine-grained annotation of which cultural and linguistic knowledge is required to answer the question correctly. Using CLIcK, we test 13 language models to assess their performance. Our evaluation uncovers insights into their performances across the categories, as well as the diverse factors affecting their comprehension. CLIcK offers the first large-scale comprehensive Korean-centric analysis of LLMs' proficiency in Korean culture and language.`
Homepage: https://huggingface.co/datasets/EunsuKim/CLIcK
### Citation
```
@misc{kim2024click,
title={CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean},
author={Eunsu Kim and Juyoung Suk and Philhoon Oh and Haneul Yoo and James Thorne and Alice Oh},
year={2024},
eprint={2403.06412},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups, Tags, and Tasks
#### Groups
* `click`: All 11 categories of the CLIcK dataset
* `click_lang`: "Language" category of the CLIcK dataset, consisting of 3 subcategories
* `click_cul`: "Culture" category of the CLIcK dataset, consisting of 8 subcategories
#### Tasks
* Three tasks under `click_lang`:
* `click_lang_text`
* `click_lang_grammar`
* `click_lang_function`
* Eight tasks under `click_cul`:
* `click_cul_society`
* `click_cul_tradition`
* `click_cul_politics`
* `click_cul_economy`
* `click_cul_law`
* `click_cul_history`
* `click_cul_geography`
* `click_cul_kpop`
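Either group (or any individual task above) can be selected by name in the harness. The snippet below is a minimal sketch of running the full `click` group through the Python API, assuming the harness's documented `lm_eval.simple_evaluate` entry point; the model name is only a placeholder. The CLI equivalent is `lm-eval --model hf --model_args pretrained=<model> --tasks click`.

```python
# Minimal sketch: evaluate a Hugging Face causal LM on the CLIcK group.
# The pretrained model below is a placeholder; substitute the model you want to test.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1.4b",  # placeholder model name
    tasks=["click"],  # or ["click_lang", "click_cul"] to run the subgroups separately
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])  # dict of acc / acc_norm metrics keyed by task and group name
```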
### Checklist
For adding novel benchmarks/datasets to the library:
* [X] Is the task an existing benchmark in the literature?
* [X] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
# Top-level group: aggregates the language and culture subgroups, weighting by subset size.
group: click
task:
  - click_lang
  - click_cul
aggregate_metric_list:
  - metric: acc
    aggregation: mean
    weight_by_size: true
  - metric: acc_norm
    aggregation: mean
    weight_by_size: true
metadata:
  version: 1.0
# Culture subgroup: aggregates every task tagged click_cul_tasks.
group: click_cul
task:
  - click_cul_tasks
aggregate_metric_list:
  - metric: acc
    aggregation: mean
    weight_by_size: true
  - metric: acc_norm
    aggregation: mean
    weight_by_size: true
metadata:
  version: 1.0
# Shared defaults (included as _default_click_cul_yaml) for all click_cul_* tasks:
# multiple choice over the CLIcK train split, with prompts and targets built in utils.py.
dataset_path: EunsuKim/CLIcK
test_split: train
fewshot_split: train
output_type: multiple_choice
doc_to_text: !function utils.get_context
doc_to_choice: !function utils.get_choices
doc_to_target: !function utils.get_target
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
metadata:
  version: 1.0
include: _default_click_cul_yaml
process_docs: !function utils.extract_economy
task: click_cul_economy
tag: click_cul_tasks
include: _default_click_cul_yaml
process_docs: !function utils.extract_geography
task: click_cul_geography
tag: click_cul_tasks
include: _default_click_cul_yaml
process_docs: !function utils.extract_history
task: click_cul_history
tag: click_cul_tasks
include: _default_click_cul_yaml
process_docs: !function utils.extract_kpop
task: click_cul_kpop
tag: click_cul_tasks
include: _default_click_cul_yaml
process_docs: !function utils.extract_law
task: click_cul_law
tag: click_cul_tasks
include: _default_click_cul_yaml
process_docs: !function utils.extract_politics
task: click_cul_politics
tag: click_cul_tasks
include: _default_click_cul_yaml
process_docs: !function utils.extract_society
task: click_cul_society
tag: click_cul_tasks
include: _default_click_cul_yaml
process_docs: !function utils.extract_tradition
task: click_cul_tradition
tag: click_cul_tasks
from typing import List
from datasets import Dataset
def get_context(doc) -> str:
    # Render the Korean zero-shot prompt. English gloss of the instruction:
    # "Read the given context/question carefully and answer with a single letter
    # among A, B, C, D."
    ctx = doc["paragraph"]
    q = doc["question"]
    opt = doc["choices"]
    if ctx:
        res = f"주어진 맥락을 천천히 읽고, 질문에 대한 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n맥락: {ctx}\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
    else:
        res = f"주어진 질문을 천천히 읽고, 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
    return res


def get_target(doc) -> str:
    # Map the gold answer string to its option letter; items whose id contains
    # "CSAT" are treated as five-way (A-E), all others as four-way (A-D).
    ans = doc["answer"]
    if "CSAT" in doc["id"]:
        return ["A", "B", "C", "D", "E"][doc["choices"].index(ans)]
    return ["A", "B", "C", "D"][doc["choices"].index(ans)]


def get_choices(doc) -> List[str]:
    # CSAT-sourced items are five-way multiple choice; everything else is four-way.
    if "CSAT" in doc["id"]:
        return ["A", "B", "C", "D", "E"]
    return ["A", "B", "C", "D"]
# Each extract_* helper carves one culture category out of the single train split
# by matching source/category markers embedded in the example id.
def extract_economy(dataset: Dataset) -> Dataset:
    return dataset.filter(lambda example: "economy" in example["id"].lower())


def extract_geography(dataset: Dataset) -> Dataset:
    return dataset.filter(lambda example: "geography" in example["id"].lower())


def extract_history(dataset: Dataset) -> Dataset:
    return dataset.filter(
        lambda example: "KHB" in example["id"] or "history" in example["id"].lower()
    )


def extract_law(dataset: Dataset) -> Dataset:
    # PSAT-sourced questions are counted as law.
    return dataset.filter(
        lambda example: "law" in example["id"].lower() or "PSAT" in example["id"]
    )


def extract_politics(dataset: Dataset) -> Dataset:
    return dataset.filter(lambda example: "politics" in example["id"].lower())


def extract_kpop(dataset: Dataset) -> Dataset:
    # The K-pop / popular-culture category is marked by "popular" in the id.
    return dataset.filter(lambda example: "popular" in example["id"].lower())


def extract_society(dataset: Dataset) -> Dataset:
    return dataset.filter(lambda example: "society" in example["id"].lower())


def extract_tradition(dataset: Dataset) -> Dataset:
    return dataset.filter(lambda example: "tradition" in example["id"].lower())
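To illustrate how these helpers drive the `multiple_choice` tasks, the sketch below applies `get_context`, `get_choices`, and `get_target` to a made-up CLIcK-style record; the field values are hypothetical and only mirror the schema (`id`, `paragraph`, `question`, `choices`, `answer`) that the functions above expect.

```python
# Hypothetical record, for illustration only; real examples come from the
# EunsuKim/CLIcK train split referenced by the task configs above.
sample_doc = {
    "id": "Culture_Economy_1",  # made-up id; "economy" in it means extract_economy would keep it
    "paragraph": "",            # no passage, so the question-only prompt template is used
    "question": "다음 중 한국의 중앙은행은?",  # "Which of the following is Korea's central bank?"
    "choices": ["한국은행", "산업은행", "수출입은행", "국민은행"],
    "answer": "한국은행",
}

print(get_context(sample_doc))  # Korean instruction + question + options A-D, ending with "정답:"
print(get_choices(sample_doc))  # ['A', 'B', 'C', 'D'] (only CSAT-sourced items are five-way)
print(get_target(sample_doc))   # 'A' -- the letter of the option matching doc["answer"]
```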
# Language subgroup: aggregates every task tagged click_lang_tasks.
group: click_lang
task:
  - click_lang_tasks
aggregate_metric_list:
  - metric: acc
    aggregation: mean
    weight_by_size: true
  - metric: acc_norm
    aggregation: mean
    weight_by_size: true
metadata:
  version: 1.0
# Shared defaults (included as _default_click_lang_yaml) for all click_lang_* tasks.
dataset_path: EunsuKim/CLIcK
test_split: train
fewshot_split: train
output_type: multiple_choice
doc_to_text: !function utils.get_context
doc_to_choice: !function utils.get_choices
doc_to_target: !function utils.get_target
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
metadata:
  version: 1.0
include: _default_click_lang_yaml
process_docs: !function utils.extract_function
task: click_lang_function
tag: click_lang_tasks
include: _default_click_lang_yaml
process_docs: !function utils.extract_grammar
task: click_lang_grammar
tag: click_lang_tasks
include: _default_click_lang_yaml
process_docs: !function utils.extract_text
task: click_lang_text
tag: click_lang_tasks
from typing import List
from datasets import Dataset
def get_context(doc) -> str:
    ctx = doc["paragraph"]
    q = doc["question"]
    opt = doc["choices"]
    if ctx:
        res = f"주어진 맥락을 천천히 읽고, 질문에 대한 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n맥락: {ctx}\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
    else:
        res = f"주어진 질문을 천천히 읽고, 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
    return res


def get_target(doc) -> str:
    ans = doc["answer"]
    if "CSAT" in doc["id"]:
        return ["A", "B", "C", "D", "E"][doc["choices"].index(ans)]
    return ["A", "B", "C", "D"][doc["choices"].index(ans)]


def get_choices(doc) -> List[str]:
    if "CSAT" in doc["id"]:
        return ["A", "B", "C", "D", "E"]
    return ["A", "B", "C", "D"]
def extract_text(dataset: Dataset) -> Dataset:
    # Text comprehension: CSAT Korean '22 items, CSAT Korean '23 items numbered below 35,
    # and TK items numbered above 4.
    return dataset.filter(
        lambda example: "CSAT_korean_22" in example["id"]
        or (
            "CSAT_korean_23" in example["id"] and int(example["id"].split("_")[-1]) < 35
        )
        or ("TK" in example["id"] and int(example["id"].split("_")[-1]) > 4)
    )


def extract_grammar(dataset: Dataset) -> Dataset:
    # Grammar: pre-'21 CSAT Korean items with question numbers above 10; Kedu exams from
    # the 2010s, excluding the '16 questions that mention 대화/발화/질의 (dialogue/utterance/
    # inquiry), which belong to the "function" split; and TK items numbered below 5.
    return dataset.filter(
        lambda example: (
            "CSAT_korean" in example["id"]
            and (
                int(example["id"].split("_")[2]) < 21
                and int(example["id"].split("_")[3]) > 10
            )
        )
        or (
            "Kedu_1" in example["id"]
            and (
                example["id"].split("_")[1] != "16"
                or not (
                    "대화" in example["question"]
                    or "발화" in example["question"]
                    or "질의" in example["question"]
                )
            )
        )
        or ("TK" in example["id"] and int(example["id"].split("_")[-1]) < 5)
    )


def extract_function(dataset: Dataset) -> Dataset:
    # Functional language use: CSAT Korean items numbered above 34, or pre-'21 CSAT items
    # numbered below 11; Kedu '16 questions that mention 대화/발화/질의; and all PSE Korean items.
    return dataset.filter(
        lambda example: (
            "CSAT_korean" in example["id"]
            and (
                int(example["id"].split("_")[-1]) > 34
                or (
                    int(example["id"].split("_")[2]) < 21
                    and int(example["id"].split("_")[3]) < 11
                )
            )
        )
        or (
            "Kedu_16" in example["id"]
            and (
                "대화" in example["question"]
                or "발화" in example["question"]
                or "질의" in example["question"]
            )
        )
        or "PSE_korean" in example["id"]
    )
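As a quick sanity check of the id-based partition above, one can apply each filter to the released split and eyeball the resulting counts. The sketch below assumes the functions in this file are importable as `utils` and that the `EunsuKim/CLIcK` dataset is reachable on the Hugging Face Hub.

```python
# Sanity-check sketch: count how many examples each "language" filter keeps.
# Assumes the extract_* functions above are importable as `utils` (hypothetical import
# path) and that the EunsuKim/CLIcK train split can be downloaded from the Hub.
from datasets import load_dataset

import utils

ds = load_dataset("EunsuKim/CLIcK", split="train")
for name, extract in [
    ("text", utils.extract_text),
    ("grammar", utils.extract_grammar),
    ("function", utils.extract_function),
]:
    print(f"click_lang_{name}: {len(extract(ds))} examples")
```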