Unverified Commit bb433af7 authored by Geun, Lim, committed by GitHub

feat: Add CLIcK task (#3173)

* feat: Add CLIcK task

* Fix formatting issues

* Add Click Task Description

* fix: lint

* fix
# click
### Paper
Title: `CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean`
Abstract: `Despite the rapid development of large language models (LLMs) for the Korean language, there remains an obvious lack of benchmark datasets that test the requisite Korean cultural and linguistic knowledge. Because many existing Korean benchmark datasets are derived from the English counterparts through translation, they often overlook the different cultural contexts. For the few benchmark datasets that are sourced from Korean data capturing cultural knowledge, only narrow tasks such as bias and hate speech detection are offered. To address this gap, we introduce a benchmark of Cultural and Linguistic Intelligence in Korean (CLIcK), a dataset comprising 1,995 QA pairs. CLIcK sources its data from official Korean exams and textbooks, partitioning the questions into eleven categories under the two main categories of language and culture. For each instance in CLIcK, we provide fine-grained annotation of which cultural and linguistic knowledge is required to answer the question correctly. Using CLIcK, we test 13 language models to assess their performance. Our evaluation uncovers insights into their performances across the categories, as well as the diverse factors affecting their comprehension. CLIcK offers the first large-scale comprehensive Korean-centric analysis of LLMs' proficiency in Korean culture and language.`
Homepage: https://huggingface.co/datasets/EunsuKim/CLIcK
### Citation
```
@misc{kim2024click,
    title={CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean},
    author={Eunsu Kim and Juyoung Suk and Philhoon Oh and Haneul Yoo and James Thorne and Alice Oh},
    year={2024},
    eprint={2403.06412},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
### Groups, Tags, and Tasks
#### Groups
* `click`: All 11 categories of the CLIcK dataset
* `click_lang`: "Language" category of the CLIcK dataset, consisting of 3 subcategories
* `click_cul`: "Culture" category of the CLIcK dataset, consisting of 8 subcategories
#### Tasks
* Three tasks under `click_lang`:
* `click_lang_text`
* `click_lang_grammar`
* `click_lang_function`
* Eight tasks under `click_cul`:
* `click_cul_society`
* `click_cul_tradition`
* `click_cul_politics`
* `click_cul_economy`
* `click_cul_law`
* `click_cul_history`
* `click_cul_geography`
* `click_cul_kpop`
### Checklist
For adding novel benchmarks/datasets to the library:
* [X] Is the task an existing benchmark in the literature?
* [X] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
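Assuming a working `lm-evaluation-harness` installation, the full benchmark (or any single group or task listed above) can be run with the standard CLI. The model below is only a placeholder; substitute whichever model you want to evaluate:

```shell
# Placeholder model name; any Hugging Face causal LM works here.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/polyglot-ko-1.3b \
    --tasks click \
    --batch_size 8
```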
group: click
task:
  - click_lang
  - click_cul
aggregate_metric_list:
  - metric: acc
    aggregation: mean
    weight_by_size: true
  - metric: acc_norm
    aggregation: mean
    weight_by_size: true
metadata:
  version: 1.0
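With `weight_by_size: true`, the group score is a size-weighted mean of the subtask scores rather than a plain average. A minimal standalone sketch of that aggregation (the accuracies and sizes below are illustrative, not CLIcK's real numbers):

```python
def weighted_mean(scores, sizes):
    # Size-weighted mean: larger subtasks contribute proportionally more.
    assert len(scores) == len(sizes) and sum(sizes) > 0
    return sum(s * n for s, n in zip(scores, sizes)) / sum(sizes)


# Hypothetical subtask accuracies and example counts.
subtask_acc = {"click_lang": 0.60, "click_cul": 0.40}
subtask_n = {"click_lang": 500, "click_cul": 1500}

group_acc = weighted_mean(list(subtask_acc.values()), list(subtask_n.values()))
print(group_acc)  # 0.45 -- the larger subtask dominates the group score
```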
group: click_cul
task:
  - click_cul_tasks
aggregate_metric_list:
  - metric: acc
    aggregation: mean
    weight_by_size: true
  - metric: acc_norm
    aggregation: mean
    weight_by_size: true
metadata:
  version: 1.0
dataset_path: EunsuKim/CLIcK
test_split: train
fewshot_split: train
output_type: multiple_choice
doc_to_text: !function utils.get_context
doc_to_choice: !function utils.get_choices
doc_to_target: !function utils.get_target
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
metadata:
  version: 1.0
include: _default_click_cul_yaml
process_docs: !function utils.extract_economy
task: click_cul_economy
tag: click_cul_tasks

include: _default_click_cul_yaml
process_docs: !function utils.extract_geography
task: click_cul_geography
tag: click_cul_tasks

include: _default_click_cul_yaml
process_docs: !function utils.extract_history
task: click_cul_history
tag: click_cul_tasks

include: _default_click_cul_yaml
process_docs: !function utils.extract_kpop
task: click_cul_kpop
tag: click_cul_tasks

include: _default_click_cul_yaml
process_docs: !function utils.extract_law
task: click_cul_law
tag: click_cul_tasks

include: _default_click_cul_yaml
process_docs: !function utils.extract_politics
task: click_cul_politics
tag: click_cul_tasks

include: _default_click_cul_yaml
process_docs: !function utils.extract_society
task: click_cul_society
tag: click_cul_tasks

include: _default_click_cul_yaml
process_docs: !function utils.extract_tradition
task: click_cul_tradition
tag: click_cul_tasks
from typing import List

from datasets import Dataset


def get_context(doc) -> str:
    # Render the Korean multiple-choice prompt; include the paragraph only
    # when the document has one.
    ctx = doc["paragraph"]
    q = doc["question"]
    opt = doc["choices"]
    if ctx:
        res = f"주어진 맥락을 천천히 읽고, 질문에 대한 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n맥락: {ctx}\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
    else:
        res = f"주어진 질문을 천천히 읽고, 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
    return res


def get_target(doc) -> str:
    # Map the gold answer string to its option letter; CSAT questions carry
    # five options, all others four.
    ans = doc["answer"]
    if "CSAT" in doc["id"]:
        return ["A", "B", "C", "D", "E"][doc["choices"].index(ans)]
    return ["A", "B", "C", "D"][doc["choices"].index(ans)]


def get_choices(doc) -> List[str]:
    # The letter alphabet the model chooses from (five letters for CSAT).
    if "CSAT" in doc["id"]:
        return ["A", "B", "C", "D", "E"]
    return ["A", "B", "C", "D"]


# Each extract_* helper routes examples to a subtask by substring matching
# on the example id.
def extract_economy(dataset: Dataset) -> Dataset:
    return dataset.filter(lambda example: "economy" in example["id"].lower())


def extract_geography(dataset: Dataset) -> Dataset:
    return dataset.filter(lambda example: "geography" in example["id"].lower())


def extract_history(dataset: Dataset) -> Dataset:
    # History items also appear under "KHB" ids.
    return dataset.filter(
        lambda example: "KHB" in example["id"] or "history" in example["id"].lower()
    )


def extract_law(dataset: Dataset) -> Dataset:
    # "PSAT" (civil-service exam) items map to the law subtask.
    return dataset.filter(
        lambda example: "law" in example["id"].lower() or "PSAT" in example["id"]
    )


def extract_politics(dataset: Dataset) -> Dataset:
    return dataset.filter(lambda example: "politics" in example["id"].lower())


def extract_kpop(dataset: Dataset) -> Dataset:
    # The K-pop subtask uses "popular" (popular culture) ids.
    return dataset.filter(lambda example: "popular" in example["id"].lower())


def extract_society(dataset: Dataset) -> Dataset:
    return dataset.filter(lambda example: "society" in example["id"].lower())


def extract_tradition(dataset: Dataset) -> Dataset:
    return dataset.filter(lambda example: "tradition" in example["id"].lower())
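The answer-letter mapping in `get_target`/`get_choices` can be exercised in isolation. A standalone sketch on synthetic documents (real CLIcK rows carry more fields than these):

```python
# Standalone copies of the two helpers, applied to made-up documents.
def get_choices(doc):
    # CSAT questions carry five options, everything else four.
    if "CSAT" in doc["id"]:
        return ["A", "B", "C", "D", "E"]
    return ["A", "B", "C", "D"]


def get_target(doc):
    # Map the gold answer string to its option letter.
    return get_choices(doc)[doc["choices"].index(doc["answer"])]


doc = {"id": "Tradition_1", "choices": ["갑", "을", "병", "정"], "answer": "병"}
print(get_choices(doc))  # ['A', 'B', 'C', 'D']
print(get_target(doc))   # C  (answer is the third option)

csat = {"id": "CSAT_korean_22_3", "choices": list("abcde"), "answer": "e"}
print(get_target(csat))  # E
```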
group: click_lang
task:
  - click_lang_tasks
aggregate_metric_list:
  - metric: acc
    aggregation: mean
    weight_by_size: true
  - metric: acc_norm
    aggregation: mean
    weight_by_size: true
metadata:
  version: 1.0
dataset_path: EunsuKim/CLIcK
test_split: train
fewshot_split: train
output_type: multiple_choice
doc_to_text: !function utils.get_context
doc_to_choice: !function utils.get_choices
doc_to_target: !function utils.get_target
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
metadata:
  version: 1.0
include: _default_click_lang_yaml
process_docs: !function utils.extract_function
task: click_lang_function
tag: click_lang_tasks

include: _default_click_lang_yaml
process_docs: !function utils.extract_grammar
task: click_lang_grammar
tag: click_lang_tasks

include: _default_click_lang_yaml
process_docs: !function utils.extract_text
task: click_lang_text
tag: click_lang_tasks
from typing import List

from datasets import Dataset


def get_context(doc) -> str:
    # Render the Korean multiple-choice prompt; include the paragraph only
    # when the document has one.
    ctx = doc["paragraph"]
    q = doc["question"]
    opt = doc["choices"]
    if ctx:
        res = f"주어진 맥락을 천천히 읽고, 질문에 대한 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n맥락: {ctx}\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
    else:
        res = f"주어진 질문을 천천히 읽고, 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
    return res


def get_target(doc) -> str:
    # Map the gold answer string to its option letter; CSAT questions carry
    # five options, all others four.
    ans = doc["answer"]
    if "CSAT" in doc["id"]:
        return ["A", "B", "C", "D", "E"][doc["choices"].index(ans)]
    return ["A", "B", "C", "D"][doc["choices"].index(ans)]


def get_choices(doc) -> List[str]:
    # The letter alphabet the model chooses from (five letters for CSAT).
    if "CSAT" in doc["id"]:
        return ["A", "B", "C", "D", "E"]
    return ["A", "B", "C", "D"]


def extract_text(dataset: Dataset) -> Dataset:
    # Text subtask: all 2022 CSAT items, 2023 CSAT items numbered below 35,
    # and TK items numbered above 4.
    return dataset.filter(
        lambda example: "CSAT_korean_22" in example["id"]
        or (
            "CSAT_korean_23" in example["id"] and int(example["id"].split("_")[-1]) < 35
        )
        or ("TK" in example["id"] and int(example["id"].split("_")[-1]) > 4)
    )


def extract_grammar(dataset: Dataset) -> Dataset:
    # Grammar subtask: pre-2021 CSAT items numbered above 10; Kedu_1* items,
    # except the 2016 dialogue/utterance/inquiry ("대화"/"발화"/"질의")
    # questions, which go to the function subtask; and TK items below 5.
    return dataset.filter(
        lambda example: (
            "CSAT_korean" in example["id"]
            and (
                int(example["id"].split("_")[2]) < 21
                and int(example["id"].split("_")[3]) > 10
            )
        )
        or (
            "Kedu_1" in example["id"]
            and (
                example["id"].split("_")[1] != "16"
                or not (
                    "대화" in example["question"]
                    or "발화" in example["question"]
                    or "질의" in example["question"]
                )
            )
        )
        or ("TK" in example["id"] and int(example["id"].split("_")[-1]) < 5)
    )


def extract_function(dataset: Dataset) -> Dataset:
    # Function subtask: CSAT items numbered above 34, or pre-2021 CSAT items
    # numbered below 11; the Kedu_16 dialogue/utterance/inquiry questions;
    # and all PSE_korean items.
    return dataset.filter(
        lambda example: (
            "CSAT_korean" in example["id"]
            and (
                int(example["id"].split("_")[-1]) > 34
                or (
                    int(example["id"].split("_")[2]) < 21
                    and int(example["id"].split("_")[3]) < 11
                )
            )
        )
        or (
            "Kedu_16" in example["id"]
            and (
                "대화" in example["question"]
                or "발화" in example["question"]
                or "질의" in example["question"]
            )
        )
        or "PSE_korean" in example["id"]
    )
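The id-parsing rules in `extract_text` can be checked without pulling in `datasets`: the same predicate works on plain dicts. A standalone sketch (the ids below are made up, but follow the patterns the filter keys on):

```python
# Standalone copy of the extract_text predicate, applied to plain dicts
# instead of a datasets.Dataset. The ids are illustrative.
def is_text(example):
    id_ = example["id"]
    return (
        "CSAT_korean_22" in id_
        or ("CSAT_korean_23" in id_ and int(id_.split("_")[-1]) < 35)
        or ("TK" in id_ and int(id_.split("_")[-1]) > 4)
    )


examples = [
    {"id": "CSAT_korean_22_10"},  # 2022 CSAT: always text
    {"id": "CSAT_korean_23_40"},  # 2023 CSAT, item 40: excluded (>= 35)
    {"id": "TK_2_7"},             # TK item 7: included (> 4)
    {"id": "TK_2_3"},             # TK item 3: excluded
]
print([ex["id"] for ex in examples if is_text(ex)])
# ['CSAT_korean_22_10', 'TK_2_7']
```

`Dataset.filter` applies exactly this kind of per-example predicate, so the lambda in the real helper behaves the same way.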