Unverified Commit ade01428 authored by Harsh Kohli, committed by GitHub

Groundcocoa (#2724)



* Fix failing tests

* Resolved merge conflicts

* pre-commit

---------
Co-authored-by: Baber <baber@hey.com>
parent 529f4805
@@ -51,6 +51,7 @@
| [glue](glue/README.md) | General Language Understanding Evaluation benchmark to test broad language abilities. | English |
| [gpqa](gpqa/README.md) | A graduate-level, "Google-proof" question-answering benchmark written by domain experts in biology, physics, and chemistry. | English |
| [gsm8k](gsm8k/README.md) | A benchmark of grade school math problems aimed at evaluating reasoning capabilities. | English |
| [groundcocoa](groundcocoa/README.md) | A benchmark evaluating the conditional and compositional reasoning of language models using a grounding task. | English |
| [haerae](haerae/README.md) | Tasks focused on assessing detailed factual and historical knowledge. | Korean |
| [headqa](headqa/README.md) | A high-level education-based question answering dataset to test specialized knowledge. | Spanish, English |
| [hellaswag](hellaswag/README.md) | Tasks to predict the ending of stories or scenarios, testing comprehension and creativity. | English |
@@ -86,7 +87,7 @@
| [mlqa](mlqa/README.md) | MultiLingual Question Answering benchmark dataset for evaluating cross-lingual question answering performance. | English, Arabic, German, Spanish, Hindi, Vietnamese, Simplified Chinese |
| [mmlu](mmlu/README.md) | Massive Multitask Language Understanding benchmark for broad domain language evaluation. Several variants are supported. | English |
| [mmlu_pro](mmlu_pro/README.md) | A refined set of MMLU, integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options. | English |
| [mmlu-pro-plus](mmlu-pro-plus/README.md) | A new test set for evaluating shortcut learning and higher-order reasoning of LLMs. | English |
| [mmlusr](mmlusr/README.md) | Variation of MMLU designed to be more rigorous. | English |
| model_written_evals | Evaluation tasks auto-generated for evaluating a collection of AI Safety concerns. | |
| [moral_stories](moral_stories/README.md) | A crowd-sourced dataset of structured narratives that describe normative and norm-divergent actions taken by individuals to accomplish certain intentions in concrete situations. | English |
# GroundCocoa
### Paper
Title: `GroundCocoa: A Benchmark for Evaluating Compositional & Conditional Reasoning in Language Models`

Abstract: https://arxiv.org/abs/2404.04237

The rapid progress of large language models (LLMs) has seen them excel and frequently surpass human performance on standard benchmarks. This has enabled many downstream applications, such as LLM agents, to rely on their reasoning to address complex task requirements. However, LLMs are known to unexpectedly falter in simple tasks and under seemingly straightforward circumstances - underscoring the need for better and more diverse evaluation setups to measure their true capabilities. To this end, we choose to study compositional and conditional reasoning, two aspects that are central to human cognition, and introduce GroundCocoa - a lexically diverse benchmark connecting these reasoning skills to the real-world problem of flight booking. Our task involves aligning detailed user preferences with available flight options presented in a multiple-choice format. Results indicate a significant disparity in performance among current state-of-the-art LLMs with even the best performing model, GPT-4 Turbo, not exceeding 67% accuracy despite advanced prompting techniques.

Homepage: `https://osu-nlp-group.github.io/GroundCocoa/`
### Citation
```
@misc{kohli2025groundcocoabenchmarkevaluatingcompositional,
title={GroundCocoa: A Benchmark for Evaluating Compositional & Conditional Reasoning in Language Models},
author={Harsh Kohli and Sachin Kumar and Huan Sun},
year={2025},
eprint={2404.04237},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2404.04237},
}
```
### Groups and Tasks
#### Groups
- Not part of a group yet
#### Tasks
- `groundcocoa`
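
A minimal sketch for running the task through the harness's Python API (assuming a recent lm-evaluation-harness install; `gpt2` below is only a placeholder model):

```
# Minimal sketch: evaluate a model on groundcocoa via lm-eval's Python API.
# Assumes lm-evaluation-harness is installed; the pretrained model is a placeholder.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                       # Hugging Face transformers backend
    model_args="pretrained=gpt2",     # substitute the model you actually want to test
    tasks=["groundcocoa"],
    batch_size=8,
)
print(results["results"]["groundcocoa"])  # per-task metrics, e.g. accuracy
```
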
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
task: groundcocoa
dataset_path: harsh147/GroundCocoa
output_type: multiple_choice
training_split: null
validation_split: validation
test_split: test
# Build the prompt, answer choices, and gold label from the raw rows (see utils.py).
process_docs: !function utils.process_docs
doc_to_text: "{{criteria}}"
doc_to_target: gold
doc_to_choice: "choices"
# Each choice already begins with "The answer is Option ...", so nothing is inserted
# between the prompt and the continuation.
target_delimiter: ""
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
dataset_kwargs:
  trust_remote_code: true
  streaming: true
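
The `process_docs` helper referenced above lives in the task's `utils.py`, shown next. Before that, here is a rough sketch of how one processed document flows through this config under `output_type: multiple_choice`; it mirrors the harness's behavior but is not its actual implementation:

```
# Rough sketch (not the harness's internal code) of how one processed doc is scored
# under output_type: multiple_choice with target_delimiter: "".
doc = {
    "criteria": "A user has specified certain criteria for booking a flight. ...",  # truncated example prompt
    "choices": ["The answer is Option " + c for c in "ABCDE"],
    "gold": "The answer is Option C",  # placeholder gold answer
}

context = doc["criteria"]  # doc_to_text: "{{criteria}}"
# One log-likelihood request per choice; each continuation is appended directly to the
# context because target_delimiter is the empty string.
requests = [(context, choice) for choice in doc["choices"]]
# The prediction is the argmax over per-choice log-likelihoods; acc is 1 when that
# index equals the index of the gold string within `choices`.
gold_index = doc["choices"].index(doc["gold"])
```
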
import datasets
import pandas as pd
from datasets import Dataset


def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
    """Convert raw GroundCocoa rows into the `criteria`, `choices`, and `gold` fields used by the task YAML."""
    # Materialize the (streaming) dataset so it can be iterated and re-packed.
    cocoa_dataset = [sample for sample in dataset]
    processed = []
    for doc in cocoa_dataset:
        # Instruction header, followed by the user criteria and the five flight options.
        question = (
            "A user has specified certain criteria for booking a flight. Below are five "
            "different flight options labeled 'A', 'B', 'C', 'D', and 'E'. Review these "
            "options and select the one that best matches the user requirements. Respond "
            "with a single option and the phrase 'The answer is Option ' followed by the "
            "correct letter - 'A', 'B', 'C', 'D', or 'E'\n\n"
        )
        question = question + "User Criteria: " + doc["query"]
        question = question + "\n\n Option A: " + str(doc["Option A"]) + "\n"
        question = question + "\n Option B: " + str(doc["Option B"]) + "\n"
        question = question + "\n Option C: " + str(doc["Option C"]) + "\n"
        question = question + "\n Option D: " + str(doc["Option D"]) + "\n"
        question = question + "\n Option E: " + str(doc["Option E"]) + "\n"
        out_doc = {
            "criteria": question,
            "choices": [
                "The answer is Option A",
                "The answer is Option B",
                "The answer is Option C",
                "The answer is Option D",
                "The answer is Option E",
            ],
            # The gold target is the full answer string, matching one entry in `choices`.
            "gold": "The answer is Option " + doc["Answer"],
        }
        processed.append(out_doc)
    # Re-pack the processed docs as a regular (non-streaming) Dataset for the harness.
    df = pd.DataFrame(processed)
    dataset = Dataset.from_pandas(df)
    return dataset
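
As a quick sanity check of `process_docs`, the sketch below feeds it a single made-up row whose column names (`query`, `Option A` through `Option E`, `Answer`) mirror the Hugging Face dataset; the values themselves are invented purely for illustration:

```
# Hypothetical one-row dataset for exercising process_docs; only the column
# names match harsh147/GroundCocoa, the values are made up.
from datasets import Dataset

toy = Dataset.from_list(
    [
        {
            "query": "Non-stop flight departing in the morning, under $300.",
            "Option A": {"stops": 1, "departure": "08:10", "price": 250},
            "Option B": {"stops": 0, "departure": "09:30", "price": 280},
            "Option C": {"stops": 0, "departure": "18:45", "price": 260},
            "Option D": {"stops": 2, "departure": "07:00", "price": 190},
            "Option E": {"stops": 0, "departure": "10:15", "price": 340},
            "Answer": "B",
        }
    ]
)

processed = process_docs(toy)
print(processed[0]["criteria"][:120])  # start of the rendered prompt
print(processed[0]["gold"])            # "The answer is Option B"
```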