"GRUB2/MOD_SRC/vscode:/vscode.git/clone" did not exist on "09a6d33d62b1302aa40fe17338f971a9a48cd1eb"
Unverified commit 3d1b8f43 authored by Lintang Sutawika, committed by GitHub

Merge branch 'main' into group-agg-rework

parents e200c24e d855d0ba
include: "_template_yaml"
task: leaderboard_musr_object_placements
test_split: object_placements
include: "_template_yaml"
task: leaderboard_musr_team_allocation
test_split: team_allocation
import ast


def doc_to_choice(doc):
    """
    Convert a doc to a choice.
    """
    return ast.literal_eval(doc["choices"])


DOC_TO_TEXT = "{narrative}\n\n" "{question}\n\n" "{choices}\n" "Answer:"


def doc_to_text(doc):
    """
    Convert a doc to text.
    """
    choices = ""
    for i, choice in enumerate(ast.literal_eval(doc["choices"])):
        choices += f"{i+1} - {choice}\n"
    text = DOC_TO_TEXT.format(
        narrative=doc["narrative"], question=doc["question"], choices=choices
    )
    return text
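For reference, a minimal sketch of how these helpers render a MuSR-style document into a prompt. The field values below are invented for illustration; the only assumption taken from the code above is that the dataset stores `choices` as a stringified Python list, which is why `ast.literal_eval` is used.

```python
# Hypothetical MuSR-style document (values invented for illustration).
doc = {
    "narrative": "Alice left the keys on the kitchen table before going out.",
    "question": "Where does Bob think the keys are?",
    "choices": "['kitchen table', 'coat pocket', 'car']",
}

print(doc_to_text(doc))
# Alice left the keys on the kitchen table before going out.
#
# Where does Bob think the keys are?
#
# 1 - kitchen table
# 2 - coat pocket
# 3 - car
#
# Answer:

print(doc_to_choice(doc))  # ['kitchen table', 'coat pocket', 'car']
```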
""" """
Take in a YAML, and output all "other" splits with this YAML Take in a YAML, and output all "other" splits with this YAML
""" """
import argparse import argparse
import logging import logging
import os import os
......
@@ -9,3 +9,5 @@ doc_to_choice: "{{choices}}"
 doc_to_target: "{{answer}}"
 metadata:
   version: 0.0
+dataset_kwargs:
+  trust_remote_code: true
@@ -13,3 +13,5 @@ metric_list:
     higher_is_better: true
 metadata:
   version: 0.0
+dataset_kwargs:
+  trust_remote_code: true
@@ -27,3 +27,5 @@ metric_list:
     ignore_punctuation: true
 metadata:
   version: 1.0
+dataset_kwargs:
+  trust_remote_code: true
@@ -34,3 +34,5 @@ metric_list:
     ignore_punctuation: true
 metadata:
   version: 2.0
+dataset_kwargs:
+  trust_remote_code: true
@@ -31,3 +31,5 @@ metric_list:
     higher_is_better: true
 metadata:
   version: 2.0
+dataset_kwargs:
+  trust_remote_code: true
@@ -13,3 +13,5 @@ metric_list:
     higher_is_better: true
 metadata:
   version: 1.0
+dataset_kwargs:
+  trust_remote_code: true
@@ -16,3 +16,5 @@ metric_list:
     higher_is_better: true
 metadata:
   version: 1.0
+dataset_kwargs:
+  trust_remote_code: true
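These hunks all make the same change: each task config gains a `dataset_kwargs` block, whose keys the harness forwards to `datasets.load_dataset` when the task's data is fetched. Roughly, the effect is equivalent to the call below; the dataset path is a placeholder, and `trust_remote_code` is accepted by recent versions of the `datasets` library for datasets that ship a loading script.

```python
from datasets import load_dataset

# Sketch only: "org/some_dataset" stands in for the task's dataset_path,
# and the keyword argument mirrors the dataset_kwargs block added above.
data = load_dataset(
    "org/some_dataset",      # hypothetical dataset_path
    trust_remote_code=True,  # from dataset_kwargs
)
```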
# NoticIA
### Paper
Title: `NoticIA: A Clickbait Article Summarization Dataset in Spanish`
Abstract: https://arxiv.org/abs/2404.07611
We present NoticIA, a dataset consisting of 850 Spanish news articles featuring prominent clickbait headlines, each paired with high-quality, single-sentence generative summarizations written by humans. This task demands advanced text understanding and summarization abilities, challenging the models' capacity to infer and connect diverse pieces of information to meet the user's informational needs generated by the clickbait headline. We evaluate the Spanish text comprehension capabilities of a wide range of state-of-the-art large language models. Additionally, we use the dataset to train ClickbaitFighter, a task-specific model that achieves near-human performance in this task.
Homepage: https://github.com/ikergarcia1996/NoticIA
### Citation
```
@article{noticia2024,
title={NoticIA: A Clickbait Article Summarization Dataset in Spanish},
author={Iker García-Ferrero and Begoña Altuna},
year={2024},
journal = {Procesamiento del Lenguaje Natural},
volume = {73},
number = {0},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `noticia`
#### Metrics
Following the original implementation, this task computes the ROUGE-1 score and the average summary length (in words).
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
task: noticia
dataset_path: Iker/NoticIA
dataset_name: null
output_type: generate_until
generation_kwargs:
  until:
    - "\n\n"
    - "\n"
  do_sample: false
  temperature: 0.0
training_split: null
validation_split: null
test_split: test
fewshot_split: null
doc_to_text: "Ahora eres una Inteligencia Artificial experta en desmontar titulares sensacionalistas o clickbait. Tu tarea consiste en analizar noticias con titulares sensacionalistas y generar un resumen de una sola frase que revele la verdad detrás del titular.\nEste es el titular de la noticia: {{web_headline}}\nEl titular plantea una pregunta o proporciona información incompleta. Debes buscar en el cuerpo de la noticia una frase que responda lo que se sugiere en el título. Responde siempre que puedas parafraseando el texto original. Usa siempre las mínimas palabras posibles. Recuerda responder siempre en Español.\nEste es el cuerpo de la noticia:\n{{web_text}}"
doc_to_target: summary
target_delimiter: " "
num_fewshot: 0
should_decontaminate: false
doc_to_decontamination_query: sentence
metric_list:
  - metric: !function utils.rouge1
    higher_is_better: true
    aggregation: !function utils.rouge1_agg
  - metric: !function utils.average_len
    higher_is_better: false
    aggregation: !function utils.average_len_agg
metadata:
  version: 1.0
import string

import evaluate


def clean_text(text: str) -> str:
    # Remove punctuation
    text = text.translate(str.maketrans("", "", string.punctuation))
    # Remove newlines and multiple spaces
    text = text.replace("\n", " ").strip()
    text = " ".join(text.split()).strip()
    # lowercase
    text = text.lower()
    return text


def rouge1(items):
    """
    # passthrough for efficiency
    """
    return items


def average_len(items):
    """
    # passthrough for efficiency
    """
    return items


def rouge1_agg(items):
    """
    Higher is better
    """
    refs = list(zip(*items))[0]
    refs = [[clean_text(ref)] for ref in refs]
    preds = [clean_text(x) for x in list(zip(*items))[1]]
    rouge_scorer = evaluate.load("rouge")
    return rouge_scorer.compute(predictions=preds, references=refs)["rouge1"]


def average_len_agg(items):
    """
    Lower is better (average summary length in words)
    """
    preds = [clean_text(x) for x in list(zip(*items))[1]]
    return sum(len(x.split()) for x in preds) / len(preds)
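A small, self-contained example of how the passthrough metrics and their aggregations fit together. The (reference, prediction) pairs below are invented, and `evaluate.load("rouge")` fetches the metric implementation (the `rouge_score` package) on first use.

```python
# Toy (reference, prediction) pairs; rouge1_agg unpacks them with zip(*items),
# so column 0 is the reference summary and column 1 is the model output.
items = [
    ("El futbolista anunció su retirada.", "El jugador anuncia que se retira."),
    ("La oferta termina el viernes.", "La oferta acaba este viernes."),
]

# The per-document "metrics" are passthroughs; the aggregations do the work.
scored = [rouge1(pair) for pair in items]
print(rouge1_agg(scored))       # corpus-level ROUGE-1 over cleaned text
print(average_len_agg(scored))  # mean prediction length in words
```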
@@ -14,24 +14,6 @@ Derived from the Natural Questions dataset, introduced in https://storage.google
 ### Citation
 ```
-@inproceedings{lee-etal-2019-latent,
-    title = "Latent Retrieval for Weakly Supervised Open Domain Question Answering",
-    author = "Lee, Kenton and
-      Chang, Ming-Wei and
-      Toutanova, Kristina",
-    editor = "Korhonen, Anna and
-      Traum, David and
-      M{\`a}rquez, Llu{\'\i}s",
-    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
-    month = jul,
-    year = "2019",
-    address = "Florence, Italy",
-    publisher = "Association for Computational Linguistics",
-    url = "https://aclanthology.org/P19-1612",
-    doi = "10.18653/v1/P19-1612",
-    pages = "6086--6096",
-    abstract = "Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match.",
-}
 @article{47761,
     title = {Natural Questions: a Benchmark for Question Answering Research},
     author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
......
# Paloma
### Paper
Title: Paloma: A Benchmark for Evaluating Language Model Fit
Abstract: https://arxiv.org/abs/2312.10523v1
Paloma is a comprehensive benchmark designed to evaluate open language models across a wide range of domains, ranging from niche artist communities to mental health forums on Reddit. It assesses the performance of various models across 585 distinct domains.
Homepage: https://allenai.org/olmo
### Note
If you are running the entire `paloma` benchmark (or just `paloma_dolma_100_programing_languages`) with a HuggingFace model, make sure to pass `logits_cache=False` to `--model_args`, for example:
```
lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m,logits_cache=False --tasks paloma
```
### Citation
```
@article{paloma,
title={{Paloma}: A Benchmark for Evaluating Language Model Fit},
author={Magnusson, Ian and Bhagia, Akshita and Hofmann, Valentin and Soldaini, Luca and Harsh Jha, Ananya and Tafjord, Oyvind and Schwenk, Dustin and Walsh, Evan Pete and Elazar, Yanai and Lo, Kyle and Groeneveld, Dirk and Beltagy, Iz and Hajishirzi, Hannaneh and Smith, Noah A. and Richardson, Kyle and Dodge, Jesse},
journal={technical report},
year={2023},
url={https://paloma.allen.ai/}
}
```
### Groups and Tasks
#### Groups
* `paloma`
#### Tasks
* `paloma_4chan_meta_sep`
* `paloma_c4_100_domains`
* `paloma_c4_en`
* `paloma_dolma_100_programing_languages`
* `paloma_dolma_100_subreddits`
* `paloma_dolma-v1_5`
* `paloma_falcon-refinedweb`
* `paloma_gab`
* `paloma_m2d2_s2orc_unsplit`
* `paloma_m2d2_wikipedia_unsplit`
* `paloma_manosphere_meta_sep`
* `paloma_mc4`
* `paloma_ptb`
* `paloma_redpajama`
* `paloma_twitterAAE_HELM_fixed`
* `paloma_wikitext_103`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
group:
  - paloma
dataset_path: allenai/paloma
output_type: loglikelihood_rolling
validation_split: val
test_split: test
doc_to_text: ""
doc_to_target: !function paloma_utils.doc_to_target
should_decontaminate: true
doc_to_decontamination_query: !function paloma_utils.doc_to_target
metric_list:
  - metric: word_perplexity
    aggregation: weighted_perplexity
    higher_is_better: false
  - metric: byte_perplexity
    aggregation: weighted_perplexity
    higher_is_better: false
  - metric: bits_per_byte
    aggregation: bits_per_byte
    higher_is_better: false
metadata:
  version: 1
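The template points `doc_to_target` (and the decontamination query) at `paloma_utils.doc_to_target`, which is not included in this diff. For a `loglikelihood_rolling` task the target is simply the document text whose perplexity is being measured; the sketch below is an assumption about what such a helper could look like, not the actual file, and presumes each Paloma record carries its content in a "text" field.

```python
# Hypothetical stand-in for paloma_utils.doc_to_target (not part of this diff).
def doc_to_target(doc: dict) -> str:
    # loglikelihood_rolling scores the whole document, so the "target" is the
    # raw text; doc_to_text stays empty ("") in the template above.
    return doc["text"]
```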
include: _paloma_template
task: paloma_4chan_meta_sep
task_alias: 4chan
dataset_name: 4chan_meta_sep
include: _paloma_template
task: paloma_c4_100_domains
task_alias: C4 100 Domains
dataset_name: c4_100_domains
include: _paloma_template
task: paloma_c4_en
task_alias: C4
dataset_name: c4_en