Commit da211969 authored by Jess, committed by GitHub

Merge branch 'EleutherAI:main' into main

parents 1b97e487 801322e0
"dataset_name": "marketing"
"description": "The following are multiple choice questions (with answers) about marketing.\n\
\n"
"group": "mmlu_other_generative"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "mmlu_marketing_generative"
"task_alias": "marketing"
"dataset_name": "medical_genetics"
"description": "The following are multiple choice questions (with answers) about medical\
\ genetics.\n\n"
"group": "mmlu_other_generative"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "mmlu_medical_genetics_generative"
"task_alias": "medical_genetics"
"dataset_name": "miscellaneous"
"description": "The following are multiple choice questions (with answers) about miscellaneous.\n\
\n"
"group": "mmlu_other_generative"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "mmlu_miscellaneous_generative"
"task_alias": "miscellaneous"
"dataset_name": "moral_disputes"
"description": "The following are multiple choice questions (with answers) about moral\
\ disputes.\n\n"
"group": "mmlu_humanities_generative"
"group_alias": "humanities"
"include": "_default_template_yaml"
"task": "mmlu_moral_disputes_generative"
"task_alias": "moral_disputes"
"dataset_name": "moral_scenarios"
"description": "The following are multiple choice questions (with answers) about moral\
\ scenarios.\n\n"
"group": "mmlu_humanities_generative"
"group_alias": "humanities"
"include": "_default_template_yaml"
"task": "mmlu_moral_scenarios_generative"
"task_alias": "moral_scenarios"
"dataset_name": "nutrition"
"description": "The following are multiple choice questions (with answers) about nutrition.\n\
\n"
"group": "mmlu_other_generative"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "mmlu_nutrition_generative"
"task_alias": "nutrition"
"dataset_name": "philosophy"
"description": "The following are multiple choice questions (with answers) about philosophy.\n\
\n"
"group": "mmlu_humanities_generative"
"group_alias": "humanities"
"include": "_default_template_yaml"
"task": "mmlu_philosophy_generative"
"task_alias": "philosophy"
"dataset_name": "prehistory"
"description": "The following are multiple choice questions (with answers) about prehistory.\n\
\n"
"group": "mmlu_humanities_generative"
"group_alias": "humanities"
"include": "_default_template_yaml"
"task": "mmlu_prehistory_generative"
"task_alias": "prehistory"
"dataset_name": "professional_accounting"
"description": "The following are multiple choice questions (with answers) about professional\
\ accounting.\n\n"
"group": "mmlu_other_generative"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "mmlu_professional_accounting_generative"
"task_alias": "professional_accounting"
"dataset_name": "professional_law"
"description": "The following are multiple choice questions (with answers) about professional\
\ law.\n\n"
"group": "mmlu_humanities_generative"
"group_alias": "humanities"
"include": "_default_template_yaml"
"task": "mmlu_professional_law_generative"
"task_alias": "professional_law"
"dataset_name": "professional_medicine"
"description": "The following are multiple choice questions (with answers) about professional\
\ medicine.\n\n"
"group": "mmlu_other_generative"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "mmlu_professional_medicine_generative"
"task_alias": "professional_medicine"
"dataset_name": "professional_psychology"
"description": "The following are multiple choice questions (with answers) about professional\
\ psychology.\n\n"
"group": "mmlu_social_sciences_generative"
"group_alias": "social_sciences"
"include": "_default_template_yaml"
"task": "mmlu_professional_psychology_generative"
"task_alias": "professional_psychology"
"dataset_name": "public_relations"
"description": "The following are multiple choice questions (with answers) about public\
\ relations.\n\n"
"group": "mmlu_social_sciences_generative"
"group_alias": "social_sciences"
"include": "_default_template_yaml"
"task": "mmlu_public_relations_generative"
"task_alias": "public_relations"
"dataset_name": "security_studies"
"description": "The following are multiple choice questions (with answers) about security\
\ studies.\n\n"
"group": "mmlu_social_sciences_generative"
"group_alias": "social_sciences"
"include": "_default_template_yaml"
"task": "mmlu_security_studies_generative"
"task_alias": "security_studies"
"dataset_name": "sociology"
"description": "The following are multiple choice questions (with answers) about sociology.\n\
\n"
"group": "mmlu_social_sciences_generative"
"group_alias": "social_sciences"
"include": "_default_template_yaml"
"task": "mmlu_sociology_generative"
"task_alias": "sociology"
"dataset_name": "us_foreign_policy"
"description": "The following are multiple choice questions (with answers) about us\
\ foreign policy.\n\n"
"group": "mmlu_social_sciences_generative"
"group_alias": "social_sciences"
"include": "_default_template_yaml"
"task": "mmlu_us_foreign_policy_generative"
"task_alias": "us_foreign_policy"
"dataset_name": "virology"
"description": "The following are multiple choice questions (with answers) about virology.\n\
\n"
"group": "mmlu_other_generative"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "mmlu_virology_generative"
"task_alias": "virology"
"dataset_name": "world_religions"
"description": "The following are multiple choice questions (with answers) about world\
\ religions.\n\n"
"group": "mmlu_humanities_generative"
"group_alias": "humanities"
"include": "_default_template_yaml"
"task": "mmlu_world_religions_generative"
"task_alias": "world_religions"
# NoticIA
### Paper
Title: `NoticIA: A Clickbait Article Summarization Dataset in Spanish`
Abstract: https://arxiv.org/abs/2404.07611
We present NoticIA, a dataset consisting of 850 Spanish news articles featuring prominent clickbait headlines, each paired with high-quality, single-sentence generative summarizations written by humans. This task demands advanced text understanding and summarization abilities, challenging the models' capacity to infer and connect diverse pieces of information to meet the user's informational needs generated by the clickbait headline. We evaluate the Spanish text comprehension capabilities of a wide range of state-of-the-art large language models. Additionally, we use the dataset to train ClickbaitFighter, a task-specific model that achieves near-human performance in this task.
Homepage: https://github.com/ikergarcia1996/NoticIA
### Citation
```
@article{noticia2024,
  title         = {NoticIA: A Clickbait Article Summarization Dataset in Spanish},
  author        = {Iker García-Ferrero and Begoña Altuna},
  year          = {2024},
  journal       = {Procesamiento del Lenguaje Natural},
  volume        = {73},
  number        = {0},
  eprint        = {2404.07611},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `noticia`
#### Metrics
Following the original implementation, this task computes the Rouge-1 score and the average summary length.
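These metrics are wired into the task config further below via `!function utils.rouge1`, `utils.rouge1_agg`, `utils.average_len`, and `utils.average_len_agg`. The following is a minimal sketch of what such a `utils.py` could look like, assuming Hugging Face's `evaluate` package supplies the Rouge implementation; it illustrates the metric/aggregation plumbing only and is not the task's actual code.

```python
# Minimal sketch of the metric helpers referenced as `!function utils.*` in
# the task YAML below. Assumes Hugging Face's `evaluate` package provides
# Rouge; the task's real utils.py may differ in details.
import evaluate
import numpy as np


def rouge1(items):
    # Per-document pass-through: each item is assumed to be a
    # (reference, prediction) pair; scoring happens in the aggregation.
    return items


def rouge1_agg(items):
    # Corpus-level Rouge-1 over all (reference, prediction) pairs.
    refs, preds = zip(*items)
    scorer = evaluate.load("rouge")
    return scorer.compute(predictions=list(preds), references=list(refs))["rouge1"]


def average_len(items):
    return items


def average_len_agg(items):
    # Average word count of the generated summaries; the config marks this
    # metric with higher_is_better: false, i.e. lower is better.
    _, preds = zip(*items)
    return float(np.mean([len(p.split()) for p in preds]))
```

With these four names importable from the task's `utils` module, the `!function` references in the YAML resolve to the per-sample and aggregation callables above.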
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
task: noticia
dataset_path: Iker/NoticIA
dataset_name: null
output_type: generate_until
generation_kwargs:
  until:
    - "\n\n"
    - "\n"
  do_sample: false
  temperature: 0.0
training_split: null
validation_split: null
test_split: test
fewshot_split: null
doc_to_text: "Ahora eres una Inteligencia Artificial experta en desmontar titulares sensacionalistas o clickbait. Tu tarea consiste en analizar noticias con titulares sensacionalistas y generar un resumen de una sola frase que revele la verdad detrás del titular.\nEste es el titular de la noticia: {{web_headline}}\nEl titular plantea una pregunta o proporciona información incompleta. Debes buscar en el cuerpo de la noticia una frase que responda lo que se sugiere en el título. Responde siempre que puedas parafraseando el texto original. Usa siempre las mínimas palabras posibles. Recuerda responder siempre en Español.\nEste es el cuerpo de la noticia:\n{{web_text}}"
doc_to_target: summary
target_delimiter: " "
num_fewshot: 0
should_decontaminate: false
doc_to_decontamination_query: sentence
metric_list:
  - metric: !function utils.rouge1
    higher_is_better: true
    aggregation: !function utils.rouge1_agg
  - metric: !function utils.average_len
    higher_is_better: false
    aggregation: !function utils.average_len_agg
metadata:
  version: 1.0
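As a quick sanity check of the fields used by `doc_to_text` and `doc_to_target`, one can load the dataset named in `dataset_path` and build the prompt by hand. The snippet below is only an illustration; it assumes the `datasets` package is installed and that the test split exposes the `web_headline`, `web_text`, and `summary` columns referenced above.

```python
# Illustrative check of the NoticIA prompt fields (not part of the task code).
# Column names come from the doc_to_text / doc_to_target entries above.
from datasets import load_dataset

ds = load_dataset("Iker/NoticIA", split="test")
doc = ds[0]

# The prompt interpolates the clickbait headline and the article body;
# the human-written single-sentence summary is the generation target.
prompt = (
    "Este es el titular de la noticia: " + doc["web_headline"] + "\n"
    "Este es el cuerpo de la noticia:\n" + doc["web_text"]
)
target = doc["summary"]

print(prompt[:300])
print("Reference summary:", target)
```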