Classify input text as either hateful or not hateful.
Homepage: https://github.com/microsoft/TOXIGEN
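
For context, here is a minimal sketch of how a model could be scored on this task through the harness's Python API. This assumes the v0.4-style `lm_eval.simple_evaluate` entry point; the model checkpoint named below is only a placeholder.

```python
# Minimal sketch (assumptions: lm-eval v0.4 Python API; placeholder model name).
import lm_eval

# Evaluate a Hugging Face causal LM on the `toxigen` classification task.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",  # placeholder checkpoint
    tasks=["toxigen"],
    num_fewshot=0,
)

# Per-task metrics are reported under the "results" key.
print(results["results"]["toxigen"])
```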
### Citation
```
@inproceedings{hartvigsen2022toxigen,
  title={ToxiGen: A Large-Scale Machine-Generated Dataset for Implicit and Adversarial Hate Speech Detection},
  author={Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece},
  booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
  year={2022}
}
```
### Tasks

* `toxigen`
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [ ] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
* [x] Checked for equivalence with v0.3.0 LM Evaluation Harness