---
language: es
thumbnail: https://i.imgur.com/uxAvBfh.png
---

## ELECTRICIDAD: The Spanish Electra

![Electricidad](https://i.imgur.com/uxAvBfh.png)

**Electricidad-base-generator** (uncased) is a `base` ELECTRA-like model (the generator, in this case) trained on 20+ GB of the [OSCAR](https://oscar-corpus.com/) Spanish corpus.

As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):

**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.

For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
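To make that generator/discriminator setup concrete, here is a minimal sketch of how the two models interact: the generator fills in a masked token, and a discriminator flags which tokens look replaced. It assumes a companion discriminator checkpoint `mrm8488/electricidad-base-discriminator` (not described in this card) and uses the standard `transformers` ELECTRA classes; this illustrates the idea, it is not the training code.

```python
import torch
from transformers import ElectraForMaskedLM, ElectraForPreTraining, ElectraTokenizerFast

# Generator: fills in [MASK] tokens (this model).
# Discriminator: scores each token as "real" vs "replaced"
# (assumed companion checkpoint, not part of this card).
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/electricidad-base-generator")
generator = ElectraForMaskedLM.from_pretrained("mrm8488/electricidad-base-generator")
discriminator = ElectraForPreTraining.from_pretrained("mrm8488/electricidad-base-discriminator")

# 1) Corrupt a sentence: mask one token and let the generator replace it.
text = f"Madrid es la capital {tokenizer.mask_token} España."
inputs = tokenizer(text, return_tensors="pt")
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    gen_logits = generator(**inputs).logits
fake_ids = inputs["input_ids"].clone()
fake_ids[0, mask_pos] = gen_logits[0, mask_pos].argmax(dim=-1)

# 2) The discriminator emits one logit per token: positive means "replaced".
with torch.no_grad():
    disc_logits = discriminator(fake_ids).logits
for token, score in zip(tokenizer.convert_ids_to_tokens(fake_ids[0].tolist()), disc_logits[0]):
    print(f"{token:>15} {'replaced?' if score > 0 else 'real'}")
```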
## Fast example of usage 🚀

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="mrm8488/electricidad-base-generator",
    tokenizer="mrm8488/electricidad-base-generator"
)

print(
    fill_mask(f"HuggingFace está creando {fill_mask.tokenizer.mask_token} que la comunidad usa para resolver tareas de NLP.")
)

# Output:
# [{'sequence': '[CLS] huggingface esta creando herramientas que la comunidad usa para resolver tareas de nlp. [SEP]',
#   'score': 0.0896105170249939, 'token': 8760, 'token_str': 'herramientas'}, ...]
```
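If you prefer not to use the `pipeline` helper, the same prediction can be made by loading the tokenizer and model directly. A minimal sketch, assuming the generic `AutoTokenizer`/`AutoModelForMaskedLM` classes; variable names are illustrative:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/electricidad-base-generator")
model = AutoModelForMaskedLM.from_pretrained("mrm8488/electricidad-base-generator")

text = f"HuggingFace está creando {tokenizer.mask_token} que la comunidad usa para resolver tareas de NLP."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and take the highest-scoring token there.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # expected: "herramientas"
```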
## Acknowledgments

I thank the [🤗/transformers team](https://github.com/huggingface/transformers) for allowing me to train the model (especially [Julien Chaumond](https://twitter.com/julien_c)).

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)

> Made with ♥ in Spain