This is the `electra-base-discriminator` model fine-tuned on the SQuAD v1 dataset for the question-answering task.
## Model details
As described in the original paper:

> ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.
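The replaced-token-detection objective described above can be illustrated with a toy sketch (this is not the actual pretraining code; the tokens and the replaced position are made up for illustration):

```python
# Toy illustration of ELECTRA's replaced-token-detection objective.
# A small generator network proposes replacements for masked positions;
# the discriminator then labels every token as original (0) or replaced (1).
# The sentence and replaced position below are made up for illustration.

original = ["the", "chef", "cooked", "the", "meal"]
# Suppose the generator replaced position 2 ("cooked" -> "ate"):
corrupted = ["the", "chef", "ate", "the", "meal"]

# The discriminator's per-token training targets: 1 where the token was replaced.
targets = [int(o != c) for o, c in zip(original, corrupted)]
print(targets)  # [0, 0, 1, 0, 0]
```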
| Param | Value |
|---------------------|--------|
| layers | 12 |
| hidden size | 768 |
| num attention heads | 12 |
| on-disk size | 436 MB |
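The on-disk size in the table is consistent with a back-of-the-envelope parameter count. Assuming BERT-base-style dimensions not stated in the table (a 30,522-token vocabulary, feed-forward size 4 × hidden, float32 weights), a rough estimate:

```python
# Rough parameter-count estimate for electra-base from the table above.
# Assumptions (not in the table): BERT-style 30,522-token vocabulary,
# feed-forward size 4 * hidden, float32 (4-byte) weights.
layers, hidden, vocab = 12, 768, 30522

embeddings = vocab * hidden                                   # token embeddings
per_layer = 4 * hidden * hidden + 2 * hidden * (4 * hidden)   # attention + FFN
total = embeddings + layers * per_layer

print(f"~{total / 1e6:.0f}M params, ~{total * 4 / 1e6:.0f} MB in float32")
# → ~108M params, ~434 MB — close to the 436 MB size on disk above
```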
## Model training
This model was trained on a Google Colab V100 GPU.
The fine-tuning notebook is available
[here](https://colab.research.google.com/drive/11yo-LaFsgggwmDSy2P8zD3tzf5cCb-DU?usp=sharing).
## Results
The results are slightly better than those reported in the paper,
where the authors report 84.5 EM and 90.8 F1 for electra-base.
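For reference, the EM and F1 numbers compare a predicted answer span against the gold answer. A minimal sketch of the token-level F1 computation, simplified from the official SQuAD evaluation script (lowercase whitespace tokenization only; the official script also strips punctuation and articles):

```python
from collections import Counter

def squad_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer span.

    Simplified sketch: lowercase whitespace tokenization only; the
    official SQuAD evaluation also normalizes punctuation and articles.
    """
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Count tokens shared between prediction and gold (multiset overlap).
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(round(squad_f1("Denver Broncos", "the Denver Broncos"), 2))  # 0.8
```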
> Created with ❤️ by [Suraj Patil](https://github.com/patil-suraj/)