# SpanBERT (spanbert-base-cased) fine-tuned on SQuAD v1.1
[SpanBERT](https://github.com/facebookresearch/SpanBERT), created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task.
## Details of SpanBERT
SpanBERT is a pre-training method designed to better represent and predict spans of text.
[SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529)
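The paper's key change to BERT pre-training is *span masking*: instead of masking individual tokens, contiguous spans are masked, with span lengths drawn from a geometric distribution (p = 0.2 in the paper) clipped at 10 tokens, until about 15% of the sequence is covered. The helper below is an illustrative sketch of that sampling scheme, not the authors' implementation:

```python
import random

def sample_masked_spans(num_tokens, mask_budget=0.15, p=0.2, max_span_len=10, seed=0):
    """Sample contiguous token spans to mask, SpanBERT-style (illustrative sketch)."""
    rng = random.Random(seed)
    budget = int(num_tokens * mask_budget)  # mask ~15% of tokens
    masked = set()
    while len(masked) < budget:
        # Geometric span length (mean 1/p), clipped at max_span_len.
        length = 1
        while rng.random() >= p and length < max_span_len:
            length += 1
        start = rng.randrange(0, num_tokens - length + 1)
        masked.update(range(start, start + length))
    return sorted(masked)

masked = sample_masked_spans(num_tokens=512)
print(len(masked) / 512)  # roughly the 15% masking budget
```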
## Details of the downstream task (Q&A) - Dataset
[SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/) contains 100,000+ question-answer pairs on 500+ Wikipedia articles.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 87.7k |
| SQuAD1.1 | eval | 10.6k |
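Each SQuAD 1.1 record pairs a question with a context paragraph and gives the answer as text plus a character offset into the context. The record below is a hypothetical illustration of that schema:

```python
# A hypothetical record in the SQuAD 1.1 schema: the answer is stored as
# its text plus a character offset (answer_start) into the context.
record = {
    "context": "SpanBERT was released by Facebook Research in 2019.",
    "question": "Who released SpanBERT?",
    "answers": {"text": ["Facebook Research"], "answer_start": [25]},
}

# The character offset lets us recover the answer by slicing the context.
start = record["answers"]["answer_start"][0]
text = record["answers"]["text"][0]
assert record["context"][start:start + len(text)] == text
print(text)  # → Facebook Research
```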
## Model training
The model was trained on a Tesla P100 GPU with 25 GB of RAM.
The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py).
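At inference time, an extractive QA model like this one emits a start logit and an end logit per context token; the predicted answer is the span (i, j) maximizing start[i] + end[j], subject to i ≤ j and a maximum answer length. A minimal sketch of that decoding step, using toy logits rather than real model output:

```python
def best_span(start_logits, end_logits, max_answer_len=30):
    """Pick the (start, end) pair maximizing start[i] + end[j], with i <= j."""
    best = (0, 0)
    best_score = float("-inf")
    for i, s in enumerate(start_logits):
        # Only consider end positions at or after i, within the length limit.
        for j in range(i, min(i + max_answer_len, len(end_logits))):
            score = s + end_logits[j]
            if score > best_score:
                best_score = score
                best = (i, j)
    return best, best_score

# Toy logits over a 6-token context: token 2 is the likely start, token 3 the end.
start_logits = [0.1, 0.2, 3.0, 0.5, 0.1, 0.0]
end_logits = [0.0, 0.1, 0.4, 2.5, 0.3, 0.1]
span, score = best_span(start_logits, end_logits)
print(span)  # → (2, 3)
```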