This is the xlm-roberta-large model, fine-tuned on the SQuAD v2.0 dataset for the question answering task.
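For reference, a minimal sketch of how a checkpoint like this can be used with the `transformers` question-answering pipeline; the model path below is a placeholder, not this card's actual Hub ID, so substitute the real repository name or a local directory:

```python
from transformers import pipeline

# Placeholder path: replace with this checkpoint's actual Hub ID or local folder.
qa = pipeline("question-answering", model="your-username/xlm-roberta-large-squad2")

result = qa(
    question="How many languages does XLM-RoBERTa cover?",
    context="XLM-RoBERTa is a multilingual model pretrained on filtered CommonCrawl data covering 100 languages.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```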
## Model details
XLM-RoBERTa was proposed in the paper [XLM-R: State-of-the-art cross-lingual understanding through self-supervision](https://arxiv.org/pdf/1911.02116.pdf).
## Model training
This model was trained with the following parameters using the simpletransformers wrapper:
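For orientation, a minimal sketch of how such a fine-tuning run is set up with simpletransformers; the hyperparameter values in the sketch are placeholders, and the actual values used for this model are the ones listed in this section:

```python
from simpletransformers.question_answering import QuestionAnsweringModel

# Placeholder hyperparameters; the real training arguments are listed in this card.
train_args = {
    "learning_rate": 3e-5,
    "num_train_epochs": 2,
    "max_seq_length": 384,
    "doc_stride": 128,
}

# Fine-tune xlm-roberta-large for extractive QA on SQuAD v2.0-format data.
model = QuestionAnsweringModel("xlmroberta", "xlm-roberta-large", args=train_args)
model.train_model("train-v2.0.json")  # path to SQuAD v2.0-format training data
```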