Commit 64533e2e authored by yangzhong's avatar yangzhong

update README

parent 87aab62b
# BERT-large inference
### Fine-tuning BERT on SQuAD1.0
### Inference with BERT on SQuAD1.0
The [`run_qa.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py) script
allows you to fine-tune any model from our [hub](https://huggingface.co/models) (as long as its architecture has a `ForQuestionAnswering` version in the library) on a question-answering dataset: SQuAD, any other QA dataset available in the `datasets` library, or your own csv/jsonlines files, as long as they are structured the same way as SQuAD. You might need to tweak the data processing inside the script if your data is structured differently.
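For reference, a typical fine-tuning invocation might look like the following. The flags follow `run_qa.py`'s standard arguments; the model choice, hyperparameters, and output path here are illustrative, not a prescribed configuration.

```shell
# Illustrative only: adjust model, batch size, and paths for your setup.
python run_qa.py \
  --model_name_or_path bert-large-uncased-whole-word-masking \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/debug_squad/
```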
Note that if your dataset contains samples with no possible answers (like SQuAD version 2), you need to pass along the flag `--version_2_with_negative`.
- [evaluate-v1.1.py](https://github.com/allenai/bi-att-flow/blob/master/squad/evaluate-v1.1.py)
- This fine-tuned model is available as a checkpoint under the reference [`bert-large-uncased-whole-word-masking-finetuned-squad`](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad).
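The official `evaluate-v1.1.py` script reports Exact Match and token-overlap F1 between a predicted answer span and the ground-truth answers. A minimal sketch of those two metrics is below; the function names are illustrative, and the real script uses a fuller normalization (it also strips articles and punctuation).

```python
# Sketch of the Exact-Match / F1 metrics computed by evaluate-v1.1.py.
# Simplified normalization: the official script additionally removes
# articles and punctuation before comparing answers.
from collections import Counter


def normalize(text: str) -> str:
    # Lowercase and collapse whitespace.
    return " ".join(text.lower().split())


def exact_match(prediction: str, ground_truth: str) -> float:
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(prediction) == normalize(ground_truth))


def f1_score(prediction: str, ground_truth: str) -> float:
    # Token-level F1 over the multiset intersection of the two answers.
    pred_tokens = normalize(prediction).split()
    gt_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)
```

In the full script, each prediction is scored against every reference answer for a question and the maximum is taken, since SQuAD questions can have several valid gold spans.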
This example code fine-tunes BERT on the SQuAD1.0 dataset.
This example code runs inference with BERT on the SQuAD1.0 dataset.
```bash
python /nx/transformers/examples/pytorch/question-answering/run_qa.py \
    ...
```