Unverified Commit 28603770 authored by Thomas Wolf, committed by GitHub

Merge pull request #134 from rodgzilla/update_doc_pretrained_models

Fixing the documentation of various classes.
parents c18bdb44 71766748
@@ -456,7 +456,9 @@ class PreTrainedBertModel(nn.Module):
                 . `bert-base-uncased`
                 . `bert-large-uncased`
                 . `bert-base-cased`
-                . `bert-base-multilingual`
+                . `bert-large-cased`
+                . `bert-base-multilingual-uncased`
+                . `bert-base-multilingual-cased`
                 . `bert-base-chinese`
             - a path or url to a pretrained model archive containing:
                 . `bert_config.json` a configuration file for the model
@@ -1035,15 +1037,7 @@ class BertForQuestionAnswering(PreTrainedBertModel):
         the sequence output that computes start_logits and end_logits
     Params:
-        `config`: either
-            - a BertConfig class instance with the configuration to build a new model, or
-            - a str with the name of a pre-trained model to load selected in the list of:
-                . `bert-base-uncased`
-                . `bert-large-uncased`
-                . `bert-base-cased`
-                . `bert-base-multilingual`
-                . `bert-base-chinese`
-            The pre-trained model will be downloaded and cached if needed.
+        `config`: a BertConfig class instance with the configuration to build a new model.
     Inputs:
         `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length]
...
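For reference, the shortcut names documented in the first hunk are the strings accepted by `from_pretrained`. Below is a minimal sketch of loading weights by shortcut name, assuming the `pytorch_pretrained_bert` package as distributed around the time of this PR (the first call downloads and caches the archive, so network access is needed); the example sentence and the choice of `bert-base-uncased` are illustrative only.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Any shortcut name from the updated docstring can be substituted here, e.g.
# 'bert-large-cased', 'bert-base-multilingual-uncased' or
# 'bert-base-multilingual-cased'; 'bert-base-uncased' is used for illustration.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# Tokenize a toy sentence and map the tokens to vocabulary ids.
tokens = ['[CLS]'] + tokenizer.tokenize("Hello, how are you?") + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    # With default arguments the model returns the list of encoded layers
    # and the pooled output of the [CLS] token.
    encoded_layers, pooled_output = model(input_ids)
```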
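The second hunk drops the string option from `BertForQuestionAnswering`'s `config` parameter, so a new model is built from a `BertConfig` instance, while pretrained weights are still loaded through the `from_pretrained` classmethod inherited from `PreTrainedBertModel`. A sketch of both paths, assuming the constructor and forward signatures of this version of the library (the `vocab_size_or_config_json_file` argument name and the dummy inputs are my assumptions, not taken from the diff):

```python
import torch
from pytorch_pretrained_bert import BertConfig, BertForQuestionAnswering

# Path 1: a freshly initialized model from a BertConfig instance, as the
# updated docstring describes (argument name assumed for this library version).
config = BertConfig(vocab_size_or_config_json_file=30522)
model = BertForQuestionAnswering(config)

# Path 2: pretrained weights via the inherited from_pretrained classmethod.
model = BertForQuestionAnswering.from_pretrained('bert-base-uncased')
model.eval()

# Dummy batch of token ids: shape [batch_size, sequence_length] as documented.
input_ids = torch.zeros(1, 16, dtype=torch.long)

with torch.no_grad():
    # Without start/end positions, the model returns the two span logit tensors.
    start_logits, end_logits = model(input_ids)
```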