Unverified Commit 08bd8f9f authored by Thomas Wolf, committed by GitHub

Merge pull request #1505 from e-budur/master

Fixed the sample code in the 'Quick tour' section.
parents 8aa3b753 5a8c6e77
@@ -176,10 +176,11 @@ BERT_MODEL_CLASSES = [BertModel, BertForPreTraining, BertForMaskedLM, BertForNex...
 # All the classes for an architecture can be initiated from pretrained weights for this architecture
 # Note that additional weights added for fine-tuning are only initialized
 # and need to be trained on the down-stream task
-tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
+pretrained_weights = 'bert-base-uncased'
+tokenizer = BertTokenizer.from_pretrained(pretrained_weights)
 for model_class in BERT_MODEL_CLASSES:
     # Load pretrained model/tokenizer
-    model = model_class.from_pretrained('bert-base-uncased')
+    model = model_class.from_pretrained(pretrained_weights)
     # Models can return full list of hidden-states & attentions weights at each layer
     model = model_class.from_pretrained(pretrained_weights,
......
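For reference, a minimal sketch of what the corrected 'Quick tour' snippet looks like after this change: the checkpoint name is defined once in pretrained_weights and reused for the tokenizer and every model class, instead of being referenced before it was defined. The shortened class list and the output_hidden_states / output_attentions arguments below are assumptions based on the surrounding context of the README at the time, not part of this diff.

# Sketch of the corrected snippet, assuming the transformers library of that era (v2.x).
# The class list and the trailing keyword arguments are illustrative assumptions.
from transformers import (BertModel, BertForPreTraining, BertForMaskedLM,
                          BertForNextSentencePrediction, BertTokenizer)

BERT_MODEL_CLASSES = [BertModel, BertForPreTraining, BertForMaskedLM,
                      BertForNextSentencePrediction]

# Define the checkpoint name once so the tokenizer and every model class reuse it
pretrained_weights = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(pretrained_weights)

for model_class in BERT_MODEL_CLASSES:
    # Load a pretrained model for this class
    model = model_class.from_pretrained(pretrained_weights)
    # Optionally reload it configured to also return all hidden-states and attention weights
    model = model_class.from_pretrained(pretrained_weights,
                                        output_hidden_states=True,
                                        output_attentions=True)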