chenpangpang / transformers · Commits

Unverified commit 2ca62683, authored Feb 01, 2022 by Yih-Dar, committed by GitHub on Feb 01, 2022

fix from_vision_text_pretrained doc example (#15453)

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

Parent: dc05dd53
Changes: 2 changed files, +2 additions, -2 deletions

src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py (+1, -1)
src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py (+1, -1)
src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py

@@ -449,7 +449,7 @@ class FlaxVisionTextDualEncoderModel(FlaxPreTrainedModel):
     >>> # initialize a model from pretrained ViT and BERT models. Note that the projection layers will be randomly initialized.
     >>> model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
-    ...     "bert-base-uncased", "google/vit-base-patch16-224"
+    ...     "google/vit-base-patch16-224", "bert-base-uncased"
     ... )
     >>> # saving model after fine-tuning
     >>> model.save_pretrained("./vit-bert")
src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py

@@ -469,7 +469,7 @@ class VisionTextDualEncoderModel(PreTrainedModel):
     >>> # initialize a model from pretrained ViT and BERT models. Note that the projection layers will be randomly initialized.
     >>> model = VisionTextDualEncoderModel.from_vision_text_pretrained(
-    ...     "bert-base-uncased", "google/vit-base-patch16-224"
+    ...     "google/vit-base-patch16-224", "bert-base-uncased"
     ... )
     >>> # saving model after fine-tuning
     >>> model.save_pretrained("./vit-bert")
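The whole commit boils down to swapping two positional arguments: the method expects the vision checkpoint first and the text checkpoint second, so the old doc example would have pointed each encoder at the wrong weights. A minimal toy sketch of that failure mode, using a stand-in function (the real method lives in transformers and actually downloads weights; the dictionary-returning helper below is purely illustrative):

```python
# Toy stand-in for from_vision_text_pretrained, showing why positional
# argument order mattered in the doc example. This is NOT the transformers
# implementation -- just a hypothetical helper that records which checkpoint
# each encoder would receive.
def from_vision_text_pretrained(vision_model_name_or_path, text_model_name_or_path):
    """Map each encoder to the checkpoint it would load."""
    return {
        "vision_encoder": vision_model_name_or_path,
        "text_encoder": text_model_name_or_path,
    }

# Corrected order from this commit: vision checkpoint first, text second.
fixed = from_vision_text_pretrained("google/vit-base-patch16-224", "bert-base-uncased")

# The pre-fix example swapped them, assigning BERT to the vision tower.
broken = from_vision_text_pretrained("bert-base-uncased", "google/vit-base-patch16-224")

print(fixed["vision_encoder"])   # google/vit-base-patch16-224
print(broken["vision_encoder"])  # bert-base-uncased
```

Because both arguments are plain checkpoint strings, nothing type-checks the swap; the mistake would only surface when loading weights failed or produced nonsense, which is why fixing the doc example was worthwhile.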