"git@developer.sourcefind.cn:OpenDAS/fairseq.git" did not exist on "392bdd6ce0deb107d7b30ac14bdd7b4ac27aca01"
Unverified Commit c8fdfe45 authored by Chanchana Sornsoontorn, committed by GitHub

Correct `Transformer2DModel.forward` docstring (#3074)

chore(transformer_2d) update function signature for encoder_hidden_states
parent bba1c1de
```diff
@@ -225,7 +225,7 @@ class Transformer2DModel(ModelMixin, ConfigMixin):
         hidden_states ( When discrete, `torch.LongTensor` of shape `(batch size, num latent pixels)`.
             When continuous, `torch.FloatTensor` of shape `(batch size, channel, height, width)`): Input
             hidden_states
-        encoder_hidden_states ( `torch.LongTensor` of shape `(batch size, encoder_hidden_states dim)`, *optional*):
+        encoder_hidden_states ( `torch.FloatTensor` of shape `(batch size, sequence len, embed dims)`, *optional*):
             Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
             self-attention.
         timestep ( `torch.long`, *optional*):
```
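The fix aligns the docstring with how the argument is actually consumed: cross-attention takes float embeddings (e.g. text-encoder outputs), not integer token ids. Below is a minimal sketch, not part of the commit, exercising `Transformer2DModel.forward` with the shapes the corrected docstring describes; the constructor arguments are illustrative and assume the diffusers API around the time of this change.

```python
# Minimal sketch (illustrative, not from the commit): call Transformer2DModel
# with continuous hidden_states and a float encoder_hidden_states tensor of
# shape (batch size, sequence len, embed dims), per the corrected docstring.
import torch
from diffusers.models import Transformer2DModel

model = Transformer2DModel(
    num_attention_heads=8,
    attention_head_dim=8,    # inner dim = 8 * 8 = 64
    in_channels=64,
    cross_attention_dim=64,  # must match encoder_hidden_states' last dim
)

hidden_states = torch.randn(2, 64, 16, 16)      # (batch, channel, height, width)
encoder_hidden_states = torch.randn(2, 77, 64)  # (batch, sequence len, embed dims)

out = model(hidden_states, encoder_hidden_states=encoder_hidden_states)
print(out.sample.shape)  # torch.Size([2, 64, 16, 16])
```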