Unverified Commit c8fdfe45 authored by Chanchana Sornsoontorn's avatar Chanchana Sornsoontorn Committed by GitHub

Correct `Transformer2DModel.forward` docstring (#3074)

chore(transformer_2d): update function signature for encoder_hidden_states
parent bba1c1de
@@ -225,7 +225,7 @@ class Transformer2DModel(ModelMixin, ConfigMixin):
         hidden_states ( When discrete, `torch.LongTensor` of shape `(batch size, num latent pixels)`.
             When continuous, `torch.FloatTensor` of shape `(batch size, channel, height, width)`): Input
             hidden_states
-        encoder_hidden_states ( `torch.LongTensor` of shape `(batch size, encoder_hidden_states dim)`, *optional*):
+        encoder_hidden_states ( `torch.FloatTensor` of shape `(batch size, sequence len, embed dims)`, *optional*):
             Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
             self-attention.
         timestep ( `torch.long`, *optional*):
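The corrected docstring says `encoder_hidden_states` is an optional `torch.FloatTensor` of shape `(batch size, sequence len, embed dims)`, and that when it is absent the cross-attention layer falls back to self-attention. The sketch below is a minimal, hypothetical attention function (not diffusers' actual implementation) that illustrates that fallback and the tensor shapes involved:

```python
import torch

# Minimal sketch (assumption: not the real diffusers code) of how a
# cross-attention layer can default to self-attention when the optional
# encoder_hidden_states argument is None, as the corrected docstring states.
def attention(hidden_states, encoder_hidden_states=None):
    # Keys/values come from the conditioning embeddings when given,
    # otherwise from hidden_states itself (self-attention).
    context = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
    scale = hidden_states.shape[-1] ** -0.5
    scores = torch.softmax(hidden_states @ context.transpose(-1, -2) * scale, dim=-1)
    return scores @ context

batch, seq, dim = 2, 4, 8
x = torch.randn(batch, seq, dim)     # hidden_states: (batch, seq, dim)
cond = torch.randn(batch, 6, dim)    # encoder_hidden_states: (batch, sequence len, embed dims)

self_attn = attention(x)             # no conditioning: self-attention
cross_attn = attention(x, cond)      # with conditioning: cross-attention
print(self_attn.shape, cross_attn.shape)
```

Either way the output keeps the query's shape `(batch, seq, dim)`; only the source of keys and values changes, which is why the old `torch.LongTensor` annotation was misleading for a float conditioning tensor.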