Commit b5949373 authored by Myle Ott's avatar Myle Ott Committed by Facebook Github Bot

Fix docs (fixes #843)

Summary: Pull Request resolved: https://github.com/pytorch/fairseq/pull/844

Differential Revision: D16069358

Pulled By: myleott

fbshipit-source-id: 5ca4ab392dbdc4dfdaa27b63e8ff1c3940c91a26
parent ca5b1da5
@@ -563,7 +563,7 @@ class TransformerEncoderLayer(nn.Module):
                 `(batch, src_len)` where padding elements are indicated by ``1``.
 
         Returns:
-            encoded output of shape `(batch, src_len, embed_dim)`
+            encoded output of shape `(seq_len, batch, embed_dim)`
         """
         residual = x
         x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True)
@@ -677,7 +677,7 @@ class TransformerDecoderLayer(nn.Module):
                 `(batch, src_len)` where padding elements are indicated by ``1``.
 
         Returns:
-            encoded output of shape `(batch, src_len, embed_dim)`
+            encoded output of shape `(seq_len, batch, embed_dim)`
         """
         residual = x
         x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True)
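The corrected docstrings state that these layers consume and produce time-first tensors, i.e. `(seq_len, batch, embed_dim)` rather than batch-first `(batch, src_len, embed_dim)`. A minimal sketch of that convention, using `torch.nn.MultiheadAttention` (which also defaults to time-first input) as a stand-in for a full fairseq layer — the dimensions below are illustrative, not fairseq's actual defaults:

```python
import torch
import torch.nn as nn

# Time-first layout, matching the fixed docstring: (seq_len, batch, embed_dim).
seq_len, batch, embed_dim = 10, 2, 16
x = torch.randn(seq_len, batch, embed_dim)

# nn.MultiheadAttention expects time-first input by default,
# the same convention fairseq's Transformer layers use internally.
attn = nn.MultiheadAttention(embed_dim, num_heads=4)
out, _ = attn(x, x, x)

# Output shape is unchanged: still (seq_len, batch, embed_dim),
# not (batch, seq_len, embed_dim) as the old docstring implied.
assert out.shape == (seq_len, batch, embed_dim)
```

Keeping the docstrings time-first matters because callers that trust the documented shape and index `output[b]` would silently slice along the time axis instead of the batch axis.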