Commit 096d7d30 authored by zheng, committed by Facebook Github Bot

Fix the type annotations of three parameters found in two constructors (#1268)

Summary:
As their names suggest, the parameters `embedding_dim`, `ffn_embedding_dim`, and `num_attention_heads` should have type `int`, not `float`.

Also validated by https://github.com/pytorch/fairseq/blob/b5f41f828b0ec9b67fa60aceb0778073d1b368b2/fairseq/modules/sparse_transformer_sentence_encoder.py#L22#L24.
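
For illustration only (not part of this commit), a minimal sketch of why the fix matters: under PEP 484, a checker such as mypy accepts an `int` wherever `float` is annotated, so the old annotations could never catch a bad argument, while the corrected `int` annotations let it flag an accidental float. The `make_layer` helper below is hypothetical, mirroring only the corrected signature.

```python
# Hypothetical helper mirroring the corrected signature; not fairseq code.
def make_layer(
    embedding_dim: int = 768,
    ffn_embedding_dim: int = 3072,
    num_attention_heads: int = 8,
    dropout: float = 0.1,
) -> None:
    # Dimensions and head counts must be integers, e.g. for the
    # per-head split embedding_dim // num_attention_heads.
    assert embedding_dim % num_attention_heads == 0


make_layer(num_attention_heads=8)    # fine
make_layer(num_attention_heads=8.0)  # mypy error: float is not int
```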
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1268

Differential Revision: D18372518

Pulled By: myleott

fbshipit-source-id: 666739b6270a975536785886068a975e07312bb0
parent aaa37f05
@@ -14,9 +14,9 @@ class SparseTransformerSentenceEncoderLayer(TransformerSentenceEncoderLayer):
     def __init__(
         self,
-        embedding_dim: float = 768,
-        ffn_embedding_dim: float = 3072,
-        num_attention_heads: float = 8,
+        embedding_dim: int = 768,
+        ffn_embedding_dim: int = 3072,
+        num_attention_heads: int = 8,
         dropout: float = 0.1,
         attention_dropout: float = 0.1,
         activation_dropout: float = 0.1,
@@ -22,9 +22,9 @@ class TransformerSentenceEncoderLayer(nn.Module):
     def __init__(
         self,
-        embedding_dim: float = 768,
-        ffn_embedding_dim: float = 3072,
-        num_attention_heads: float = 8,
+        embedding_dim: int = 768,
+        ffn_embedding_dim: int = 3072,
+        num_attention_heads: int = 8,
         dropout: float = 0.1,
         attention_dropout: float = 0.1,
         activation_dropout: float = 0.1,