Commit 7a31fe06 authored by Siddharth Dalmia, committed by Facebook GitHub Bot

vggblock support without pooling and pooling_kernel_size missing self (#839)

Summary:
1) VGGBlock was not supported if pooling_kernel_size was None.
2) Since we normalize the pooling kernel size with _pair, we should read self.pooling_kernel_size. It doesn't matter in practice, as PyTorch accepts either form, but it is more consistent.
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/839

Differential Revision: D16934112

Pulled By: okhonko

fbshipit-source-id: b6b95163b0e7f7203d76d535f01a41912382bdc3
parent 9e5edc10
@@ -103,11 +103,12 @@ class VGGBlock(torch.nn.Module):
             input_dim = per_channel_dim
             self.layers.append(nn.ReLU())
-        pool_op = nn.MaxPool2d(kernel_size=pooling_kernel_size, ceil_mode=True)
-        self.layers.append(pool_op)
-        self.total_output_dim, self.output_dim = infer_conv_output_dim(
-            pool_op, input_dim, out_channels
-        )
+        if self.pooling_kernel_size is not None:
+            pool_op = nn.MaxPool2d(kernel_size=self.pooling_kernel_size, ceil_mode=True)
+            self.layers.append(pool_op)
+            self.total_output_dim, self.output_dim = infer_conv_output_dim(
+                pool_op, input_dim, out_channels
+            )
 
     def forward(self, x):
         for i, _ in enumerate(self.layers):
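The pattern in this patch is a guard around an optional layer: the kernel size is normalized with _pair in the constructor, and the pooling step is only built when the stored value is not None. A minimal torch-free sketch of that guard (Block and the _pair helper here are hypothetical stand-ins, not the fairseq classes):

```python
def _pair(x):
    """Mimic the behavior of torch.nn.modules.utils._pair:
    an int becomes (int, int); a tuple/list passes through as a tuple."""
    return tuple(x) if isinstance(x, (tuple, list)) else (x, x)

class Block:
    """Hypothetical sketch of the VGGBlock fix: pooling is skipped entirely
    when pooling_kernel_size is None, and downstream code reads the
    normalized self.pooling_kernel_size rather than the raw argument."""

    def __init__(self, pooling_kernel_size=None):
        self.layers = []
        # normalize once, store the normalized form
        self.pooling_kernel_size = (
            _pair(pooling_kernel_size) if pooling_kernel_size is not None else None
        )
        if self.pooling_kernel_size is not None:
            # only now is a pooling step appended, using the normalized value
            self.layers.append(("maxpool", self.pooling_kernel_size))

print(Block().layers)   # no pooling step when the kernel size is None: []
print(Block(2).layers)  # int normalized to a pair: [('maxpool', (2, 2))]
```

Reading the stored attribute instead of the constructor argument is what point 2) of the summary is about: after normalization the two can differ in type (int vs. tuple), so using the attribute keeps the code consistent even though PyTorch accepts both.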