Commit 50591a29 authored by Teng Li, committed by Facebook Github Bot

Enable check_reduction for imagenet flow and fairseq

Summary:
As the title says, it is better to enable this check for certain use cases to make
sure gradient reductions are completing correctly.

Reviewed By: myleott, pietern

Differential Revision: D13351753

fbshipit-source-id: cf495960fda71ebd679c23212e19703c93a9dbdc
parent 776e9ce3
@@ -5,6 +5,7 @@
 # the root directory of this source tree. An additional grant of patent rights
 # can be found in the PATENTS file in the same directory.
+import inspect
 from torch.nn import parallel
 from fairseq.distributed_utils import c10d_status
@@ -46,6 +47,10 @@ def DistributedFairseqModel(args, model):
             broadcast_buffers=False,
             bucket_cap_mb=args.bucket_cap_mb,
         )
+        # Maintain backward compatibility for 0.4 or earlier
+        if 'check_reduction' in inspect.getargspec(ddp_class)[0]:
+            init_kwargs['check_reduction'] = True
     elif args.ddp_backend == 'no_c10d':
         ddp_class = LegacyDistributedDataParallel
         init_kwargs = dict(
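For context, the added lines guard the check_reduction keyword behind a signature probe so the c10d path still works on PyTorch 0.4 and earlier, which do not accept that argument. Below is a minimal standalone sketch of the same feature-detection pattern, not the fairseq implementation: build_ddp_kwargs and its arguments are hypothetical names, and it uses inspect.signature rather than the deprecated inspect.getargspec.

import inspect

from torch.nn import parallel


def build_ddp_kwargs(ddp_class, module, device_ids):
    """Build DDP constructor kwargs, adding check_reduction only when supported."""
    init_kwargs = dict(
        module=module,
        device_ids=device_ids,
        broadcast_buffers=False,
    )
    # Probe the constructor signature; PyTorch 0.4 and earlier do not accept
    # the check_reduction keyword, so only pass it when it is actually present.
    try:
        params = inspect.signature(ddp_class.__init__).parameters
    except (TypeError, ValueError):
        params = {}
    if 'check_reduction' in params:
        init_kwargs['check_reduction'] = True
    return init_kwargs


# Hypothetical usage (assumes the process group is already initialized):
# init_kwargs = build_ddp_kwargs(parallel.DistributedDataParallel, model, [local_rank])
# ddp_model = parallel.DistributedDataParallel(**init_kwargs)

In newer PyTorch releases check_reduction is deprecated or absent, which is exactly the situation the same signature probe tolerates without code changes.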