**distributed_data_parallel.py** and **run.sh** show an example using `FP16_Optimizer` with `torch.nn.parallel.DistributedDataParallel` and the PyTorch multiprocess launcher script, [torch.distributed.launch](https://pytorch.org/docs/master/distributed.html#launch-utility).
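
The sketch below illustrates the general shape of such a script: each process launched by `torch.distributed.launch` receives a `--local_rank` argument, initializes the process group, wraps an FP16 model in `DistributedDataParallel`, and then wraps the optimizer in `FP16_Optimizer`. The model dimensions, learning rate, and loss scale are illustrative placeholders, not values from the actual example script:

```python
import argparse

import torch
import torch.distributed as dist
import torch.nn.functional as F
from apex.fp16_utils import FP16_Optimizer

parser = argparse.ArgumentParser()
# torch.distributed.launch passes --local_rank to each spawned process
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl", init_method="env://")

# Toy model, converted to FP16 before wrapping in DistributedDataParallel
model = torch.nn.Linear(512, 512).cuda().half()
model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[args.local_rank], output_device=args.local_rank
)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# FP16_Optimizer maintains FP32 master weights and applies loss scaling
optimizer = FP16_Optimizer(optimizer, static_loss_scale=128.0)

for _ in range(10):
    inp = torch.randn(32, 512, device="cuda").half()
    target = torch.randn(32, 512, device="cuda").half()
    optimizer.zero_grad()
    # Compute the loss in FP32 for numerical stability
    loss = F.mse_loss(model(inp).float(), target.float())
    # FP16_Optimizer.backward() scales the loss before backpropagating
    optimizer.backward(loss)
    optimizer.step()
```

A launch command along these lines (the number of processes per node is arbitrary here) is typically what a **run.sh** wrapper contains:

```bash
python -m torch.distributed.launch --nproc_per_node=4 distributed_data_parallel.py
```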