Unverified Commit 0661abc5 authored by Jaimeen Ahn, committed by GitHub

Variable Correction for Consistency in Distillation Example (#11444)

The error comes from an inconsistency in the name of the variable holding the number of GPUs: the parser defines it as 'gpus' while the train.py script uses 'n_gpu'. Renaming the argument to 'n_gpu' makes the example work.
parent 1d30ec95
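For context, a minimal sketch of why the mismatch breaks the script (a hypothetical reproduction, not the actual train.py code): argparse stores each option on the parsed namespace under its flag name, so a parser that only defines `--gpus` never creates `args.n_gpu`, and any later read of `args.n_gpu` fails with an AttributeError.

```python
import argparse

# Hypothetical reproduction of the mismatch; not the actual train.py code.
parser = argparse.ArgumentParser()
parser.add_argument("--gpus", type=int, default=1, help="Number of GPUs in the node.")
args = parser.parse_args(["--gpus", "4"])

print(args.gpus)   # 4 -- argparse exposes the value under the flag's own name
print(args.n_gpu)  # AttributeError: 'Namespace' object has no attribute 'n_gpu'
```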
@@ -163,7 +163,7 @@ python -m torch.distributed.launch \
     --master_port $MASTER_PORT \
     train.py \
     --force \
-    --gpus $WORLD_SIZE \
+    --n_gpu $WORLD_SIZE \
     --student_type distilbert \
     --student_config training_configs/distilbert-base-uncased.json \
     --teacher_type bert \
@@ -210,7 +210,7 @@ def main():
         help="For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']."
         "See details at https://nvidia.github.io/apex/amp.html",
     )
-    parser.add_argument("--gpus", type=int, default=1, help="Number of GPUs in the node.")
+    parser.add_argument("--n_gpu", type=int, default=1, help="Number of GPUs in the node.")
     parser.add_argument("--local_rank", type=int, default=-1, help="Distributed training - Local rank")
     parser.add_argument("--seed", type=int, default=56, help="Random seed")
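With the rename, the flag passed in the README command and the attribute read inside the script share the same name, so the value flows through. Below is a minimal sketch of the now-consistent flow; the downstream use of `args.n_gpu` is illustrative, not the exact train.py logic.

```python
import argparse

# Sketch of the corrected parser; downstream usage is illustrative only.
parser = argparse.ArgumentParser()
parser.add_argument("--n_gpu", type=int, default=1, help="Number of GPUs in the node.")
parser.add_argument("--local_rank", type=int, default=-1, help="Distributed training - Local rank")
parser.add_argument("--seed", type=int, default=56, help="Random seed")
args = parser.parse_args(["--n_gpu", "4"])

# The attribute name now matches the CLI flag, so this read succeeds.
args.multi_gpu = args.n_gpu > 1  # illustrative: derive a multi-GPU switch from the count
print(f"Training on {args.n_gpu} GPU(s); multi_gpu={args.multi_gpu}")
```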