Unverified commit 6c186b3b authored by Janusz Lisiecki, committed by GitHub

Fix lack of proper loading of best_prec1 from the checkpoint (#1000)



- resume() is a nested function, and when it loads best_prec1 the
  assignment creates a local variable that shadows the one from the parent
  function (which refers to the global one). This PR adds a `global`
  declaration so the assignment updates the global variable as intended
  (a minimal sketch of this behaviour follows the diff below).
Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>
parent 6b7e77b0
@@ -182,6 +182,7 @@ def main():
 print("=> loading checkpoint '{}'".format(args.resume))
 checkpoint = torch.load(args.resume, map_location = lambda storage, loc: storage.cuda(args.gpu))
 args.start_epoch = checkpoint['epoch']
+global best_prec1
 best_prec1 = checkpoint['best_prec1']
 model.load_state_dict(checkpoint['state_dict'])
 optimizer.load_state_dict(checkpoint['optimizer'])
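For context, a minimal, self-contained sketch of the scoping behaviour the commit message describes. This is not DALI code; the function names (`resume_broken`, `resume_fixed`) and the checkpoint value are made up for illustration.

```python
# Minimal illustration of why the `global` declaration is needed.
best_prec1 = 0  # module-level "best accuracy so far"


def main():
    ckpt = {'best_prec1': 76.3}  # stand-in for torch.load(args.resume)

    def resume_broken():
        # Assigning to best_prec1 creates a *local* variable that shadows
        # the module-level one; the loaded value is silently discarded.
        best_prec1 = ckpt['best_prec1']  # noqa: F841 (unused local)

    def resume_fixed():
        global best_prec1            # rebind the module-level name instead
        best_prec1 = ckpt['best_prec1']

    resume_broken()
    print(best_prec1)  # 0    -- checkpoint value was lost
    resume_fixed()
    print(best_prec1)  # 76.3 -- now restored as intended


if __name__ == '__main__':
    main()
```

The shadowing happens regardless of how deeply the function is nested; `global` (or `nonlocal`, if the target lived in an enclosing function rather than at module scope) tells Python which existing binding the assignment should update.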