Commit 437c2386 authored by Myle Ott, committed by Facebook GitHub Bot

Speed up saving checkpoints (#703)

Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/703

It's faster to write the checkpoint once and copy the file to the other checkpoint paths than to repeatedly re-pickle the model via torch.save for each path.
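The pattern from this commit can be sketched in isolation as below. This is a minimal illustration, not fairseq's actual code: `save_checkpoints` is a hypothetical helper, and `pickle.dump` stands in for the potentially expensive `torch.save` serialization that the commit avoids repeating.

```python
import os
import pickle
import shutil
import tempfile

def save_checkpoints(state, paths):
    """Serialize `state` once to the first path, then copy that file to the
    remaining paths instead of re-serializing for each one."""
    if not paths:
        return
    with open(paths[0], "wb") as f:
        pickle.dump(state, f)  # expensive serialization happens only once
    for path in paths[1:]:
        shutil.copyfile(paths[0], path)  # cheap file copy for the rest

# Usage: one epoch may trigger several checkpoint names at once
# (e.g. the per-epoch, best, and last checkpoints in fairseq).
tmp = tempfile.mkdtemp()
paths = [os.path.join(tmp, name)
         for name in ("checkpoint1.pt", "checkpoint_best.pt", "checkpoint_last.pt")]
save_checkpoints({"epoch": 1, "model": [0.1, 0.2]}, paths)
```

The copy is byte-identical to the original, so every path yields the same checkpoint when loaded.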

Differential Revision: D15213778

fbshipit-source-id: 27dad39853b09dab7f0e11c030313019f035dbb0
parent cf17068a
@@ -14,6 +14,7 @@ import itertools
 import math
 import os
 import random
+import shutil
 import torch
@@ -307,8 +308,9 @@ def save_checkpoint(args, trainer, epoch_itr, val_loss):
     checkpoints = [os.path.join(args.save_dir, fn) for fn, cond in checkpoint_conds.items() if cond]
     if len(checkpoints) > 0:
-        for cp in checkpoints:
-            trainer.save_checkpoint(cp, extra_state)
+        trainer.save_checkpoint(checkpoints[0], extra_state)
+        for cp in checkpoints[1:]:
+            shutil.copyfile(checkpoints[0], cp)
         write_timer.stop()
         print('| saved checkpoint {} (epoch {} @ {} updates) (writing took {} seconds)'.format(