Unverified Commit b04cf65b authored by S22, committed by GitHub

[Bugfix] Fix SparseAdam bug in metapath2vec (PyTorch) (#2607)



It seems that there is a bug in SparseAdam in PyTorch; it can be temporarily worked around by passing the model parameters as a list:
optimizer = optim.SparseAdam(list(self.skip_gram_model.parameters()), lr=self.initial_lr)
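For context, a minimal self-contained sketch of the workaround (the embedding model and hyperparameters below are illustrative stand-ins, not the ones used in the example):

import torch.nn as nn
import torch.optim as optim

# Illustrative stand-in for self.skip_gram_model: an embedding table with
# sparse gradients, which is what SparseAdam is intended for.
model = nn.Embedding(num_embeddings=1000, embedding_dim=128, sparse=True)

# On affected PyTorch versions, passing the parameters() generator directly
# can fail, reportedly because SparseAdam's argument validation iterates over
# (and exhausts) the generator before the parameter groups are stored.
# optimizer = optim.SparseAdam(model.parameters(), lr=0.025)

# Workaround: materialize the generator into a list first.
optimizer = optim.SparseAdam(list(model.parameters()), lr=0.025)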
Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
parent 47c93805
@@ -36,7 +36,7 @@ class Metapath2VecTrainer:
     def train(self):
-        optimizer = optim.SparseAdam(self.skip_gram_model.parameters(), lr=self.initial_lr)
+        optimizer = optim.SparseAdam(list(self.skip_gram_model.parameters()), lr=self.initial_lr)
         scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, len(self.dataloader))
         for iteration in range(self.iterations):