[Bugfix] Fix sparseadam bug in metapath2vec(pytorch) (#2607)
There appears to be a bug in PyTorch's SparseAdam when it is given a parameter generator directly; it can be worked around for now by materializing the parameters as a list:
optimizer = optim.SparseAdam(list(self.skip_gram_model.parameters()), lr=self.initial_lr)
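A minimal sketch of the workaround, using a hypothetical embedding table with sparse gradients in place of the full skip-gram model:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-in for the skip-gram model: an embedding table that
# produces sparse gradients, which is what SparseAdam is designed for.
emb = nn.Embedding(num_embeddings=100, embedding_dim=8, sparse=True)

# Wrapping the .parameters() generator in list() sidesteps the issue;
# the lr value here is illustrative, not the one used in the example.
optimizer = optim.SparseAdam(list(emb.parameters()), lr=0.025)

# One training step to confirm the optimizer handles sparse gradients.
idx = torch.tensor([1, 2, 3])
loss = emb(idx).sum()
loss.backward()
optimizer.step()
```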
Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>