"git@developer.sourcefind.cn:OpenDAS/fairscale.git" did not exist on "a2408eb819059632364dc2c15adcbd55239ae088"
Unverified Commit 22272de6 authored by Rhett Ying, committed by GitHub

[Fix] normalize by dst if edge_weight is None (#3744)

* [Fix] normalize by dst if edge_weight is None

* [Doc] fix math formula display issue
parent 8a8d36e9
@@ -86,7 +86,7 @@ class APPNPConv(nn.Module):
         edge_weight: torch.Tensor, optional
             edge_weight to use in the message passing process. This is equivalent to
             using weighted adjacency matrix in the equation above, and
-            :math:\tilde{D}^{-1/2}\tilde{A} \tilde{D}^{-1/2}
+            :math:`\tilde{D}^{-1/2}\tilde{A} \tilde{D}^{-1/2}`
             is based on :class:`dgl.nn.pytorch.conv.graphconv.EdgeWeightNorm`.

         Returns
@@ -114,10 +114,9 @@ class APPNPConv(nn.Module):
             if edge_weight is None:
                 feat = feat * src_norm
             graph.ndata['h'] = feat
-            if edge_weight is None:
-                edge_weight = th.ones(graph.number_of_edges(), 1)
-            graph.edata['w'] = self.edge_drop(
-                edge_weight).to(feat.device)
+            w = th.ones(graph.number_of_edges(),
+                        1) if edge_weight is None else edge_weight
+            graph.edata['w'] = self.edge_drop(w).to(feat.device)
             graph.update_all(fn.u_mul_e('h', 'w', 'm'),
                              fn.sum('m', 'h'))
             feat = graph.ndata.pop('h')
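The refactored branch keeps the old default (unit weights when no edge_weight is given) while routing user-supplied weights through the same dropout path. A minimal usage sketch, assuming a toy graph and made-up feature sizes; APPNPConv and EdgeWeightNorm are the real DGL modules, everything else here is illustrative:

    # Illustrative only: graph topology, feature width, and k/alpha are made up.
    import dgl
    import torch as th
    from dgl.nn.pytorch.conv import APPNPConv, EdgeWeightNorm

    g = dgl.graph(([0, 1, 2, 3], [1, 2, 3, 0]))   # toy 4-node cycle
    feat = th.randn(4, 8)
    conv = APPNPConv(k=3, alpha=0.5)

    # No edge_weight: the layer falls back to unit weights and applies the
    # degree normalization itself (the code path touched by this fix).
    out_plain = conv(g, feat)

    # Explicit edge_weight: pre-normalize it, as the docstring recommends.
    w = th.rand(g.number_of_edges())
    w_norm = EdgeWeightNorm(norm='both')(g, w)
    out_weighted = conv(g, feat, edge_weight=w_norm)

With pre-normalized weights, propagation still corresponds to the :math:`\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}` form referenced in the docstring.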
@@ -192,7 +192,7 @@ class GCN2Conv(nn.Module):
         edge_weight: torch.Tensor, optional
             edge_weight to use in the message passing process. This is equivalent to
             using weighted adjacency matrix in the equation above, and
-            :math:\tilde{D}^{-1/2}\tilde{A} \tilde{D}^{-1/2}
+            :math:`\tilde{D}^{-1/2}\tilde{A} \tilde{D}^{-1/2}`
             is based on :class:`dgl.nn.pytorch.conv.graphconv.EdgeWeightNorm`.
@@ -150,7 +150,7 @@ class SGConv(nn.Module):
         edge_weight: torch.Tensor, optional
             edge_weight to use in the message passing process. This is equivalent to
             using weighted adjacency matrix in the equation above, and
-            :math:\tilde{D}^{-1/2}\tilde{A} \tilde{D}^{-1/2}
+            :math:`\tilde{D}^{-1/2}\tilde{A} \tilde{D}^{-1/2}`
             is based on :class:`dgl.nn.pytorch.conv.graphconv.EdgeWeightNorm`.

         Returns
@@ -108,7 +108,7 @@ class TAGConv(nn.Module):
         edge_weight: torch.Tensor, optional
             edge_weight to use in the message passing process. This is equivalent to
             using weighted adjacency matrix in the equation above, and
-            :math:\tilde{D}^{-1/2}\tilde{A} \tilde{D}^{-1/2}
+            :math:`\tilde{D}^{-1/2}\tilde{A} \tilde{D}^{-1/2}`
             is based on :class:`dgl.nn.pytorch.conv.graphconv.EdgeWeightNorm`.

         Returns
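The same backtick fix is applied to the GCN2Conv, SGConv, and TAGConv docstrings above, and all four point to the same EdgeWeightNorm recipe. A hedged sketch of that recipe for SGConv; the toy graph, feature width, and output size are invented for illustration:

    # Illustrative only: sizes and the toy graph are made up; SGConv and
    # EdgeWeightNorm are the DGL classes referenced in the docstrings.
    import dgl
    import torch as th
    from dgl.nn.pytorch.conv import SGConv, EdgeWeightNorm

    g = dgl.graph(([0, 1, 2], [1, 2, 0]))
    feat = th.randn(3, 16)
    conv = SGConv(16, 4, k=2)

    # Scale raw weights so message passing uses the symmetrically
    # normalized weighted adjacency from the docstring formula.
    w = th.rand(g.number_of_edges())
    w_norm = EdgeWeightNorm(norm='both')(g, w)
    out = conv(g, feat, edge_weight=w_norm)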