OpenDAS / dgl · Commit 41c9c3b9
Authored Nov 30, 2018 by Mufei Li; committed by Minjie Wang, Nov 29, 2018
[Doc] Fix batched_graph doc (#193)
parent 8dc6784b

Showing 1 changed file with 47 additions and 35 deletions.

python/dgl/batched_graph.py (+47, −35)
@@ -24,13 +24,15 @@ class BatchedDGLGraph(DGLGraph):
The nodes and edges are re-indexed with a new id in the batched graph with the
rule below:
-        | Graph 1 | Graph 2 |...| Graph k
- --------------------------------------------------------------------------------
- raw id | 0, ..., N1 | 0 , ..., N2 |...| ..., Nk
- new id | 0, ..., N1 | N1 + 1, ..., N1 + N2 + 1 |...| ..., N1 + ... + Nk + k - 1
+ ====== ========== ======================== === ==========================
+ item   Graph 1    Graph 2                  ... Graph k
+ ====== ========== ======================== === ==========================
+ raw id 0, ..., N1 0, ..., N2               ... ..., Nk
+ new id 0, ..., N1 N1 + 1, ..., N1 + N2 + 1 ... ..., N1 + ... + Nk + k - 1
+ ====== ========== ======================== === ==========================
The batched graph is read-only, i.e. one cannot further add nodes and edges.
- A RuntimeError will be raised if one attempts.
+ A ``RuntimeError`` will be raised if one attempts.
To modify the features in :class:`BatchedDGLGraph` has no effect on the original
graphs. See the examples below about how to work around.
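The re-indexing rule in the new table can be checked with a small example. The following is a minimal sketch in the same doctest style as the examples below, using only calls that already appear in this file (``dgl.DGLGraph``, ``add_nodes``, ``dgl.batch``, ``number_of_nodes``); the node counts are chosen purely for illustration.

>>> import dgl
>>> g1 = dgl.DGLGraph()
>>> g1.add_nodes(2)            # raw ids 0, 1
>>> g2 = dgl.DGLGraph()
>>> g2.add_nodes(3)            # raw ids 0, 1, 2
>>> bg = dgl.batch([g1, g2])
>>> bg.number_of_nodes()       # g2's nodes are re-indexed to 2, 3, 4
5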
@@ -38,7 +40,7 @@ class BatchedDGLGraph(DGLGraph):
Parameters
----------
graph_list : iterable
- A collection of :class:`~dgl.DGLGraphs` to be batched.
+ A collection of :class:`~dgl.DGLGraph` objects to be batched.
node_attrs : None, str or iterable, optional
The node attributes to be batched. If ``None``, the :class:`BatchedDGLGraph` object
will not have any node attributes. By default, all node attributes will be batched.
@@ -49,7 +51,7 @@ class BatchedDGLGraph(DGLGraph):
Examples
--------
- Create two :class:`~dgl.DGLGraphs` objects.
+ Create two :class:`~dgl.DGLGraph` objects.
**Instantiation:**
@@ -67,7 +69,7 @@ class BatchedDGLGraph(DGLGraph):
>>> g2.ndata['hv'] = th.tensor([[2.], [3.], [4.]]) # Initialize node features
>>> g2.edata['he'] = th.tensor([[1.], [2.]]) # Initialize edge features
- Merge two :class:`~dgl.DGLGraphs` objects into one :class:`BatchedDGLGraph` object.
+ Merge two :class:`~dgl.DGLGraph` objects into one :class:`BatchedDGLGraph` object.
When merging a list of graphs, we can choose to include only a subset of the attributes.
>>> bg = dgl.batch([g1, g2], edge_attrs=None)
@@ -89,6 +91,7 @@ class BatchedDGLGraph(DGLGraph):
**Property:**
We can still get a brief summary of the graphs that constitute the batched graph.
>>> bg.batch_size
2
>>> bg.batch_num_nodes
@@ -100,11 +103,11 @@ class BatchedDGLGraph(DGLGraph):
Another common demand for graph neural networks is graph readout, which is a
function that takes in the node attributes and/or edge attributes for a graph
- and outputs a vector summarizing the information in the graph. `BatchedDGLGraph`
+ and outputs a vector summarizing the information in the graph. :class:`BatchedDGLGraph`
also supports performing readout for a batch of graphs at once.
Below we take the built-in readout function :func:`sum_nodes` as an example, which
- sums a particular node attribute for each graph.
+ sums over a particular kind of node attribute for each graph.
>>> dgl.sum_nodes(bg, 'hv') # Sum the node attribute 'hv' for each graph.
tensor([[1.], # 0 + 1
@@ -113,7 +116,7 @@ class BatchedDGLGraph(DGLGraph):
**Message passing:**
For message passing and related operations, :class:`BatchedDGLGraph` acts exactly
- the same as :class:`~dgl.DGLGraphs`.
+ the same as :class:`~dgl.DGLGraph`.
**Update Attributes:**
@@ -126,7 +129,8 @@ class BatchedDGLGraph(DGLGraph):
Instead, we can decompose the batched graph back into a list of graphs and use them
to replace the original graphs.
- >>> g1, g2 = dgl.unbatch(bg) # returns a list of DGLGraphs
+ >>> g1, g2 = dgl.unbatch(bg) # returns a list of DGLGraph objects
>>> g2.edata['he']
tensor([[0., 0.],
        [0., 0.]])}
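The workaround referred to earlier (modifying features of a :class:`BatchedDGLGraph` has no effect on the original graphs) amounts to editing the batched copy and then unbatching it. A minimal sketch, assuming the g1/g2 built in the instantiation example above; concrete feature values are omitted because they sit in a collapsed part of this diff.

>>> bg = dgl.batch([g1, g2], edge_attrs=None)   # node feature 'hv' is copied into bg
>>> bg.ndata['hv'] = bg.ndata['hv'] + 1         # edits bg only; g1 and g2 are unchanged
>>> g1, g2 = dgl.unbatch(bg)                    # rebind g1/g2 to graphs carrying the edits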
@@ -286,7 +290,7 @@ def unbatch(graph):
Returns
-------
list
- A list of :class:`~dgl.DGLGraphs` objects whose attributes are obtained
+ A list of :class:`~dgl.DGLGraph` objects whose attributes are obtained
by partitioning the attributes of the :attr:`graph`. The length of the
list is the same as the batch size of :attr:`graph`.
@@ -323,13 +327,13 @@ def unbatch(graph):
edge_frame=edge_frames[i]) for i in range(bsize)]
def batch(graph_list, node_attrs=ALL, edge_attrs=ALL):
- """Batch a collection of :class:`~dgl.DGLGraphs` and return a
+ """Batch a collection of :class:`~dgl.DGLGraph` and return a
:class:`BatchedDGLGraph` object that is independent of the :attr:`graph_list`.
Parameters
----------
graph_list : iterable
- A collection of :class:`~dgl.DGLGraphs` to be batched.
+ A collection of :class:`~dgl.DGLGraph` to be batched.
node_attrs : None, str or iterable
The node attributes to be batched. If ``None``, the :class:`BatchedDGLGraph`
object will not have any node attributes. By default, all node attributes will
@@ -381,7 +385,7 @@ def _sum_on(graph, on, input, weight):
def sum_nodes(graph, input, weight=None):
"""Sums all the values of node field :attr:`input` in :attr:`graph`, optionally
- multiplies the field by a scalar node field :attr`weight`.
+ multiplies the field by a scalar node field :attr:`weight`.
Parameters
----------
@@ -393,7 +397,7 @@ def sum_nodes(graph, input, weight=None):
The weight field. If None, no weighting will be performed,
otherwise, weight each node feature with field :attr:`input`.
for summation. The weight feature associated in the :attr:`graph`
- should be a tensor of shape [graph.number_of_nodes(), 1].
+ should be a tensor of shape ``[graph.number_of_nodes(), 1]``.
Returns
-------
@@ -414,7 +418,7 @@ def sum_nodes(graph, input, weight=None):
>>> import dgl
>>> import torch as th
- Create two :class:`~dgl.DGLGraphs` objects and initialize their
+ Create two :class:`~dgl.DGLGraph` objects and initialize their
node features.
>>> g1 = dgl.DGLGraph() # Graph 1
@@ -426,15 +430,17 @@ def sum_nodes(graph, input, weight=None):
>>> g2.add_nodes(3)
>>> g2.ndata['h'] = th.tensor([[1.], [2.], [3.]])
- Sum over node attribute 'h' without weighting for each graph in a
+ Sum over node attribute :attr:`h` without weighting for each graph in a
batched graph.
>>> bg = dgl.batch([g1, g2], node_attrs='h')
>>> dgl.sum_nodes(bg, 'h')
tensor([[3.], # 1 + 2
        [6.]]) # 1 + 2 + 3
- Sum node attribute 'h' with weight from node attribute 'w' for a single
- graph.
+ Sum node attribute :attr:`h` with weight from node attribute :attr:`w`
+ for a single graph.
>>> dgl.sum_nodes(g1, 'h', 'w')
tensor([15.]) # 1 * 3 + 2 * 6
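The weighted result above can be reproduced with plain torch arithmetic. This is only a conceptual check of the comment ``# 1 * 3 + 2 * 6``, assuming g1's features are ``h = [[1.], [2.]]`` and ``w = [[3.], [6.]]`` as that comment implies; it is not how DGL computes the readout internally.

>>> import torch as th
>>> h = th.tensor([[1.], [2.]])    # assumed g1.ndata['h']
>>> w = th.tensor([[3.], [6.]])    # assumed g1.ndata['w'] (the weight field)
>>> (h * w).sum(0)                 # weighted sum over the graph's nodes
tensor([15.])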
@@ -460,7 +466,7 @@ def sum_edges(graph, input, weight=None):
The weight field. If None, no weighting will be performed,
otherwise, weight each edge feature with field :attr:`input`.
for summation. The weight feature associated in the :attr:`graph`
- should be a tensor of shape [graph.number_of_edges(), 1].
+ should be a tensor of shape ``[graph.number_of_edges(), 1]``.
Returns
-------
@@ -481,7 +487,7 @@ def sum_edges(graph, input, weight=None):
>>> import dgl
>>> import torch as th
- Create two :class:`~dgl.DGLGraphs` objects and initialize their
+ Create two :class:`~dgl.DGLGraph` objects and initialize their
edge features.
>>> g1 = dgl.DGLGraph() # Graph 1
@@ -495,15 +501,17 @@ def sum_edges(graph, input, weight=None):
>>> g2.add_edges([0, 1, 2], [1, 2, 0])
>>> g2.edata['h'] = th.tensor([[1.], [2.], [3.]])
- Sum over edge attribute 'h' without weighting for each graph in a
+ Sum over edge attribute :attr:`h` without weighting for each graph in a
batched graph.
>>> bg = dgl.batch([g1, g2], edge_attrs='h')
>>> dgl.sum_edges(bg, 'h')
tensor([[3.], # 1 + 2
        [6.]]) # 1 + 2 + 3
- Sum edge attribute 'h' with weight from edge attribute 'w' for a single
- graph.
+ Sum edge attribute :attr:`h` with weight from edge attribute :attr:`w`
+ for a single graph.
>>> dgl.sum_edges(g1, 'h', 'w')
tensor([15.]) # 1 * 3 + 2 * 6
@@ -562,7 +570,7 @@ def mean_nodes(graph, input, weight=None):
The weight field. If None, no weighting will be performed,
otherwise, weight each node feature with field :attr:`input`.
for calculating mean. The weight feature associated in the :attr:`graph`
- should be a tensor of shape [graph.number_of_nodes(), 1].
+ should be a tensor of shape ``[graph.number_of_nodes(), 1]``.
Returns
-------
@@ -583,7 +591,7 @@ def mean_nodes(graph, input, weight=None):
>>> import dgl
>>> import torch as th
- Create two :class:`~dgl.DGLGraphs` objects and initialize their
+ Create two :class:`~dgl.DGLGraph` objects and initialize their
node features.
>>> g1 = dgl.DGLGraph() # Graph 1
@@ -595,15 +603,17 @@ def mean_nodes(graph, input, weight=None):
>>> g2.add_nodes(3)
>>> g2.ndata['h'] = th.tensor([[1.], [2.], [3.]])
- Average over node attribute 'h' without weighting for each graph in a
+ Average over node attribute :attr:`h` without weighting for each graph in a
batched graph.
>>> bg = dgl.batch([g1, g2], node_attrs='h')
>>> dgl.mean_nodes(bg, 'h')
tensor([[1.5000], # (1 + 2) / 2
        [2.0000]]) # (1 + 2 + 3) / 3
- Sum node attribute 'h' with normalized weight from node attribute 'w'
+ Sum node attribute :attr:`h` with normalized weight from node attribute :attr:`w`
for a single graph.
>>> dgl.mean_nodes(g1, 'h', 'w') # h1 * (w1 / (w1 + w2)) + h2 * (w2 / (w1 + w2))
tensor([1.6667]) # 1 * (3 / (3 + 6)) + 2 * (6 / (3 + 6))
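Similarly, the weighted mean above is the weighted sum divided by the total weight. A conceptual check with plain torch, under the same assumed values ``h = [[1.], [2.]]`` and ``w = [[3.], [6.]]``; again, this is not DGL's internal implementation.

>>> import torch as th
>>> h = th.tensor([[1.], [2.]])    # assumed g1.ndata['h']
>>> w = th.tensor([[3.], [6.]])    # assumed g1.ndata['w']
>>> (h * w).sum(0) / w.sum()       # normalized weighting: 15 / 9
tensor([1.6667])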
@@ -629,7 +639,7 @@ def mean_edges(graph, input, weight=None):
The weight field. If None, no weighting will be performed,
otherwise, weight each edge feature with field :attr:`input`.
for calculating mean. The weight feature associated in the :attr:`graph`
- should be a tensor of shape [graph.number_of_edges(), 1].
+ should be a tensor of shape ``[graph.number_of_edges(), 1]``.
Returns
-------
@@ -650,7 +660,7 @@ def mean_edges(graph, input, weight=None):
>>> import dgl
>>> import torch as th
- Create two :class:`~dgl.DGLGraphs` objects and initialize their
+ Create two :class:`~dgl.DGLGraph` objects and initialize their
edge features.
>>> g1 = dgl.DGLGraph() # Graph 1
@@ -664,15 +674,17 @@ def mean_edges(graph, input, weight=None):
>>> g2.add_edges([0, 1, 2], [1, 2, 0])
>>> g2.edata['h'] = th.tensor([[1.], [2.], [3.]])
- Average over edge attribute 'h' without weighting for each graph in a
+ Average over edge attribute :attr:`h` without weighting for each graph in a
batched graph.
>>> bg = dgl.batch([g1, g2], edge_attrs='h')
>>> dgl.mean_edges(bg, 'h')
tensor([[1.5000], # (1 + 2) / 2
        [2.0000]]) # (1 + 2 + 3) / 3
- Sum edge attribute 'h' with normalized weight from edge attribute 'w'
+ Sum edge attribute :attr:`h` with normalized weight from edge attribute :attr:`w`
for a single graph.
>>> dgl.mean_edges(g1, 'h', 'w') # h1 * (w1 / (w1 + w2)) + h2 * (w2 / (w1 + w2))
tensor([1.6667]) # 1 * (3 / (3 + 6)) + 2 * (6 / (3 + 6))