"...pytorch/git@developer.sourcefind.cn:OpenDAS/dgl.git" did not exist on "aad12df6a63b7c2269bc8ed68b10b9099b5df46d"
Commit 707334ce authored by brett koonce, committed by Quan (Andy) Gan

minor spelling tweaks (#349)

* minor spelling tweaks

* Update CONTRIBUTORS.md
parent 192bd952
@@ -2,7 +2,7 @@
 Contribution is always welcomed. A good starting place is the roadmap issue, where
 you can find our current milestones. All contributions must go through pull requests
-and be reviewed by the committors. See our [contribution guide](https://docs.dgl.ai/contribute.html) for more details.
+and be reviewed by the committers. See our [contribution guide](https://docs.dgl.ai/contribute.html) for more details.
 Once your contribution is accepted and merged, congratulations, you are now a contributor to the DGL project.
 We will put your name in the list below and also on our [website](https://www.dgl.ai/ack).
@@ -13,3 +13,4 @@ Contributors
 [Yifei Ma](https://github.com/yifeim)
 Hao Jin
 [Sheng Zha](https://github.com/szha)
+[Brett Koonce](https://github.com/brettkoonce)
@@ -242,7 +242,7 @@ if __name__ == '__main__':
     parser.add_argument("--negative-sample", type=int, default=10,
             help="number of negative samples per positive sample")
     parser.add_argument("--evaluate-every", type=int, default=500,
-            help="perform evalution every n epochs")
+            help="perform evaluation every n epochs")
     args = parser.parse_args()
     print(args)
...
@@ -27,7 +27,7 @@ Available datasets: `copy`, `sort`, `wmt14`, `multi30k`(default).
 ## Test Results
-### Transfomer
+### Transformer
 - Multi30k: we achieve BLEU score 35.41 with default setting on Multi30k dataset, without using pre-trained embeddings. (if we set the number of layers to 2, the BLEU score could reach 36.45).
 - WMT14: work in progress
@@ -38,7 +38,7 @@ Available datasets: `copy`, `sort`, `wmt14`, `multi30k`(default).
 ## Notes
-- Currently we do not support Multi-GPU training(this will be fixed soon), you should only specifiy only one gpu\_id when running the training script.
+- Currently we do not support Multi-GPU training(this will be fixed soon), you should only specify only one gpu\_id when running the training script.
 ## Reference
...
@@ -9,7 +9,7 @@ Graph = namedtuple('Graph',
                    ['g', 'src', 'tgt', 'tgt_y', 'nids', 'eids', 'nid_arr', 'n_nodes', 'n_edges', 'n_tokens'])
 class GraphPool:
-    "Create a graph pool in advance to accelerate graph buildling phase in Transformer."
+    "Create a graph pool in advance to accelerate graph building phase in Transformer."
     def __init__(self, n=50, m=50):
         '''
         args:
...
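As a side note on the docstring touched above: a "graph pool" here means pre-building reusable graph structures so the Transformer data pipeline does not rebuild them for every batch. A speculative, self-contained sketch of that caching pattern (the class and method names below are illustrative, not DGL's):

```python
class SimpleGraphPool:
    """Illustrative cache of pre-built objects keyed by (src_len, tgt_len)."""
    def __init__(self, n=50, m=50):
        self.n, self.m = n, m   # maximum source/target lengths to pool
        self.cache = {}

    def get(self, src_len, tgt_len, build_fn):
        # reuse a previously built graph when one exists for these lengths
        key = (src_len, tgt_len)
        if key not in self.cache:
            self.cache[key] = build_fn(src_len, tgt_len)
        return self.cache[key]
```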
@@ -115,7 +115,7 @@ def empty(shape, dtype="float32", ctx=context(1, 0)):
 def from_dlpack(dltensor):
     """Produce an array from a DLPack tensor without memory copy.
-    Retreives the underlying DLPack tensor's pointer to create an array from the
+    Retrieves the underlying DLPack tensor's pointer to create an array from the
     data. Removes the original DLPack tensor's destructor as now the array is
     responsible for destruction.
@@ -195,7 +195,7 @@ class NDArrayBase(_NDArrayBase):
         raise TypeError('type %s not supported' % str(type(value)))
     def copyfrom(self, source_array):
-        """Peform an synchronize copy from the array.
+        """Perform a synchronized copy from the array.
         Parameters
         ----------
...
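The `from_dlpack` docstring fixed above describes a zero-copy exchange. A minimal sketch of the same round trip using PyTorch's DLPack utilities (an analogy, not DGL's `ndarray` code):

```python
import torch
import torch.utils.dlpack as dlpack

t = torch.arange(6, dtype=torch.float32)
capsule = dlpack.to_dlpack(t)      # export: wraps the existing buffer
t2 = dlpack.from_dlpack(capsule)   # import: no memory copy happens
t2[0] = 42.0
assert t[0].item() == 42.0         # both tensors share one buffer
```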
@@ -73,7 +73,7 @@ class DGLType(ctypes.Structure):
             bits = 64
             head = ""
         else:
-            raise ValueError("Donot know how to handle type %s" % type_str)
+            raise ValueError("Do not know how to handle type %s" % type_str)
         bits = int(head) if head else bits
         inst.bits = bits
...
@@ -422,7 +422,7 @@ def scatter_row(data, row_index, value):
     pass
 def scatter_row_inplace(data, row_index, value):
-    """Write the value into the data tensor using the row index inplacely.
+    """Write the value into the data tensor using the row index inplace.
     This is an inplace write so it will break the autograd.
...
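The warning in the `scatter_row_inplace` docstring ("will break the autograd") can be seen directly in PyTorch; a hedged sketch of what the in-place row write amounts to:

```python
import torch

data = torch.zeros(4, 3, requires_grad=True)
rows = torch.tensor([1, 3])
value = torch.ones(2, 3)

# writing into a leaf tensor that requires grad must bypass autograd,
# which is exactly why the docstring warns the gradient chain is broken
with torch.no_grad():
    data[rows] = value
```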
@@ -318,7 +318,7 @@ class ImmutableGraphIndex(object):
         Parameters
         ----------
         transpose : bool
-            A flag to tranpose the returned adjacency matrix.
+            A flag to transpose the returned adjacency matrix.
         ctx : context
             The device context of the returned matrix.
@@ -352,7 +352,7 @@ class ImmutableGraphIndex(object):
     def from_edge_list(self, elist):
         """Convert from an edge list.
-        Paramters
+        Parameters
         ---------
         elist : list
             List of (u, v) edge tuple.
...
@@ -282,7 +282,7 @@ class RDFReader(object):
     def relationList(self):
         """
-        Returns a list of relations, ordered descending by frequenecy
+        Returns a list of relations, ordered descending by frequency
         :return:
         """
         res = list(set(self.__graph.predicates()))
@@ -327,7 +327,7 @@ def _load_data(dataset_str='aifb', dataset_path=None):
     train_file = os.path.join(dataset_path, 'trainingSet.tsv')
     test_file = os.path.join(dataset_path, 'testSet.tsv')
     if dataset_str == 'am':
-        label_header = 'label_cateogory'
+        label_header = 'label_category'
         nodes_header = 'proxy'
     elif dataset_str == 'aifb':
         label_header = 'label_affiliation'
...
@@ -208,7 +208,7 @@ def NeighborSampler(g, batch_size, expand_factor, num_hops=1,
     "DGLBACKEND" environment variable to "mxnet".
     This creates a subgraph data loader that samples subgraphs from the input graph
-    with neighbor sampling. This simpling method is implemented in C and can perform
+    with neighbor sampling. This sampling method is implemented in C and can perform
     sampling very efficiently.
     A subgraph grows from a seed vertex. It contains sampled neighbors
...
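For intuition about the behavior this docstring describes (a subgraph growing from a seed vertex), here is a tiny pure-Python sketch; it is not DGL's C implementation, and `adj` is an assumed dict-of-lists adjacency:

```python
import random

def sample_neighborhood(adj, seed, expand_factor, num_hops):
    """Grow a sampled vertex set from `seed`, one hop at a time."""
    frontier, visited = {seed}, {seed}
    for _ in range(num_hops):
        sampled = set()
        for u in frontier:
            k = min(expand_factor, len(adj[u]))   # at most expand_factor neighbors
            sampled.update(random.sample(adj[u], k))
        frontier = sampled - visited
        visited |= sampled
    return visited

# e.g. sample_neighborhood({0: [1, 2], 1: [0], 2: [0]}, seed=0,
#                          expand_factor=2, num_hops=1) -> {0, 1, 2}
```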
@@ -42,7 +42,7 @@ class Scheme(namedtuple('Scheme', ['shape', 'dtype'])):
 def infer_scheme(tensor):
     """Infer column scheme from the given tensor data.
-    Paramters
+    Parameters
     ---------
     tensor : Tensor
         The tensor data.
@@ -723,7 +723,7 @@ class FrameRef(MutableMapping):
         data : dict-like
             The row data.
         inplace : bool
-            True if the update is performed inplacely.
+            True if the update is performed inplace.
         """
         rows = self._getrows(query)
         for key, col in data.items():
@@ -743,7 +743,7 @@ class FrameRef(MutableMapping):
         Please note that "deleted" rows are not really deleted, but simply removed
         in the reference. As a result, if two FrameRefs point to the same Frame, deleting
-        from one ref will not relect on the other. However, deleting columns is real.
+        from one ref will not reflect on the other. However, deleting columns is real.
         Parameters
         ----------
...
@@ -522,7 +522,7 @@ class GraphIndex(object):
         Parameters
         ----------
         transpose : bool
-            A flag to tranpose the returned adjacency matrix.
+            A flag to transpose the returned adjacency matrix.
         ctx : context
             The context of the returned matrix.
@@ -712,7 +712,7 @@ class GraphIndex(object):
     def from_edge_list(self, elist):
         """Convert from an edge list.
-        Paramters
+        Parameters
         ---------
         elist : list
             List of (u, v) edge tuple.
@@ -830,7 +830,7 @@ def disjoint_union(graphs):
     """Return a disjoint union of the input graphs.
     The new graph will include all the nodes/edges in the given graphs.
-    Nodes/Edges will be relabled by adding the cumsum of the previous graph sizes
+    Nodes/Edges will be relabeled by adding the cumsum of the previous graph sizes
     in the given sequence order. For example, giving input [g1, g2, g3], where
     they have 5, 6, 7 nodes respectively. Then node#2 of g2 will become node#7
     in the result graph. Edge ids are re-assigned similarly.
...
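The relabeling rule in the `disjoint_union` docstring (offset node ids by the cumulative sizes of the preceding graphs) can be checked in a few lines; the helper below is illustrative, not DGL's implementation:

```python
from itertools import accumulate

def disjoint_union_edges(graphs):
    """graphs: list of (num_nodes, edge_list); returns relabeled edges."""
    sizes = [n for n, _ in graphs]
    offsets = [0] + list(accumulate(sizes))[:-1]   # cumsum of prior sizes
    merged = []
    for (_, edges), off in zip(graphs, offsets):
        merged.extend((u + off, v + off) for u, v in edges)
    return merged

# g1 has 5 nodes, so g2's edge (2, 3) becomes (7, 8), matching the
# docstring's example that node#2 of g2 becomes node#7.
print(disjoint_union_edges([(5, [(0, 1)]), (6, [(2, 3)])]))
```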
@@ -507,7 +507,7 @@ class ImmutableGraphIndex(object):
         Parameters
         ----------
         transpose : bool
-            A flag to tranpose the returned adjacency matrix.
+            A flag to transpose the returned adjacency matrix.
         Returns
         -------
@@ -707,7 +707,7 @@ def disjoint_union(graphs):
     """Return a disjoint union of the input graphs.
     The new graph will include all the nodes/edges in the given graphs.
-    Nodes/Edges will be relabled by adding the cumsum of the previous graph sizes
+    Nodes/Edges will be relabeled by adding the cumsum of the previous graph sizes
     in the given sequence order. For example, giving input [g1, g2, g3], where
     they have 5, 6, 7 nodes respectively. Then node#2 of g2 will become node#7
     in the result graph. Edge ids are re-assigned similarly.
...
@@ -89,7 +89,7 @@ def prop_nodes_topo(graph,
                     message_func='default',
                     reduce_func='default',
                     apply_node_func='default'):
-    """Message propagation using node frontiers generated by topolocial order.
+    """Message propagation using node frontiers generated by topological order.
     Parameters
     ----------
...
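"Node frontiers generated by topological order" means each frontier holds the nodes whose predecessors have all been processed, as in Kahn's algorithm; a small sketch (not DGL's traversal code):

```python
from collections import defaultdict

def topo_frontiers(edges, num_nodes):
    indeg, out = [0] * num_nodes, defaultdict(list)
    for u, v in edges:
        indeg[v] += 1
        out[u].append(v)
    frontier = [u for u in range(num_nodes) if indeg[u] == 0]
    while frontier:
        yield frontier          # one frontier per propagation step
        nxt = []
        for u in frontier:
            for v in out[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        frontier = nxt

# list(topo_frontiers([(0, 2), (1, 2)], 3)) -> [[0, 1], [2]]
```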
@@ -199,7 +199,7 @@ def build_adj_matrix_uv(graph, edges, reduce_nodes):
     in the graph. Therefore, when doing SPMV, the src node data
     should be all the node features.
-    Paramters
+    Parameters
     ---------
     graph : DGLGraph
         The graph
@@ -276,7 +276,7 @@ def build_inc_matrix_eid(m, eid, dst, reduce_nodes):
         [0, 0, 0, 1, 0, 0, 0],
         [0, 0, 0, 0, 0, 1, 1]], shape=(5, 7))
-    Paramters
+    Parameters
     ---------
     m : int
         The source dimension size of the incidence matrix.
...
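The example matrix in the `build_inc_matrix_eid` docstring (shape `(5, 7)`, with ones at `[dst[e], e]`) can be reproduced with scipy; this sketch is for illustration only, not DGL's internals:

```python
import numpy as np
import scipy.sparse as sp

def inc_matrix_eid(num_nodes, num_edges, eid, dst):
    # entry [dst[e], e] = 1: column e marks edge e, its row marks the dst node
    data = np.ones(len(eid))
    return sp.coo_matrix((data, (dst, eid)), shape=(num_nodes, num_edges))

# rows 3 and 4 match the two rows quoted in the docstring above
print(inc_matrix_eid(5, 7, eid=[3, 5, 6], dst=[3, 4, 4]).toarray())
```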
@@ -179,7 +179,7 @@ def dfs_labeled_edges_generator(
     There are three labels: FORWARD(0), REVERSE(1), NONTREE(2)
-    A FORWARD edge is one in which `u` has been visised but `v` has not. A
+    A FORWARD edge is one in which `u` has been visited but `v` has not. A
     REVERSE edge is one in which both `u` and `v` have been visited and the
     edge is in the DFS tree. A NONTREE edge is one in which both `u` and `v`
     have been visited but the edge is NOT in the DFS tree.
...
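The three labels in the `dfs_labeled_edges_generator` docstring map onto a standard iterative DFS; a pure-Python sketch (not DGL's generator, with `adj` an assumed dict-of-lists adjacency):

```python
def dfs_labeled_edges(adj, source):
    """Yield (u, v, label) with label in {FORWARD, REVERSE, NONTREE}."""
    visited = {source}
    stack = [(source, iter(adj[source]))]
    while stack:
        u, neighbors = stack[-1]
        advanced = False
        for v in neighbors:
            if v not in visited:
                visited.add(v)
                yield (u, v, "FORWARD")     # v seen for the first time
                stack.append((v, iter(adj[v])))
                advanced = True
                break
            yield (u, v, "NONTREE")         # both endpoints already visited
        if not advanced:
            stack.pop()
            if stack:
                yield (stack[-1][0], u, "REVERSE")  # retreat along a tree edge
```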
@@ -122,7 +122,7 @@ print(G.nodes[[10, 11]].data['feat'])
 # --------------------------------------------------
 # To perform node classification, we use the Graph Convolutional Network
 # (GCN) developed by `Kipf and Welling <https://arxiv.org/abs/1609.02907>`_. Here
-# we provide the simpliest definition of a GCN framework, but we recommend the
+# we provide the simplest definition of a GCN framework, but we recommend the
 # reader to read the original paper for more details.
 #
 # - At layer :math:`l`, each node :math:`v_i^l` carries a feature vector :math:`h_i^l`.
...
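The "simplest definition" the tutorial hunk refers to, aggregate neighbor features then project, can be condensed into one dense-matrix layer; this is a generic sketch in plain PyTorch, not the tutorial's DGL implementation:

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One GCN step: aggregate neighbor features, then project."""
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, adj, h):
        # adj: (N, N) dense adjacency; h: (N, in_feats) node features h_i^l
        return torch.relu(self.linear(adj @ h))
```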
@@ -172,7 +172,7 @@ print(g_multi.edata['w'])
 #
 # * Nodes and edges can be added but not removed; we will support removal in
 #   the future.
-# * Updating a feature of different schemes raise error on indivdual node (or
+# * Updating a feature of different schemes raise error on individual node (or
 #   node subset).
...
@@ -8,7 +8,7 @@ Graph Convolutional Network
 Yu Gai, Quan Gan, Zheng Zhang
 This is a gentle introduction of using DGL to implement Graph Convolutional
-Networks (Kipf & Welling et al., `Semi-Supervised Classificaton with Graph
+Networks (Kipf & Welling et al., `Semi-Supervised Classification with Graph
 Convolutional Networks <https://arxiv.org/pdf/1609.02907.pdf>`_). We build upon
 the :doc:`earlier tutorial <../../basics/3_pagerank>` on DGLGraph and demonstrate
 how DGL combines graph with deep neural network and learn structural representations.
...
@@ -23,7 +23,7 @@ Line Graph Neural Network
 # `Supervised Community Detection with Line Graph Neural Networks <https://arxiv.org/abs/1705.08415>`__.
 # One of the highlight of their model is
 # to augment the vanilla graph neural network(GNN) architecture to operate on
-# the line graph of edge adajcencies, defined with non-backtracking operator.
+# the line graph of edge adjacencies, defined with non-backtracking operator.
 #
 # In addition to its high performance, LGNN offers an opportunity to
 # illustrate how DGL can implement an advanced graph algorithm by flexibly
@@ -44,7 +44,7 @@ Line Graph Neural Network
 # What's the difference between community detection and node classification?
 # Comparing to node classification, community detection focuses on retrieving
 # cluster information in the graph, rather than assigning a specific label to
-# a node. For example, as long as a node is clusetered with its community
+# a node. For example, as long as a node is clustered with its community
 # members, it doesn't matter whether the node is assigned as "community A",
 # or "community B", while assigning all "great movies" to label "bad movies"
 # will be a disaster in a movie network classification task.
@@ -61,7 +61,7 @@ Line Graph Neural Network
 # we use `CORA <https://linqs.soe.ucsc.edu/data>`__
 # to illustrate a simple community detection task. To refresh our memory,
 # CORA is a scientific publication dataset, with 2708 papers belonging to 7
-# different mahcine learning sub-fields. Here, we formulate CORA as a
+# different machine learning sub-fields. Here, we formulate CORA as a
 # directed graph, with each node being a paper, and each edge being a
 # citation link (A->B means A cites B). Here is a visualization of the whole
 # CORA dataset.
@@ -155,8 +155,8 @@ visualize(label1, nx_G1)
 #
 # In this supervised setting, the model naturally predicts a "label" for
 # each community. However, community assignment should be equivariant to
-# label permutations. To acheive this, in each forward process, we take
-# the minimum among losses calcuated from all possible permutations of
+# label permutations. To achieve this, in each forward process, we take
+# the minimum among losses calculated from all possible permutations of
 # labels.
 #
 # Mathematically, this means
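The permutation-minimum described in this hunk is easy to spell out in code; a minimal sketch assuming a cross-entropy loss (the function name is illustrative, not the tutorial's code):

```python
from itertools import permutations
import torch
import torch.nn.functional as F

def permutation_invariant_loss(logits, labels, n_classes):
    # evaluate the loss under every relabeling of the communities
    losses = []
    for perm in permutations(range(n_classes)):
        relabeled = torch.tensor(perm)[labels]
        losses.append(F.cross_entropy(logits, relabeled))
    return torch.stack(losses).min()    # keep the best-matching assignment
```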
@@ -180,7 +180,7 @@ visualize(label1, nx_G1)
 # What's a line-graph ?
 # ~~~~~~~~~~~~~~~~~~~~~
 # In graph theory, line graph is a graph representation that encodes the
-# edge adjacency sturcutre in the original graph.
+# edge adjacency structure in the original graph.
 #
 # Specifically, a line-graph :math:`L(G)` turns an edge of the original graph `G`
 # into a node. This is illustrated with the graph below (taken from the
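The edge-to-node construction just described can be tried out with networkx (used here only as an illustration; the tutorial builds its line graph within DGL):

```python
import networkx as nx

G = nx.path_graph(4)       # edges: (0, 1), (1, 2), (2, 3)
L = nx.line_graph(G)
print(list(L.nodes()))     # each edge of G is a node of L(G)
print(list(L.edges()))     # nodes of L(G) connect when edges of G share an endpoint
```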
@@ -214,11 +214,11 @@ visualize(label1, nx_G1)
 # where an edge is formed if :math:`B_{node1, node2} = 1`.
 #
 #
-# One layer in LGNN -- algorithm sturcture
+# One layer in LGNN -- algorithm structure
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 #
 # LGNN chains up a series of line-graph neural network layers. The graph
-# reprentation :math:`x` and its line-graph companion :math:`y` evolve with
+# representation :math:`x` and its line-graph companion :math:`y` evolve with
 # the dataflow as follows,
 #
 # .. figure:: https://i.imgur.com/bZGGIGp.png
@@ -282,7 +282,7 @@ visualize(label1, nx_G1)
 #   denote as :math:`\text{radius}(x)`
 # - :math:`[\{Pm,Pd\}y^{(k)}]\theta^{(k)}_{3+J,l}`, fusing another
 #   graph's embedding information using incidence matrix
-#   :math:`\{Pm, Pd\}`, followed with a linear porjection,
+#   :math:`\{Pm, Pd\}`, followed with a linear projection,
 #   denote as :math:`\text{fuse}(y)`.
 #
 # - In addition, each of the terms are performed again with different
@@ -337,7 +337,7 @@ visualize(label1, nx_G1)
 # doing one step message passing. As a generalization, :math:`2^j` adjacency
 # operations can be formulated as performing :math:`2^j` step of message
 # passing. Therefore, the summation is equivalent to summing nodes'
-# representation of :math:`2^j, j=0, 1, 2..` step messsage passing, i.e.
+# representation of :math:`2^j, j=0, 1, 2..` step message passing, i.e.
 # gathering information in :math:`2^{j}` neighbourhood of each node.
 #
 # In ``__init__``, we define the projection variables used in each
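The :math:`2^j` equivalence noted in this hunk comes from repeated squaring of the adjacency matrix; a dense-tensor sketch for intuition (illustrative, not the tutorial's code):

```python
import torch

def power_features(adj, h, J):
    """Collect h aggregated over 2^0, 2^1, ..., 2^(J-1) message-passing steps."""
    feats, a = [h], adj
    for _ in range(J):
        feats.append(a @ h)   # 2^j steps of message passing at once
        a = a @ a             # squaring doubles the step count: 2^j -> 2^(j+1)
    return feats
```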
@@ -597,8 +597,8 @@ visualize(label1, nx_G1)
 # In the ``collate_fn`` for PyTorch Dataloader, we batch graphs using DGL's
 # batched_graph API. To refresh our memory, DGL batches graphs by merging them
 # into a large graph, with each smaller graph's adjacency matrix being a block
-# along the diagonal of the large graph's adjacency matrix. We concatentate
-# :math`\{Pm,Pd\}` as block diagonal matrix in corespondance to DGL batched
+# along the diagonal of the large graph's adjacency matrix. We concatenate
+# :math`\{Pm,Pd\}` as block diagonal matrix in correspondence to DGL batched
 # graph API.
 def collate_fn(batch):
@@ -614,9 +614,9 @@ def collate_fn(batch):
 #
 # What's the business with :math:`\{Pm, Pd\}`?
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# Rougly speaking, there is a relationship between how :math:`g` and
+# Roughly speaking, there is a relationship between how :math:`g` and
 # :math:`lg` (the line graph) working together with loopy brief propagation.
-# Here, we implement :math:`\{Pm, Pd\}` as scipy coo sparse matrix in the datset,
+# Here, we implement :math:`\{Pm, Pd\}` as scipy coo sparse matrix in the dataset,
 # and stack them as tensors when batching. Another batching solution is to
-# treat :math:`\{Pm, Pd\}` as the adjacency matrix of a bipartie graph, which maps
+# treat :math:`\{Pm, Pd\}` as the adjacency matrix of a bipartite graph, which maps
 # line graph's feature to graph's, and vice versa.
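Putting the two preceding hunks together, here is a hedged sketch of such a `collate_fn` (the per-sample tuple layout is an assumption made for illustration):

```python
import dgl
import scipy.sparse as sp

def collate_fn(batch):
    # each sample is assumed to be (graph, line_graph, pm, pd, label)
    graphs, line_graphs, pms, pds, labels = map(list, zip(*batch))
    g_batch = dgl.batch(graphs)     # adjacency becomes block-diagonal
    lg_batch = dgl.batch(line_graphs)
    pm_batch = sp.block_diag(pms)   # stack {Pm, Pd} in correspondence
    pd_batch = sp.block_diag(pds)
    return g_batch, lg_batch, pm_batch, pd_batch, labels
```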