Commit 707334ce authored by brett koonce's avatar brett koonce Committed by Quan (Andy) Gan

minor spelling tweaks (#349)

* minor spelling tweaks

* Update CONTRIBUTORS.md
parent 192bd952
@@ -2,7 +2,7 @@
Contribution is always welcomed. A good starting place is the roadmap issue, where
you can find our current milestones. All contributions must go through pull requests
-and be reviewed by the committors. See our [contribution guide](https://docs.dgl.ai/contribute.html) for more details.
+and be reviewed by the committers. See our [contribution guide](https://docs.dgl.ai/contribute.html) for more details.
Once your contribution is accepted and merged, congratulations, you are now a contributor to the DGL project.
We will put your name in the list below and also on our [website](https://www.dgl.ai/ack).
@@ -13,3 +13,4 @@ Contributors
[Yifei Ma](https://github.com/yifeim)
Hao Jin
[Sheng Zha](https://github.com/szha)
+[Brett Koonce](https://github.com/brettkoonce)
@@ -242,7 +242,7 @@ if __name__ == '__main__':
parser.add_argument("--negative-sample", type=int, default=10,
help="number of negative samples per positive sample")
parser.add_argument("--evaluate-every", type=int, default=500,
-help="perform evalution every n epochs")
+help="perform evaluation every n epochs")
args = parser.parse_args()
print(args)
......
@@ -27,7 +27,7 @@ Available datasets: `copy`, `sort`, `wmt14`, `multi30k`(default).
## Test Results
-### Transfomer
+### Transformer
- Multi30k: we achieve BLEU score 35.41 with default setting on Multi30k dataset, without using pre-trained embeddings. (if we set the number of layers to 2, the BLEU score could reach 36.45).
- WMT14: work in progress
@@ -38,7 +38,7 @@ Available datasets: `copy`, `sort`, `wmt14`, `multi30k`(default).
## Notes
-- Currently we do not support Multi-GPU training(this will be fixed soon), you should only specifiy only one gpu\_id when running the training script.
+- Currently we do not support Multi-GPU training(this will be fixed soon), you should only specify only one gpu\_id when running the training script.
## Reference
......
@@ -9,7 +9,7 @@ Graph = namedtuple('Graph',
['g', 'src', 'tgt', 'tgt_y', 'nids', 'eids', 'nid_arr', 'n_nodes', 'n_edges', 'n_tokens'])
class GraphPool:
-"Create a graph pool in advance to accelerate graph buildling phase in Transformer."
+"Create a graph pool in advance to accelerate graph building phase in Transformer."
def __init__(self, n=50, m=50):
'''
args:
......
@@ -115,7 +115,7 @@ def empty(shape, dtype="float32", ctx=context(1, 0)):
def from_dlpack(dltensor):
"""Produce an array from a DLPack tensor without memory copy.
-Retreives the underlying DLPack tensor's pointer to create an array from the
+Retrieves the underlying DLPack tensor's pointer to create an array from the
data. Removes the original DLPack tensor's destructor as now the array is
responsible for destruction.
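The zero-copy behaviour this docstring describes can be illustrated independently with NumPy's own DLPack interop (a sketch of the same semantics, not DGL's `from_dlpack`):

```python
import numpy as np

# Independent illustration of zero-copy DLPack semantics (NumPy >= 1.22),
# not DGL's implementation: the consumer wraps the producer's buffer.
src = np.arange(6, dtype="float32")
dst = np.from_dlpack(src)  # no memory copy; dst views src's data
src[0] = 42.0
# The write is visible through dst because both arrays share one buffer.
```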
@@ -195,7 +195,7 @@ class NDArrayBase(_NDArrayBase):
raise TypeError('type %s not supported' % str(type(value)))
def copyfrom(self, source_array):
-"""Peform an synchronize copy from the array.
+"""Perform a synchronized copy from the array.
Parameters
----------
......
@@ -73,7 +73,7 @@ class DGLType(ctypes.Structure):
bits = 64
head = ""
else:
-raise ValueError("Donot know how to handle type %s" % type_str)
+raise ValueError("Do not know how to handle type %s" % type_str)
bits = int(head) if head else bits
inst.bits = bits
......
@@ -422,7 +422,7 @@ def scatter_row(data, row_index, value):
pass
def scatter_row_inplace(data, row_index, value):
-"""Write the value into the data tensor using the row index inplacely.
+"""Write the value into the data tensor using the row index inplace.
This is an inplace write so it will break the autograd.
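The inplace row write can be sketched with plain NumPy fancy indexing (illustrative only; DGL dispatches to its backend tensor library):

```python
import numpy as np

# Illustrative sketch of an inplace row scatter, not DGL's backend code:
# the rows named by row_index are overwritten in the original buffer.
data = np.zeros((4, 2), dtype="float32")
row_index = np.array([1, 3])
value = np.ones((2, 2), dtype="float32")
data[row_index] = value  # writes rows 1 and 3 of `data` in place
```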
......
@@ -318,7 +318,7 @@ class ImmutableGraphIndex(object):
Parameters
----------
transpose : bool
-A flag to tranpose the returned adjacency matrix.
+A flag to transpose the returned adjacency matrix.
ctx : context
The device context of the returned matrix.
@@ -352,7 +352,7 @@ class ImmutableGraphIndex(object):
def from_edge_list(self, elist):
"""Convert from an edge list.
-Paramters
+Parameters
---------
elist : list
List of (u, v) edge tuple.
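What such a conversion does can be sketched in a few lines (a hypothetical dense version, not the `ImmutableGraphIndex` internals):

```python
import numpy as np

# Hypothetical sketch of turning a list of (u, v) edge tuples into a dense
# adjacency matrix; DGL's from_edge_list builds its own index structure.
elist = [(0, 1), (1, 2), (2, 0)]
n_nodes = max(max(u, v) for u, v in elist) + 1
adj = np.zeros((n_nodes, n_nodes), dtype=int)
for u, v in elist:
    adj[u, v] = 1  # directed edge u -> v
```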
......
@@ -282,7 +282,7 @@ class RDFReader(object):
def relationList(self):
"""
-Returns a list of relations, ordered descending by frequenecy
+Returns a list of relations, ordered descending by frequency
:return:
"""
res = list(set(self.__graph.predicates()))
@@ -327,7 +327,7 @@ def _load_data(dataset_str='aifb', dataset_path=None):
train_file = os.path.join(dataset_path, 'trainingSet.tsv')
test_file = os.path.join(dataset_path, 'testSet.tsv')
if dataset_str == 'am':
-label_header = 'label_cateogory'
+label_header = 'label_category'
nodes_header = 'proxy'
elif dataset_str == 'aifb':
label_header = 'label_affiliation'
......
@@ -208,7 +208,7 @@ def NeighborSampler(g, batch_size, expand_factor, num_hops=1,
"DGLBACKEND" environment variable to "mxnet".
This creates a subgraph data loader that samples subgraphs from the input graph
-with neighbor sampling. This simpling method is implemented in C and can perform
+with neighbor sampling. This sampling method is implemented in C and can perform
sampling very efficiently.
A subgraph grows from a seed vertex. It contains sampled neighbors
......
@@ -42,7 +42,7 @@ class Scheme(namedtuple('Scheme', ['shape', 'dtype'])):
def infer_scheme(tensor):
"""Infer column scheme from the given tensor data.
-Paramters
+Parameters
---------
tensor : Tensor
The tensor data.
@@ -723,7 +723,7 @@ class FrameRef(MutableMapping):
data : dict-like
The row data.
inplace : bool
-True if the update is performed inplacely.
+True if the update is performed inplace.
"""
rows = self._getrows(query)
for key, col in data.items():
@@ -743,7 +743,7 @@ class FrameRef(MutableMapping):
Please note that "deleted" rows are not really deleted, but simply removed
in the reference. As a result, if two FrameRefs point to the same Frame, deleting
-from one ref will not relect on the other. However, deleting columns is real.
+from one ref will not reflect on the other. However, deleting columns is real.
Parameters
----------
......
@@ -522,7 +522,7 @@ class GraphIndex(object):
Parameters
----------
transpose : bool
-A flag to tranpose the returned adjacency matrix.
+A flag to transpose the returned adjacency matrix.
ctx : context
The context of the returned matrix.
@@ -712,7 +712,7 @@ class GraphIndex(object):
def from_edge_list(self, elist):
"""Convert from an edge list.
-Paramters
+Parameters
---------
elist : list
List of (u, v) edge tuple.
@@ -830,7 +830,7 @@ def disjoint_union(graphs):
"""Return a disjoint union of the input graphs.
The new graph will include all the nodes/edges in the given graphs.
-Nodes/Edges will be relabled by adding the cumsum of the previous graph sizes
+Nodes/Edges will be relabeled by adding the cumsum of the previous graph sizes
in the given sequence order. For example, giving input [g1, g2, g3], where
they have 5, 6, 7 nodes respectively. Then node#2 of g2 will become node#7
in the result graph. Edge ids are re-assigned similarly.
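The relabeling rule in this docstring can be sketched in a few lines (illustrative helper names, not DGL's implementation):

```python
# Illustrative sketch of the disjoint-union relabeling rule, not DGL's code:
# node k of the i-th graph is shifted by the cumulative size of all earlier
# graphs in the sequence.
sizes = [5, 6, 7]  # node counts of g1, g2, g3, as in the docstring example
offsets = [0]
for s in sizes[:-1]:
    offsets.append(offsets[-1] + s)  # cumulative sums: [0, 5, 11]

def relabel(graph_idx, node_id):
    """Map a node id in graph `graph_idx` to its id in the union."""
    return offsets[graph_idx] + node_id

# Node #2 of g2 (graph_idx=1) becomes node #7, matching the docstring.
```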
......
@@ -507,7 +507,7 @@ class ImmutableGraphIndex(object):
Parameters
----------
transpose : bool
-A flag to tranpose the returned adjacency matrix.
+A flag to transpose the returned adjacency matrix.
Returns
-------
@@ -707,7 +707,7 @@ def disjoint_union(graphs):
"""Return a disjoint union of the input graphs.
The new graph will include all the nodes/edges in the given graphs.
-Nodes/Edges will be relabled by adding the cumsum of the previous graph sizes
+Nodes/Edges will be relabeled by adding the cumsum of the previous graph sizes
in the given sequence order. For example, giving input [g1, g2, g3], where
they have 5, 6, 7 nodes respectively. Then node#2 of g2 will become node#7
in the result graph. Edge ids are re-assigned similarly.
......
@@ -89,7 +89,7 @@ def prop_nodes_topo(graph,
message_func='default',
reduce_func='default',
apply_node_func='default'):
-"""Message propagation using node frontiers generated by topolocial order.
+"""Message propagation using node frontiers generated by topological order.
Parameters
----------
......
@@ -199,7 +199,7 @@ def build_adj_matrix_uv(graph, edges, reduce_nodes):
in the graph. Therefore, when doing SPMV, the src node data
should be all the node features.
-Paramters
+Parameters
---------
graph : DGLGraph
The graph
@@ -276,7 +276,7 @@ def build_inc_matrix_eid(m, eid, dst, reduce_nodes):
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1]], shape=(5, 7))
-Paramters
+Parameters
---------
m : int
The source dimension size of the incidence matrix.
......
@@ -179,7 +179,7 @@ def dfs_labeled_edges_generator(
There are three labels: FORWARD(0), REVERSE(1), NONTREE(2)
-A FORWARD edge is one in which `u` has been visised but `v` has not. A
+A FORWARD edge is one in which `u` has been visited but `v` has not. A
REVERSE edge is one in which both `u` and `v` have been visited and the
edge is in the DFS tree. A NONTREE edge is one in which both `u` and `v`
have been visited but the edge is NOT in the DFS tree.
......
@@ -122,7 +122,7 @@ print(G.nodes[[10, 11]].data['feat'])
# --------------------------------------------------
# To perform node classification, we use the Graph Convolutional Network
# (GCN) developed by `Kipf and Welling <https://arxiv.org/abs/1609.02907>`_. Here
-# we provide the simpliest definition of a GCN framework, but we recommend the
+# we provide the simplest definition of a GCN framework, but we recommend the
# reader to read the original paper for more details.
#
# - At layer :math:`l`, each node :math:`v_i^l` carries a feature vector :math:`h_i^l`.
......
@@ -172,7 +172,7 @@ print(g_multi.edata['w'])
#
# * Nodes and edges can be added but not removed; we will support removal in
#   the future.
-# * Updating a feature of different schemes raise error on indivdual node (or
+# * Updating a feature of different schemes raise error on individual node (or
# node subset).
......
@@ -8,7 +8,7 @@ Graph Convolutional Network
Yu Gai, Quan Gan, Zheng Zhang
This is a gentle introduction of using DGL to implement Graph Convolutional
-Networks (Kipf & Welling et al., `Semi-Supervised Classificaton with Graph
+Networks (Kipf & Welling et al., `Semi-Supervised Classification with Graph
Convolutional Networks <https://arxiv.org/pdf/1609.02907.pdf>`_). We build upon
the :doc:`earlier tutorial <../../basics/3_pagerank>` on DGLGraph and demonstrate
how DGL combines graph with deep neural network and learn structural representations.
......
@@ -23,7 +23,7 @@ Line Graph Neural Network
# `Supervised Community Detection with Line Graph Neural Networks <https://arxiv.org/abs/1705.08415>`__.
# One of the highlight of their model is
# to augment the vanilla graph neural network(GNN) architecture to operate on
-# the line graph of edge adajcencies, defined with non-backtracking operator.
+# the line graph of edge adjacencies, defined with non-backtracking operator.
#
# In addition to its high performance, LGNN offers an opportunity to
# illustrate how DGL can implement an advanced graph algorithm by flexibly
@@ -44,7 +44,7 @@ Line Graph Neural Network
# What's the difference between community detection and node classification?
# Comparing to node classification, community detection focuses on retrieving
# cluster information in the graph, rather than assigning a specific label to
-# a node. For example, as long as a node is clusetered with its community
+# a node. For example, as long as a node is clustered with its community
# members, it doesn't matter whether the node is assigned as "community A",
# or "community B", while assigning all "great movies" to label "bad movies"
# will be a disaster in a movie network classification task.
@@ -61,7 +61,7 @@ Line Graph Neural Network
# we use `CORA <https://linqs.soe.ucsc.edu/data>`__
# to illustrate a simple community detection task. To refresh our memory,
# CORA is a scientific publication dataset, with 2708 papers belonging to 7
-# different mahcine learning sub-fields. Here, we formulate CORA as a
+# different machine learning sub-fields. Here, we formulate CORA as a
# directed graph, with each node being a paper, and each edge being a
# citation link (A->B means A cites B). Here is a visualization of the whole
# CORA dataset.
@@ -155,8 +155,8 @@ visualize(label1, nx_G1)
#
# In this supervised setting, the model naturally predicts a "label" for
# each community. However, community assignment should be equivariant to
-# label permutations. To acheive this, in each forward process, we take
-# the minimum among losses calcuated from all possible permutations of
+# label permutations. To achieve this, in each forward process, we take
+# the minimum among losses calculated from all possible permutations of
# labels.
#
# Mathematically, this means
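The formula itself is cut off by this hunk, but the idea can be sketched independently of the tutorial's code (assuming a negative log-likelihood loss; names are illustrative):

```python
import itertools
import numpy as np

# Hedged sketch of a permutation-invariant loss (not the tutorial's code):
# evaluate the NLL under every relabeling of the communities, keep the min.
def perm_invariant_nll(log_probs, labels, n_classes):
    """log_probs: (N, C) array of log-probabilities; labels: N class ids."""
    best = float("inf")
    for perm in itertools.permutations(range(n_classes)):
        relabeled = [perm[y] for y in labels]
        nll = -np.mean(log_probs[np.arange(len(labels)), relabeled])
        best = min(best, nll)
    return best
```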
@@ -180,7 +180,7 @@ visualize(label1, nx_G1)
# What's a line-graph ?
# ~~~~~~~~~~~~~~~~~~~~~
# In graph theory, line graph is a graph representation that encodes the
-# edge adjacency sturcutre in the original graph.
+# edge adjacency structure in the original graph.
#
# Specifically, a line-graph :math:`L(G)` turns an edge of the original graph `G`
# into a node. This is illustrated with the graph below (taken from the
@@ -214,11 +214,11 @@ visualize(label1, nx_G1)
# where an edge is formed if :math:`B_{node1, node2} = 1`.
#
#
-# One layer in LGNN -- algorithm sturcture
+# One layer in LGNN -- algorithm structure
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# LGNN chains up a series of line-graph neural network layers. The graph
-# reprentation :math:`x` and its line-graph companion :math:`y` evolve with
+# representation :math:`x` and its line-graph companion :math:`y` evolve with
# the dataflow as follows,
#
# .. figure:: https://i.imgur.com/bZGGIGp.png
@@ -282,7 +282,7 @@ visualize(label1, nx_G1)
#   denote as :math:`\text{radius}(x)`
# - :math:`[\{Pm,Pd\}y^{(k)}]\theta^{(k)}_{3+J,l}`, fusing another
#   graph's embedding information using incidence matrix
-#   :math:`\{Pm, Pd\}`, followed with a linear porjection,
+#   :math:`\{Pm, Pd\}`, followed with a linear projection,
# denote as :math:`\text{fuse}(y)`.
#
# - In addition, each of the terms are performed again with different
@@ -337,7 +337,7 @@ visualize(label1, nx_G1)
# doing one step message passing. As a generalization, :math:`2^j` adjacency
# operations can be formulated as performing :math:`2^j` step of message
# passing. Therefore, the summation is equivalent to summing nodes'
-# representation of :math:`2^j, j=0, 1, 2..` step messsage passing, i.e.
+# representation of :math:`2^j, j=0, 1, 2..` step message passing, i.e.
# gathering information in :math:`2^{j}` neighbourhood of each node.
#
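An independent toy illustration of this equivalence (not the LGNN code): applying the adjacency operator `2^j` times performs `2^j` steps of message passing in one shot.

```python
import numpy as np

# Toy illustration, not LGNN code: on the path graph 0-1-2, multiplying by
# the adjacency matrix moves information one hop, and A^(2^j) moves it
# 2^j hops in a single operator application.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])           # adjacency of the path 0-1-2
x = np.array([1., 0., 0.])             # a signal living on node 0
one_hop = A @ x                        # one message-passing step
two_hops = np.linalg.matrix_power(A, 2) @ x  # 2^1 steps at once
```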
# In ``__init__``, we define the projection variables used in each
@@ -597,8 +597,8 @@ visualize(label1, nx_G1)
# In the ``collate_fn`` for PyTorch Dataloader, we batch graphs using DGL's
# batched_graph API. To refresh our memory, DGL batches graphs by merging them
# into a large graph, with each smaller graph's adjacency matrix being a block
-# along the diagonal of the large graph's adjacency matrix. We concatentate
-# :math`\{Pm,Pd\}` as block diagonal matrix in corespondance to DGL batched
+# along the diagonal of the large graph's adjacency matrix. We concatenate
+# :math`\{Pm,Pd\}` as block diagonal matrix in correspondence to DGL batched
# graph API.
def collate_fn(batch):
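The block-diagonal stacking described above can be sketched with SciPy (illustrative shapes, not the tutorial's real `Pm`/`Pd` sizes, and independent of DGL's batching API):

```python
import numpy as np
from scipy.sparse import block_diag, coo_matrix

# Illustrative sketch of batching incidence matrices block-diagonally,
# mirroring how a batched graph's adjacency matrix is block diagonal.
# The shapes here are made up for the example.
pm_list = [coo_matrix(np.ones((2, 3))), coo_matrix(np.ones((4, 5)))]
pm_batched = block_diag(pm_list)  # each sample occupies its own block
```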
@@ -614,9 +614,9 @@ def collate_fn(batch):
#
# What's the business with :math:`\{Pm, Pd\}`?
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# Rougly speaking, there is a relationship between how :math:`g` and
+# Roughly speaking, there is a relationship between how :math:`g` and
# :math:`lg` (the line graph) working together with loopy brief propagation.
-# Here, we implement :math:`\{Pm, Pd\}` as scipy coo sparse matrix in the datset,
+# Here, we implement :math:`\{Pm, Pd\}` as scipy coo sparse matrix in the dataset,
# and stack them as tensors when batching. Another batching solution is to
-# treat :math:`\{Pm, Pd\}` as the adjacency matrix of a bipartie graph, which maps
+# treat :math:`\{Pm, Pd\}` as the adjacency matrix of a bipartite graph, which maps
# line graph's feature to graph's, and vice versa.