OpenDAS / dgl · Commit 9c095a72

Authored Oct 18, 2019 by Aaron Markham; committed by Mufei Li, Oct 19, 2019

[Doc] minor spelling updates (#935)

* minor spelling updates
* Update docs/source/features/builtin.rst

parent 02fb0581
Showing 1 changed file with 7 additions and 7 deletions

docs/source/features/builtin.rst
@@ -6,7 +6,7 @@ Builtin message passing functions
 In DGL, message passing is expressed by two APIs:

 - ``send(edges, message_func)`` for computing the messages along the given edges.
-- ``recv(nodes, reduce_func)`` for collecting the in-coming messages, perform aggregation and so on.
+- ``recv(nodes, reduce_func)`` for collecting the incoming messages, perform aggregation and so on.

 Although the two-stage abstraction can cover all the models that are defined in the message
 passing paradigm, it is inefficient due to storing explicit messages. See our
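The two-stage ``send``/``recv`` abstraction in this hunk can be sketched in plain Python. This is an illustrative sketch of the semantics only; the function and variable names are hypothetical and this is not DGL's actual implementation, which avoids materializing messages.

```python
# Illustrative sketch of the two-stage send/recv abstraction in plain
# Python; names are hypothetical, not DGL's API.
edges = [(0, 1), (0, 2), (1, 2), (2, 0)]   # (src, dst) pairs
h = {0: 1.0, 1: 2.0, 2: 3.0}               # scalar node features

def send(edges, h):
    # message_func: copy the source node feature onto each edge
    return [(dst, h[src]) for src, dst in edges]

def recv(messages, num_nodes):
    # reduce_func: sum the incoming messages at each destination node
    out = {v: 0.0 for v in range(num_nodes)}
    for dst, msg in messages:
        out[dst] += msg
    return out

messages = send(edges, h)   # explicit per-edge messages are stored here
h_new = recv(messages, 3)   # aggregated result per node
```

Note that ``messages`` holds one entry per edge; this explicit storage is exactly the inefficiency the commit's surrounding text describes.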
@@ -16,7 +16,7 @@ details and performance results.
 Our solution, also explained in the blogpost, is to fuse the two stages into one kernel so no
 explicit messages are generated and stored. To achieve this, we recommend using our builtin
 message/reduce functions so that DGL can analyze and map them to fused dedicated kernels. Here
-are some examples (in pytorch syntax):
+are some examples (in PyTorch syntax):

 .. code:: python
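The builtin ``fn.u_mul_e`` discussed later in this file multiplies a source-node feature by an edge feature, and a sum reducer then aggregates the products at destination nodes. Those semantics can be reproduced with explicit NumPy operations. This is a sketch under assumed shapes, not the fused kernel DGL actually runs, and the graph here is invented for illustration:

```python
import numpy as np

# Semantics of u_mul_e('h', 'w', 'm') followed by sum('m', 'h_new'),
# written out with explicit NumPy ops (DGL fuses these into one kernel).
src = np.array([0, 1, 1])            # edge source node ids
dst = np.array([1, 0, 2])            # edge destination node ids
h = np.ones((3, 4))                  # node feature 'h': shape (N, 4)
w = np.array([[2.0], [3.0], [0.5]])  # edge feature 'w': shape (E, 1)

m = h[src] * w                       # broadcast (E, 4) * (E, 1) -> (E, 4)
out = np.zeros_like(h)
np.add.at(out, dst, m)               # sum-reduce messages at destinations
```

``np.add.at`` is used instead of plain indexing so that repeated destination ids accumulate correctly, which mirrors the sum reducer.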
@@ -62,14 +62,14 @@ following user-defined function:
...
@@ -62,14 +62,14 @@ following user-defined function:
Broadcasting is supported for binary message function, which means the tensor arguments
Broadcasting is supported for binary message function, which means the tensor arguments
can be automatically expanded to be of equal sizes. The supported broadcasting semantic
can be automatically expanded to be of equal sizes. The supported broadcasting semantic
is standard as in `
n
um
p
y's <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`_
is standard as in `
N
um
P
y's <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`_
and `
pyt
orch's <https://pytorch.org/docs/stable/notes/broadcasting.html>`_. For unfamiliar
and `
PyT
orch's <https://pytorch.org/docs/stable/notes/broadcasting.html>`_. For unfamiliar
users, we highly suggest reading those documents as broadcasting is very useful. In the
users, we highly suggest reading those documents as broadcasting is very useful. In the
above example, ``fn.u_mul_e`` will perform broadcasted multiplication automatically because
above example, ``fn.u_mul_e`` will perform broadcasted multiplication automatically because
the node feature ``'h'`` and the edge feature ``'w'`` are of different, but
broadcastable shapes
.
the node feature ``'h'`` and the edge feature ``'w'`` are of different, but
shapes that can be broadcast
.
All DGL's builtin functions support both CPU and GPU and backward computation so they
All DGL's built
-
in functions support both CPU and GPU and backward computation so they
can be used in any autograd system. Also, builtin functions can be used not only in ``update_all``
can be used in any
`
autograd
`
system. Also, builtin functions can be used not only in ``update_all``
or ``apply_edges`` as shown in the example, but wherever message/reduce functions are
or ``apply_edges`` as shown in the example, but wherever message/reduce functions are
required (e.g. ``pull``, ``push``, ``send_and_recv``, etc.).
required (e.g. ``pull``, ``push``, ``send_and_recv``, etc.).
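For readers unfamiliar with the broadcasting rules this hunk links to, a minimal illustration in plain NumPy (independent of DGL; the arrays are made up for the example):

```python
import numpy as np

# Standard NumPy broadcasting: shapes are compared from the trailing
# dimension leftward, and a size-1 dimension is stretched to match.
a = np.arange(6).reshape(3, 2)          # shape (3, 2): [[0, 1], [2, 3], [4, 5]]
b = np.array([[10.0], [20.0], [30.0]])  # shape (3, 1)
c = a * b                               # (3, 2) * (3, 1) -> (3, 2)
```

The same right-to-left rule is what lets a per-edge scalar such as a ``(E, 1)`` weight multiply a ``(E, D)`` feature matrix without an explicit ``repeat``.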