OpenDAS / dgl · Commits

Commit 1db75887 (unverified)
Authored Jan 13, 2021 by Da Zheng; committed by GitHub Jan 14, 2021

fix. (#2534)

Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>

Parent: bedccc23
Changes: 1 changed file, with 4 additions and 10 deletions

docs/source/api/python/dgl.function.rst (+4 -10)
@@ -5,17 +5,11 @@
 dgl.function
 ==================================
 
-In DGL, message passing is expressed by two APIs:
-
-- ``send(edges, message_func)`` for computing the messages along the given edges.
-- ``recv(nodes, reduce_func)`` for collecting the incoming messages, perform aggregation and so on.
-
-Although the two-stage abstraction can cover all the models that are defined in the message
-passing paradigm, it is inefficient because it requires storing explicit messages. See the DGL
-`blog post <https://www.dgl.ai/blog/2019/05/04/kernel.html>`_ for more
-details and performance results.
-Our solution, also explained in the blog post, is to fuse the two stages into one kernel so no
+In DGL, message passing is mainly expressed by ``update_all(message_func, reduce_func)``.
+This API computes messages on all edges and sends to the destination nodes; the nodes
+that receive messages perform aggregation and update their own node data.
+Internally, DGL fuses the message generation and aggregation into one kernel so no
 explicit messages are generated and stored. To achieve this, we recommend using our **built-in
 message and reduce functions** so that DGL can analyze and map them to fused dedicated kernels. Here
 are some examples (in PyTorch syntax).
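To make the semantics of the new intro concrete: DGL's ``update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_new'))`` copies each edge's source-node feature as the message and sums incoming messages at each destination. The sketch below reproduces that computation in plain Python (no DGL dependency) purely for illustration; the function name and the scalar-feature representation are my own, and real DGL fuses the two steps into one kernel so the per-edge messages are never materialized.

```python
def update_all_copy_u_sum(num_nodes, edges, h):
    """Illustrative (non-DGL) model of update_all(copy_u, sum).

    edges: list of (src, dst) pairs; h: per-node scalar features.
    Returns the aggregated feature h_new for every node.
    """
    h_new = [0.0] * num_nodes
    for src, dst in edges:
        # message function copy_u: the message is the source feature
        # reduce function sum: destinations sum incoming messages
        h_new[dst] += h[src]
    return h_new

# Tiny graph with edges 0 -> 2, 1 -> 2, 2 -> 0
edges = [(0, 2), (1, 2), (2, 0)]
h = [1.0, 2.0, 3.0]
print(update_all_copy_u_sum(3, edges, h))  # -> [3.0, 0.0, 3.0]
```

Node 2 receives ``h[0] + h[1] = 3.0``, node 0 receives ``h[2] = 3.0``, and node 1, having no in-edges, keeps the reducer's identity value ``0.0``.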