Commit 7784f066 authored by John Andrilla, committed by Minjie Wang

[Doc] Small graphs readme.txt, edit pass (#1030)

* [Doc] Small graphs readme.txt, edit pass

Edit for grammar and style. Should this be an .rst or a .txt?

* Update README.txt
parent fff3dd95
.. _tutorials2-index:

Batching many small graphs
==========================

* **Tree-LSTM** `[paper] <https://arxiv.org/abs/1503.00075>`__ `[tutorial]
  <2_small_graph/3_tree-lstm.html>`__ `[PyTorch code]
  <https://github.com/dmlc/dgl/blob/master/examples/pytorch/tree_lstm>`__:
  Sentences have inherent structures that are thrown away by treating them
  simply as sequences. Tree-LSTM is a powerful model that learns the
  representation by using prior syntactic structures such as a parse-tree.
  The challenge in training is that simply padding a sentence to the maximum
  length no longer works. Trees of different sentences have different sizes
  and topologies. DGL solves this problem by adding the trees to a bigger
  container graph, and then using message-passing to explore maximum
  parallelism. Batching is a key API for this, as the sketch below shows.
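The snippet below is a minimal sketch of this batching idea (assuming a recent
DGL release with the ``dgl.batch`` API and a PyTorch backend; graph-construction
details vary across DGL versions)::

   import dgl
   import torch

   # Two toy "trees" of different sizes and topologies,
   # expressed here as plain directed graphs (child -> parent edges).
   g1 = dgl.graph(([0, 1], [2, 2]), num_nodes=3)
   g2 = dgl.graph(([0], [1]), num_nodes=2)

   # dgl.batch merges the small graphs into one container graph, so a single
   # round of message passing runs over every tree in parallel.
   bg = dgl.batch([g1, g2])
   bg.ndata['h'] = torch.zeros(bg.num_nodes(), 4)

   # The batched graph can be split back into its components afterwards.
   g1_out, g2_out = dgl.unbatch(bg)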