"tests/git@developer.sourcefind.cn:OpenDAS/dgl.git" did not exist on "41a38486a5ed9298093d9f0bc415751269c7d577"
Commit f45178c3 authored by John Andrilla, committed by Minjie Wang

[Doc] Generative models, edit for readability (#1033)

Edit pass for grammar and style
parent 65a904c5
@@ -4,18 +4,18 @@ Generative models
 ==================
 * **DGMG** `[paper] <https://arxiv.org/abs/1803.03324>`__ `[tutorial]
-  <3_generative_model/5_dgmg.html>`__ `[code]
+  <3_generative_model/5_dgmg.html>`__ `[PyTorch code]
   <https://github.com/dmlc/dgl/tree/master/examples/pytorch/dgmg>`__:
-  this model belongs to the important family that deals with structural
-  generation. DGMG is interesting because its state-machine approach is the
-  most general. It is also very challenging because, unlike Tree-LSTM, every
+  This model belongs to the family that deals with structural
+  generation. Deep generative models of graphs (DGMG) uses a state-machine approach.
+  It is also very challenging because, unlike Tree-LSTM, every
   sample has a dynamic, probability-driven structure that is not available
-  before training. We are able to progressively leverage intra- and
+  before training. You can progressively leverage intra- and
   inter-graph parallelism to steadily improve the performance.
-* **JTNN** `[paper] <https://arxiv.org/abs/1802.04364>`__ `[code]
+* **JTNN** `[paper] <https://arxiv.org/abs/1802.04364>`__ `[PyTorch code]
   <https://github.com/dmlc/dgl/tree/master/examples/pytorch/jtnn>`__:
-  unlike DGMG, this paper generates molecular graphs using the framework of
-  variational auto-encoder. Perhaps more interesting is its approach to build a
+  This network generates molecular graphs using the framework of
+  variational auto-encoder. The junction tree neural network (JTNN) builds
-  structure hierarchically, in the case of molecular, with junction tree as
+  structure hierarchically. In the case of molecular graphs, it uses a junction tree as
   the middle scaffolding.
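The DGMG entry above describes a state-machine approach: generation proceeds through a sequence of add-node and add-edge decisions, so each sample's structure is probability-driven rather than fixed in advance. As an illustration only (not the actual DGMG implementation, which parameterizes every decision with graph neural networks), a toy sketch of such a decision loop could look like this, with plain random draws standing in for the learned probabilities:

```python
import random

def generate_graph(max_nodes=6, seed=0):
    """Toy DGMG-style state machine: alternate 'add node?' and
    'add edge?' decisions until the model chooses to stop.
    The decision probabilities here are arbitrary placeholders."""
    rng = random.Random(seed)
    nodes, edges = [], []
    while len(nodes) < max_nodes:
        # Decision 1: stop generation, or add a new node?
        if nodes and rng.random() < 0.2:
            break
        v = len(nodes)
        nodes.append(v)
        # Decision 2: repeatedly decide whether to connect v
        # to one of the earlier nodes.
        while nodes[:-1] and rng.random() < 0.5:
            u = rng.choice(nodes[:-1])
            edges.append((u, v))
    return nodes, edges

nodes, edges = generate_graph()
print(len(nodes), len(edges))
```

Because every sample takes a different path through this loop, graphs in a batch have different shapes; that is the dynamic structure the DGMG example's intra- and inter-graph parallelism has to handle.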