.. _tutorials4-index:


Old (new) wines in new bottles
==============================

* **Capsule** `[paper] <https://arxiv.org/abs/1710.09829>`__ `[tutorial]
  <4_old_wines/2_capsule.html>`__ `[code]
  <https://github.com/dmlc/dgl/tree/master/examples/pytorch/capsule>`__:
  this new computer vision model has two key ideas: representing each
  feature as a vector (a *capsule*) instead of a scalar, and replacing
  max-pooling with dynamic routing. The idea of dynamic routing is to
  assign a lower-level capsule to one (or several) higher-level capsules
  through non-parametric message passing. We show how the latter can be
  implemented neatly with DGL APIs, as the short sketch below shows.
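
  Below is a minimal, self-contained sketch (ours, not the tutorial's code)
  of one routing iteration written with ``update_all``; the toy sizes and
  the names ``u_hat``, ``b`` and ``c`` are illustrative, and it assumes
  DGL >= 0.5 for ``dgl.graph``.

  .. code:: python

      import torch
      import torch.nn.functional as F
      import dgl
      import dgl.function as fn

      def squash(s):
          # Capsule non-linearity: keep direction, squash length into (0, 1).
          sq = (s ** 2).sum(-1, keepdim=True)
          return sq / (1 + sq) * s / torch.sqrt(sq + 1e-9)

      n_low, n_high, d = 8, 4, 16
      # Complete bipartite graph: lower capsules 0..7, higher capsules 8..11.
      src = torch.arange(n_low).repeat_interleave(n_high)
      dst = n_low + torch.arange(n_high).repeat(n_low)
      g = dgl.graph((src, dst), num_nodes=n_low + n_high)

      g.edata['u_hat'] = torch.randn(g.num_edges(), d)  # predictions u_hat_ij
      b = torch.zeros(g.num_edges())                    # routing logits b_ij

      for _ in range(3):
          # Coupling coefficients: softmax over edges leaving each lower capsule.
          g.edata['c'] = F.softmax(b.view(n_low, n_high), dim=1).reshape(-1, 1)
          # Each higher capsule sums c_ij * u_hat_ij over its incoming edges.
          g.edata['m'] = g.edata['c'] * g.edata['u_hat']
          g.update_all(fn.copy_e('m', 'm'), fn.sum('m', 's'))
          v = squash(g.ndata['s'][n_low:])              # higher-capsule outputs
          # Agreement update: b_ij += <u_hat_ij, v_j>.
          b = b + (g.edata['u_hat'] * v[dst - n_low]).sum(-1)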


* **Transformer** `[paper] <https://arxiv.org/abs/1706.03762>`__ `[tutorial] <4_old_wines/7_transformer.html>`__ 
  `[code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer>`__ and **Universal Transformer** 
  `[paper] <https://arxiv.org/abs/1807.03819>`__ `[tutorial] <4_old_wines/7_transformer.html>`__
  `[code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer/modules/act.py>`__:
  these two models replace the RNN with several layers of multi-head
  attention to encode and discover structure among the tokens of a
  sentence. These attention mechanisms can similarly be formulated as
  graph operations with message passing, as sketched below.
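
  As a minimal illustration (ours, not the tutorial's code; it assumes
  DGL >= 0.5 for ``dgl.ops.edge_softmax``), single-head self-attention over
  a sentence can be written as message passing on a complete token graph:

  .. code:: python

      import torch
      import dgl
      import dgl.function as fn
      from dgl.ops import edge_softmax

      n, d = 5, 32                           # tokens, model width
      x = torch.randn(n, d)                  # token embeddings

      # Complete directed graph: every token attends to every token.
      src = torch.arange(n).repeat_interleave(n)
      dst = torch.arange(n).repeat(n)
      g = dgl.graph((src, dst))

      Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
      g.ndata['q'], g.ndata['k'], g.ndata['v'] = x @ Wq, x @ Wk, x @ Wv

      # Scaled dot-product scores on edges: score_ij = <k_i, q_j> / sqrt(d).
      g.apply_edges(fn.u_dot_v('k', 'q', 'score'))
      # Attention weights: softmax over each token's incoming edges.
      g.edata['a'] = edge_softmax(g, g.edata['score'] / d ** 0.5)
      # Output: out_j = sum_i a_ij * v_i, computed by message passing.
      g.update_all(fn.u_mul_e('v', 'a', 'm'), fn.sum('m', 'out'))
      print(g.ndata['out'].shape)            # torch.Size([5, 32])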