Unverified commit af23c457, authored by Minjie Wang, committed by GitHub

[Release] update version (#297)

* update version; add news.md; modify contributing.md

* change urls to dmlc
parent f896e490
## Contributing to DGL
If you are interested in contributing to DGL, your contributions will fall
into two categories:
1. You want to propose a new feature and implement it.
- Post about your intended feature, and we shall discuss the design and
implementation. Once we agree that the plan looks good, go ahead and implement it.
2. You want to implement a feature or bug fix for an outstanding issue.
- Look at the outstanding issues.
- In particular, look at the Low Priority and Medium Priority issues.
- Pick an issue and comment that you would like to work on it.
- If you need more context on a particular issue, please ask and we shall provide it.
Once you finish implementing a feature or bugfix, please send a Pull Request.
Contributions are always welcome. A good starting place is the roadmap issue, where
you can find our current milestones. All contributions must go through pull requests
and be reviewed by the committers.
For documentation improvements, simply open a PR and prefix the title with `[Doc]`.
For new features, we suggest first creating an issue using the feature request template.
Follow the template to describe the feature you want to implement and your plan.
We also suggest picking features from the roadmap issue, because they are more likely
to be incorporated in the next release.
For bug fixes, we suggest first creating an issue using the bug report template if the
bug has not been reported yet. Please reply to the issue if you'd like to help. Once
the task is assigned, make the change in your fork and open a PR with the code. Remember
to also refer to the issue where the bug is reported.
Once your PR is merged, congratulations: you are now a contributor to the DGL project.
We will put your name in the list below and also on our [website](https://www.dgl.ai/ack).
Contributors
------------
[Yizhi Liu](https://github.com/yzhliu)
[Yifei Ma](https://github.com/yifeim)
Hao Jin
[Sheng Zha](https://github.com/szha)
DGL release and change logs
===========================
Refer to the roadmap issue for ongoing versions and features.
0.1.3
-----
Bug fixes:
* Compatible with PyTorch v1.0.
* Bug fix in networkx graph conversion.
0.1.2
-----
First open-source release.
* Basic graph APIs.
* Basic message passing APIs.
* PyTorch backend.
* MXNet backend.
* Optimization using SPMV.
* Model examples with PyTorch:
- GCN
- GAT
- JTNN
- DGMG
- Capsule
- LGNN
- RGCN
- Transformer
- TreeLSTM
* Model examples with MXNet:
- GCN
- GAT
- RGCN
- SSE
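The "Optimization using SPMV" item above refers to expressing message passing (sum-aggregation over neighbors) as a sparse matrix product. A minimal sketch with `scipy` on a toy 3-node graph; this is illustrative only, not DGL's actual implementation:

```python
import numpy as np
import scipy.sparse as sp

# Toy graph: edges 0 -> 1, 1 -> 2, 2 -> 0.
src = np.array([0, 1, 2])
dst = np.array([1, 2, 0])
n = 3

# Adjacency in CSR form, with A[d, s] = 1 for each edge s -> d.
A = sp.csr_matrix((np.ones(len(src)), (dst, src)), shape=(n, n))

# One 2-d feature vector per node.
h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Sum over in-neighbors for all nodes at once, as a single sparse product
# instead of a Python loop over edges.
h_new = A @ h
```

Here `h_new[v]` is the sum of features of `v`'s in-neighbors, computed in one vectorized operation.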
package:
name: dgl
-version: "0.1.2"
+version: "0.1.3"
source:
-git_rev: 0.1.2
-git_url: https://github.com/jermainewang/dgl.git
+git_rev: 0.1.x
+git_url: https://github.com/dmlc/dgl.git
requirements:
build:
......@@ -21,5 +21,5 @@ requirements:
- networkx
about:
-home: https://github.com/jermainewang/dgl.git
+home: https://github.com/dmlc/dgl.git
license_file: ../../LICENSE
......@@ -33,7 +33,7 @@
#endif
// DGL version
-#define DGL_VERSION "0.1.2"
+#define DGL_VERSION "0.1.3"
// DGL Runtime is DLPack compatible.
......
......@@ -87,4 +87,4 @@ def find_lib_path(name=None, search_path=None, optional=False):
# We use the version of the incoming release for code
# that is under development.
# The following line is set by dgl/python/update_version.py
-__version__ = "0.1.2"
+__version__ = "0.1.3"
......@@ -72,7 +72,7 @@ setup(
'scipy>=1.1.0',
'networkx>=2.1',
],
-url='https://github.com/jermainewang/dgl',
+url='https://github.com/dmlc/dgl',
distclass=BinaryDistribution,
classifiers=[
'Development Status :: 3 - Alpha',
......
......@@ -11,7 +11,7 @@ import re
# current version
# We use the version of the incoming release for code
# that is under development
-__version__ = "0.1.2"
+__version__ = "0.1.3"
# Implementations
def update(file_name, pattern, repl):
......
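The `update` helper shown truncated above presumably rewrites version strings in place via a regex substitution. A minimal sketch under that assumption; the real implementation in dgl/python/update_version.py may differ:

```python
import re

def update(file_name, pattern, repl):
    # Hypothetical sketch: read the file, replace every match of
    # `pattern` with `repl`, and write the result back in place.
    with open(file_name, "r") as f:
        text = f.read()
    text = re.sub(pattern, repl, text)
    with open(file_name, "w") as f:
        f.write(text)

# Example usage (paths and patterns are illustrative):
# update("python/dgl/libinfo.py",
#        r'__version__ = "[.0-9]+"',
#        '__version__ = "0.1.3"')
```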
......@@ -56,7 +56,7 @@ base. This tutorial shows how to implement R-GCN with DGL.
#
# This tutorial will focus on the first task to show how to generate entity
# representation. `Complete
-# code <https://github.com/jermainewang/dgl/tree/rgcn/examples/pytorch/rgcn>`_
+# code <https://github.com/dmlc/dgl/tree/rgcn/examples/pytorch/rgcn>`_
# for both tasks can be found in DGL's github repository.
#
# Key ideas of R-GCN
......@@ -356,4 +356,4 @@ for epoch in range(n_epochs):
# The implementation is similar to the above but with an extra DistMult layer
# stacked on top of the R-GCN layers. You may find the complete
# implementation of link prediction with R-GCN in our `example
-# code <https://github.com/jermainewang/dgl/blob/master/examples/pytorch/rgcn/link_predict.py>`_.
+# code <https://github.com/dmlc/dgl/blob/master/examples/pytorch/rgcn/link_predict.py>`_.
......@@ -610,7 +610,7 @@ def collate_fn(batch):
######################################################################################
# You can check out the complete code
-# `here <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/line_graph>`_.
+# `here <https://github.com/dmlc/dgl/tree/master/examples/pytorch/line_graph>`_.
#
# What's the business with :math:`\{Pm, Pd\}`?
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
......
......@@ -540,7 +540,7 @@ for i in range(n_epochs):
# scaled SSE to a graph with 50 million nodes and 150 million edges on a
# single p3.8xlarge instance, and one epoch takes only about 160 seconds.
#
-# See full examples `here <https://github.com/jermainewang/dgl/tree/master/examples/mxnet/sse>`_.
+# See full examples `here <https://github.com/dmlc/dgl/tree/master/examples/mxnet/sse>`_.
#
# .. |image0| image:: https://s3.us-east-2.amazonaws.com/dgl.ai/tutorial/img/floodfill-paths.gif
# .. |image1| image:: https://s3.us-east-2.amazonaws.com/dgl.ai/tutorial/img/neighbor-sampling.gif
......
......@@ -5,18 +5,18 @@ Graph Neural Network and its variant
* **GCN** `[paper] <https://arxiv.org/abs/1609.02907>`__ `[tutorial]
<1_gnn/1_gcn.html>`__ `[code]
-<https://github.com/jermainewang/dgl/blob/master/examples/pytorch/gcn>`__:
+<https://github.com/dmlc/dgl/blob/master/examples/pytorch/gcn>`__:
this is the vanilla GCN. The tutorial covers the basic uses of DGL APIs.
* **GAT** `[paper] <https://arxiv.org/abs/1710.10903>`__ `[code]
-<https://github.com/jermainewang/dgl/blob/master/examples/pytorch/gat>`__:
+<https://github.com/dmlc/dgl/blob/master/examples/pytorch/gat>`__:
the key extension of GAT w.r.t. the vanilla GCN is deploying multi-head attention
over the neighborhood of a node, which greatly enhances the capacity and
expressiveness of the model.
* **R-GCN** `[paper] <https://arxiv.org/abs/1703.06103>`__ `[tutorial]
<1_gnn/4_rgcn.html>`__ `[code]
-<https://github.com/jermainewang/dgl/tree/master/examples/pytorch/rgcn>`__:
+<https://github.com/dmlc/dgl/tree/master/examples/pytorch/rgcn>`__:
the key difference of R-GCN is that it allows multiple edges between two entities of a
graph, and edges with distinct relationships are encoded differently. This
is an interesting extension of GCN that can have a lot of applications of
......@@ -24,7 +24,7 @@ Graph Neural Network and its variant
* **LGNN** `[paper] <https://arxiv.org/abs/1705.08415>`__ `[tutorial]
<1_gnn/6_line_graph.html>`__ `[code]
-<https://github.com/jermainewang/dgl/tree/master/examples/pytorch/line_graph>`__:
+<https://github.com/dmlc/dgl/tree/master/examples/pytorch/line_graph>`__:
this model focuses on community detection by inspecting graph structures. It
uses representations of both the original graph and its line-graph
companion. In addition to demonstrating how an algorithm can harness multiple
......@@ -34,7 +34,7 @@ Graph Neural Network and its variant
* **SSE** `[paper] <http://proceedings.mlr.press/v80/dai18a/dai18a.pdf>`__ `[tutorial]
<1_gnn/8_sse_mx.html>`__ `[code]
-<https://github.com/jermainewang/dgl/blob/master/examples/mxnet/sse>`__:
+<https://github.com/dmlc/dgl/blob/master/examples/mxnet/sse>`__:
the emphasis here is on a *giant* graph that cannot fit comfortably on one GPU
card. SSE is an example to illustrate the co-design of both algorithm and
system: sampling to guarantee asymptotic convergence while lowering the
......
......@@ -372,5 +372,5 @@ for epoch in range(epochs):
##############################################################################
# To train the model on the full dataset with different settings (CPU/GPU,
# etc.), please refer to our repo's
-# `example <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/tree_lstm>`__.
+# `example <https://github.com/dmlc/dgl/tree/master/examples/pytorch/tree_lstm>`__.
# We also provide an implementation of the Child-Sum Tree-LSTM.
......@@ -6,7 +6,7 @@ Dealing with many small graphs
* **Tree-LSTM** `[paper] <https://arxiv.org/abs/1503.00075>`__ `[tutorial]
<2_small_graph/3_tree-lstm.html>`__ `[code]
-<https://github.com/jermainewang/dgl/blob/master/examples/pytorch/tree_lstm>`__:
+<https://github.com/dmlc/dgl/blob/master/examples/pytorch/tree_lstm>`__:
sentences of natural languages have inherent structures, which are thrown
away by treating them simply as sequences. Tree-LSTM is a powerful model
that learns the representation by leveraging prior syntactic structures
......
......@@ -762,7 +762,7 @@ print('Among 100 graphs generated, {}% are valid.'.format(num_valid))
#######################################################################################
# For the complete implementation, see `dgl DGMG example
-# <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/dgmg>`__.
+# <https://github.com/dmlc/dgl/tree/master/examples/pytorch/dgmg>`__.
#
# Batched Graph Generation
# ---------------------------
......
......@@ -5,7 +5,7 @@ Generative models
* **DGMG** `[paper] <https://arxiv.org/abs/1803.03324>`__ `[tutorial]
<3_generative_model/5_dgmg.html>`__ `[code]
-<https://github.com/jermainewang/dgl/tree/master/examples/pytorch/dgmg>`__:
+<https://github.com/dmlc/dgl/tree/master/examples/pytorch/dgmg>`__:
this model belongs to the important family of models that deal with structural
generation. DGMG is interesting because its state-machine approach is the
most general. It is also very challenging because, unlike Tree-LSTM, every
......@@ -14,7 +14,7 @@ Generative models
inter-graph parallelism to steadily improve the performance.
* **JTNN** `[paper] <https://arxiv.org/abs/1802.04364>`__ `[code]
-<https://github.com/jermainewang/dgl/tree/master/examples/pytorch/jtnn>`__:
+<https://github.com/dmlc/dgl/tree/master/examples/pytorch/jtnn>`__:
unlike DGMG, this paper generates molecular graphs using the framework of
variational auto-encoders. Perhaps more interesting is its approach to building
structure hierarchically, in the case of molecules, with a junction tree as
......
......@@ -257,8 +257,8 @@ plt.close()
# |image5|
#
# The full code of this visualization is provided at
-# `link <https://github.com/jermainewang/dgl/blob/master/examples/pytorch/capsule/simple_routing.py>`__; the complete
-# code that trains on MNIST is at `link <https://github.com/jermainewang/dgl/tree/tutorial/examples/pytorch/capsule>`__.
+# `link <https://github.com/dmlc/dgl/blob/master/examples/pytorch/capsule/simple_routing.py>`__; the complete
+# code that trains on MNIST is at `link <https://github.com/dmlc/dgl/tree/tutorial/examples/pytorch/capsule>`__.
#
# .. |image0| image:: https://i.imgur.com/55Ovkdh.png
# .. |image1| image:: https://i.imgur.com/9tc6GLl.png
......
......@@ -120,7 +120,7 @@ Transformer Tutorial
# In this tutorial, we show a simplified version of the implementation in
# order to highlight the most important design points (for instance we
# only show single-head attention); the complete code can be found
-# `here <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/transformer>`__.
+# `here <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer>`__.
# The overall structure is similar to the one from `The Annotated
# Transformer <http://nlp.seas.harvard.edu/2018/04/03/attention.html>`__.
#
......@@ -576,7 +576,7 @@ Transformer Tutorial
#
# Note that we do not cover the inference module in this tutorial (which
# requires beam search); please refer to the `Github
-# Repo <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/transformer>`__
+# Repo <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer>`__
# for full implementation.
#
# .. code:: python
......@@ -851,7 +851,7 @@ Transformer Tutorial
# that satisfy the given predicate.
#
# For the full implementation, please refer to our `Github
-# Repo <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/transformer/modules/act.py>`__.
+# Repo <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer/modules/act.py>`__.
#
# The figure below shows the effect of Adaptive Computational
# Time (different positions of a sentence are revised a different number of times):
......
......@@ -5,7 +5,7 @@ Old (new) wines in new bottle
-----------------------------
* **Capsule** `[paper] <https://arxiv.org/abs/1710.09829>`__ `[tutorial]
<4_old_wines/2_capsule.html>`__ `[code]
<https://github.com/jermainewang/dgl/tree/master/examples/pytorch/capsule>`__:
<https://github.com/dmlc/dgl/tree/master/examples/pytorch/capsule>`__:
this new computer vision model has two key ideas -- representing features in
vector form (instead of scalars) as *capsules*, and
replacing max-pooling with dynamic routing. The idea of dynamic routing is to
......@@ -15,9 +15,9 @@ Old (new) wines in new bottle
* **Transformer** `[paper] <https://arxiv.org/abs/1706.03762>`__ `[tutorial] <4_old_wines/7_transformer.html>`__
-`[code] <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/transformer>`__ and **Universal Transformer**
+`[code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer>`__ and **Universal Transformer**
`[paper] <https://arxiv.org/abs/1807.03819>`__ `[tutorial] <4_old_wines/7_transformer.html>`__
-`[code] <https://github.com/jermainewang/dgl/tree/master/examples/pytorch/transformer/modules/act.py>`__:
+`[code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer/modules/act.py>`__:
these two models replace RNN with several layers of multi-head attention to
encode and discover structures among tokens of a sentence. These attention
mechanisms can similarly be formulated as graph operations with
......