Unverified Commit eb0c954c authored by zhjwy9343's avatar zhjwy9343 Committed by GitHub

Update README.md (#1307)



* Update README.md

Add a News section for zz's request.

* Update README.md

* Update README.md
Co-authored-by: Minjie Wang <minjie.wang@nyu.edu>
parent c23a61bd
@@ -9,11 +9,13 @@
DGL is an easy-to-use, high-performance and scalable Python package for deep learning on graphs. DGL is framework agnostic, meaning that if a deep graph model is a component of an end-to-end application, the rest of the logic can be implemented in any major framework, such as PyTorch, Apache MXNet or TensorFlow.
<p align="center">
  <img src="http://data.dgl.ai/asset/image/DGL-Arch.png" alt="DGL v0.4 architecture" width="600">
  <br>
  <b>Figure</b>: DGL Overall Architecture
</p>
## <img src="http://data.dgl.ai/asset/image/new.png" width="30"> DGL News
03/02/2020: DGL has been chosen as the implementation base for the [Graph Neural Network benchmark framework](https://arxiv.org/abs/2003.00982), which benchmarks GNN models on novel medium-scale graph datasets from mathematical modeling, computer vision, chemistry and combinatorial problems. The implemented models are [here](https://github.com/graphdeeplearning/benchmarking-gnns).
## Using DGL
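As a hypothetical sketch (this is not DGL's actual API), the core idea DGL implements is message passing on a graph: each node sends its feature along outgoing edges, and every node aggregates the messages it receives. A minimal pure-Python version of one round of sum-aggregation:

```python
# Hypothetical sketch of the message-passing pattern that DGL implements,
# shown on a tiny directed graph with scalar node features.
# This is illustrative only and does not use DGL's API.

def message_passing(edges, features):
    """edges: list of (src, dst) pairs; features: dict node -> value.

    One round of message passing: each node's new feature is the sum
    of the features of its in-neighbors (message = source feature,
    reduce = sum).
    """
    out = {node: 0 for node in features}
    for src, dst in edges:
        out[dst] += features[src]
    return out

# Tiny 4-node chain graph: 0 -> 1 -> 2 -> 3
edges = [(0, 1), (1, 2), (2, 3)]
feats = {0: 1, 1: 2, 2: 3, 3: 4}
print(message_passing(edges, feats))  # {0: 0, 1: 1, 2: 2, 3: 3}
```

DGL expresses the same pattern with user-defined (or built-in) message and reduce functions, but executes it with batched tensor operations rather than a Python loop, which is where its performance comes from.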
@@ -101,16 +103,16 @@ class GATLayer(nn.Module):
Table: Training time (in seconds) for 200 epochs and memory consumption (GB)

High memory utilization allows DGL to push the limit of single-GPU performance, as the images below show.
| <img src="http://data.dgl.ai/asset/image/DGLvsPyG-time1.png" width="400"> | <img src="http://data.dgl.ai/asset/image/DGLvsPyG-time2.png" width="400"> |
| -------- | -------- |
**Scalability**: DGL fully leverages multiple GPUs, both on a single machine and across clusters, to increase training speed, and performs better than alternatives, as the images below show.
<p align="center">
  <img src="http://data.dgl.ai/asset/image/one-four-GPUs.png" width="600">
</p>

| <img src="http://data.dgl.ai/asset/image/one-four-GPUs-DGLvsGraphVite.png"> | <img src="http://data.dgl.ai/asset/image/one-fourMachines.png"> |
| :---------------------------------------: | -- |