Unverified Commit 17aab812 authored by Minjie Wang, committed by GitHub

[Doc] update working with multiple backend section (#1128)



* update work with different backend section

* fix some warnings

* Update backend.rst

* Update index.rst
Co-authored-by: VoVAllen <VoVAllen@users.noreply.github.com>
parent e4ef8d1a
@@ -47,6 +47,7 @@ Get Started
:glob:
install/index
install/backend
Follow the :doc:`instructions<install/index>` to install DGL. The :doc:`DGL at a glance<tutorials/basics/1_first>`
is the most common place to get started with. Each tutorial is accompanied with a runnable
...

Working with different backends
===============================

DGL supports PyTorch, MXNet and TensorFlow backends. To select a backend, set the
``DGLBACKEND`` environment variable. The default backend is PyTorch.
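
For example, here is a minimal sketch of switching the backend from a shell (``my_script.py`` below is a placeholder for your own script):

.. code:: bash

   # Use the MXNet backend for every subsequent run in this shell
   export DGLBACKEND=mxnet

   # Or override the backend for a single command only
   DGLBACKEND=tensorflow python my_script.py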

PyTorch backend
---------------

Export ``DGLBACKEND`` as ``pytorch`` to specify the PyTorch backend. The required PyTorch
version is 0.4.1 or later. See `pytorch.org <https://pytorch.org>`_ for installation instructions.

MXNet backend
-------------

Export ``DGLBACKEND`` as ``mxnet`` to specify the MXNet backend. The required MXNet version is
1.5 or later. See `mxnet.apache.org <https://mxnet.apache.org/get_started>`_ for installation
instructions.

MXNet uses uint32 as the default data type for integer tensors, which only supports graphs of
size smaller than 2^32. To enable training on larger graphs, *build* MXNet with the
``USE_INT64_TENSOR_SIZE=1`` flag. See `this FAQ <https://mxnet.apache.org/api/faq/large_tensor_support>`_
for more information.
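
As a rough sketch only (the exact prerequisites, flags, and build steps vary by platform; consult the linked FAQ and the MXNet install guide), a from-source build with 64-bit tensor support might look like:

.. code:: bash

   # Inside an MXNet source checkout; the flag is passed as a build option.
   # Other options (BLAS, CUDA, ...) depend on your environment.
   make -j"$(nproc)" USE_INT64_TENSOR_SIZE=1

   # Install the freshly built Python bindings
   cd python && pip install -e .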

TensorFlow backend
------------------

Export ``DGLBACKEND`` as ``tensorflow`` to specify the TensorFlow backend. The required TensorFlow
version is 2.0 or later. See `tensorflow.org <https://www.tensorflow.org/install>`_ for installation
instructions. In addition, the TensorFlow backend requires the ``tfdlpack`` package, which can be
installed as follows. Also set ``TF_FORCE_GPU_ALLOW_GROWTH`` to ``true`` to prevent TensorFlow
from taking over the whole GPU memory:
.. code:: bash

   pip install tfdlpack  # when using tensorflow cpu version

or

.. code:: bash

   pip install tfdlpack-gpu               # when using tensorflow gpu version
   export TF_FORCE_GPU_ALLOW_GROWTH=true  # and add this to your .bashrc/.zshrc file if needed
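
To double-check which backend is active, one option (assuming DGL reports the backend it loads on import, as recent releases do) is:

.. code:: bash

   export DGLBACKEND=tensorflow
   python -c "import dgl"  # should report the TensorFlow backend on import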

Install DGL
===========

This topic explains how to install DGL. We recommend installing DGL by using ``conda`` or ``pip``.
@@ -36,6 +36,8 @@ After the ``conda`` environment is activated, run one of the following commands.
conda install -c dglteam dgl # For CPU Build
conda install -c dglteam dgl-cuda9.0 # For CUDA 9.0 Build
conda install -c dglteam dgl-cuda10.0 # For CUDA 10.0 Build
conda install -c dglteam dgl-cuda10.1 # For CUDA 10.1 Build
Install from pip
----------------
@@ -52,7 +54,8 @@ For CUDA builds, run one of the following commands and specify the CUDA version.
pip install dgl # For CPU Build
pip install dgl-cu90 # For CUDA 9.0 Build
pip install dgl-cu92 # For CUDA 9.2 Build
pip install dgl-cu100 # For CUDA 10.0 Build
pip install dgl-cu101 # For CUDA 10.1 Build
For the most current nightly build from master branch, run one of the following commands.
@@ -62,41 +65,9 @@ For the most current nightly build from master branch, run one of the following
pip install --pre dgl-cu90 # For CUDA 9.0 Build
pip install --pre dgl-cu92 # For CUDA 9.2 Build
pip install --pre dgl-cu100 # For CUDA 10.0 Build
pip install --pre dgl-cu101 # For CUDA 10.1 Build

Working with different backends
-------------------------------

DGL supports PyTorch and MXNet. Here is how to switch between them.

Switching backend
`````````````````

The backend is controlled by the ``DGLBACKEND`` environment variable, which defaults to
``pytorch``. The following values are supported.
.. list-table::
   :header-rows: 1

   * - Value
     - Backend
     - Constraints
   * - pytorch
     - PyTorch
     - Requires 0.4.1 or later. See `pytorch.org <https://pytorch.org>`_
   * - mxnet
     - MXNet
     - Requires MXNet 1.5 or later. Either MXNet for CPU

       .. code:: bash

          pip install mxnet

       or MXNet for GPU with a matching CUDA version, e.g. for CUDA 9.2

       .. code:: bash

          pip install mxnet-cu92

   * - numpy
     - NumPy
     - Does not support gradient computation
.. _install-from-source:

Install from source
...
@@ -23,7 +23,7 @@ At the end of this tutorial, we hope you get a brief feeling of how DGL works.
###############################################################################
# Tutorial problem description
# ----------------------------
#
# The tutorial is based on the "Zachary's karate club" problem. The karate club
# is a social network that includes 34 members and documents pairwise links
...
@@ -12,7 +12,7 @@ In this tutorial, you learn how to create a graph and how to read and write node
###############################################################################
# Creating a graph
# ----------------
# The design of :class:`DGLGraph` was influenced by other graph libraries. You
# can create a graph from networkx and convert it into a :class:`DGLGraph` and
# vice versa.
@@ -71,7 +71,7 @@ plt.show()
###############################################################################
# Assigning a feature
# -------------------
# You can also assign features to nodes and edges of a :class:`DGLGraph`. The
# features are represented as a dictionary of names (strings) and tensors,
# called **fields**.
@@ -138,7 +138,7 @@ g.edata.pop('w')
###############################################################################
# Working with multigraphs
# ~~~~~~~~~~~~~~~~~~~~~~~~
# Many graph applications need parallel edges. To enable this, construct :class:`DGLGraph`
# with ``multigraph=True``.
...
@@ -118,7 +118,7 @@ def pagerank_naive(g):
###############################################################################
# Batching semantics for a large graph
# ------------------------------------
# The above code does not scale to a large graph because it iterates over all
# the nodes. DGL solves this by allowing you to compute on a *batch* of nodes or
# edges. For example, the following code triggers message and reduce functions
...
@@ -2,7 +2,7 @@
.. currentmodule:: dgl

Tutorial: Batched graph classification with DGL
================================================
**Author**: `Mufei Li <https://github.com/mufeili>`_,
`Minjie Wang <https://jermainewang.github.io/>`_,
...