.. _guide-distributed-tools:

7.2 Tools for launching distributed training/inference
------------------------------------------------------

DGL provides a launching script ``launch.py`` under
`dgl/tools <https://github.com/dmlc/dgl/tree/master/tools>`__ to launch a distributed
training job in a cluster. This script makes the following assumptions:

* The partitioned data and the training script have been provisioned to the cluster or
  to shared storage (e.g., NFS) accessible to all the worker machines.
* The machine that invokes ``launch.py`` has passwordless SSH access
  to all other machines. The launching machine must be one of the worker machines.
  A sketch of how one might satisfy these assumptions follows.
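
The following is a minimal sketch of satisfying both assumptions. It assumes the
same user account exists on every machine, that ``ip_config.txt`` (described
below) lists one machine per line with the IP address in the first column, and
that ``/my/workspace/`` is the workspace path passed to ``launch.py`` later;
adapt the paths and key type to your environment.

.. code:: bash

    # One-time: generate an SSH key pair on the launching machine, if none exists.
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

    # Install the public key on every machine listed in ip_config.txt
    # (the first column is the IP address; an optional port column is ignored).
    for ip in $(awk '{print $1}' ip_config.txt); do
        ssh-copy-id "$ip"
    done

    # Provision the workspace (training script, config files, graph partitions)
    # to the same path on every machine.
    for ip in $(awk '{print $1}' ip_config.txt); do
        rsync -az /my/workspace/ "$ip:/my/workspace/"
    done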

Below is an example of launching a distributed training job in a cluster.

.. code:: bash

    python3 tools/launch.py               \
      --workspace /my/workspace/          \
      --num_trainers 2                    \
      --num_samplers 4                    \
      --num_servers 1                     \
      --part_config data/mygraph.json     \
      --ip_config ip_config.txt           \
      "python3 my_train_script.py"

The arguments specify the workspace path, where to find the partition metadata JSON
and the machine IP configuration, and how many trainer, sampler, and server processes
to launch on each machine. The last argument is the command to run, which is usually
the model training/evaluation script.

Each line of ``ip_config.txt`` is the IP address of a machine in the cluster.
Optionally, the IP address can be followed by a network port (default is ``30050``).
A typical example is as follows:

.. code:: none

    172.31.19.1
    172.31.23.205
    172.31.29.175
    172.31.16.98

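To use a non-default port, append it after the address on each line, for
example (port ``30051`` here is an arbitrary choice for illustration):

.. code:: none

    172.31.19.1 30051
    172.31.23.205 30051
    172.31.29.175 30051
    172.31.16.98 30051
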
The workspace specified in the launch script is the working directory on each
machine, which contains the training script, the IP configuration file, the
partition configuration file, as well as the graph partitions. All file paths
should be specified relative to the workspace.
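
For the launch command above, the workspace on each machine might therefore be
laid out as follows. This is a hypothetical layout: the ``part0``/``part1``
directories stand for whatever per-partition folders your graph partitioning
step produced, and the actual names depend on how the graph was partitioned.

.. code:: none

    /my/workspace/
    ├── my_train_script.py
    ├── ip_config.txt
    └── data/
        ├── mygraph.json
        ├── part0/
        └── part1/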

The launch script creates the specified number of training jobs
(``--num_trainers``) on each machine. In addition, users need to specify the
number of sampler processes for each trainer (``--num_samplers``).
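
For instance, with the two trainers, four samplers per trainer, and one server
per machine requested in the launch command above, and the four machines listed
in ``ip_config.txt``, each machine runs 2 trainer, 2 × 4 = 8 sampler, and 1
server processes, i.e., 8 trainers, 32 samplers, and 4 servers across the
cluster.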