"examples/git@developer.sourcefind.cn:OpenDAS/torchaudio.git" did not exist on "a5664ca9c3ad9116ccb26befdf620cd9c71a6952"
Unverified commit dd166ee6, authored by Shaden Smith and committed by GitHub

README and RTD improvements. (#198)

parent bf4797c2
@@ -12,6 +12,8 @@ deepspeed.egg-info/
# Website
docs/_site/
docs/build
docs/code-docs/_build
docs/code-docs/build
.sass-cache/
.jekyll-cache/
...
[![Build Status](https://dev.azure.com/DeepSpeedMSFT/DeepSpeed/_apis/build/status/microsoft.DeepSpeed?branchName=master)](https://dev.azure.com/DeepSpeedMSFT/DeepSpeed/_build/latest?definitionId=1&branchName=master)
[![License MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/Microsoft/DeepSpeed/blob/master/LICENSE)

[DeepSpeed](https://www.deepspeed.ai/) is a deep learning optimization library that makes distributed training easy,
efficient, and effective.

<p align="center"><i><b>10x Larger Models</b></i></p>
@@ -15,19 +15,20 @@ a language model (LM) with over 17B parameters called
[Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft),
establishing a new SOTA in the LM category.
# News
* [Turing-NLG: A 17-billion-parameter language model by Microsoft](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/)
* [ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)
# Table of Contents
| Section                                  | Description                                  |
| ---------------------------------------- | -------------------------------------------- |
| [Why DeepSpeed?](#why-deepspeed)          | DeepSpeed overview                           |
| [Features](#features)                     | DeepSpeed features                           |
| [Further Reading](#further-reading)       | DeepSpeed documentation, tutorials, etc.     |
| [Contributing](#contributing)             | Instructions for contributing to DeepSpeed   |
| [Publications](#publications)             | DeepSpeed publications                       |
# Why DeepSpeed?
Training advanced deep learning models is challenging. Beyond model design,
model scientists also need to set up the state-of-the-art training techniques
@@ -65,9 +66,7 @@ optimizations on advanced hyperparameter tuning and optimizers. For example:
| 256 V100 GPUs | NVIDIA    | 3.9     |
| 256 V100 GPUs | DeepSpeed | **3.7** |
*Read more*: [BERT pre-training tutorial](https://www.deepspeed.ai/tutorials/bert-pretraining/)
* DeepSpeed trains GPT2 (1.5 billion parameters) 3.75x faster than the state-of-the-art
  NVIDIA Megatron on Azure GPUs.
@@ -105,9 +104,8 @@ combination. ZeRO boosts the scaling capability and efficiency further.
DeepSpeed to fit models using a lower degree of model parallelism and a higher batch size, offering
significant performance gains compared to using model parallelism alone.
*Read more*: [technical report](https://arxiv.org/abs/1910.02054)
and [GPT tutorial](https://www.deepspeed.ai/tutorials/megatron/)
![DeepSpeed-vs-Megatron](./docs/assets/images/DeepSpeed-vs-Megatron.png)
<p align="center">
@@ -121,303 +119,60 @@ optimizers such as [LAMB](https://arxiv.org/abs/1904.00962). These improve the
effectiveness of model training and reduce the number of samples required to
converge to the desired accuracy.
*Read more*: [Tuning tutorial](https://www.deepspeed.ai/tutorials/1Cycle/) and [BERT pre-training tutorial](https://www.deepspeed.ai/tutorials/bert-pretraining/)
## Good Usability
Only a few lines of code changes are needed to enable a PyTorch model to use DeepSpeed and ZeRO. Compared to current model parallelism libraries, DeepSpeed does not require a code redesign or model refactoring. It also does not put limitations on model dimensions (such as number of attention heads, hidden sizes, and others), batch size, or any other training parameters. For models of up to six billion parameters, you can use ZeRO-powered data parallelism conveniently without requiring model parallelism, while in contrast, standard data parallelism will run out of memory for models with more than 1.3 billion parameters. In addition, DeepSpeed conveniently supports flexible combination of ZeRO-powered data parallelism with custom model parallelisms, such as tensor slicing of NVIDIA's Megatron-LM.
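To make the "few lines of code" claim concrete, here is a minimal sketch of what enabling DeepSpeed in an existing PyTorch script can look like. It only relies on the `deepspeed.initialize`, `backward`, and `step` APIs described later in this README; the toy model and the argparse wiring via `deepspeed.add_config_arguments` are illustrative assumptions rather than a prescribed setup.

```python
import argparse

import torch
import deepspeed

# A toy stand-in for a real network; any torch.nn.Module works unchanged.
model = torch.nn.Linear(1024, 1024)

# The DeepSpeed launcher supplies --local_rank; add_config_arguments adds the
# --deepspeed and --deepspeed_config flags used throughout this README.
parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=-1)
parser = deepspeed.add_config_arguments(parser)
cmd_args = parser.parse_args()

# The one structural change: wrap the model with the DeepSpeed engine.
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=cmd_args,
    model=model,
    model_parameters=model.parameters())

# The training loop then swaps three calls:
#   loss = model_engine(batch)    # instead of loss = model(batch)
#   model_engine.backward(loss)   # instead of loss.backward()
#   model_engine.step()           # instead of optimizer.step()
```

The model definition itself is untouched; the Getting Started sections below walk through the same APIs in more detail.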
# Features
Below we provide a brief feature list; see our detailed [feature
overview](https://www.deepspeed.ai/features/) for descriptions and usage.
* [Distributed Training with Mixed Precision](https://www.deepspeed.ai/features/#distributed-training-with-mixed-precision)
  * 16-bit mixed precision
  * Single-GPU/Multi-GPU/Multi-Node
* [Model Parallelism](https://www.deepspeed.ai/features/#model-parallelism)
  * Support for Custom Model Parallelism
  * Integration with Megatron-LM
* [Memory and Bandwidth Optimizations](https://www.deepspeed.ai/features/#memory-and-bandwidth-optimizations)
  * The Zero Redundancy Optimizer (ZeRO)
  * Constant Buffer Optimization (CBO)
  * Smart Gradient Accumulation
* [Training Features](https://www.deepspeed.ai/features/#training-features)
  * Simplified training API
  * Gradient Clipping
  * Automatic loss scaling with mixed precision
* [Training Optimizers](https://www.deepspeed.ai/features/#training-optimizers)
  * Fused Adam optimizer and arbitrary `torch.optim.Optimizer`
  * Memory bandwidth optimized FP16 Optimizer
  * Large Batch Training with LAMB Optimizer
  * Memory efficient Training with ZeRO Optimizer
* [Training Agnostic Checkpointing](https://www.deepspeed.ai/features/#training-agnostic-checkpointing)
* [Advanced Parameter Search](https://www.deepspeed.ai/features/#advanced-parameter-search)
  * Learning Rate Range Test
  * 1Cycle Learning Rate Schedule
* [Simplified Data Loader](https://www.deepspeed.ai/features/#simplified-data-loader)
* [Performance Analysis and Debugging](https://www.deepspeed.ai/features/#performance-analysis-and-debugging)
# Getting Started
## Installation
* Please see our [Azure tutorial](https://www.deepspeed.ai/tutorials/azure/) to get started with DeepSpeed on Azure!
* If you're not on Azure, we recommend using our docker image via `docker pull deepspeed/deepspeed:latest` which contains a pre-installed version of DeepSpeed and all the necessary dependencies.
* If you want to install DeepSpeed manually, we provide an install script `install.sh` to help install on a local machine or across an entire cluster.
## Writing DeepSpeed Models
DeepSpeed model training is accomplished using the DeepSpeed engine. The engine
can wrap any arbitrary model of type `torch.nn.Module` and has a minimal set of APIs
for training and checkpointing the model. Please see the tutorials for detailed
examples.
To initialize the DeepSpeed engine:
```python
model_engine, optimizer, _, _ = deepspeed.initialize(args=cmd_args,
                                                      model=model,
                                                      model_parameters=params)
```
`deepspeed.initialize` ensures that all of the necessary setup required for
distributed data parallel or mixed precision training is done
appropriately under the hood. In addition to wrapping the model, DeepSpeed can
construct and manage the training optimizer, data loader, and the learning rate
scheduler based on the parameters passed to `deepspeed.initialize` and the
DeepSpeed [configuration file](#deepspeed-configuration).
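For example, if a dataset and a client learning rate scheduler are passed to `deepspeed.initialize` (the `training_data` and `lr_scheduler` arguments described in its docstring), the engine also returns a distributed data loader and a wrapped scheduler. The sketch below reuses `cmd_args`, `model`, and `params` from the snippet above; `trainset` and `my_scheduler` are hypothetical objects supplied by the user.

```python
# Sketch: trainset is a hypothetical torch.utils.data.Dataset and my_scheduler
# a hypothetical client LR scheduler; cmd_args, model, and params are the same
# objects used in the snippet above.
model_engine, optimizer, trainloader, lr_scheduler = deepspeed.initialize(
    args=cmd_args,
    model=model,
    model_parameters=params,
    training_data=trainset,
    lr_scheduler=my_scheduler)

# trainloader is None unless training_data is supplied; lr_scheduler is None
# unless a scheduler is passed here or specified in the DeepSpeed config.
```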
### Training
Once the DeepSpeed engine has been initialized, it can be used to train the
model using three simple APIs for forward propagation (calling the engine
directly), backward propagation (`backward`), and weight updates (`step`).
```python
for step, batch in enumerate(data_loader):
    #forward() method
    loss = model_engine(batch)

    #runs backpropagation
    model_engine.backward(loss)

    #weight update
    model_engine.step()
```
Under the hood, DeepSpeed automatically performs the necessary operations
required for distributed data parallel training, in mixed precision, with a
pre-defined learning rate schedule:
* **Gradient Averaging**: in distributed data parallel training, `backward`
  ensures that gradients are averaged across data parallel processes after
  training on a `train_batch_size`.
* **Loss Scaling**: in FP16/mixed precision training, the DeepSpeed
engine automatically handles scaling the loss to avoid precision loss in the
gradients.
* **Learning Rate Schedule**: if using DeepSpeed's learning rate
schedule, then DeepSpeed automatically handles any updates to the learning
rate when `step` is executed.
### Model Checkpointing
Saving and loading the training state is handled via the `save_checkpoint` and
`load_checkpoint` APIs of DeepSpeed, which take two arguments to uniquely
identify a checkpoint:
* `ckpt_dir`: the directory where checkpoints will be saved.
* `ckpt_id`: an identifier that uniquely identifies a checkpoint in the directory.
In the following code snippet, we use the loss value as the checkpoint identifier.
```python
#load checkpoint
_, client_sd = model_engine.load_checkpoint(args.load_dir, args.ckpt_id)
step = client_sd['step']

#advance data loader to ckpt step
dataloader_to_step(data_loader, step + 1)

for step, batch in enumerate(data_loader):
    #forward() method
    loss = model_engine(batch)

    #runs backpropagation
    model_engine.backward(loss)

    #weight update
    model_engine.step()

    #save checkpoint
    if step % args.save_interval == 0:
        client_sd['step'] = step
        ckpt_id = loss.item()
        model_engine.save_checkpoint(args.save_dir, ckpt_id, client_sd=client_sd)
```
DeepSpeed can automatically save and restore the model, optimizer, and the
learning rate scheduler states while hiding away these details from the user.
However, the user may want to save additional data that is unique to a given
model's training. To support these items, `save_checkpoint` accepts a client
state dictionary `client_sd` for saving. These items can be retrieved from
`load_checkpoint` as a return argument. In the example above, the `step` value
is stored as part of the `client_sd`.
## DeepSpeed Configuration
DeepSpeed features can be enabled, disabled, or configured using a config JSON
file that should be specified as `args.deepspeed_config`. Available configs are at
[deepspeed/pt/deepspeed_constants.py](deepspeed/pt/deepspeed_constants.py).
A sample config file is shown below. For a full set of features see [core API
doc](https://deepspeed.readthedocs.io/en/latest/).
```json
{
  "train_batch_size": 8,
  "gradient_accumulation_steps": 1,
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 0.00015
    }
  },
  "fp16": {
    "enabled": true
  },
  "zero_optimization": true
}
```
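The configuration is plain JSON on disk; the path you pass via `--deepspeed_config` is what ends up in `args.deepspeed_config`. As an illustration (not a required workflow), the same configuration could be generated programmatically before launching, for example when sweeping batch sizes. The file name `ds_config.json` matches the launcher examples later in this README.

```python
import json

# Mirror of the sample configuration shown above. train_batch_size is the
# effective (global) batch size across all GPUs and gradient accumulation steps.
ds_config = {
    "train_batch_size": 8,
    "gradient_accumulation_steps": 1,
    "optimizer": {
        "type": "Adam",
        "params": {"lr": 0.00015}
    },
    "fp16": {"enabled": True},
    "zero_optimization": True,
}

# Written to disk and then referenced on the command line, e.g.:
#   deepspeed client_entry.py --deepspeed --deepspeed_config ds_config.json
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```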
## Multi-Node Environment Variables
When training across multiple nodes we have found it useful to support
propagating user-defined environment variables. By default DeepSpeed will
propagate all NCCL and PYTHON related environment variables that are set. If
you would like to propagate additional variables you can specify them in a
dot-file named `.deepspeed_env` that contains a newline-separated list of
`VAR=VAL` entries. The DeepSpeed launcher will look in the local path you are
executing from and also in your home directory (`~/`).
As a concrete example, some clusters require special NCCL variables to be set
prior to training. The user can simply add these variables to a
`.deepspeed_env` file in their home directory that looks like this:
```
NCCL_IB_DISABLE=1
NCCL_SOCKET_IFNAME=eth0
```
DeepSpeed will then make sure that these environment variables are set when
launching each process on every node across the training job.
# Launching DeepSpeed Training
DeepSpeed installs the entry point `deepspeed` to launch distributed training.
We illustrate an example usage of DeepSpeed with the following assumptions:
1. You have already integrated DeepSpeed into your model
2. `client_entry.py` is the entry script for your model
3. `client args` is the `argparse` command line arguments
4. `ds_config.json` is the configuration file for DeepSpeed
## Resource Configuration (multi-node)
DeepSpeed configures multi-node compute resources with hostfiles that are compatible with
[OpenMPI](https://www.open-mpi.org/) and [Horovod](https://github.com/horovod/horovod).
A hostfile is a list of *hostnames* (or SSH aliases), which are machines accessible via passwordless
SSH, and *slot counts*, which specify the number of GPUs available on the system. For
example,
```
worker-1 slots=4
worker-2 slots=4
```
specifies that two machines named *worker-1* and *worker-2* each have four GPUs to use
for training.
Hostfiles are specified with the `--hostfile` command line option. If no hostfile is
specified, DeepSpeed searches for `/job/hostfile`. If no hostfile is specified or found,
DeepSpeed queries the number of GPUs on the local machine to discover the number of local
slots available.
The following command launches a PyTorch training job across all available nodes and GPUs
specified in `myhostfile`:
```bash
deepspeed <client_entry.py> <client args> \
--deepspeed --deepspeed_config ds_config.json --hostfile=myhostfile
```
Alternatively, DeepSpeed allows you to restrict distributed training of your model to a
subset of the available nodes and GPUs. This feature is enabled through two command line
arguments: `--num_nodes` and `--num_gpus`. For example, distributed training can be
restricted to use only two nodes with the following command:
```bash
deepspeed --num_nodes=2 \
<client_entry.py> <client args> \
--deepspeed --deepspeed_config ds_config.json
```
You can instead include or exclude specific resources using the `--include` and
`--exclude` flags. For example, to use all available resources **except** GPU 0 on node
*worker-2* and GPUs 0 and 1 on *worker-3*:
```bash
deepspeed --exclude="worker-2:0@worker-3:0,1" \
<client_entry.py> <client args> \
--deepspeed --deepspeed_config ds_config.json
```
Similarly, you can use **only** GPUs 0 and 1 on *worker-2*:
```bash
deepspeed --include="worker-2:0,1" \
<client_entry.py> <client args> \
--deepspeed --deepspeed_config ds_config.json
```
### MPI Compatibility
As described above, DeepSpeed provides its own parallel launcher to help launch
multi-node/multi-gpu training jobs. If you prefer to launch your training job
using MPI (e.g., mpirun), we provide support for this. It should be noted that
DeepSpeed will still use the torch distributed NCCL backend and *not* the MPI
backend. To launch your training job with mpirun + DeepSpeed you simply pass us
an additional flag `--deepspeed_mpi`. DeepSpeed will then use
[mpi4py](https://pypi.org/project/mpi4py/) to discover the MPI environment (e.g.,
rank, world size) and properly initialize torch distributed for training. In this
case you will explicitly invoke `python` to launch your model script instead of using
the `deepspeed` launcher, here is an example:
```bash
mpirun <mpi-args> python \
<client_entry.py> <client args> \
--deepspeed_mpi --deepspeed --deepspeed_config ds_config.json
```
If you want to use this feature of DeepSpeed, please ensure that mpi4py is
installed via `pip install mpi4py`.
## Resource Configuration (single-node)
In the case that we are only running on a single node (with one or more GPUs)
DeepSpeed *does not* require a hostfile as described above. If a hostfile is
not detected or passed in then DeepSpeed will query the number of GPUs on the
local machine to discover the number of slots available. The `--include` and
`--exclude` arguments work as normal, but the user should specify 'localhost'
as the hostname.
All DeepSpeed documentation can be found on our website: [deepspeed.ai](https://www.deepspeed.ai/)
# Further Reading
| Article                                                                                          | Description                                  |
| ------------------------------------------------------------------------------------------------ | -------------------------------------------- |
| [DeepSpeed Features](https://www.deepspeed.ai/features/)                                          | DeepSpeed features                           |
| [Getting Started](https://www.deepspeed.ai/getting-started/)                                      | First steps with DeepSpeed                   |
| [DeepSpeed JSON Configuration](https://www.deepspeed.ai/docs/config-json/)                        | Configuring DeepSpeed                        |
| [API Documentation](https://deepspeed.readthedocs.io/en/latest/)                                  | Generated DeepSpeed API documentation        |
| [CIFAR-10 Tutorial](https://www.deepspeed.ai/tutorials/cifar-10)                                  | Getting started with CIFAR-10 and DeepSpeed  |
| [Megatron-LM Tutorial](https://www.deepspeed.ai/tutorials/megatron/)                              | Train GPT2 with DeepSpeed and Megatron-LM    |
| [BERT Pre-training Tutorial](https://www.deepspeed.ai/tutorials/bert-pretraining/)                | Pre-train BERT with DeepSpeed                |
| [Learning Rate Range Test Tutorial](https://www.deepspeed.ai/tutorials/lrrt/)                     | Faster training with large learning rates    |
| [1Cycle Tutorial](https://www.deepspeed.ai/tutorials/1Cycle/)                                     | SOTA learning schedule in DeepSpeed          |
...
@@ -34,7 +34,7 @@ def initialize(args,
               mpu=None,
               dist_init_required=None,
               collate_fn=None):
    """Initialize the DeepSpeed Engine.

    Arguments:
        args: a dictionary containing local_rank and deepspeed_config

@@ -63,21 +63,19 @@ def initialize(args,
            mini-batch of Tensor(s). Used when using batched loading from a
            map-style dataset.

    Returns:
        A tuple of ``engine``, ``optimizer``, ``training_dataloader``, ``lr_scheduler``

        * ``engine``: DeepSpeed runtime engine which wraps the client model for distributed training.

        * ``optimizer``: Wrapped optimizer if a user defined ``optimizer`` is supplied, or if
          optimizer is specified in json config else ``None``.

        * ``training_dataloader``: DeepSpeed dataloader if ``training_data`` was supplied,
          otherwise ``None``.

        * ``lr_scheduler``: Wrapped lr scheduler if user ``lr_scheduler`` is passed, or
          if ``lr_scheduler`` specified in JSON configuration. Otherwise ``None``.
    """
    print("DeepSpeed info: version={}, git-hash={}, git-branch={}".format(
        __version__,
...
@@ -16,8 +16,8 @@ import sys

# -- Project information -----------------------------------------------------

project = 'DeepSpeed'
copyright = '2020, Microsoft'
author = 'Microsoft'

# The full version, including alpha/beta/rc tags
release = '0.1.0'

@@ -36,6 +36,8 @@ extensions = [
    'sphinx_rtd_theme',
]

pygments_style = 'sphinx'

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
...
@@ -5,9 +5,30 @@ Subpackages
-----------

.. toctree::
   :maxdepth: 4

   deepspeed.pt

Submodules
----------

deepspeed.git\_version\_info module
-----------------------------------

.. automodule:: deepspeed.git_version_info
   :members:
   :undoc-members:
   :show-inheritance:

deepspeed.install\_config module
--------------------------------

.. automodule:: deepspeed.install_config
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------
...
@@ -5,8 +5,7 @@ DeepSpeed
   :maxdepth: 2
   :caption: Contents:

   initialize

Indices and tables
...
Initializing DeepSpeed
======================
The entrypoint for all training with DeepSpeed is ``deepspeed.initialize()``.
Example usage:
.. code-block:: python

   model_engine, optimizer, _, _ = deepspeed.initialize(args=cmd_args,
                                                         model=net,
                                                         model_parameters=net.parameters())
.. autofunction:: deepspeed.initialize