Unverified Commit 59ce90d0 authored by Minjia Zhang, committed by GitHub

Minjiaz/zero offload (#382)


Co-authored-by: Jeff Rasley <jerasley@microsoft.com>
parent be4b94be
...@@ -103,6 +103,12 @@ during the backward computation, the activation gradients are short lived while the
parameter gradients are long lived. CMO transfers activation checkpoints and parameter gradients
to contiguous buffers, preventing memory fragmentation.
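
As a rough sketch of how these activation-memory optimizations are turned on, the snippet below builds a DeepSpeed-style JSON config from a Python dict; the key names (`partition_activations`, `contiguous_memory_optimization`) are assumptions based on the feature names above, so verify them against the DeepSpeed configuration documentation.

```python
# Hypothetical sketch: activation-checkpointing options in a DeepSpeed config,
# written as a Python dict and saved as JSON. Key names are assumptions and
# should be checked against the DeepSpeed configuration reference.
import json

config = {
    "activation_checkpointing": {
        "partition_activations": True,           # Activation Partitioning (PA)
        "contiguous_memory_optimization": True,  # CMO: keep checkpoints in contiguous buffers
    }
}

with open("ds_config.json", "w") as f:
    json.dump(config, f, indent=2)
```
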
## ZeRO-Offload
ZeRO-Offload pushes the boundary of the maximum model size that can be trained efficiently with minimal GPU resources by exploiting computational and memory resources on both GPUs and their host CPUs. It allows training models of up to 13 billion parameters on a single NVIDIA V100 GPU, 10x larger than the state of the art, while retaining high training throughput of over 30 teraflops per GPU.
For more details see the [ZeRO-Offload release blog](https://www.microsoft.com/en-us/research/?p=689370&secret=iSlooB) and the [tutorial](/tutorials/zero-offload/) on integration with DeepSpeed.
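
A minimal sketch of what enabling ZeRO-Offload in the DeepSpeed JSON config can look like, written as a Python dict; it assumes the `cpu_offload` flag layered on ZeRO stage 2 as described in the tutorial linked above, so treat field names and values as illustrative rather than authoritative.

```python
# Hypothetical sketch: ZeRO-Offload enabled on top of ZeRO stage 2.
# The "cpu_offload" flag follows the ZeRO-Offload tutorial; consult the
# tutorial for the authoritative option names and additional settings.
import json

ds_config = {
    "train_batch_size": 32,     # illustrative value
    "fp16": {"enabled": True},  # mixed precision is typical for large models
    "zero_optimization": {
        "stage": 2,             # ZeRO-Offload extends ZeRO-2
        "cpu_offload": True,    # offload optimizer states and gradients to host CPU memory
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

The resulting `ds_config.json` would then be passed to the DeepSpeed launcher in the usual way; see the tutorial above for the end-to-end walkthrough.
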
## Additional Memory and Bandwidth Optimizations
### Smart Gradient Accumulation
......
---
layout: single
title: "10x bigger model training on a single GPU with ZeRO-Offload"
excerpt: ""
categories: news
new_post: true
date: 2020-09-09 00:00:00
---
We introduce a new technology called ZeRO-Offload to enable **10X bigger model training on a single GPU**. ZeRO-Offload extends ZeRO-2 to leverage both CPU and GPU memory for training large models. Using a machine with **a single GPU**, our users now can run **models of up to 13 billion parameters** without running out of memory, 10x bigger than the existing approaches, while obtaining competitive throughput. This feature democratizes multi-billion-parameter model training and opens the window for many deep learning practitioners to explore bigger and better models.
* For more information on ZeRO-Offload, see our [press release]( {{ site.press_release_v3 }} ).
* For more information on how to use ZeRO-Offload, see our [ZeRO-Offload tutorial](https://www.deepspeed.ai/tutorials/zero-offload/).
* The source code for ZeRO-Offload can be found in the [DeepSpeed repo](https://github.com/microsoft/deepspeed).
...@@ -10,7 +10,6 @@ efficient, and effective.
<p align="center"><i><b>10x Larger Models</b></i></p>
<p align="center"><i><b>10x Faster Training</b></i></p>
<p align="center"><i><b>Minimal Code Change</b></i></p>
DeepSpeed can train DL models with over a hundred billion parameters on the current
generation of GPU clusters, while achieving over 10x improvement in system performance
compared to the state of the art. Early adopters of DeepSpeed have already produced
...@@ -157,6 +156,9 @@ overview](/features/) for descriptions and usage.
* Activation Partitioning
* Constant Buffer Optimization
* Contiguous Memory Optimization
* [ZeRO-Offload](/features/#zero-offload)
* Leverage both CPU/GPU memory for model training
* Support 10B model training on a single GPU
* [Additional Memory and Bandwidth Optimizations](/features/#additional-memory-and-bandwidth-optimizations)
  * Smart Gradient Accumulation
  * Communication/Computation Overlap
......