OpenDAS / deepspeed · Commits

Unverified commit 0c77f878, authored May 29, 2020 by Shaden Smith, committed via GitHub on May 29, 2020

center images (#244)
parent c7d0b0ca
Showing 1 changed file with 7 additions and 7 deletions
docs/_posts/2020-05-28-fastest-bert-training.md @ 0c77f878
...
@@ -57,7 +57,7 @@ practical scenarios range from a few hundred to a few thousand.

[image]{: .align-center}
Figure 1: Performance evaluation of BERT-Large on a single V100 GPU, comparing
DeepSpeed with NVIDIA and HuggingFace versions of BERT in mixed-sequence length
...
@@ -102,7 +102,7 @@ approach the GPU peak performance, we employ two lines of optimizations in our
own Transformer kernel implementation: advanced fusion, and invertible
operators.

[image]{: .align-center}
Figure 2: Transformer Layer with Pre-LayerNorm Architecture
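
A note for readers of this diff: the pre-LayerNorm architecture in Figure 2 applies LayerNorm *before* the attention and MLP sublayers, with the residual connection bypassing the normalized path. Below is a minimal PyTorch sketch of that dataflow, assuming BERT-Large sizes; the module names are illustrative, and DeepSpeed implements this layer as fused CUDA kernels rather than separate `nn` modules.

```python
import torch.nn as nn

class PreLNTransformerLayer(nn.Module):
    """Pre-LayerNorm layout: normalize before each sublayer, add residual after."""
    def __init__(self, d_model=1024, nhead=16, d_ff=4096, dropout=0.1):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout,
                                          batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                 nn.Linear(d_ff, d_model))
        self.drop = nn.Dropout(dropout)

    def forward(self, x):                      # x: (batch, seq, d_model)
        h = self.ln1(x)                        # normalize first ...
        h, _ = self.attn(h, h, h, need_weights=False)
        x = x + self.drop(h)                   # ... residual bypasses the sublayer
        x = x + self.drop(self.mlp(self.ln2(x)))
        return x
```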
...
@@ -133,7 +133,7 @@ shared memory, we reduce the cost of uncoalesced access to main memory to
better exploit memory bandwidth, resulting in 3% to 5% performance improvement
in the end-to-end training.

[image]{: .align-center}
Figure 3: QKV’s GEMM and transform Kernel-Fusion
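
For context on the fusion in Figure 3: the three Q, K and V projections share the same input, so they can be computed as one wider GEMM followed by a single reshape/transpose into per-head layout, instead of three smaller GEMMs and three transforms. A PyTorch-level sketch of the idea follows; the real optimization lives in a custom CUDA kernel using shared memory, and `FusedQKV` is a hypothetical name, not DeepSpeed's API.

```python
import torch.nn as nn

class FusedQKV(nn.Module):
    """Compute Q, K and V with one (3*d_model x d_model) GEMM instead of three."""
    def __init__(self, d_model=1024, nhead=16):
        super().__init__()
        self.nhead, self.head_dim = nhead, d_model // nhead
        self.qkv = nn.Linear(d_model, 3 * d_model)   # one fused weight matrix

    def forward(self, x):                            # x: (batch, seq, d_model)
        b, s, _ = x.shape
        qkv = self.qkv(x)                            # single large GEMM
        q, k, v = qkv.chunk(3, dim=-1)               # split the fused output
        # One transform into per-head layout: (batch, nhead, seq, head_dim).
        reshape = lambda t: t.reshape(b, s, self.nhead, self.head_dim).transpose(1, 2)
        return reshape(q), reshape(k), reshape(v)
```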
...
@@ -198,15 +198,15 @@ optimization, we are able to reduce the activation memory of the operator by
half, and the reduced memory allows us to train with larger batch sizes, which
once again improves GPU efficiency.

[image]{: .align-center}
Figure 4: DeepSpeed invertible SoftMax operation versus Default PyTorch SoftMax operation

[image]{: .align-center}
Figure 5: DeepSpeed invertible LayerNorm operation versus Default PyTorch LayerNorm operation
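
On the invertible operators in Figures 4 and 5: softmax's gradient can be written purely in terms of its output, dx = y * (dy - sum(dy * y)), so the backward pass can keep only the output and free the input activation, which is what halves the operator's activation memory. A minimal sketch of that principle as a custom autograd function (an illustration of the idea, not DeepSpeed's fused kernel):

```python
import torch

class InvertibleSoftmax(torch.autograd.Function):
    """Softmax that saves only its output for backward; the input is freed."""

    @staticmethod
    def forward(ctx, x, dim=-1):
        y = torch.softmax(x, dim)
        ctx.save_for_backward(y)      # note: x itself is NOT saved
        ctx.dim = dim
        return y

    @staticmethod
    def backward(ctx, dy):
        (y,) = ctx.saved_tensors
        # Gradient expressed entirely in terms of the output y.
        dx = y * (dy - (dy * y).sum(dim=ctx.dim, keepdim=True))
        return dx, None               # no gradient for `dim`
```

LayerNorm admits the same treatment: roughly speaking, its input can be reconstructed from the output together with the saved per-row mean and variance, so only the output and two scalars per row need to be kept.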
...