chenpangpang / transformers · Commits
"vscode:/vscode.git/clone" did not exist on "114295c010dd9c94d48add7a0f091ba6ebdf482b"
Unverified commit 73efe896, authored Mar 12, 2024 by Dries Verachtert, committed by GitHub on Mar 12, 2024
Fix minor typo: softare => software (#29602)
parent 6cc5411d
Showing 1 changed file with 1 addition and 1 deletion (+1 / -1).
docs/source/en/perf_train_gpu_one.md @ 73efe896
...
@@ -65,7 +65,7 @@ training your model with [`Trainer`] or writing a pure PyTorch loop, in which ca
 with 🤗 Accelerate](#using--accelerate).
 If these methods do not result in sufficient gains, you can explore the following options:
-* [Look into building your own custom Docker container with efficient softare prebuilds](#efficient-software-prebuilds)
+* [Look into building your own custom Docker container with efficient software prebuilds](#efficient-software-prebuilds)
 * [Consider a model that uses Mixture of Experts (MoE)](#mixture-of-experts)
 * [Convert your model to BetterTransformer to leverage PyTorch native attention](#using-pytorch-native-attention-and-flash-attention)
...
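
For context, the corrected documentation line points readers toward BetterTransformer and PyTorch native attention. A minimal sketch of what that option looks like in practice, not part of this commit: the gpt2 checkpoint and the prompt are placeholders, and to_bettertransformer() requires the optimum package to be installed.

# Sketch only: illustrates the "BetterTransformer / PyTorch native attention"
# option referenced in the diff above. Checkpoint and prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Requires the `optimum` package; swaps attention layers so they dispatch to
# torch.nn.functional.scaled_dot_product_attention.
model = model.to_bettertransformer()

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)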