OpenDAS / ColossalAI · Commits

Unverified commit c1535ccb, authored Jun 06, 2023 by Baizhou Zhang, committed via GitHub on Jun 06, 2023.

[doc] fix docs about booster api usage (#3898)

Parent: ec9bbc00
Showing 3 changed files with 6 additions and 6 deletions (+6 −6).
colossalai/booster/booster.py (+2 −2)
docs/source/en/features/zero_with_chunk.md (+2 −2)
docs/source/zh-Hans/features/zero_with_chunk.md (+2 −2)
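The per-file +2/−2 counts can be reproduced with Python's standard-library `difflib`, with no ColossalAI install needed. The `before`/`after` lists below are assembled from the lines involved in the English `zero_with_chunk.md` hunks in this commit (context lines included, so the count isolates the two real changes):

```python
import difflib

# Lines from docs/source/en/features/zero_with_chunk.md before the commit
before = [
    "from torch.optim import Adam",
    "from colossalai.booster import Booster",
    "from colossalai.zero import ColoInitContext",
    "optimizer = Adam(model.parameters(), lr=0.001)",
]
# The same region after the commit: Adam is replaced by HybridAdam
after = [
    "from colossalai.nn.optimizer import HybridAdam",
    "from colossalai.booster import Booster",
    "from colossalai.zero import ColoInitContext",
    "optimizer = HybridAdam(model.parameters(), lr=0.001)",
]

diff = list(difflib.unified_diff(before, after, lineterm=""))
# Count added/removed lines, skipping the "+++"/"---" file headers
additions = sum(1 for line in diff if line.startswith("+") and not line.startswith("+++"))
deletions = sum(1 for line in diff if line.startswith("-") and not line.startswith("---"))
print(additions, deletions)  # 2 2, matching the +2 −2 shown for this file
```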
colossalai/booster/booster.py

````diff
@@ -25,11 +25,11 @@ class Booster:
     Examples:
         ```python
         colossalai.launch(...)
-        plugin = GeminiPlugin(stage=3, ...)
+        plugin = GeminiPlugin(...)
         booster = Booster(precision='fp16', plugin=plugin)
         model = GPT2()
-        optimizer = Adam(model.parameters())
+        optimizer = HybridAdam(model.parameters())
         dataloader = Dataloader(Dataset)
         lr_scheduler = LinearWarmupScheduler()
         criterion = GPTLMLoss()
````
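After the commit, the `Booster` docstring example reads as follows. This is a sketch copied from the new side of the hunk; `GPT2`, `Dataloader`, and the elided `launch` arguments are placeholders in the original docstring, so the block is illustrative rather than runnable:

```python
colossalai.launch(...)
plugin = GeminiPlugin(...)
booster = Booster(precision='fp16', plugin=plugin)
model = GPT2()
optimizer = HybridAdam(model.parameters())
dataloader = Dataloader(Dataset)
lr_scheduler = LinearWarmupScheduler()
criterion = GPTLMLoss()
```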
docs/source/en/features/zero_with_chunk.md

````diff
@@ -195,7 +195,7 @@ def get_data(batch_size, seq_len, vocab_size):
 Finally, we define a model which uses Gemini + ZeRO DDP and define our training loop. As we pre-train GPT in this example, we just use a simple language model loss:

 ```python
-from torch.optim import Adam
+from colossalai.nn.optimizer import HybridAdam
 from colossalai.booster import Booster
 from colossalai.zero import ColoInitContext
````
````diff
@@ -211,7 +211,7 @@ def main():
     # build criterion
     criterion = GPTLMLoss()

-    optimizer = Adam(model.parameters(), lr=0.001)
+    optimizer = HybridAdam(model.parameters(), lr=0.001)

     torch.manual_seed(123)
     default_pg = ProcessGroup(tp_degree=args.tp_degree)
````
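Taken together, the two hunks leave the tutorial's import and optimizer lines as below. This is a fragment assembled from the new side of the hunks, not a runnable script — the surrounding `main()` body (model construction, data, training loop) is elided in the diff:

```python
from colossalai.nn.optimizer import HybridAdam
from colossalai.booster import Booster
from colossalai.zero import ColoInitContext

# inside main(), after the model is built:
optimizer = HybridAdam(model.parameters(), lr=0.001)
```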
docs/source/zh-Hans/features/zero_with_chunk.md

The zh-Hans document receives the same change; its context sentence translates as: "Finally, use booster to inject the Gemini + ZeRO DDP features and define the training loop. Since we pre-train GPT in this example, we just use a simple language model loss:"

````diff
@@ -197,7 +197,7 @@ def get_data(batch_size, seq_len, vocab_size):
 ```python
-from torch.optim import Adam
+from colossalai.nn.optimizer import HybridAdam
 from colossalai.booster import Booster
 from colossalai.zero import ColoInitContext
````

````diff
@@ -213,7 +213,7 @@ def main():
     # build criterion
     criterion = GPTLMLoss()

-    optimizer = Adam(model.parameters(), lr=0.001)
+    optimizer = HybridAdam(model.parameters(), lr=0.001)

     torch.manual_seed(123)
     default_pg = ProcessGroup(tp_degree=args.tp_degree)
````