Unverified Commit 769cddcb authored by digger yu, committed by GitHub

fix typo docs/ (#4033)

parent 2d40759a
@@ -9,7 +9,7 @@ When you only have a few GPUs for large model training tasks, **heterogeneous tr
 ## Usage
-At present, Gemini supports compatibility with ZeRO parallel mode, and it is really simple to use Gemini: Inject the feathures of `GeminiPlugin` into training components with `booster`. More instructions of `booster` please refer to [**usage of booster**](../basics/booster_api.md).
+At present, Gemini is compatible with ZeRO parallel mode, and it is really simple to use Gemini: inject the features of `GeminiPlugin` into training components with `booster`. For more instructions on `booster`, please refer to [**usage of booster**](../basics/booster_api.md).
 ```python
 from torchvision.models import resnet18
...
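The sentence fixed above describes the intended workflow: wrap the training components with `booster` using a `GeminiPlugin`. A minimal end-to-end sketch of that workflow follows. It is an editor's illustration, not part of this commit; the launch call, model choice, and hyperparameters are assumptions rather than values from the patched docs.

```python
import torch
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin
from colossalai.nn.optimizer import HybridAdam
from torchvision.models import resnet18

# launch the distributed environment (assumes a torchrun-style launch)
colossalai.launch_from_torch(config={})

# any torch.nn.Module works; resnet18 matches the snippet above
model = resnet18(num_classes=10)
optimizer = HybridAdam(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

# inject the features of GeminiPlugin into the training components via the booster
booster = Booster(plugin=GeminiPlugin())
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)
```

`HybridAdam` is the optimizer the ColossalAI examples commonly pair with `GeminiPlugin`, since it keeps optimizer states on both CPU and GPU to match Gemini's tensor placement.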
@@ -150,7 +150,7 @@ Colossal-AI provides its own optimizer, loss function, and learning rate scheduler. Py
 optimizer = colossalai.nn.Lamb(model.parameters(), lr=1.8e-2, weight_decay=0.1)
 # build loss
 criterion = torch.nn.CrossEntropyLoss()
-# lr_scheduelr
+# lr_scheduler
 lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=gpc.config.NUM_EPOCHS)
 ```
...
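This hunk and the two that follow apply the same `lr_scheduelr` comment fix to snippets that are shown truncated here. For reference, a self-contained version of the snippet above might look like the sketch below, assuming the legacy `colossalai.nn.lr_scheduler.LinearWarmupLR` import, a `./config.py` defining `NUM_EPOCHS`, and a placeholder model; those assumptions are not taken from the patched docs.

```python
import torch
import colossalai
from colossalai.core import global_context as gpc
from colossalai.nn.lr_scheduler import LinearWarmupLR
from torchvision.models import resnet18

# ./config.py is assumed to define NUM_EPOCHS (e.g. NUM_EPOCHS = 10)
colossalai.launch_from_torch(config='./config.py')

model = resnet18(num_classes=10)  # placeholder model
# build optimizer
optimizer = colossalai.nn.Lamb(model.parameters(), lr=1.8e-2, weight_decay=0.1)
# build loss
criterion = torch.nn.CrossEntropyLoss()
# lr_scheduler
lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=gpc.config.NUM_EPOCHS)
```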
@@ -303,7 +303,7 @@ colossalai.launch_from_torch(config=args.config)
 # build loss
 criterion = torch.nn.CrossEntropyLoss()
-# lr_scheduelr
+# lr_scheduler
 lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=gpc.config.NUM_EPOCHS)
 ```
...
@@ -181,7 +181,7 @@ optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=0.1)
 # build loss
 criterion = torch.nn.CrossEntropyLoss()
-# lr_scheduelr
+# lr_scheduler
 lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=NUM_EPOCHS)
 ```
...