OpenDAS / ColossalAI / Commits

Commit ac178ca5, authored Sep 04, 2023 by Hongxin Liu

[legacy] move builder and registry to legacy (#4603)

Parent: 8accecd5

The commit changes 65 files in total; this page shows 5 changed files, with 13 additions and 13 deletions (+13 −13).
Changed files on this page:

- docs/source/zh-Hans/advanced_tutorials/train_vit_using_pipeline_parallelism.md (+6 −6)
- docs/source/zh-Hans/advanced_tutorials/train_vit_with_hybrid_parallelism.md (+4 −4)
- docs/source/zh-Hans/features/gradient_handler.md (+1 −1)
- examples/language/gpt/titans/dataset/webtext.py (+1 −1)
- examples/language/gpt/titans/model/embed.py (+1 −1)
docs/source/zh-Hans/advanced_tutorials/train_vit_using_pipeline_parallelism.md (prose translated from Chinese):

```diff
@@ -32,7 +32,7 @@ import colossalai
 import colossalai.nn as col_nn
 import torch
 import torch.nn as nn
-from colossalai.builder import build_pipeline_model
+from colossalai.legacy.builder import build_pipeline_model
 from colossalai.legacy.engine.schedule import (InterleavedPipelineSchedule,
                                                PipelineSchedule)
 from colossalai.logging import disable_existing_loggers, get_dist_logger
@@ -48,17 +48,17 @@ from torchvision.datasets import CIFAR10
 In general, we provide 3 ways to build a pipelined model:
 
-1. `colossalai.builder.build_pipeline_model_from_cfg`
-2. `colossalai.builder.build_pipeline_model`
+1. `colossalai.legacy.builder.build_pipeline_model_from_cfg`
+2. `colossalai.legacy.builder.build_pipeline_model`
 3. Split the model into stages yourself
 
 When your memory can hold the whole model, you can use the first two methods to build it; otherwise you must split the model yourself. The first two methods first build the whole model on the CPU and then split it, after which you can move the corresponding parts of the model directly to the GPU.
 
-`colossalai.builder.build_pipeline_model_from_cfg()` receives a model config file and can split the model evenly (by layer) or in a balanced way (by parameter size).
+`colossalai.legacy.builder.build_pipeline_model_from_cfg()` receives a model config file and can split the model evenly (by layer) or in a balanced way (by parameter size).
 
-If you are familiar with `PyTorch`, you can use `colossalai.builder.build_pipeline_model()`, which receives a `torch.nn.Sequential` model and splits it evenly by layer.
+If you are familiar with `PyTorch`, you can use `colossalai.legacy.builder.build_pipeline_model()`, which receives a `torch.nn.Sequential` model and splits it evenly by layer.
 
-In this tutorial, we will rewrite [TIMM/ViT](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py) to `torch.nn.Sequential` and then use `colossalai.builder.build_pipeline_model()` to build the pipeline model.
+In this tutorial, we will rewrite [TIMM/ViT](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py) to `torch.nn.Sequential` and then use `colossalai.legacy.builder.build_pipeline_model()` to build the pipeline model.
 
 When the data is **one** `Tensor`, you can use the positional argument of your model's `forward()` to get the data tensor. For the first pipeline stage, the first positional argument of `forward()` is the data tensor loaded from the data loader. For the other stages, the first positional argument of `forward()` is the output tensor of the previous stage. Note that if the stage is not the last one, the return value of `forward()` must be a `Tensor`.
```
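The tutorial text in this diff says `build_pipeline_model` splits a `torch.nn.Sequential` model evenly by layer. A minimal sketch of that even split, with contiguous runs of layer indices assigned to each stage; the function name and shape here are illustrative, not the real Colossal-AI API:

```python
# Hedged sketch of "split evenly by layer": give each pipeline stage a
# contiguous, near-equal run of layer indices (illustrative only).
def split_evenly(num_layers: int, num_stages: int):
    base, rem = divmod(num_layers, num_stages)
    stages, start = [], 0
    for s in range(num_stages):
        # earlier stages absorb the remainder one layer at a time
        size = base + (1 if s < rem else 0)
        stages.append(list(range(start, start + size)))
        start += size
    return stages

# e.g. 10 transformer blocks over 4 stages
print(split_evenly(10, 4))  # [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```

Each stage would then keep only its own sublist of layers and move them to its GPU, which is why the whole model must fit in (CPU) memory first.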
docs/source/zh-Hans/advanced_tutorials/train_vit_with_hybrid_parallelism.md (prose translated from Chinese):

````diff
@@ -256,8 +256,8 @@ SEQ_LENGTH = (IMG_SIZE // PATCH_SIZE) ** 2 + 1  # add 1 for cls token
 ### Build the pipeline model (`/hybrid_parallel/model/vit.py`)
 
 Colossal-AI provides two methods for building a pipeline model from an existing model.
-- `colossalai.builder.build_pipeline_model_from_cfg`
-- `colossalai.builder.build_pipeline_model`
+- `colossalai.legacy.builder.build_pipeline_model_from_cfg`
+- `colossalai.legacy.builder.build_pipeline_model`
 
 Besides, you can also use Colossal-AI to build a pipeline model from scratch.
 ```python
@@ -266,11 +266,11 @@ from typing import Callable
 import inspect
 import torch
 from colossalai import nn as col_nn
-from colossalai.registry import LAYERS, MODELS
+from colossalai.legacy.registry import LAYERS, MODELS
 from colossalai.logging import get_dist_logger
 from colossalai.core import global_context as gpc
 from colossalai.context import ParallelMode
-from colossalai.builder.pipeline import partition_uniform
+from colossalai.legacy.builder.pipeline import partition_uniform
 from torch import dtype, nn
 from model_zoo.vit.vit import ViTBlock, ViTEmbedding, ViTHead
 
 @MODELS.register_module
````
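The `@MODELS.register_module` decorator in the diff above is the registry pattern that this commit moves under `colossalai.legacy.registry`. A minimal stand-in sketch of how such a registry works, so the import change is easier to place; the `Registry` class and `build` method here are illustrative, not the real Colossal-AI API:

```python
# Toy registry: maps class names to classes so components can be
# instantiated from a config dict (illustrative stand-in only).
class Registry:
    def __init__(self, name):
        self.name = name
        self._registry = {}

    def register_module(self, cls):
        # usable as a bare decorator, as in `@MODELS.register_module`
        self._registry[cls.__name__] = cls
        return cls

    def build(self, cfg):
        cfg = dict(cfg)  # don't mutate the caller's config
        cls = self._registry[cfg.pop("type")]
        return cls(**cfg)

MODELS = Registry("models")

@MODELS.register_module
class TinyViT:
    def __init__(self, depth=2):
        self.depth = depth

# build by name from a config, the way registered models are used
model = MODELS.build({"type": "TinyViT", "depth": 4})
```

Moving the registry module only changes the import path; registered names and the decorator usage stay the same.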
docs/source/zh-Hans/features/gradient_handler.md (prose translated from Chinese):

````diff
@@ -25,7 +25,7 @@
 3. Implement `handle_gradient`
 
 ```python
-from colossalai.registry import GRADIENT_HANDLER
+from colossalai.legacy.registry import GRADIENT_HANDLER
 from colossalai.legacy.engine.gradient_handler import BaseGradientHandler
````
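The doc step above registers a handler subclass and implements `handle_gradient`. A self-contained toy sketch of that pattern, with stand-ins for `GRADIENT_HANDLER` and `BaseGradientHandler` (the real classes live in `colossalai.legacy.registry` and `colossalai.legacy.engine.gradient_handler`; everything below is illustrative):

```python
# Toy stand-ins for the registry and base class (not the real API).
class Registry:
    def __init__(self):
        self._registry = {}
    def register_module(self, cls):
        self._registry[cls.__name__] = cls
        return cls

GRADIENT_HANDLER = Registry()

class BaseGradientHandler:
    def __init__(self, model, optimizer):
        self._model = model
        self._optimizer = optimizer
    def handle_gradient(self):
        raise NotImplementedError

@GRADIENT_HANDLER.register_module
class MeanGradientHandler(BaseGradientHandler):
    """Toy handler: averages per-worker gradients in place, mimicking
    the all-reduce a real data-parallel handler would perform."""
    def handle_gradient(self):
        for p in self._model:  # here, model is a list of param dicts
            p["grad"] = sum(p["grads"]) / len(p["grads"])

params = [{"grads": [1.0, 3.0]}, {"grads": [2.0, 4.0]}]
handler = MeanGradientHandler(params, optimizer=None)
handler.handle_gradient()  # params[0]["grad"] == 2.0, params[1]["grad"] == 3.0
```

The engine would call `handle_gradient()` after backward and before the optimizer step, which is why only the import path, not the handler code, changes in this commit.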
examples/language/gpt/titans/dataset/webtext.py:

```diff
@@ -6,7 +6,7 @@ import torch
 from torch.utils.data import Dataset
 from transformers import GPT2Tokenizer
 
-from colossalai.registry import DATASETS
+from colossalai.legacy.registry import DATASETS
 
 
 @DATASETS.register_module
```
examples/language/gpt/titans/model/embed.py:

```diff
@@ -8,11 +8,11 @@ from torch.nn.parameter import Parameter
 from colossalai.context import ParallelMode, seed
 from colossalai.core import global_context as gpc
+from colossalai.legacy.registry import LAYERS, LOSSES, MODELS
 from colossalai.nn.layer.base_layer import ParallelLayer
 from colossalai.nn.layer.parallel_1d._utils import gather_forward_split_backward, reduce_grad, reduce_input
 from colossalai.nn.layer.parallel_1d.layers import Linear1D_Row
 from colossalai.nn.layer.utils import divide
-from colossalai.registry import LAYERS, LOSSES, MODELS
 from colossalai.utils import get_current_device
```
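Every file on this page makes the same mechanical change: `colossalai.builder` becomes `colossalai.legacy.builder` and `colossalai.registry` becomes `colossalai.legacy.registry`. A hedged sketch of that rewrite as a string transformation (the helper name is hypothetical; only the two module paths come from the diff):

```python
import re

def migrate_imports(source: str) -> str:
    # Rewrite only the two namespaces moved in this commit. The pattern
    # requires "colossalai." directly before builder/registry, so already
    # migrated "colossalai.legacy.*" paths are left untouched.
    return re.sub(r"colossalai\.(builder|registry)\b",
                  r"colossalai.legacy.\1", source)

print(migrate_imports("from colossalai.builder import build_pipeline_model"))
# from colossalai.legacy.builder import build_pipeline_model
```

Dotted suffixes such as `colossalai.builder.pipeline` are also handled, since the rewrite keeps everything after the matched segment.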