OpenDAS / ColossalAI · Commits

Commit 2c45efc3 (unverified)
html refactor (#555)

Authored Mar 31, 2022 by Liang Bowen; committed via GitHub on Mar 31, 2022.
Parent: d1211148

Changes: 133 total; this page shows 20 changed files with 66 additions and 87 deletions (+66 −87).
Changed files on this page:

- colossalai/amp/__init__.py (+1 −1)
- colossalai/amp/apex_amp/__init__.py (+1 −1)
- colossalai/builder/builder.py (+8 −7)
- colossalai/builder/pipeline.py (+8 −6)
- colossalai/communication/utils.py (+3 −3)
- colossalai/context/parallel_context.py (+2 −2)
- colossalai/context/random/seed_manager.py (+4 −3)
- colossalai/nn/layer/parallel_3d/_operation.py (+1 −1)
- colossalai/nn/loss/loss_2d.py (+1 −1)
- colossalai/nn/loss/loss_2p5d.py (+1 −1)
- colossalai/nn/loss/loss_3d.py (+1 −1)
- colossalai/nn/loss/loss_moe.py (+2 −2)
- colossalai/trainer/_trainer.py (+1 −2)
- colossalai/utils/memory_tracer/async_memtracer.py (+16 −15)
- colossalai/utils/profiler/prof_utils.py (+16 −15)
- docs/colossalai/colossalai.amp.apex_amp.apex_amp.rst (+0 −5)
- docs/colossalai/colossalai.amp.apex_amp.rst (+0 −6)
- docs/colossalai/colossalai.amp.naive_amp.grad_scaler.base_grad_scaler.rst (+0 −5)
- docs/colossalai/colossalai.amp.naive_amp.grad_scaler.constant_grad_scaler.rst (+0 −5)
- docs/colossalai/colossalai.amp.naive_amp.grad_scaler.dynamic_grad_scaler.rst (+0 −5)
colossalai/amp/__init__.py

@@ -19,7 +19,7 @@ def convert_to_amp(model: nn.Module, optimizer: Optimizer, criterion: _Loss, mod
        optimizer (:class:`torch.optim.Optimizer`): your optimizer object.
        criterion (:class:`torch.nn.modules.loss._Loss`): your loss function object.
        mode (:class:`colossalai.amp.AMP_TYPE`): amp mode.
-       amp_config (:class:`colossalai.context.Config` or dict): configuration for different amp modes
+       amp_config (Union[:class:`colossalai.context.Config`, dict]): configuration for different amp modes.

    Returns:
        A tuple (model, optimizer, criterion).
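The updated annotation says `amp_config` may be either a `colossalai.context.Config` or a plain `dict`. A minimal, self-contained sketch of how such a Union argument is typically normalized and then dispatched on `mode` (the names below are hypothetical stand-ins, not ColossalAI's real implementation):

```python
from enum import Enum

class AMP_TYPE(Enum):
    # stand-in for colossalai.amp.AMP_TYPE
    APEX = 'apex'
    TORCH = 'torch'
    NAIVE = 'naive'

def normalize_amp_config(amp_config):
    """Accept a Config-like mapping or a plain dict and return a plain dict."""
    if amp_config is None:
        return {}
    return dict(amp_config)  # a Config behaves like a mapping, so dict() covers both

def select_amp_backend(mode, amp_config=None):
    """Hypothetical helper: pick an amp backend name by mode."""
    cfg = normalize_amp_config(amp_config)
    backends = {AMP_TYPE.APEX: 'apex_amp',
                AMP_TYPE.TORCH: 'torch_amp',
                AMP_TYPE.NAIVE: 'naive_amp'}
    return backends[mode], cfg

backend, cfg = select_amp_backend(AMP_TYPE.TORCH, {'init_scale': 2**16})
```

Accepting `dict` alongside `Config` keeps call sites simple: users can pass a literal dict without constructing a `Config` first.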
colossalai/amp/apex_amp/__init__.py

@@ -9,7 +9,7 @@ def convert_to_apex_amp(model: nn.Module, optimizer: Optimizer, amp_config):
    Args:
        model (:class:`torch.nn.Module`): your model object.
        optimizer (:class:`torch.optim.Optimizer`): your optimizer object.
-       amp_config (:class:colossalai.context.Config or dict): configuration for initializing apex_amp.
+       amp_config (Union[:class:`colossalai.context.Config`, dict]): configuration for initializing apex_amp.

    The ``amp_config`` should include parameters below:
    ::
colossalai/builder/builder.py

@@ -29,8 +29,8 @@ def build_from_registry(config, registry: Registry):
    is specified by `registry`.

    Note:
        the `config` is used to construct the return object such as `LAYERS`, `OPTIMIZERS`
        and other support types in `registry`. The `config` should contain
        all required parameters of corresponding object. The details of support
        types in `registry` and the `mod_type` in `config` could be found in
        `registry <https://github.com/hpcaitech/ColossalAI/blob/main/colossalai/registry/__init__.py>`_.

@@ -40,10 +40,11 @@ def build_from_registry(config, registry: Registry):
            used in the construction of the return object.
        registry (:class:`Registry`): A registry specifying the type of the return object

-   Returns: A Python object specified by `registry`
+   Returns:
+       A Python object specified by `registry`.

    Raises:
-       Exception: Raises an Exception if an error occurred when building from registry
+       Exception: Raises an Exception if an error occurred when building from registry.
    """
    config_ = config.copy()    # keep the original config untouched
    assert isinstance(
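As the hunk shows, `build_from_registry` copies the config (keeping the original untouched) and constructs an object of the type named in it. A minimal sketch of that registry pattern in plain Python (hypothetical simplification, not ColossalAI's actual `Registry` class):

```python
class Registry:
    """Tiny name -> class registry, sketching the lookup build_from_registry relies on."""
    def __init__(self, name):
        self.name = name
        self._modules = {}

    def register_module(self, cls):
        # used as a decorator: records the class under its own name
        self._modules[cls.__name__] = cls
        return cls

    def get_module(self, name):
        return self._modules[name]

def build_from_registry(config, registry):
    config_ = config.copy()          # keep the original config untouched
    mod_type = config_.pop('type')   # the registered class name
    cls = registry.get_module(mod_type)
    return cls(**config_)            # remaining keys become constructor kwargs

OPTIMIZERS = Registry('optimizers')

@OPTIMIZERS.register_module
class SGD:
    def __init__(self, lr):
        self.lr = lr

opt = build_from_registry({'type': 'SGD', 'lr': 0.1}, OPTIMIZERS)
```

Copying the config before `pop('type')` is what lets the caller reuse the same config dict for several builds.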
colossalai/builder/pipeline.py

@@ -163,12 +163,14 @@ def count_layer_params(layers):

 def build_pipeline_model_from_cfg(config,
                                   num_chunks: int = 1,
                                   partition_method: str = 'parameter',
                                   verbose: bool = False):
-    """An intializer to split the model into different stages for pipeline parallelism.
+    """An initializer to split the model into different stages for pipeline parallelism.

    An example for the model config is shown below. The class VisionTransformerFromConfig should
    inherit colossalai.nn.model.ModelFromConfig to allow this initializer to build model from a sequence
    of layer configurations.
    ::

        model_config = dict(
            type='VisionTransformerFromConfig',
            embedding_cfg=dict(...),
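`partition_method='parameter'` balances pipeline stages by parameter count rather than by layer count. A rough, self-contained sketch of such a greedy balance over contiguous layers (illustrative only; ColossalAI's actual partitioner differs):

```python
def partition_by_parameter(param_counts, num_stages):
    """Greedily split an ordered list of per-layer parameter counts into
    num_stages contiguous stages with roughly equal total parameters."""
    total = sum(param_counts)
    target = total / num_stages
    stages, current, acc = [], [], 0
    for i, n in enumerate(param_counts):
        current.append(i)
        acc += n
        # close the stage once it reaches the target, but keep enough
        # layers in reserve for the remaining stages
        remaining_layers = len(param_counts) - i - 1
        remaining_stages = num_stages - len(stages) - 1
        if acc >= target and remaining_stages > 0 and remaining_layers >= remaining_stages:
            stages.append(current)
            current, acc = [], 0
    stages.append(current)
    return stages

# four layers with uneven sizes, split over two stages
stages = partition_by_parameter([100, 300, 250, 150], 2)
```

With these counts both stages end up at 400 parameters, whereas splitting by layer count alone would also give 2+2 here; the two methods diverge when layer sizes are skewed (e.g. a huge embedding layer).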
colossalai/communication/utils.py

@@ -45,7 +45,7 @@ def recv_tensor_meta(tensor_shape, prev_rank=None):
        prev_rank (int): The rank of the source of the tensor.

    Returns:
-       torch.Size: The shape of the tensor to be received.
+       :class:`torch.Size`: The shape of the tensor to be received.
    """
    if tensor_shape is None:
        if prev_rank is None:

@@ -71,7 +71,7 @@ def split_tensor_into_1d_equal_chunks(tensor, new_buffer=False):
        new_buffer (bool, optional): Whether to use a new buffer to store sliced tensor.

    Returns:
-       torch.Tensor: The split tensor
+       :class:`torch.Size`: The split tensor
    """
    partition_size = torch.numel(tensor) // gpc.get_world_size(ParallelMode.PARALLEL_1D)
    start_index = partition_size * gpc.get_local_rank(ParallelMode.PARALLEL_1D)

@@ -92,7 +92,7 @@ def gather_split_1d_tensor(tensor):
    Args:
        tensor (torch.Tensor): Tensor to be gathered after communication.

    Returns:
-       gathered (torch.Tensor): The gathered tensor
+       :class:`torch.Size`: The gathered tensor.
    """
    world_size = gpc.get_world_size(ParallelMode.PARALLEL_1D)
    numel = torch.numel(tensor)
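The hunks above show the indexing scheme: `split_tensor_into_1d_equal_chunks` takes the local rank's contiguous slice of a flattened tensor (`partition_size * local_rank` onward), and `gather_split_1d_tensor` reassembles the full tensor from the per-rank chunks. The same logic on plain Python lists (a sketch; the real code operates on torch tensors via the global parallel context):

```python
def split_into_1d_equal_chunks(values, world_size, local_rank):
    """Return the contiguous slice of `values` owned by `local_rank`."""
    partition_size = len(values) // world_size           # numel // world_size
    start = partition_size * local_rank                  # partition_size * local_rank
    return values[start:start + partition_size]

def gather_split_1d(chunks):
    """Concatenate per-rank chunks back into the full flat sequence."""
    gathered = []
    for chunk in chunks:
        gathered.extend(chunk)
    return gathered

data = list(range(8))
chunks = [split_into_1d_equal_chunks(data, 4, r) for r in range(4)]
restored = gather_split_1d(chunks)
```

In the real functions the "gather" step is a collective (`all_gather`) across the 1D parallel group rather than a local loop.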
colossalai/context/random/seed_manager.py

@@ -34,6 +34,7 @@ class SeedManager:
    def set_state(self, parallel_mode: ParallelMode, state: Tensor):
        """Sets the state of the seed manager for `parallel_mode`.

        Args:
            parallel_mode (:class:`colossalai.context.ParallelMode`): The chosen parallel mode.
            state (:class:`torch.Tensor`): the state to be set.

@@ -66,9 +67,9 @@ class SeedManager:
            seed (int): The seed to be added.
            overwrtie (bool, optional): Whether allows to overwrite the seed that has been set already

        Raises:
            AssertionError: Raises an AssertionError if `parallel_mode` is not an instance of
                :class:`colossalai.context.ParallelMode` or the seed for `parallel_mode` has been added.
        """
        assert isinstance(parallel_mode, ParallelMode), 'A valid ParallelMode must be provided'
        if overwrtie is False:
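`SeedManager` keeps one RNG state per parallel mode and swaps states when the active mode changes, so e.g. tensor-parallel ranks can draw from a different stream than data-parallel code. A self-contained sketch of that bookkeeping using Python's `random` module states in place of CUDA RNG states (a hypothetical simplification):

```python
import random

class SeedManagerSketch:
    """Per-mode RNG states, sketching SeedManager's add_seed/set_state/set_mode."""
    def __init__(self):
        self._seed_states = {}

    def add_seed(self, mode, seed, overwrite=False):
        if not overwrite:
            assert mode not in self._seed_states, f'seed for {mode} already added'
        saved = random.getstate()            # keep the caller's RNG state untouched
        random.seed(seed)
        self._seed_states[mode] = random.getstate()
        random.setstate(saved)

    def set_state(self, mode, state):
        assert mode in self._seed_states, f'{mode} not found in seed manager'
        self._seed_states[mode] = state

    def set_mode(self, mode):
        # activate the stored state for this mode
        random.setstate(self._seed_states[mode])

mgr = SeedManagerSketch()
mgr.add_seed('data', 42)
mgr.add_seed('tensor', 1024)
mgr.set_mode('data')
a = random.random()
mgr.set_mode('data')   # restore the stored state -> the same draw repeats
b = random.random()
```

The real class additionally syncs the outgoing mode's state back into the table before switching, so streams advance instead of repeating.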
colossalai/nn/loss/loss_2d.py

@@ -27,7 +27,7 @@ class CrossEntropyLoss2D(_Loss):
        reduce (bool, optional)
        label_smoothing (float, optional)

-   More details about args, kwargs and torch.nn.functional.cross_entropy could be found in
+   More details about ``args``, ``kwargs`` and ``torch.nn.functional.cross_entropy`` could be found in
    `Cross_entropy <https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html#torch.nn.functional.cross_entropy>`_.
    """
colossalai/nn/loss/loss_2p5d.py

@@ -27,7 +27,7 @@ class CrossEntropyLoss2p5D(_Loss):
        reduce (bool, optional)
        label_smoothing (float, optional)

-   More details about args, kwargs and torch.nn.functional.cross_entropy could be found in
+   More details about ``args``, ``kwargs`` and ``torch.nn.functional.cross_entropy`` could be found in
    `Cross_entropy <https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html#torch.nn.functional.cross_entropy>`_.
    """

    def __init__(self, reduction=True, *args, **kwargs):
colossalai/nn/loss/loss_3d.py

@@ -27,7 +27,7 @@ class CrossEntropyLoss3D(_Loss):
        reduce (bool, optional)
        label_smoothing (float, optional)

-   More details about args, kwargs and torch.nn.functional.cross_entropy could be found in
+   More details about ``args``, ``kwargs`` and ``torch.nn.functional.cross_entropy`` could be found in
    `Cross_entropy <https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html#torch.nn.functional.cross_entropy>`_.
    """
colossalai/nn/loss/loss_moe.py

@@ -23,7 +23,7 @@ class MoeCrossEntropyLoss(_Loss):
        reduction (str, optional)
        label_smoothing (float, optional)

-   More details about args, kwargs and torch.nn.functional.cross_entropy could be found in
+   More details about ``args``, ``kwargs`` and ``torch.nn.functional.cross_entropy`` could be found in
    `Cross_entropy <https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html#torch.nn.functional.cross_entropy>`_.
    """

@@ -40,7 +40,7 @@ class MoeCrossEntropyLoss(_Loss):
        input (:class:`torch.tensor`): Predicted unnormalized scores (often referred to as logits).
        target (:class:`torch.tensor`): Ground truth class indices or class probabilities.

-       More details about args, kwargs and torch.nn.functional.cross_entropy could be found in
+       More details about ``args``, ``kwargs`` and ``torch.nn.functional.cross_entropy`` could be found in
        `Cross_entropy <https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html#torch.nn.functional.cross_entropy>`_.
        """
        main_loss = self.loss(*args)
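All four loss classes above wrap `torch.nn.functional.cross_entropy`, forwarding extras such as `label_smoothing`. The quantity it computes, sketched in pure Python for a single sample (illustrative only, no torch): with smoothing `eps`, the target distribution puts `1 - eps` extra weight on the true class and `eps / num_classes` on every class.

```python
import math

def cross_entropy(logits, target, label_smoothing=0.0):
    """Cross entropy for one sample: log-softmax over logits, then the
    negative log-likelihood under the label-smoothed target distribution."""
    m = max(logits)                                   # shift for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]
    n = len(logits)
    eps = label_smoothing
    # weight on class i: (1 - eps) if i == target, plus eps / n uniformly
    return -sum(((1 - eps) if i == target else 0.0) * lp + (eps / n) * lp
                for i, lp in enumerate(log_probs))

loss = cross_entropy([2.0, 0.0], target=0)
```

With `label_smoothing=0` this reduces to the ordinary negative log-probability of the target class.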
colossalai/trainer/_trainer.py

@@ -307,8 +307,7 @@ class Trainer:
            max_steps (int, optional): Maximum number of running iterations.
            test_dataloader (:class:`torch.utils.data.DataLoader`, optional): DataLoader for validation.
            test_interval (int, optional): Interval of validation
-           hooks (list[`BaseHook <https://github.com/hpcaitech/ColossalAI/tree/main/colossalai/trainer/hooks>`_],
-               optional): A list of hooks used in training.
+           hooks (list[BaseHook], optional): A list of hooks used in training.
            display_progress (bool, optional): If True, a progress bar will be displayed.
        """
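`Trainer.fit` accepts a list of `BaseHook` objects whose callbacks fire around the training loop. A minimal sketch of that hook protocol (hypothetical and heavily simplified relative to ColossalAI's `colossalai/trainer/hooks`):

```python
class BaseHook:
    # no-op defaults, so concrete hooks override only the events they need
    def before_train(self, trainer): pass
    def after_train_iter(self, trainer): pass
    def after_train(self, trainer): pass

class StepCounterHook(BaseHook):
    """Example hook: count training iterations."""
    def before_train(self, trainer):
        self.steps = 0
    def after_train_iter(self, trainer):
        self.steps += 1

class TrainerSketch:
    def fit(self, num_steps, hooks=None):
        hooks = hooks or []
        for h in hooks:
            h.before_train(self)
        for _ in range(num_steps):
            # ... forward / backward / optimizer step would happen here ...
            for h in hooks:
                h.after_train_iter(self)
        for h in hooks:
            h.after_train(self)

counter = StepCounterHook()
TrainerSketch().fit(5, hooks=[counter])
```

The hook list keeps logging, checkpointing, and LR scheduling out of the core loop: each concern is one hook object appended to `hooks`.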
colossalai/utils/memory_tracer/async_memtracer.py

@@ -21,6 +21,7 @@ class AsyncMemoryMonitor:
    :type power: int

    Usage:
+   ::

    ```python
    async_mem_monitor = AsyncMemoryMonitor()
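`AsyncMemoryMonitor` samples memory usage on a background worker between `start()` and `finish()` so the peak during a workload can be reported afterward. A sketch of that pattern with a generic sampling function in place of CUDA memory queries (hypothetical; the real class measures `torch.cuda` memory):

```python
import threading
import time

class AsyncMonitorSketch:
    """Poll `sample_fn` on a background thread; finish() returns the peak seen."""
    def __init__(self, sample_fn, interval=0.001):
        self._sample_fn = sample_fn
        self._interval = interval
        self._peak = 0
        self._running = False
        self._thread = None

    def start(self):
        self._peak = 0
        self._running = True
        self._thread = threading.Thread(target=self._loop)
        self._thread.start()

    def _loop(self):
        while self._running:
            self._peak = max(self._peak, self._sample_fn())
            time.sleep(self._interval)

    def finish(self):
        self._running = False
        self._thread.join()
        return self._peak

samples = [10, 70, 30]          # pretend memory readings, cycled by sample()
calls = []
def sample():
    v = samples[len(calls) % len(samples)]
    calls.append(v)
    return v

monitor = AsyncMonitorSketch(sample, interval=0.0)
monitor.start()
while len(calls) < 3:           # stand-in for the workload being monitored
    time.sleep(0.001)
peak = monitor.finish()
```

Sampling from a separate thread is what makes the monitor "async": the workload never pauses to record its own memory use.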
colossalai/utils/profiler/prof_utils.py

@@ -73,6 +73,7 @@ class ProfilerContext(object):
    """
    Profiler context manager

    Usage:
+   ::

    ```python
    world_size = 4
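`ProfilerContext` is a context manager: profiling starts on entering the `with` block and results are collected on exit. The generic shape of such a context manager, sketched with wall-clock timing standing in for the real profilers (hypothetical names):

```python
import time

class TimerProfilerSketch:
    """Time the body of a `with` block, mimicking the enter/exit shape of a
    profiler context manager."""
    def __enter__(self):
        self._t0 = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.elapsed = time.perf_counter() - self._t0
        return False   # do not swallow exceptions raised in the block

with TimerProfilerSketch() as prof:
    total = sum(range(100000))   # the code being profiled
```

Putting setup and teardown in `__enter__`/`__exit__` guarantees collection runs even if the profiled code raises.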
docs/colossalai/colossalai.amp.apex_amp.apex_amp.rst (deleted, 100644 → 0)

-   colossalai.amp.apex\_amp.apex\_amp
-   ==================================
-
-   .. automodule:: colossalai.amp.apex_amp.apex_amp
-      :members:

docs/colossalai/colossalai.amp.apex_amp.rst

@@ -3,9 +3,3 @@ colossalai.amp.apex\_amp
    .. automodule:: colossalai.amp.apex_amp
       :members:

-   .. toctree::
-      :maxdepth: 2
-
-      colossalai.amp.apex_amp.apex_amp

docs/colossalai/colossalai.amp.naive_amp.grad_scaler.base_grad_scaler.rst (deleted, 100644 → 0)

-   colossalai.amp.naive\_amp.grad\_scaler.base\_grad\_scaler
-   =========================================================
-
-   .. automodule:: colossalai.amp.naive_amp.grad_scaler.base_grad_scaler
-      :members:

docs/colossalai/colossalai.amp.naive_amp.grad_scaler.constant_grad_scaler.rst (deleted, 100644 → 0)

-   colossalai.amp.naive\_amp.grad\_scaler.constant\_grad\_scaler
-   =============================================================
-
-   .. automodule:: colossalai.amp.naive_amp.grad_scaler.constant_grad_scaler
-      :members:

docs/colossalai/colossalai.amp.naive_amp.grad_scaler.dynamic_grad_scaler.rst (deleted, 100644 → 0)

-   colossalai.amp.naive\_amp.grad\_scaler.dynamic\_grad\_scaler
-   ============================================================
-
-   .. automodule:: colossalai.amp.naive_amp.grad_scaler.dynamic_grad_scaler
-      :members: