OpenDAS / ColossalAI · Commits

Commit ac0d30fe (unverified)
Authored Jan 06, 2023 by ExtremeViscent; committed by GitHub on Jan 06, 2023

[NFC] polish batch_norm_handler.py code style (#2359)

Parent: e11a005c

Showing 1 changed file with 8 additions and 8 deletions.
colossalai/auto_parallel/tensor_shard/deprecated/op_handler/batch_norm_handler.py (view file @ ac0d30fe)
@@ -2,9 +2,9 @@ import operator
 from functools import reduce

 import torch

-from colossalai.auto_parallel.tensor_shard.deprecated._utils import \
-    ignore_sharding_exception
-from colossalai.auto_parallel.tensor_shard.deprecated.sharding_strategy import (ShardingStrategy, StrategiesVector)
+from colossalai.auto_parallel.tensor_shard.deprecated._utils import ignore_sharding_exception
+from colossalai.auto_parallel.tensor_shard.deprecated.sharding_strategy import ShardingStrategy, StrategiesVector
 from .operator_handler import OperatorHandler
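Both edits in this hunk are purely stylistic, consistent with the commit's NFC (no functional change) tag: the backslash line continuation and the parenthesized import list are each collapsed into a single-line import.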
@@ -76,19 +76,19 @@ class BatchNormHandler(OperatorHandler):
        Argument:
            sharding_size_forward(int): The forward activation will be divided
                into sharding_size_forward number partions.
            sharding_size_backward_activation(int): The backward activation will
                be divided into sharding_size_backward_activation number partions.
            sharding_size_weight(int): The backward weight will be divided
                into sharding_size_weight number partions.

        Return:
            memory_cost(Tuple[float]): Memory cost per device with this
                specific strategy, the first element of this tuple is forward
                memory cost, and the second element of this tuple is backward
                memory cost.
            memory_cost_forward(float): Memory cost of forward activation per
                device with this specific strategy.
            memory_cost_backward_activation(float): Memory cost of backward activation
                per device with this specific strategy.
        '''
        # compute the memory cost of this strategy
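The docstring above fixes the interface but leaves the arithmetic implicit. Below is a minimal sketch of such a per-device estimate; all names are illustrative assumptions (numel_* are element counts of the full unsharded tensors, size_per_elem_bytes is the element size), not the handler's actual attributes:

def estimate_memory_cost(numel_output, numel_input, numel_weight,
                         sharding_size_forward,
                         sharding_size_backward_activation,
                         sharding_size_weight,
                         size_per_elem_bytes=4):
    # Forward: each device holds 1 / sharding_size_forward of the output activation.
    memory_cost_forward = numel_output / sharding_size_forward * size_per_elem_bytes
    # Backward: the input-activation gradient and the weight gradient are each
    # partitioned by their own sharding size.
    memory_cost_backward_activation = numel_input / sharding_size_backward_activation * size_per_elem_bytes
    memory_cost_backward_weight = numel_weight / sharding_size_weight * size_per_elem_bytes
    memory_cost_backward = memory_cost_backward_activation + memory_cost_backward_weight
    # Matches the docstring's Tuple[float]: (forward cost, backward cost).
    return memory_cost_forward, memory_cost_backward

For example, with a 64 x 256 activation (16384 elements) sharded four ways, the forward cost under this sketch is 16384 / 4 * 4 = 16384.0 bytes per device, a quarter of the replicated cost.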
@@ -458,7 +458,7 @@ class BatchNormHandler(OperatorHandler):
        norm_handler.register_strategy()
        for strategy in norm_handler.strategies_vector:
            print(f'{strategy.name}, computation_cost: {strategy.compute_cost}, memory_cost: {strategy.memory_cost}')

    Output:
        RS0 = RS0 x S0, computation_cost: 131072, memory_cost: 524288.0
        RS1 = RS1 x S1, computation_cost: 131072, memory_cost: 524288.0
...
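A note on the strategy names in the output above: in ColossalAI's auto-parallel sharding notation, R marks a replicated tensor dimension and S0/S1 mark dimensions sharded along device-mesh axis 0 or 1, so 'RS0 = RS0 x S0' reads as output spec = input spec x weight spec. A tiny illustrative parse under that assumed 'output = input x weight' format (not a library API):

# Decode a strategy name under the assumed 'output = input x weight' format.
name = 'RS0 = RS0 x S0'
output_spec, operands = (part.strip() for part in name.split('='))
input_spec, weight_spec = (part.strip() for part in operands.split(' x '))
print(output_spec, input_spec, weight_spec)  # -> RS0 RS0 S0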