OpenDAS / ColossalAI · Commits

Commit 5da03c93
Authored Nov 08, 2022 by Ziyue Jiang
Committed by binmakeswell on Nov 09, 2022
[NFC] polish colossalai/amp/torch_amp/_grad_scaler.py code style (#1823)
Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>

parent 90833b45
Showing 1 changed file with 7 additions and 5 deletions.

colossalai/amp/torch_amp/_grad_scaler.py (+7, -5)
@@ -3,16 +3,18 @@
 # modified from https://github.com/pytorch/pytorch/blob/master/torch/cuda/amp/grad_scaler.py
 # to support tensor parallel
-import torch
-from collections import defaultdict, abc
 import warnings
+from collections import abc, defaultdict
 from enum import Enum
 from typing import Any, Dict, List, Optional, Tuple
-from colossalai.context import ParallelMode
+
+import torch
 import torch.distributed as dist
-from colossalai.core import global_context as gpc
-from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors
 from packaging import version
+from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors
+
+from colossalai.context import ParallelMode
+from colossalai.core import global_context as gpc


 class _MultiDeviceReplicator(object):
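This is a style-only ([NFC], "no functional change") reordering of the imports. For readability, here is how the import block reads after the commit, reconstructed directly from the hunk above; the grouping is consistent with isort-style conventions: standard-library modules first, then third-party packages, then first-party colossalai modules, with a blank line between groups.

# Standard library
import warnings
from collections import abc, defaultdict
from enum import Enum
from typing import Any, Dict, List, Optional, Tuple

# Third-party packages
import torch
import torch.distributed as dist
from packaging import version
from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors

# First-party (colossalai)
from colossalai.context import ParallelMode
from colossalai.core import global_context as gpc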