OpenDAS / ColossalAI · Commits

Commit 9b1b8903, authored Dec 23, 2022 by oahzxl

    update run

parent 51ef8384
Showing 1 changed file with 15 additions and 5 deletions.
chunk_codegen_run.py @ 9b1b8903:
```python
...
@@ -32,15 +32,25 @@ def _is_all_param_close(m: torch.nn.Module, gm: GraphModule) -> bool:
def _test_fwd_and_bwd(model: torch.nn.Module, gm: ColoGraphModule, node, pair):
    # now_mem = torch.cuda.memory_allocated() / 1024**2
    # with torch.no_grad():
    #     node0 = node.clone()
    #     pair0 = pair.clone()
    #     model.graph(node0, pair0, now_mem)
    # new_now_mem = torch.cuda.memory_allocated() / 1024**2
    # new_max_mem = torch.cuda.max_memory_allocated() / 1024**2
    # print("\ncode now:%.2f max:%.2f" % (new_now_mem - now_mem, new_max_mem - now_mem))
    torch.cuda.reset_peak_memory_stats()
    now_mem = torch.cuda.memory_allocated() / 1024**2
    with torch.no_grad():
        node0 = node.clone()
        pair0 = pair.clone()
        node1, pair1 = gm(node0, pair0)
        node1 = node.clone()
        pair1 = pair.clone()
        gm(node1, pair1)
    new_now_mem = torch.cuda.memory_allocated() / 1024**2
    new_max_mem = torch.cuda.max_memory_allocated() / 1024**2
    print("now:%.2f max:%.2f" % (new_now_mem - now_mem, new_max_mem - now_mem))
    print("\ngm now:%.2f max:%.2f" % (new_now_mem - now_mem, new_max_mem - now_mem))

    # test forward
    with torch.no_grad():
        non_fx_out = model(node, pair)
...
```
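The hunk above follows a common peak-memory measurement pattern: reset the peak counter, record current usage, run the module under `torch.no_grad()`, then report the current and peak deltas in MB. A minimal CPU-side sketch of the same pattern using the stdlib `tracemalloc` counters instead of the `torch.cuda` ones used in the commit (the helper name `measure_peak` and the demo workload are illustrative assumptions, not part of the commit):

```python
import tracemalloc

def measure_peak(fn, *args):
    """Run fn(*args) and return (now_delta_mb, peak_delta_mb).

    Mirrors the reset-peak / measure-delta pattern in _test_fwd_and_bwd,
    but with tracemalloc standing in for the torch.cuda counters.
    """
    tracemalloc.start()
    tracemalloc.reset_peak()                   # like torch.cuda.reset_peak_memory_stats()
    now, _ = tracemalloc.get_traced_memory()   # like torch.cuda.memory_allocated()
    result = fn(*args)                         # kept alive so "now" reflects live allocations
    new_now, new_peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return (new_now - now) / 1024**2, (new_peak - now) / 1024**2

# Demo workload: allocate ~1 MB of small buffers and report the deltas.
now_mb, peak_mb = measure_peak(lambda n: [bytearray(1024) for _ in range(n)], 1000)
print("now:%.2f max:%.2f" % (now_mb, peak_mb))
```

Note that `tracemalloc.reset_peak()` requires Python 3.9+; on a CUDA build, swapping the counters back to `torch.cuda.memory_allocated()` / `torch.cuda.max_memory_allocated()` recovers the commit's measurement.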