OpenDAS / ColossalAI · Commit a2d32666 (unverified)

[hotfix] make Gemini work for conv DNN (#1998)

Authored Nov 22, 2022 by Jiarui Fang; committed by GitHub, Nov 22, 2022.
Parent: 15589111
Showing 1 changed file with 19 additions and 12 deletions.

colossalai/nn/_ops/element_wise.py (+19, -12)
```diff
 import torch
 import torch.nn.functional as F
 from torch import Tensor
-from colossalai.tensor.op_wrapper import colo_op_impl
 from colossalai.tensor import ColoTensor, ColoTensorSpec
-from ._utils import GeneralTensor
+from colossalai.tensor.op_wrapper import colo_op_impl
+from ._utils import GeneralTensor, convert_to_colo_tensor


 def register_elementwise_op(op):
```

...
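For context, `colo_op_impl` hooks an override into PyTorch's `__torch_function__` protocol (the docstring in the hunk below says as much), so a call such as `F.relu(colo_tensor)` is routed to the wrapper that `register_elementwise_op` defines. Below is a minimal, self-contained sketch of that dispatch pattern using a plain `torch.Tensor` subclass and a toy registry in place of ColossalAI's real `ColoTensor` and `colo_op_impl`; every name in the sketch is illustrative, not ColossalAI API:

```python
import torch
import torch.nn.functional as F

_OP_TABLE = {}  # toy registry: torch op -> override function


def my_op_impl(op):
    """Toy analogue of colo_op_impl: register an override for `op`."""

    def decorator(func):
        _OP_TABLE[op] = func
        return func

    return decorator


class MyTensor(torch.Tensor):
    """Tensor subclass whose ops are intercepted via __torch_function__."""

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func in _OP_TABLE:
            return _OP_TABLE[func](*args, **kwargs)
        return super().__torch_function__(func, types, args, kwargs)


@my_op_impl(F.relu)
def my_relu(input_tensor, *args, **kwargs):
    # Drop back to a plain torch.Tensor so the call below does not
    # re-enter __torch_function__ and recurse forever.
    return F.relu(input_tensor.as_subclass(torch.Tensor), *args, **kwargs)


x = torch.tensor([-1.0, 2.0]).as_subclass(MyTensor)
print(F.relu(x))  # dispatched through my_relu -> tensor([0., 2.])
```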
@@ -15,8 +17,13 @@ def register_elementwise_op(op):

```python
        as ``torch.nn.functional.gelu`` or ``torch.nn.functional.relu``.
        This method computes on either a normal tensor or a sharded tensor.
        """
        if 'inplace' in kwargs:
            # TODO(jiaruifang) inplace will cause bugs
            input_tensor = input_tensor.clone()
            return op(input_tensor, *args, **kwargs)
        else:
            output = op(input_tensor, *args, **kwargs)
            # return output
            if isinstance(input_tensor, ColoTensor):
                if isinstance(output, str):
                    return output
```

...
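The clone before dispatch is the heart of the hotfix: when an op arrives with `inplace=True` (as ReLU often does inside conv networks), mutating the argument's storage directly can conflict with tensors whose memory is managed elsewhere, hence the TODO note that "inplace will cause bugs" and the private copy. A small stand-alone illustration of the difference, in plain PyTorch with no ColossalAI dependency:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-1.0, 2.0, -3.0])
alias = x  # second reference to the same storage

F.relu(x, inplace=True)
print(alias)  # tensor([0., 2., 0.]) -- the aliased tensor was mutated too

# The hotfix pattern: run the "inplace" op on a clone, so any other
# holder of the original storage never observes the mutation.
x = torch.tensor([-1.0, 2.0, -3.0])
alias = x
out = F.relu(x.clone(), inplace=True)
print(alias)  # tensor([-1., 2., -3.]) -- original left intact
print(out)    # tensor([0., 2., 0.])
```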