gaoqiong / flash-attention · Commit 39ed597b
Authored Nov 17, 2022 by Tri Dao

[LayerNorm] Compile for both sm70 and sm80

Parent: 71f674ae
Showing 1 changed file with 2 additions and 2 deletions (+2 −2): csrc/layer_norm/setup.py
csrc/layer_norm/setup.py @ 39ed597b

@@ -98,8 +98,8 @@ if os.path.exists(os.path.join(torch_dir, "include", "ATen", "CUDAGeneratorImpl.
    raise_if_cuda_home_none("--fast_layer_norm")
    # Check, if CUDA11 is installed for compute capability 8.0
    cc_flag = []
-   # cc_flag.append("-gencode")
-   # cc_flag.append("arch=compute_70,code=sm_70")
+   cc_flag.append("-gencode")
+   cc_flag.append("arch=compute_70,code=sm_70")
    cc_flag.append("-gencode")
    cc_flag.append("arch=compute_80,code=sm_80")
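For context, the change uncomments the sm_70 `-gencode` pair so the extension is compiled for both Volta (compute capability 7.0) and Ampere (compute capability 8.0). A minimal sketch of the flag-building pattern, with the loop form being an illustrative rewrite rather than the exact code in setup.py:

```python
# Build nvcc -gencode flags targeting both sm_70 (Volta, e.g. V100)
# and sm_80 (Ampere, e.g. A100). Each target contributes a pair:
#   -gencode arch=compute_XX,code=sm_XX
cc_flag = []
for arch in ("70", "80"):
    cc_flag.append("-gencode")
    cc_flag.append(f"arch=compute_{arch},code=sm_{arch}")

# In a setup.py these flags would typically be passed to nvcc via the
# extension's extra_compile_args, e.g. {"nvcc": [...] + cc_flag}.
print(cc_flag)
```

Without the sm_70 entries, the resulting binary contains only sm_80 SASS, so the layer-norm kernel would fail to load on Volta GPUs; adding the pair restores that support at the cost of a longer compile and a larger binary.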