gaoqiong / flash-attention · Commits

Commit 9fc9820a
Authored Jun 02, 2023 by Pierce Freeman

Strip cuda name from torch version

Parent: 5e469978
Showing 1 changed file (setup.py) with 2 additions and 1 deletion.
setup.py
...
@@ -51,11 +51,12 @@ class CustomInstallCommand(install):
         # Determine the version numbers that will be used to determine the correct wheel
         _, cuda_version_raw = get_cuda_bare_metal_version(CUDA_HOME)
-        torch_version = torch.__version__
+        torch_version_raw = parse(torch.__version__)
         python_version = f"cp{sys.version_info.major}{sys.version_info.minor}"
         platform_name = get_platform()
         flash_version = get_package_version()
         cuda_version = f"{cuda_version_raw.major}{cuda_version_raw.minor}"
+        torch_version = f"{torch_version_raw.major}.{torch_version_raw.minor}.{torch_version_raw.micro}"

         # Determine wheel URL based on CUDA version, torch version, python version and OS
         wheel_filename = f'flash_attn-{flash_version}+cu{cuda_version}torch{torch_version}-{python_version}-{python_version}-{platform_name}.whl'
...
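
For context, a minimal sketch (not part of the commit) of what the change does: on CUDA builds, torch.__version__ typically carries a local build tag such as "2.0.1+cu117", and parse() (presumably packaging.version.parse; the import is outside this hunk) exposes clean major/minor/micro components, so only the bare version reaches the wheel filename. The version string below is an assumed example, not taken from the commit.

    # Minimal sketch -- the version string is an illustrative assumption.
    from packaging.version import parse

    raw_torch_version = "2.0.1+cu117"             # stand-in for torch.__version__ on a CUDA build
    torch_version_raw = parse(raw_torch_version)  # "+cu117" is treated as a PEP 440 local tag
    torch_version = f"{torch_version_raw.major}.{torch_version_raw.minor}.{torch_version_raw.micro}"
    print(torch_version)                          # -> 2.0.1, CUDA suffix stripped

Before this commit the raw torch.__version__ went straight into the f-string, so a CUDA build tag like "+cu117" would have been embedded in the torch part of the wheel filename and the prebuilt-wheel lookup would presumably miss; after it, the torch part is always plain major.minor.micro.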