OpenDAS / OpenFold · Commits

Commit 1ffd1974 (unverified)
Authored Apr 26, 2025 by Jennifer Wei; committed by GitHub, Apr 26, 2025
Parents: 23cf2f61, 620a54fb

Merge pull request #533 from jnwei/pl_upgrades

Update pl_upgrades to use numpy 2 and update other dependencies
Showing 3 changed files with 6 additions and 4 deletions (+6 -4):

scripts/install_third_party_dependencies.sh  +1 -1
setup.py                                     +2 -1
tests/test_deepspeed_evo_attention.py        +3 -2
scripts/install_third_party_dependencies.sh

@@ -14,7 +14,7 @@ gunzip -c tests/test_data/sample_feats.pickle.gz > tests/test_data/sample_feats.
 python setup.py install

 echo "Download CUTLASS, required for Deepspeed Evoformer attention kernel"
-git clone https://github.com/NVIDIA/cutlass --depth 1
+git clone https://github.com/NVIDIA/cutlass --branch v3.6.0 --depth 1
 conda env config vars set CUTLASS_PATH=$PWD/cutlass

 # This setting is used to fix a worker assignment issue during data loading
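The change above pins the CUTLASS clone to the v3.6.0 tag instead of whatever HEAD happens to be. Git's `--branch` option accepts tags as well as branch names, so a shallow clone can be pinned to a release. A minimal sketch of the same pattern against a throwaway local repository (the paths and repository here are made up for illustration; only the flags mirror the script):

```shell
# Create a scratch upstream repo with a tagged commit, then clone it
# shallowly at that tag -- the same --branch/--depth combination the
# install script now uses for CUTLASS.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
(
  cd "$tmp/upstream"
  git -c user.email=a@example.com -c user.name=a commit -q --allow-empty -m "init"
  git tag v3.6.0
)
# file:// is used so --depth applies (plain local paths skip shallow logic)
git clone -q --branch v3.6.0 --depth 1 "file://$tmp/upstream" "$tmp/pinned"
git -C "$tmp/pinned" describe --tags   # prints the pinned tag, v3.6.0
```

Pinning the dependency makes the install reproducible: a later CUTLASS release can no longer silently change what the DeepSpeed Evoformer kernel builds against.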
setup.py

@@ -54,6 +54,7 @@ def get_cuda_bare_metal_version(cuda_dir):
     compute_capabilities = set([
         (5, 2),  # Titan X
         (6, 1),  # GeForce 1000-series
+        (9, 0),  # Hopper
     ])
     compute_capabilities.add((7, 0))

@@ -112,7 +113,7 @@ else:
 setup(
     name='openfold',
-    version='2.0.0',
+    version='2.2.0',
     description='A PyTorch reimplementation of DeepMind\'s AlphaFold 2',
     author='OpenFold Team',
     author_email='jennifer.wei@omsf.io',
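The first hunk extends the set of CUDA compute capabilities the extension is built for, adding Hopper (sm_90). A common way such a `(major, minor)` set is consumed in a CUDA-enabled setup.py is to expand each pair into nvcc `-gencode` flags; a minimal sketch of that pattern (the helper name `gencode_flags` is hypothetical, not taken from the OpenFold source):

```python
# Sketch: expand a set of (major, minor) compute capabilities, like the
# one in setup.py above, into nvcc "-gencode" flags. Hypothetical helper,
# shown only to illustrate what the capability set feeds into.
def gencode_flags(compute_capabilities):
    flags = []
    for major, minor in sorted(compute_capabilities):
        arch = f"{major}{minor}"
        flags.append(f"-gencode=arch=compute_{arch},code=sm_{arch}")
    return flags

caps = {(5, 2), (6, 1), (9, 0)}
caps.add((7, 0))
print(gencode_flags(caps))
```

Without the `(9, 0)` entry, no sm_90 code object is emitted and Hopper GPUs would fall back to JIT compilation from PTX (or fail outright if no compatible PTX is embedded).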
tests/test_deepspeed_evo_attention.py

@@ -315,8 +315,9 @@ class TestDeepSpeedKernel(unittest.TestCase):
         # Move the recycling dimension to the end
         move_dim = lambda t: t.permute(*range(len(t.shape))[1:], 0)
         batch = tensor_tree_map(move_dim, batch)

-        with torch.no_grad():
-            with torch.cuda.amp.autocast(dtype=torch.bfloat16):
+        # Restrict this test to use only torch.float32 precision due to instability with torch.bfloat16
+        # https://github.com/aqlaboratory/openfold/issues/532
+        with torch.no_grad(), torch.cuda.amp.autocast(dtype=torch.float32):
             model = compare_utils.get_global_pretrained_openfold()
             model.globals.use_deepspeed_evo_attention = False
             out_repro = model(batch)
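Two details in the hunk above are easy to miss. The combined `with torch.no_grad(), torch.cuda.amp.autocast(...):` line is semantically equivalent to the two nested blocks it replaces; only the dtype actually changed. And `move_dim`'s argument to `permute`, `(*range(len(t.shape))[1:], 0)`, rotates the leading (recycling) dimension to the end. That index arithmetic can be checked with plain tuples, no torch required:

```python
# The permutation move_dim passes to t.permute(): for a rank-n tensor,
# (*range(n)[1:], 0) lists dimensions 1..n-1 first and dimension 0 last,
# i.e. it rotates the leading (recycling) dimension to the end.
def move_dim_order(rank):
    return (*range(rank)[1:], 0)

print(move_dim_order(4))  # → (1, 2, 3, 0)
print(move_dim_order(2))  # → (1, 0)
```

So a tensor of shape `(n_recycle, *rest)` becomes `(*rest, n_recycle)`, which is what the comparison against the reference model expects.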