![](/assets/fold.jpg)

# FastFold

[![](https://img.shields.io/badge/Paper-PDF-green?style=flat&logo=arXiv&logoColor=green)](https://arxiv.org/abs/2203.00854)
![](https://img.shields.io/badge/Made%20with-ColossalAI-blueviolet?style=flat)
![](https://img.shields.io/badge/Habana-support-blue?style=flat&logo=intel&logoColor=blue)
![](https://img.shields.io/github/v/release/hpcaitech/FastFold)
[![GitHub license](https://img.shields.io/github/license/hpcaitech/FastFold)](https://github.com/hpcaitech/FastFold/blob/main/LICENSE)

## News :triangular_flag_on_post:
- [2023/01] Compatible with AlphaFold v2.3
- [2023/01] Added support for inference and training of AlphaFold on the [Intel Habana](https://habana.ai/) platform. For usage instructions, see [here](#Inference-or-Training-on-Intel-Habana).

<br>

Optimizing Protein Structure Prediction Model Training and Inference on Heterogeneous Clusters

FastFold provides a **high-performance implementation of Evoformer** with the following characteristics.

1. Excellent kernel performance on GPU platforms
2. Support for Dynamic Axial Parallelism (DAP)
    * Breaks the memory limit of a single GPU and reduces overall training time
    * DAP can significantly speed up inference and make ultra-long sequence inference possible
3. Ease of use
    * Huge performance gains with only a few lines of code changed
    * You don't need to care about how the parallelism is implemented
4. Faster data processing: about 3x faster on monomers and about 3Nx faster on a multimer with N sequences (e.g., ~9x for a 3-chain multimer)
5. Greatly reduced GPU memory usage, enabling inference on sequences containing more than **10,000** residues

## Installation

To install FastFold, you will need:
+ Python 3.8 or 3.9.
+ [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 11.3 or above
+ PyTorch 1.12 or above 

For now, you can install FastFold as follows:
### Using Conda (Recommended)

We highly recommend installing Anaconda or Miniconda and installing PyTorch with conda.
The lines below create a new conda environment called "fastfold":

```shell
git clone https://github.com/hpcaitech/FastFold
cd FastFold
conda env create --name=fastfold -f environment.yml
conda activate fastfold
python setup.py install
```

#### Advanced

To leverage the full power of FastFold, we recommend installing [Triton](https://github.com/openai/triton).

**NOTE: Triton needs CUDA 11.4 to run.**

```bash
pip install -U --pre triton
```


## Use Docker

### Build On Your Own
Run the following command to build a docker image from the provided Dockerfile.

> Building FastFold from scratch requires GPU support; you need to use the NVIDIA Docker Runtime as the default when running `docker build`. More details can be found [here](https://stackoverflow.com/questions/59691207/docker-build-with-nvidia-runtime).

```shell
cd FastFold
docker build -t fastfold ./docker
```

Run the following command to start the docker container in interactive mode.
```shell
docker run -ti --gpus all --rm --ipc=host fastfold bash
```

## Usage

You can use `Evoformer` as an `nn.Module` in your project after `from fastfold.model.fastnn import Evoformer`:

```python
from fastfold.model.fastnn import Evoformer
evoformer_layer = Evoformer()
```
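
For orientation, an Evoformer layer jointly updates an MSA representation and a pair representation. The sketch below is a minimal forward pass under our own assumptions: the tensor shapes, hidden sizes, and the `(msa, pair)` call signature are illustrative, not the documented API.

```python
import torch

from fastfold.model.fastnn import Evoformer

# Assumed shapes: MSA representation [batch, n_seq, n_res, c_m] and
# pair representation [batch, n_res, n_res, c_z]; adjust to your config.
msa = torch.randn(1, 32, 64, 256).cuda()
pair = torch.randn(1, 64, 64, 128).cuda()

evoformer_layer = Evoformer().cuda()
msa, pair = evoformer_layer(msa, pair)  # assumed to return the updated representations
```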

If you want to use Dynamic Axial Parallelism, add one initialization line with `fastfold.distributed.init_dap`.

```python
from fastfold.distributed import init_dap

init_dap(args.dap_size)
```
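
Because DAP spans multiple processes, launch your script with a distributed launcher such as `torchrun` (the benchmark section below uses the same pattern). Here is a minimal, hypothetical wrapper sketch; the `--dap_size` flag name simply mirrors the snippet above:

```python
# dap_example.py -- hypothetical wrapper; launch with, e.g.:
#   torchrun --nproc_per_node=2 dap_example.py --dap_size 2
import argparse

from fastfold.distributed import init_dap

parser = argparse.ArgumentParser()
parser.add_argument("--dap_size", type=int, default=1,
                    help="GPUs per DAP group (assumed semantics)")
args = parser.parse_args()

# Initialize DAP early, before any fastnn modules are built (assumed ordering).
init_dap(args.dap_size)
```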

### Download the dataset
You can download the dataset used to train FastFold with the script `download_all_data.sh`:

```shell
./scripts/download_all_data.sh data/
```

### Inference

You can use FastFold with `inject_fastnn`. This will replace the evoformer from OpenFold with the high-performance evoformer from FastFold.

```python
from fastfold.config import model_config                        # config helper (import path assumed)
from fastfold.model.hub import AlphaFold                        # model definition (import path assumed)
from fastfold.utils import inject_fastnn
from fastfold.utils.import_weights import import_jax_weights_   # (import path assumed)

config = model_config(args.model_name)  # `args` holds your command-line arguments
model = AlphaFold(config)
import_jax_weights_(model, args.param_path, version=args.model_name)

model = inject_fastnn(model)
```

For Dynamic Axial Parallelism, you can refer to `./inference.py`. Here is an example of parallel inference with 2 GPUs:

```shell
python inference.py target.fasta data/pdb_mmcif/mmcif_files/ \
    --output_dir .outputs/ \
    --gpus 2 \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2022_05.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniref30_database_path data/uniref30/UniRef30_2021_03 \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --jackhmmer_binary_path `which jackhmmer` \
    --hhblits_binary_path `which hhblits` \
    --hhsearch_binary_path `which hhsearch` \
    --kalign_binary_path `which kalign` \
    --enable_workflow \
    --inplace
```
Or run the script `./inference.sh`; you can change the parameters in the script, especially the data paths.
```shell
./inference.sh
```

AlphaFold's data pre-processing takes a lot of time, so we speed it up with a [ray](https://docs.ray.io/en/latest/workflows/concepts.html) workflow, which achieves about a 3x speedup. To run inference with the ray workflow, we add the parameter `--enable_workflow` by default.
To reduce the memory usage of embedding representations, we also add the parameter `--inplace` to share memory by default.

#### Inference with lower memory usage
AlphaFold's embedding representations take up a lot of memory as the sequence length increases. To reduce memory usage,
you should add the parameter `--chunk_size [N]` to the command line or to the shell script `./inference.sh`.
The smaller N is, the less memory is used, at some cost in speed. We can run inference on
a sequence of length 10,000 in bf16 with 61GB of memory on an NVIDIA A100 (80GB). For fp32, the maximum length is 8,000.
> You need to set `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:15000` to run inference on such an extremely long sequence.

```shell
python inference.py target.fasta data/pdb_mmcif/mmcif_files/ \
    --output_dir .outputs/ \
    --gpus 2 \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2022_05.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniref30_database_path data/uniref30/UniRef30_2021_03 \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --jackhmmer_binary_path `which jackhmmer` \
    --hhblits_binary_path `which hhblits` \
    --hhsearch_binary_path `which hhsearch` \
    --kalign_binary_path `which kalign` \
    --enable_workflow \
    --inplace \
    --chunk_size N
```

#### Inference on multimer sequences
AlphaFold-Multimer is supported. You can use the following command or the shell script `./inference_multimer.sh`.
Workflow and memory parameters mentioned above can also be used.
```shell
python inference.py target.fasta data/pdb_mmcif/mmcif_files/ \
    --output_dir ./ \
    --gpus 2 \
    --model_preset multimer \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2022_05.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniref30_database_path data/uniref30/UniRef30_2021_03 \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --uniprot_database_path data/uniprot/uniprot.fasta \
    --pdb_seqres_database_path data/pdb_seqres/pdb_seqres.txt \
    --param_path data/params/params_model_1_multimer.npz \
    --model_name model_1_multimer \
    --jackhmmer_binary_path `which jackhmmer` \
    --hhblits_binary_path `which hhblits` \
    --hhsearch_binary_path `which hhsearch` \
    --kalign_binary_path `which kalign`
```

### Inference or Training on Intel Habana

To run AlphaFold inference or training on Intel Habana, follow the instructions in the [Installation Guide](https://docs.habana.ai/en/latest/Installation_Guide/) to set up your environment on Amazon EC2 DL1 instances or on-premise machines. Please test with SynapseAI R1.7.1, as that is the version we verified internally.

Once you have prepared your dataset and installed FastFold, you can use the following scripts:

```shell
cd fastfold/habana/fastnn/custom_op/
python setup.py build    # for Gaudi; for Gaudi2, use setup2.py instead
cd -
bash habana/inference.sh
bash habana/train.sh
```

## Performance Benchmark

We have included a performance benchmark script in `./benchmark`. You can benchmark the performance of Evoformer using different settings.

```shell
cd ./benchmark
torchrun --nproc_per_node=1 perf.py --msa-length 128 --res-length 256
```

Benchmark Dynamic Axial Parallelism with 2 GPUs:

```shell
cd ./benchmark
torchrun --nproc_per_node=2 perf.py --msa-length 128 --res-length 256 --dap-size 2
```

If you want to benchmark against [OpenFold](https://github.com/aqlaboratory/openfold), you need to install OpenFold first and benchmark with the option `--openfold`:

```shell
torchrun --nproc_per_node=1 perf.py --msa-length 128 --res-length 256 --openfold
```

## Cite us

Cite this paper if you use FastFold in your research publications.

```
@misc{cheng2022fastfold,
      title={FastFold: Reducing AlphaFold Training Time from 11 Days to 67 Hours}, 
      author={Shenggan Cheng and Ruidong Wu and Zhongming Yu and Binrui Li and Xiwen Zhang and Jian Peng and Yang You},
      year={2022},
      eprint={2203.00854},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```

## Acknowledgments

We would like to extend our special thanks to the Intel Habana team for their support in providing us with technology and resources on the Habana platform.