![](/assets/fold.jpg)

# FastFold

[![](https://img.shields.io/badge/Paper-PDF-green?style=flat&logo=arXiv&logoColor=green)](https://arxiv.org/abs/2203.00854)
![](https://img.shields.io/badge/Made%20with-ColossalAI-blueviolet?style=flat)
![](https://img.shields.io/github/v/release/hpcaitech/FastFold)
[![GitHub license](https://img.shields.io/github/license/hpcaitech/FastFold)](https://github.com/hpcaitech/FastFold/blob/main/LICENSE)

Optimizing Protein Structure Prediction Model Training and Inference on GPU Clusters

FastFold provides a **high-performance implementation of Evoformer** with the following characteristics.

1. Excellent kernel performance on GPU platforms
2. Support for Dynamic Axial Parallelism (DAP)
    * Breaks the memory limit of a single GPU and reduces overall training time
    * DAP can significantly speed up inference and make ultra-long sequence inference possible
3. Ease of use
    * Huge performance gains with only a few lines of code changed
    * You don't need to worry about how the parallelism is implemented

## Installation

You will need Python 3.8 or later and [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 11.1 or above when installing from source.

We highly recommend creating an Anaconda or Miniconda environment and installing PyTorch with conda:

```shell
conda create -n fastfold python=3.8
conda activate fastfold
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
```

You can get the FastFold source and install it with setuptools:

```shell
git clone https://github.com/hpcaitech/FastFold
cd FastFold
python setup.py install
```
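
To confirm the build succeeded, a quick import check is usually enough (a minimal sketch, not an official FastFold script):

```python
# If the custom CUDA extensions failed to build, this import typically raises.
import torch
from fastfold.model.fastnn import Evoformer

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("FastFold Evoformer imported successfully")
```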

## Usage

You can use `Evoformer` as an `nn.Module` in your project after importing it from `fastfold.model.fastnn`:

```python
from fastfold.model.fastnn import Evoformer
evoformer_layer = Evoformer()
```
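
Because `Evoformer` is a regular `nn.Module`, it composes like any other PyTorch layer. The sketch below stacks several blocks inside a custom module; the class name and block count are illustrative, and the forward signature should be checked in `fastfold.model.fastnn` before wiring real tensors:

```python
import torch.nn as nn
from fastfold.model.fastnn import Evoformer


class EvoformerStack(nn.Module):
    """Illustrative wrapper stacking several FastFold Evoformer blocks."""

    def __init__(self, num_blocks: int = 4):
        super().__init__()
        # Each block uses its default configuration, as in the one-liner above.
        self.blocks = nn.ModuleList(Evoformer() for _ in range(num_blocks))
```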

If you want to use Dynamic Axial Parallelism, add a line to initialize it with `fastfold.distributed.init_dap`.

```python
from fastfold.distributed import init_dap

init_dap(args.dap_size)
```
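
The call above assumes an `args.dap_size` is already available; one way to obtain it, assuming a command-line flag named `--dap_size` (the flag name is illustrative), is:

```python
import argparse

from fastfold.distributed import init_dap

parser = argparse.ArgumentParser()
parser.add_argument("--dap_size", type=int, default=1,
                    help="number of GPUs used for Dynamic Axial Parallelism")
args = parser.parse_args()

# Initialize DAP once, before building or running the model
init_dap(args.dap_size)
```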

### Inference

You can use FastFold along with OpenFold via `inject_fastnn`. This replaces the Evoformer in OpenFold with the high-performance Evoformer from FastFold.

```python
from fastfold.utils import inject_fastnn

# The AlphaFold model and weight loading come from OpenFold
from openfold.model.model import AlphaFold
from openfold.utils.import_weights import import_jax_weights_

model = AlphaFold(config)
import_jax_weights_(model, args.param_path, version=args.model_name)

# Swap OpenFold's Evoformer for FastFold's high-performance implementation
model = inject_fastnn(model)
```
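
After injection, the model is still an ordinary PyTorch module. A generic (not FastFold-specific) next step is to prepare it for inference; the full pipeline, including feature processing, lives in `./inference.py`:

```python
import torch

# Standard PyTorch inference preparation for the patched model
model = model.eval().cuda()
with torch.no_grad():
    ...  # run the model on processed features, as done in ./inference.py
```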

For Dynamic Axial Parallelism, you can refer to `./inference.py`. Here is an example of parallel inference on 2 GPUs:

```shell
torchrun --nproc_per_node=2 inference.py target.fasta data/pdb_mmcif/mmcif_files/ \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --output_dir ./ \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
    --hhblits_binary_path lib/conda/envs/openfold_venv/bin/hhblits \
    --hhsearch_binary_path lib/conda/envs/openfold_venv/bin/hhsearch \
    --kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign
```

## Performance Benchmark

We have included a performance benchmark script in `./benchmark`. You can benchmark the performance of Evoformer using different settings.

```shell
cd ./benchmark
torchrun --nproc_per_node=1 perf.py --msa-length 128 --res-length 256
```

Benchmark Dynamic Axial Parallelism with 2 GPUs:

```shell
cd ./benchmark
torchrun --nproc_per_node=2 perf.py --msa-length 128 --res-length 256 --dap-size 2
```

If you want to benchmark with [OpenFold](https://github.com/aqlaboratory/openfold), you need to install OpenFold first and benchmark with the `--openfold` option:

```shell
torchrun --nproc_per_node=1 perf.py --msa-length 128 --res-length 256 --openfold
```

## Cite us

If you use FastFold in your research publication, please cite this paper.

```
@misc{cheng2022fastfold,
      title={FastFold: Reducing AlphaFold Training Time from 11 Days to 67 Hours}, 
      author={Shenggan Cheng and Ruidong Wu and Zhongming Yu and Binrui Li and Xiwen Zhang and Jian Peng and Yang You},
      year={2022},
      eprint={2203.00854},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```