![](/assets/fold.jpg)

# FastFold

[![](https://img.shields.io/badge/Paper-PDF-green?style=flat&logo=arXiv&logoColor=green)](https://arxiv.org/abs/2203.00854)
![](https://img.shields.io/badge/Made%20with-ColossalAI-blueviolet?style=flat)
![](https://img.shields.io/github/v/release/hpcaitech/FastFold)
[![GitHub license](https://img.shields.io/github/license/hpcaitech/FastFold)](https://github.com/hpcaitech/FastFold/blob/main/LICENSE)

Optimizing Protein Structure Prediction Model Training and Inference on GPU Clusters

FastFold provides a **high-performance implementation of Evoformer** with the following characteristics:

1. Excellent kernel performance on GPU platforms
2. Support for Dynamic Axial Parallelism (DAP)
    * Breaks the memory limit of a single GPU and reduces overall training time
    * DAP can significantly speed up inference and makes ultra-long-sequence inference possible
3. Ease of use
    * Huge performance gains with only a few lines of code changed
    * You don't need to care about how the parallelism is implemented

## Installation

To install from source, you will need Python 3.8 or later and [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 11.1 or above.

```shell
git clone https://github.com/hpcaitech/FastFold
cd FastFold
```
We highly recommend creating an Anaconda or Miniconda environment and installing PyTorch with conda:

```shell
conda env create --name=fastfold -f environment.yml
conda activate fastfold
bash scripts/patch_openmm.sh
```

Then install FastFold from source with setuptools:

```shell
python setup.py install
```

## Usage

You can use `Evoformer` as an `nn.Module` in your project after importing it with `from fastfold.model.fastnn import Evoformer`:

```python
from fastfold.model.fastnn import Evoformer
evoformer_layer = Evoformer()
```
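
The layer can then be called like any other PyTorch module. Below is a minimal sketch of a forward pass; the input shapes follow the AlphaFold convention of an MSA representation and a pair representation, and the argument order and channel sizes are illustrative assumptions rather than the exact FastFold signature:

```python
import torch

from fastfold.model.fastnn import Evoformer

evoformer_layer = Evoformer().cuda()

# Hypothetical inputs in the AlphaFold convention (shapes are assumptions):
m = torch.randn(1, 128, 256, 256).cuda()  # MSA representation  [batch, n_seq, n_res, c_m]
z = torch.randn(1, 256, 256, 128).cuda()  # pair representation [batch, n_res, n_res, c_z]

m, z = evoformer_layer(m, z)
```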

If you want to use Dynamic Axial Parallelism, add one line to initialize it with `fastfold.distributed.init_dap`.

```python
from fastfold.distributed import init_dap

init_dap(args.dap_size)
```
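
`init_dap` is typically called once per process at startup, before the model is built. Below is a minimal sketch of wiring the DAP size through a command-line flag; only `init_dap(args.dap_size)` comes from FastFold, and the argument parsing is illustrative:

```python
import argparse

from fastfold.distributed import init_dap

# Illustrative launcher entry point; the flag name mirrors `args.dap_size` above.
parser = argparse.ArgumentParser()
parser.add_argument("--dap_size", type=int, default=1,
                    help="number of GPUs in one Dynamic Axial Parallelism group")
args = parser.parse_args()

init_dap(args.dap_size)
```

Because DAP splits work across processes, launch such a script with one process per GPU; the benchmark commands below use `torchrun` for this.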

### Download the dataset
You can download the dataset used to train FastFold with the script `download_all_data.sh`:

    ./scripts/download_all_data.sh data/

### Inference

You can use FastFold with `inject_fastnn`. This replaces the Evoformer from OpenFold with the high-performance Evoformer from FastFold.

```python
# AlphaFold, its config, and the weight-loading helper come from OpenFold
# (import paths assume OpenFold's module layout):
from openfold.config import model_config
from openfold.model.model import AlphaFold
from openfold.utils.import_weights import import_jax_weights_

from fastfold.utils import inject_fastnn

config = model_config(args.model_name)  # build the OpenFold config for the chosen model
model = AlphaFold(config)
import_jax_weights_(model, args.param_path, version=args.model_name)

model = inject_fastnn(model)
```
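
After injection the model is still an ordinary `nn.Module`, so the usual PyTorch inference setup applies. A generic sketch continuing from the snippet above, where `batch` stands for a hypothetical feature dict produced by the data pipeline:

```python
import torch

model = model.eval().cuda()

with torch.no_grad():
    out = model(batch)  # `batch`: hypothetical feature dict from the data pipeline
```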

For Dynamic Axial Parallelism, you can refer to `./inference.py`. Here is an example of parallel inference on 2 GPUs:

```shell
python inference.py target.fasta data/pdb_mmcif/mmcif_files/ \
    --output_dir ./ \
    --gpus 2 \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --jackhmmer_binary_path `which jackhmmer` \
    --hhblits_binary_path `which hhblits` \
    --hhsearch_binary_path `which hhsearch` \
    --kalign_binary_path `which kalign`
```

## Performance Benchmark

We have included a performance benchmark script in `./benchmark`. You can benchmark the performance of Evoformer using different settings.

```shell
cd ./benchmark
torchrun --nproc_per_node=1 perf.py --msa-length 128 --res-length 256
```

Benchmark Dynamic Axial Parallelism with 2 GPUs:

```shell
cd ./benchmark
torchrun --nproc_per_node=2 perf.py --msa-length 128 --res-length 256 --dap-size 2
```

If you want to benchmark against [OpenFold](https://github.com/aqlaboratory/openfold), you need to install OpenFold first and run the benchmark with the `--openfold` option:

```shell
torchrun --nproc_per_node=1 perf.py --msa-length 128 --res-length 256 --openfold
```

## Cite us

If you use FastFold in your research publication, please cite this paper.

```bibtex
@misc{cheng2022fastfold,
      title={FastFold: Reducing AlphaFold Training Time from 11 Days to 67 Hours}, 
      author={Shenggan Cheng and Ruidong Wu and Zhongming Yu and Binrui Li and Xiwen Zhang and Jian Peng and Yang You},
      year={2022},
      eprint={2203.00854},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```