@@ -13,9 +13,9 @@ FastFold provides a **high-performance implementation of Evoformer** with the fo
1. Excellent kernel performance on GPU platforms
2. Support for Dynamic Axial Parallelism (DAP)
* Break the memory limit of a single GPU and reduce the overall training time
* DAP can significantly speed up inference and make ultra-long sequence inference possible
3. Ease of use
* Replace a few lines and you can use FastFold in your project
* Huge performance gains with only a few lines changed
* You don't need to care about how the parallel part is implemented
## Installation
...
...
@@ -38,6 +38,24 @@ cd FastFold
python setup.py install --cuda_ext
```
## Usage
You can use `Evoformer` as an `nn.Module` in your project after `from fastfold.model import Evoformer`:
```python
from fastfold.model import Evoformer

evoformer_layer = Evoformer()
```
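As a rough illustration, a forward pass might look like the sketch below. The forward signature taking an MSA (node) tensor and a pair tensor, as well as the shapes and channel sizes, are assumptions modeled on AlphaFold-style Evoformer blocks and may not match FastFold's actual API; check the FastFold source for the real constructor and forward arguments.

```python
import torch

from fastfold.model import Evoformer

evoformer_layer = Evoformer()

# Assumed input layout (not verified against FastFold's API):
# an MSA/node representation of shape [batch, n_seq, n_res, c_m]
# and a pair representation of shape [batch, n_res, n_res, c_z].
node = torch.randn(1, 32, 64, 256)
pair = torch.randn(1, 64, 64, 128)

# Assumed forward signature: the layer updates both representations.
node, pair = evoformer_layer(node, pair)
```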
If you want to use Dynamic Axial Parallelism, add a call to `fastfold.distributed.init_dap` after `torch.distributed.init_process_group`, as in the sketch below.
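A minimal sketch of the required ordering; the NCCL backend and the argument-free `init_dap` call are assumptions, so consult the FastFold source for the parameters `init_dap` actually accepts.

```python
import torch.distributed as dist

import fastfold.distributed

# Initialize the default PyTorch process group first
# (NCCL backend assumed for GPU training).
dist.init_process_group(backend="nccl")

# Then enable Dynamic Axial Parallelism on top of it; calling it
# without arguments is an assumption, not a documented default.
fastfold.distributed.init_dap()
```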
If you want to benchmark against [OpenFold](https://github.com/aqlaboratory/openfold), you need to install OpenFold first and then run the benchmark with the `--openfold` option.
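A hypothetical invocation might look like the following; only the `--openfold` option comes from the text above, and the script name is a placeholder for FastFold's actual benchmark entry point.

```shell
# Placeholder script name; only --openfold is taken from this README.
python benchmark.py --openfold
```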