"llama.cpp/examples/server/tests/features/security.feature" did not exist on "7aa90c0ea35f88a5ef227b773e5a9fe3a0fd7eb2"
Commit 4b5498c1 authored by zhuwenwen's avatar zhuwenwen
Browse files

add dcu_version and change readme

parent 9708740b
# FastFold
## Installation
To install FastFold, you need:
+ Python 3.8 or 3.9.
### Install with pip
FastFold whl package download directory: [https://cancon.hpccube.com:65024/4/main/fastfold/dtk23.04](https://cancon.hpccube.com:65024/4/main/fastfold/dtk23.04)
Download the FastFold whl package that matches your PyTorch and Python versions.
```shell
pip install fastfold*   # the downloaded FastFold whl package
```
### Install from source
#### Prepare the build environment
PyTorch whl package download directory: [https://cancon.hpccube.com:65024/4/main/pytorch/dtk23.04](https://cancon.hpccube.com:65024/4/main/pytorch/dtk23.04)
Download the PyTorch whl package that matches your Python version.
```shell
pip install torch*   # the downloaded torch whl package
```
```shell
pip install setuptools==59.5.0 wheel
```
#### Build and install
```shell
git clone -b dtk-23.04_fastfold0.2.1 https://developer.hpccube.com/codes/aicomponent/fastfold
cd fastfold
export FASTFOLD_BUILD_VERSION=abix.dtkxxx
export CPLUS_INCLUDE_PATH=$ROCM_PATH/include:$ROCM_PATH/cuda/targets/x86_64-linux/include:$CPLUS_INCLUDE_PATH
python setup.py bdist_wheel
pip install dist/fastfold*
```
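To sanity-check the install, you can inspect the package metadata; assuming the build above succeeded, the `Version` field carries the build tag:
```shell
pip show fastfold   # Version should read 0.2.1+gitxxx.abix.dtkxxx
```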
## Note
+ If downloads via `pip install` are slow, you can add a mirror: `-i https://pypi.tuna.tsinghua.edu.cn/simple/`
+ FASTFOLD_BUILD_VERSION sets the version string of the build, in the form 0.2.1+gitxxx.abix.dtkxxx: gitxxx is derived automatically from the code; abi0 means built with the devtools gcc, abi1 means built with a non-devtools gcc; dtkxxx is the DTK version number, e.g. dtk2304.
+ ROCM_PATH is the DTK installation path, /opt/dtkxxx by default.
## References
- [README_ORIGIN](README_ORIGIN.md)
![](/assets/fold.jpg)
# FastFold
[![](https://img.shields.io/badge/Paper-PDF-green?style=flat&logo=arXiv&logoColor=green)](https://arxiv.org/abs/2203.00854)
![](https://img.shields.io/badge/Made%20with-ColossalAI-blueviolet?style=flat)
![](https://img.shields.io/github/v/release/hpcaitech/FastFold)
[![GitHub license](https://img.shields.io/github/license/hpcaitech/FastFold)](https://github.com/hpcaitech/FastFold/blob/main/LICENSE)
Optimizing Protein Structure Prediction Model Training and Inference on GPU Clusters
FastFold provides a **high-performance implementation of Evoformer** with the following characteristics.
1. Excellent kernel performance on GPU platforms
2. Supports Dynamic Axial Parallelism (DAP)
* Breaks the memory limit of a single GPU and reduces overall training time
* DAP can significantly speed up inference and makes ultra-long-sequence inference possible
3. Ease of use
* Huge performance gains with only a few lines of code changed
* You don't need to care how the parallel parts are implemented
4. Faster data processing: about 3× faster for monomers, and about 3N× faster for multimers with N sequences.
5. Greatly reduced GPU memory usage, able to run inference on sequences of more than **10,000** residues.
## Installation
To install FastFold, you will need:
+ Python 3.8 or 3.9.
+ [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 11.1 or above
+ PyTorch 1.12 or above
For now, you can install FastFold as follows:
### Using Conda (Recommended)
We highly recommend creating an Anaconda or Miniconda environment and installing PyTorch with conda.
The lines below create a new conda environment called "fastfold":
```shell
git clone https://github.com/hpcaitech/FastFold
cd FastFold
conda env create --name=fastfold -f environment.yml
conda activate fastfold
python setup.py install
```
#### Advanced
To get the most out of FastFold, we recommend installing [Triton](https://github.com/openai/triton).
```bash
pip install triton==2.0.0.dev20221005
```
### Using PyPI
You can install FastFold with pre-built CUDA extensions.
Note that only stable versions are available.
```shell
pip install fastfold -f https://release.colossalai.org/fastfold
```
## Use Docker
### Build On Your Own
Run the following command to build a docker image from the provided Dockerfile.
> Building FastFold from scratch requires GPU support; you need to use the NVIDIA Docker runtime as the default when running `docker build`. More details can be found [here](https://stackoverflow.com/questions/59691207/docker-build-with-nvidia-runtime).
```shell
cd FastFold
docker build -t fastfold ./docker
```
Run the following command to start the docker container in interactive mode.
```shell
docker run -ti --gpus all --rm --ipc=host fastfold bash
```
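The sequence databases are typically kept on the host rather than baked into the image; you can bind-mount them into the container. A sketch, with illustrative host and container paths:
```shell
# mount the host's database directory into the container (paths are illustrative)
docker run -ti --gpus all --rm --ipc=host \
    -v /path/to/data:/workspace/FastFold/data \
    fastfold bash
```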
## Usage
You can use `Evoformer` as an `nn.Module` in your project after `from fastfold.model.fastnn import Evoformer`:
```python
from fastfold.model.fastnn import Evoformer
evoformer_layer = Evoformer()
```
If you want to use Dynamic Axial Parallelism, add one line to initialize it with `fastfold.distributed.init_dap`.
```python
from fastfold.distributed import init_dap
init_dap(args.dap_size)
```
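DAP is a distributed technique, so the script itself must be launched with one process per GPU, as in the benchmark commands later in this README. A minimal sketch, assuming a `torchrun` launch (the script name and flag are hypothetical; the world size must be divisible by `dap_size`):
```shell
# two processes on one node, matching init_dap(2) inside the script
torchrun --nproc_per_node=2 your_script.py --dap_size 2
```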
### Download the dataset
You can download the dataset used to train FastFold with the script `download_all_data.sh`:
```shell
./scripts/download_all_data.sh data/
```
### Inference
You can use FastFold with `inject_fastnn`. This replaces the Evoformer from OpenFold with the high-performance Evoformer from FastFold.
```python
from fastfold.utils import inject_fastnn

# AlphaFold, config and import_jax_weights_ follow OpenFold's API;
# see ./inference.py in this repository for the exact imports.
model = AlphaFold(config)
import_jax_weights_(model, args.param_path, version=args.model_name)
# swap OpenFold's Evoformer modules for FastFold's optimized ones
model = inject_fastnn(model)
```
For Dynamic Axial Parallelism, you can refer to `./inference.py`. Here is an example of parallel inference on 2 GPUs:
```shell
python inference.py target.fasta data/pdb_mmcif/mmcif_files/ \
--output_dir ./ \
--gpus 2 \
--uniref90_database_path data/uniref90/uniref90.fasta \
--mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
--pdb70_database_path data/pdb70/pdb70 \
--uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--jackhmmer_binary_path `which jackhmmer` \
--hhblits_binary_path `which hhblits` \
--hhsearch_binary_path `which hhsearch` \
--kalign_binary_path `which kalign`
```
Alternatively, run the script `./inference.sh`; you can change the parameters in the script, especially the data paths.
```shell
./inference.sh
```
#### Inference with the data workflow
AlphaFold's data pre-processing takes a lot of time, so we speed it up with a [Ray](https://docs.ray.io/en/latest/workflows/concepts.html) workflow, which is about 3× faster. To run inference with the Ray workflow, install the packages below and add the `--enable_workflow` parameter to the command line or to the shell script `./inference.sh`:
```shell
pip install ray==2.0.0 pyarrow
```
```shell
python inference.py target.fasta data/pdb_mmcif/mmcif_files/ \
--output_dir ./ \
--gpus 2 \
--uniref90_database_path data/uniref90/uniref90.fasta \
--mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
--pdb70_database_path data/pdb70/pdb70 \
--uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--jackhmmer_binary_path `which jackhmmer` \
--hhblits_binary_path `which hhblits` \
--hhsearch_binary_path `which hhsearch` \
--kalign_binary_path `which kalign` \
--enable_workflow
```
#### Inference with lower memory usage
AlphaFold's embedding representations take up a lot of memory as the sequence length increases. To reduce memory usage,
add the parameters `--chunk_size [N]` and `--inplace` to the command line or to the shell script `./inference.sh`.
The smaller you set N, the less memory is used, at some cost in speed. We can run inference on
a sequence of length 10,000 in bf16 with 61 GB of memory on an NVIDIA A100 (80 GB). For fp32, the maximum length is 8,000.
> You need to set `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:15000` to run inference on such extremely long sequences.
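For example, export it in the shell before launching inference:
```shell
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:15000
```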
```shell
python inference.py target.fasta data/pdb_mmcif/mmcif_files/ \
--output_dir ./ \
--gpus 2 \
--uniref90_database_path data/uniref90/uniref90.fasta \
--mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
--pdb70_database_path data/pdb70/pdb70 \
--uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--jackhmmer_binary_path `which jackhmmer` \
--hhblits_binary_path `which hhblits` \
--hhsearch_binary_path `which hhsearch` \
--kalign_binary_path `which kalign` \
--chunk_size N \
--inplace
```
#### Inference on multimer sequences
AlphaFold-Multimer is supported. You can use the following command or the shell script `./inference_multimer.sh`.
The workflow and memory parameters mentioned above can also be used.
```shell
python inference.py target.fasta data/pdb_mmcif/mmcif_files/ \
--output_dir ./ \
--gpus 2 \
--model_preset multimer \
--uniref90_database_path data/uniref90/uniref90.fasta \
--mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
--pdb70_database_path data/pdb70/pdb70 \
--uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--uniprot_database_path data/uniprot/uniprot_sprot.fasta \
--pdb_seqres_database_path data/pdb_seqres/pdb_seqres.txt \
--param_path data/params/params_model_1_multimer.npz \
--model_name model_1_multimer \
--jackhmmer_binary_path `which jackhmmer` \
--hhblits_binary_path `which hhblits` \
--hhsearch_binary_path `which hhsearch` \
--kalign_binary_path `which kalign`
```
## Performance Benchmark
We have included a performance benchmark script in `./benchmark`. You can benchmark the performance of Evoformer using different settings.
```shell
cd ./benchmark
torchrun --nproc_per_node=1 perf.py --msa-length 128 --res-length 256
```
Benchmark Dynamic Axial Parallelism with 2 GPUs:
```shell
cd ./benchmark
torchrun --nproc_per_node=2 perf.py --msa-length 128 --res-length 256 --dap-size 2
```
If you want to benchmark with [OpenFold](https://github.com/aqlaboratory/openfold), you need to install OpenFold first and benchmark with option `--openfold`:
```shell
torchrun --nproc_per_node=1 perf.py --msa-length 128 --res-length 256 --openfold
```
## Cite us
Cite this paper if you use FastFold in your research publications.
```
@misc{cheng2022fastfold,
title={FastFold: Reducing AlphaFold Training Time from 11 Days to 67 Hours},
author={Shenggan Cheng and Ruidong Wu and Zhongming Yu and Binrui Li and Xiwen Zhang and Jian Peng and Yang You},
year={2022},
eprint={2203.00854},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
The `setup.py` changes in this commit (reconstructed as a unified diff):
```diff
@@ -120,6 +120,17 @@ def get_sha(root: Union[str, Path]) -> str:
     return 'Unknown'
 
+def get_abi():
+    try:
+        command = "echo '#include <string>' | gcc -x c++ -E -dM - | fgrep _GLIBCXX_USE_CXX11_ABI"
+        result = subprocess.run(command, shell=True, capture_output=True, text=True)
+        output = result.stdout.strip()
+        abi = "abi" + output.split(" ")[-1]
+        return abi
+    except Exception:
+        return 'abiUnknown'
+
+
 def get_version_add(sha: Optional[str] = None) -> str:
     fastfold_root = os.path.dirname(os.path.abspath(__file__))
     add_version_path = "version.py"
@@ -128,12 +139,24 @@ def get_version_add(sha: Optional[str] = None) -> str:
         sha = get_sha(fastfold_root)
     version = 'git' + sha[:7]
 
-    if os.getenv('FASTFOLD_BUILD_VERSION'):
-        version_dtk = os.getenv('FASTFOLD_BUILD_VERSION', "")
-        version += "." + version_dtk
+    # abi version
+    version += "." + get_abi()
+
+    # dtk version
+    if os.getenv("ROCM_PATH"):
+        rocm_path = os.getenv('ROCM_PATH', "")
+        rocm_version_path = os.path.join(rocm_path, '.info', "rocm_version")
+        with open(rocm_version_path, 'r', encoding='utf-8') as file:
+            lines = file.readlines()
+            rocm_version = lines[0][:-2].replace(".", "")
+        version += ".dtk" + rocm_version
+
+    # torch version
+    version += ".torch" + torch.__version__[:4]
 
     with open(add_version_path, encoding="utf-8", mode="w") as file:
-        file.write("__version__='0.2.1'+'+{}'\n".format(version))
+        file.write("__version__='0.2.1'\n")
+        file.write("__dcu_version__='0.2.1+{}'\n".format(version))
         file.close()
@@ -142,7 +165,7 @@ def get_version():
     version_file = 'version.py'
     with open(version_file, encoding='utf-8') as f:
         exec(compile(f.read(), version_file, 'exec'))
-    return locals()['__version__']
+    return locals()['__dcu_version__']
 
 setup(
 ...
```
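With this change, `get_version_add()` writes both a plain version and a DCU build version into `version.py`, and `get_version()` (used by `setup()`) returns the DCU one, so the ABI, DTK, and torch tags show up in the installed package version. For illustration, with hypothetical git hash, ABI, DTK, and torch values:
```python
# version.py as written by get_version_add() -- values are illustrative
__version__ = '0.2.1'
__dcu_version__ = '0.2.1+git9708740.abi1.dtk2304.torch1.13'
```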