Commit 7f4b059f authored by ziyannchen's avatar ziyannchen

add support to run on CPU device for macOS users

parent e9e58bef
...@@ -83,6 +83,7 @@
- [x] Add a patch-based sampling schedule:mag:.
- [x] Upload inference code of latent image guidance:page_facing_up:.
- [ ] Improve the performance:superhero:.
- [ ] Support MPS acceleration for macOS users.
## <a name="installation"></a>:gear:Installation
<!-- - **Python** >= 3.9
...@@ -101,6 +102,8 @@ conda activate diffbir
pip install -r requirements.txt
```
Note that this installation guide is only compatible with **Linux**. If you are working on a different platform, please check [xOS Installation](assets/docs/installation_xOS.md).
<!-- ```shell
# clone this repo
git clone https://github.com/XPixelGroup/DiffBIR.git
......
# Linux
Please follow the primary README.md of this repo.
# Windows
Windows users may stumble when installing the package `triton`.
You can choose to run on **CPU** without `xformers` and `triton` installed.
To use **CUDA**, please refer to [issue#24](https://github.com/XPixelGroup/DiffBIR/issues/24) to try to solve the `triton` installation problem.
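If you run on CPU without `xformers` and `triton`, the code has to tolerate their absence. A minimal sketch (not part of this repo; `has_module` and `USE_XFORMERS` are illustrative names) of how such an optional dependency can be probed with only the standard library:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if the named package is importable, without importing it."""
    return importlib.util.find_spec(name) is not None

# Hypothetical guard: fall back to the plain PyTorch attention path
# whenever xformers is not installed (e.g. on CPU-only Windows setups).
USE_XFORMERS = has_module("xformers")
```

Probing with `find_spec` avoids paying the import cost (or an `ImportError`) just to discover the package is missing.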
# macOS
Currently, only the CPU device is supported for running DiffBIR on Apple Silicon, since most GPU acceleration packages are compatible with CUDA only.
We are still working on supporting the MPS device. Stay tuned for our progress!
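Once MPS support lands, device selection would presumably fall back in priority order. A hypothetical sketch of that fallback (`pick_device` is an illustrative name, not DiffBIR's actual code; in practice the flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Hypothetical helper: prefer CUDA, then MPS, then plain CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# On a stock Apple Silicon setup today (no CUDA, MPS not yet supported here):
print(pick_device(False, False))  # cpu
```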
You can try to set up according to the following steps.
1. Install **torch** according to the [official document](https://pytorch.org/get-started/locally/).
```bash
pip install torch torchvision
```
2. The packages `triton` and `xformers` are not needed, since they only work with CUDA.
Remove the torch- and CUDA-related packages; your `requirements.txt` should look like:
```bash
# requirements.txt
pytorch_lightning==1.4.2
einops
open-clip-torch
omegaconf
torchmetrics==0.6.0
opencv-python-headless
scipy
matplotlib
lpips
gradio
chardet
transformers
facexlib
```
```bash
pip install -r requirements.txt
```
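If you'd rather derive the CPU requirements list programmatically than edit it by hand, here is a small sketch; the marker prefixes are assumptions based on the CUDA pins shown in this repo's `requirements.txt`, and `cpu_requirements` is an illustrative name:

```python
# Hypothetical helper: drop CUDA-specific lines from a requirements list.
CUDA_MARKERS = ("--extra-index-url", "torch==", "torchvision==",
                "torchaudio==", "xformers==", "triton")

def cpu_requirements(lines):
    """Keep only requirement lines that carry no CUDA-specific pin."""
    return [ln.strip() for ln in lines
            if ln.strip() and not ln.strip().startswith(CUDA_MARKERS)]

original = [
    "--extra-index-url https://download.pytorch.org/whl/cu116",
    "torch==1.13.1+cu116",
    "xformers==0.0.16",
    "pytorch_lightning==1.4.2",
    "einops",
]
print(cpu_requirements(original))  # ['pytorch_lightning==1.4.2', 'einops']
```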
3. Run the inference script on the CPU. Make sure you have downloaded the model weights first.
```bash
python inference.py \
--input inputs/demo/general \
--config configs/model/cldm.yaml \
--ckpt weights/general_full_v1.ckpt \
--reload_swinir --swinir_ckpt weights/general_swinir_v1.ckpt \
--steps 50 \
--sr_scale 4 \
--image_size 512 \
--color_fix_type wavelet --resize_back \
--output results/demo/general \
--device cpu
```
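For reference, this is roughly how a `--device` flag like the one above can be validated with `argparse`; a minimal illustrative sketch, not DiffBIR's actual `inference.py`:

```python
import argparse

# Illustrative parser: restrict --device to the values the docs mention.
parser = argparse.ArgumentParser(description="Sketch of device flag parsing.")
parser.add_argument("--device", choices=["cpu", "cuda", "mps"], default="cuda",
                    help="compute device; use cpu on macOS for now")

args = parser.parse_args(["--device", "cpu"])
print(args.device)  # cpu
```

With `choices`, an unsupported value such as `--device tpu` exits with a usage error instead of failing later inside the model code.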
--extra-index-url https://download.pytorch.org/whl/cu116
torch==1.13.1+cu116
torchvision==0.14.1+cu116
torchaudio==0.13.1
xformers==0.0.16
pytorch_lightning==1.4.2
einops
......