-> Docker requires a GPU with at least 8GB of VRAM, and all acceleration features are enabled by default.
+> Docker requires a GPU with at least 6GB of VRAM, and all acceleration features are enabled by default.
>
> Before running this Docker image, you can use the following command to check whether your device supports CUDA acceleration in Docker.
>
...
...
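A minimal sketch of such a check (the exact command in the elided step above may differ): assuming the NVIDIA Container Toolkit is installed, running `nvidia-smi` from a CUDA base image shows whether containers can reach the GPU.

```bash
# Sketch: verify that Docker containers can see the GPU.
# Assumes the NVIDIA Container Toolkit is installed; the CUDA image tag is only an example.
docker run --rm --gpus=all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
```

If this prints the usual `nvidia-smi` device table, CUDA acceleration should also be available inside the image described above.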
@@ -330,7 +330,7 @@ If your device has NPU acceleration hardware, you can follow the tutorial below
### Using MPS
-If your device uses Apple silicon chips, you can enable MPS acceleration for certain supported tasks (such as layout detection and formula detection).
+If your device uses an Apple silicon chip, you can enable MPS acceleration for your tasks.
You can enable MPS acceleration by setting the `device-mode` parameter to `mps` in the `magic-pdf.json` configuration file.
...
...
@@ -341,10 +341,6 @@ You can enable MPS acceleration by setting the `device-mode` parameter to `mps`
}
```
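Before relying on MPS, it can help to confirm that PyTorch actually sees the backend. A quick sketch, assuming `torch` is already installed in the environment:

```bash
# Sketch: prints True when the Metal Performance Shaders (MPS) backend is available to PyTorch.
python -c "import torch; print(torch.backends.mps.is_available())"
```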
-> [!TIP]
-> Since the formula recognition task cannot utilize MPS acceleration, you can disable the formula recognition feature in tasks where it is not needed to achieve optimal performance.
->
-> You can disable the formula recognition feature by setting the `enable` parameter in the `formula-config` section to `false`.
-> If the version number is less than 0.7.0, please report it in the issues section.
+> If the version number is less than 1.3.0, please report it in the issues section.
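For reference, the installed version can typically be read from the CLI itself (a sketch; the exact verification command appears in the elided step above):

```bash
# Sketch: print the installed version and compare it against 1.3.0.
magic-pdf --version
```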
### 5. Download Models
...
...
@@ -60,12 +59,12 @@ Download a sample file from the repository and test it.
### 8. Test CUDA Acceleration
-If your graphics card has at least 8GB of VRAM, follow these steps to test CUDA-accelerated parsing performance.
+If your graphics card has at least 6GB of VRAM, follow these steps to test CUDA-accelerated parsing performance.
-1.**Overwrite the installation of torch and torchvision** supporting CUDA.
+1. **Overwrite the installation of torch and torchvision** with CUDA-enabled builds. (Please select the appropriate index-url for your CUDA version; for more details, refer to the [PyTorch official website](https://pytorch.org/get-started/locally/).)
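A sketch of that reinstall step, assuming CUDA 11.8 (swap the index-url for the one matching your CUDA version):

```bash
# Sketch: force-reinstall CUDA-enabled wheels of torch and torchvision (example shown for CUDA 11.8).
pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu118
```

Afterwards, `python -c "import torch; print(torch.cuda.is_available())"` should print `True`.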