We strongly recommend using the Docker environment, which is the simplest and fastest approach.
#### 1. Pull Image
Visit LightX2V's [Docker Hub](https://hub.docker.com/r/lightx2v/lightx2v/tags) and select a tag with the latest date, such as `25080601-cu128`:
```bash
# Pull the latest LightX2V image (this image does not include SageAttention)
docker pull lightx2v/lightx2v:25080601-cu128
```
If you need `SageAttention`, use image tags with the `-SageSmXX` suffix. The correct suffix depends on your GPU architecture:
1. A100: -SageSm80
2. RTX30 series: -SageSm86
...
4. H100: -SageSm90
5. RTX50 series: -SageSm120
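The `XX` in each suffix matches the GPU's CUDA compute capability with the dot removed (A100 is 8.0, H100 is 9.0, and so on). As a minimal sketch, assuming the tag naming shown in this guide, the suffix can be derived from the capability string:

```shell
# Derive the -SageSmXX suffix from a compute capability string, e.g. the
# value printed by: nvidia-smi --query-gpu=compute_cap --format=csv,noheader
# The base tag 25080601-cu128 is the example tag from this guide; verify the
# combined tag actually exists on Docker Hub before pulling.
cap="9.0"                                  # H100 reports compute capability 9.0
suffix="-SageSm$(echo "$cap" | tr -d .)"   # "9.0" -> "-SageSm90"
echo "lightx2v/lightx2v:25080601-cu128${suffix}"
```

This is only a convenience for picking the right tag; the authoritative list of available suffixes is the Docker Hub tags page linked above.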
For example, to use `SageAttention` on a 4090 or an H100, pull the image with the matching `-SageSmXX` suffix.
We recommend the `cuda128` environment for faster inference. If you need the `cuda124` environment, use image tags with the `-cu124` suffix:
```bash
# cuda124 version, without SageAttention installed
docker pull lightx2v/lightx2v:25080601-cu124
# For 4090, cuda124 version, with SageAttention installed