# Prepare Environment

We recommend using a Docker environment. Prebuilt images are available on [Docker Hub](https://hub.docker.com/r/lightx2v/lightx2v/tags); please select the tag with the latest date, for example `25061301`.

```shell
docker pull lightx2v/lightx2v:25061301
docker run --gpus all -itd --ipc=host --name [container_name] -v [mount_settings] --entrypoint /bin/bash [image_id]
```
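For example, with the placeholders filled in (the container name and mount path below are hypothetical; the command is echoed first so you can review it before running):

```shell
# Hypothetical values -- substitute your own container name, mount, and image
container_name="lightx2v_dev"
mount_settings="$HOME/models:/workspace/models"
image="lightx2v/lightx2v:25061301"

# Echo the assembled command so it can be reviewed first;
# drop the leading "echo" to actually start the container
echo docker run --gpus all -itd --ipc=host --name "$container_name" \
  -v "$mount_settings" --entrypoint /bin/bash "$image"
```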

If you want to set up the environment yourself using conda, you can refer to the following steps:

```shell
# clone repo and submodules
git clone https://github.com/ModelTC/lightx2v.git lightx2v && cd lightx2v

conda create -n lightx2v python=3.11 && conda activate lightx2v
pip install -r requirements.txt

# Install transformers separately to bypass pip's dependency-conflict check.
# The Hunyuan model requires this transformers version; skip this step if you
# do not need to run Hunyuan.
pip install transformers==4.45.2

# install flash-attention 2
git clone https://github.com/Dao-AILab/flash-attention.git --recursive
cd flash-attention && python setup.py install

# install flash-attention 3 (Hopper GPUs only); the previous step left us
# inside the flash-attention directory
cd hopper && python setup.py install
```
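Flash-attention 3 only targets Hopper GPUs (compute capability 9.x). A small sketch for gating that build step on the reported compute capability; the helper name and the `nvidia-smi` query are illustrative, not part of the lightx2v scripts:

```shell
# Decide whether to build flash-attention 3 from the GPU's compute capability.
# Query the value with:
#   nvidia-smi --query-gpu=compute_cap --format=csv,noheader
should_build_fa3() {
  case "$1" in
    9.*) echo yes ;;   # Hopper (e.g. H100, H200)
    *)   echo no ;;    # pre-Hopper: skip flash-attention 3
  esac
}

should_build_fa3 "9.0"   # H100
should_build_fa3 "8.0"   # A100
```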

# Infer

```shell
# Modify the paths in the script to match your setup before running
bash scripts/run_wan_t2v.sh
```

In addition to the input arguments in the script itself, there are required parameters in the `${lightx2v_path}/configs/wan_t2v.json` file passed via `--config_json`. You can modify them as needed.
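Since the config file is plain JSON, it can also be inspected or tweaked programmatically before launch. A minimal sketch using the Python interpreter from the environment; the keys shown (`infer_steps`, `seed`) are illustrative stand-ins for whatever `wan_t2v.json` actually contains:

```shell
# Tweak a config value before running (keys here are illustrative stand-ins
# for the real contents of configs/wan_t2v.json)
python - <<'EOF'
import json

cfg = {"infer_steps": 50, "seed": 42}   # stand-in config
cfg["infer_steps"] = 25                 # e.g. fewer steps for a quick smoke test
print(json.dumps(cfg))
EOF
```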