# Prepare Environment

We recommend using a Docker environment. The images are available on [dockerhub](https://hub.docker.com/r/lightx2v/lightx2v/tags) for lightx2v; please select the tag with the latest date, for example `25042502`.

```shell
docker pull lightx2v/lightx2v:25042502
docker run --gpus all -itd --ipc=host --name [container_name] -v [mount_settings] --entrypoint /bin/bash [image_id]
```

If you prefer to set up the environment yourself with conda, you can follow these steps:

```shell
# clone the repo and its submodules
git clone https://github.com/ModelTC/lightx2v.git lightx2v && cd lightx2v
git submodule update --init --recursive

conda create -n lightx2v python=3.11 && conda activate lightx2v
pip install -r requirements.txt

# Install separately afterwards to bypass the version conflict check.
# The Hunyuan model requires this version of transformers; if you do not need to run the Hunyuan model, you can skip this step.
pip install transformers==4.45.2

# install flash-attention 2
cd lightx2v/3rd/flash-attention && pip install --no-cache-dir -v -e .

# install flash-attention 3 (only needed on Hopper GPUs)
cd lightx2v/3rd/flash-attention/hopper && pip install --no-cache-dir -v -e .
```

# Infer

```shell
# Modify the paths in the script before running it
bash scripts/run_wan_t2v.sh
```

Besides the input arguments already present in the script, there are some required parameters in the `${lightx2v_path}/configs/wan_t2v.json` file specified by `--config_json`. You can modify them as needed.
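Before launching the script, it can be useful to confirm that the environment from the previous section is healthy. The snippet below is an optional sanity check, not part of the official setup; it assumes the usual import names `flash_attn` for flash-attention 2 and `flash_attn_interface` for the Hopper build of flash-attention 3.

```shell
# Optional sanity check: confirm the GPU is visible and flash-attention imports cleanly.
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"
python -c "import flash_attn; print(flash_attn.__version__)"
# flash_attn_interface is assumed to be the import name of the Hopper (flash-attention 3) build;
# it is only present if you installed it from the hopper directory above.
python -c "import flash_attn_interface" 2>/dev/null && echo "flash-attention 3 available" || echo "flash-attention 3 not installed"
```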
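One low-risk way to experiment with those parameters is to edit a copy of the default config instead of the file itself. The sketch below is illustrative only: the paths are placeholders for your own setup, and the actual keys inside `wan_t2v.json` are defined by the repository, so check the shipped file for the real schema.

```shell
# Illustrative workflow; adjust the paths to your own clone.
lightx2v_path=/path/to/lightx2v   # placeholder for your checkout
cp ${lightx2v_path}/configs/wan_t2v.json ${lightx2v_path}/configs/wan_t2v_custom.json
# Edit wan_t2v_custom.json as needed, then point the --config_json argument in
# scripts/run_wan_t2v.sh at the custom copy before running:
bash scripts/run_wan_t2v.sh
```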