The DiT model in `LightX2V` currently uses three types of attention mechanisms. Each type of attention can be configured with a specific backend library.
To switch to a different backend, simply replace the corresponding value in the config with the type name from the table above.
Tip: `radial_attn` can only be used for self attention, due to the limitations of its sparse algorithm design.
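As a minimal sketch, assuming the config file uses keys such as `self_attn_1_type`, `cross_attn_1_type`, and `cross_attn_2_type` (the exact key names may vary by model; check the example configs shipped with the repository), switching self attention to `radial_attn` while keeping a dense backend for cross attention might look like:

```json
{
    "self_attn_1_type": "radial_attn",
    "cross_attn_1_type": "flash_attn3",
    "cross_attn_2_type": "flash_attn3"
}
```

Note that `radial_attn` appears only in the self-attention field, consistent with the tip above; the cross-attention fields keep a dense backend such as `flash_attn3`.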
For further customization or behavior tuning, please refer to the official documentation or implementation code of the respective attention libraries.