[Paper](https://openreview.net/attachment?id=tpJPlFTyxd&name=pdf) · [Project page](https://tangoflux.github.io/)
## Overall Pipeline
TangoFlux is built from FluxTransformer blocks, a stack of Diffusion Transformer (DiT) and Multimodal Diffusion Transformer (MMDiT) layers conditioned on a textual prompt and a duration embedding, and generates 44.1 kHz audio up to 30 seconds long. The model learns a rectified flow trajectory over audio latent representations encoded by a variational autoencoder (VAE). Training proceeds in three stages: pre-training, fine-tuning, and preference optimization. For alignment, TangoFlux uses CRPO, which iteratively generates new synthetic data, constructs preference pairs from it, and performs preference optimization on those pairs.
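The two ideas above, the rectified-flow training target and CRPO's preference-pair construction, can be sketched in a few lines of NumPy. This is an illustration only, not the actual training code: the interpolation convention and the stand-in `reward_fn` are assumptions, and the real pipeline operates on VAE latents with a learned reward model.

```python
import numpy as np

def rectified_flow_pair(x0, x1, t):
    """Build one flow-matching training pair.

    x0: clean audio latent (from the VAE encoder), x1: Gaussian noise,
    t: scalar time in [0, 1]. The model regresses the constant velocity
    v = x1 - x0 along the straight-line path between them.
    (Interpolation direction is an assumption; conventions vary.)
    """
    xt = (1.0 - t) * x0 + t * x1   # straight-line interpolation
    v_target = x1 - x0             # rectified-flow velocity target
    return xt, v_target

def crpo_preference_pair(candidates, reward_fn):
    """Pick a (winner, loser) pair from generated candidates by reward.

    CRPO generates several candidates per prompt, scores each with a
    reward model, and keeps the best/worst as a preference pair for
    preference optimization. `reward_fn` here is a hypothetical stand-in.
    """
    rewards = np.array([reward_fn(c) for c in candidates])
    return candidates[rewards.argmax()], candidates[rewards.argmin()]
```

At `t = 0` the interpolant is the clean latent and at `t = 1` it is pure noise, so a single regression target covers the whole trajectory; the preference pairs feed the third training stage.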

## Quickstart
## Training TangoFlux
## Inference with TangoFlux
## Evaluation Scripts
## Comparison Between TangoFlux and Other Audio Generation Models
This comparison evaluates TangoFlux and other audio generation models across various metrics. Key metrics include:
- **Output Length**: Represents the duration of the generated audio.
- **FD**: Fréchet Distance, which measures how closely the distribution of generated audio embeddings matches that of reference audio (lower is better).
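For reference, the Fréchet Distance between two embedding sets can be computed as below. This is a minimal NumPy sketch of the standard formula, not the project's evaluation script; the embedding model that produces the vectors (e.g. PANNs or VGGish) is outside this snippet.

```python
import numpy as np

def _sqrtm_psd(mat):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(emb_a, emb_b):
    """Fréchet Distance between Gaussians fit to two embedding sets.

    emb_a, emb_b: arrays of shape (n_samples, dim).
    FD = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})
    """
    mu_a, mu_b = emb_a.mean(0), emb_b.mean(0)
    c_a = np.cov(emb_a, rowvar=False)
    c_b = np.cov(emb_b, rowvar=False)
    # Tr((C_a C_b)^{1/2}) via the symmetric form S C_a S with S = C_b^{1/2}
    s = _sqrtm_psd(c_b)
    cross = np.sum(np.sqrt(np.clip(np.linalg.eigvalsh(s @ c_a @ s), 0.0, None)))
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(c_a) + np.trace(c_b) - 2.0 * cross)
```

Identical embedding sets give a distance of zero, and a pure mean shift with unchanged covariance gives the squared norm of the shift, which is a quick sanity check for any FD implementation.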