In the inference process of diffusion models, cache reuse is an important acceleration technique.
- Its core idea is to skip redundant computation at certain time steps by reusing previously cached results, thereby improving inference efficiency.
- The key to the algorithm is deciding at which time steps to perform cache reuse, usually through a dynamic judgment based on changes in the model state or on an error threshold.
- During inference, key content such as intermediate features, residuals, and attention outputs is cached. When a reusable time step is reached, the cached content is used directly and the current output is reconstructed through approximation methods such as Taylor expansion, thereby reducing repeated computation and enabling efficient inference. The overall control flow is sketched below.
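The control flow can be illustrated with a minimal, framework-agnostic sketch. The names `model`, `scheduler_steps`, and `should_reuse` are placeholders rather than any real API, and the scheduler update is a stand-in:

```python
def denoise_with_cache(model, scheduler_steps, x, should_reuse):
    """Minimal sketch of cache reuse in a diffusion denoising loop.
    At reusable time steps the cached output stands in for a full forward pass."""
    cache = {"input": None, "output": None}
    for t in scheduler_steps:
        if cache["output"] is not None and should_reuse(cache["input"], x, t):
            noise_pred = cache["output"]          # reusable step: skip the forward pass
        else:
            noise_pred = model(x, t)              # full compute: refresh the cache
            cache["input"], cache["output"] = x, noise_pred
        x = x - 0.1 * noise_pred                  # stand-in for the real scheduler update
    return x
```

The individual algorithms below differ mainly in how `should_reuse` is decided and in how the cached content is turned back into an output.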
### TeaCache
The core idea of `TeaCache` is to accumulate the **relative L1** distance between the inputs of adjacent time steps. When the accumulated distance reaches the set threshold, the current time step does not use cache reuse; conversely, while the accumulated distance stays below the threshold, cache reuse is applied to accelerate inference.
- Specifically, at each inference step the algorithm calculates the relative L1 distance between the current input and the previous step's input and accumulates it.
- When the accumulated distance does not exceed the threshold, the model state has not changed significantly, so the most recently cached content is reused directly and part of the redundant computation is skipped. This significantly reduces the number of model forward passes and improves inference speed. A minimal sketch of the decision rule is shown right after this list.
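The decision rule can be sketched roughly as follows. This is an illustrative reimplementation of the idea rather than the official TeaCache code, and the class name and default threshold are assumptions:

```python
import torch

class TeaCacheDecider:
    """Accumulate the relative L1 distance between adjacent time-step inputs
    and allow cache reuse while the accumulated distance stays below a threshold."""

    def __init__(self, rel_l1_threshold: float = 0.1):
        self.rel_l1_threshold = rel_l1_threshold
        self.prev_input = None
        self.accumulated = 0.0

    def should_reuse(self, curr_input: torch.Tensor) -> bool:
        if self.prev_input is None:
            self.prev_input = curr_input      # first step: nothing cached yet
            return False
        # Relative L1 distance between the current and previous inputs.
        rel_l1 = ((curr_input - self.prev_input).abs().mean()
                  / (self.prev_input.abs().mean() + 1e-8)).item()
        self.prev_input = curr_input
        self.accumulated += rel_l1
        if self.accumulated < self.rel_l1_threshold:
            return True                       # change is still small: reuse the cache
        self.accumulated = 0.0                # change is large: recompute and reset
        return False
```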
In practice, TeaCache achieves significant acceleration while maintaining generation quality. The video comparison before and after acceleration is as follows:
| Before Acceleration | After Acceleration |
|:------:|:------:|
| Single H200 inference time: 58s | Single H200 inference time: 17.9s |
|  |  |
### TaylorSeer Cache
The core of `TaylorSeer Cache` lies in using the Taylor formula to recompute the cached content as residual compensation at cache-reuse time steps.
- Specifically, at cache-reuse time steps the historical cache is not simply reused; instead, the current output is approximately reconstructed through a Taylor expansion. This further improves output accuracy while reducing the computational load.
- The Taylor expansion effectively captures subtle changes in the model state and compensates for the error introduced by cache reuse, preserving generation quality during acceleration.
`TaylorSeer Cache` is suitable for scenarios with high output-accuracy requirements and can further improve inference performance on top of plain cache reuse. A sketch of the reconstruction step is given below.
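The reconstruction can be illustrated with a finite-difference Taylor extrapolation over cached features. This is only a sketch of the idea; the function name and the way the derivatives are estimated are assumptions, not the exact published formulation:

```python
import torch

def taylor_extrapolate(cached_feats, cached_steps, target_step, order=2):
    """Approximate the feature at `target_step` from features cached at earlier
    full-compute steps via a truncated Taylor expansion, with derivatives
    estimated by finite differences. The caches hold the most recent
    full-compute results, newest last."""
    f0, t0 = cached_feats[-1], cached_steps[-1]
    dt = target_step - t0
    approx = f0.clone()
    if order >= 1 and len(cached_feats) >= 2:
        h1 = cached_steps[-1] - cached_steps[-2]
        d1 = (cached_feats[-1] - cached_feats[-2]) / h1        # first-order difference
        approx = approx + d1 * dt
        if order >= 2 and len(cached_feats) >= 3:
            h2 = cached_steps[-2] - cached_steps[-3]
            d1_prev = (cached_feats[-2] - cached_feats[-3]) / h2
            d2 = (d1 - d1_prev) / h1                            # second-order difference
            approx = approx + 0.5 * d2 * dt * dt
    return approx
```

On a reuse step the extrapolated value replaces the block's output; on a full-compute step the freshly computed features are appended to the cache.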
| Before Acceleration | After Acceleration |
|:------:|:------:|
| Single H200 inference time: 57.7s | Single H200 inference time: 41.3s |
|  |  |
### AdaCache
The core idea of `AdaCache` is to dynamically adjust the cache-reuse step size based on partial cached content within specified blocks.
- The algorithm analyzes the feature differences between two adjacent time steps within specific blocks and adaptively determines the interval until the next cache-reuse time step based on the magnitude of the difference.
- When the model state changes little, the step size automatically increases, reducing the cache update frequency; when the state changes significantly, the step size decreases to ensure output quality.
This allows the caching strategy to be adjusted flexibly according to the dynamics of the actual inference process, achieving more efficient acceleration and better generation results. AdaCache is suitable for scenarios with high requirements for both inference speed and generation quality. A sketch of the interval-selection rule is given below.
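The interval selection can be sketched as a simple mapping from the measured feature change to a reuse step size. The monitored block, threshold values, and maximum interval below are illustrative assumptions, not AdaCache's published settings:

```python
import torch

def next_reuse_interval(prev_block_feat: torch.Tensor,
                        curr_block_feat: torch.Tensor,
                        max_interval: int = 6,
                        thresholds=(0.03, 0.06, 0.12)) -> int:
    """Map the feature change of a monitored block between two adjacent
    time steps to the number of upcoming steps that may reuse the cache."""
    # Relative L1 difference of the monitored block's features.
    diff = ((curr_block_feat - prev_block_feat).abs().mean()
            / (prev_block_feat.abs().mean() + 1e-8)).item()
    # Small change -> long step size (more reuse); large change -> short step size.
    for i, th in enumerate(thresholds):
        if diff < th:
            return max(1, max_interval - 2 * i)
    return 1  # the state is changing quickly: recompute every step
```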
| Before Acceleration | After Acceleration |
|:------:|:------:|
| Single H200 inference time: 227s | Single H200 inference time: 83s |
|  |  |
### CustomCache
- `CustomCache` combines the real-time responsiveness and soundness of `TeaCache` in cache decision-making, using dynamic thresholds to determine when to perform cache reuse.
- At the same time, it uses `TaylorSeer`'s Taylor-expansion method to make fuller use of the cached content.
This not only determines the timing of cache reuse efficiently but also maximizes the use of cached content, improving output accuracy and generation quality. Actual testing shows that across multiple content generation tasks, `CustomCache` produces better video quality than `TeaCache`, `TaylorSeer Cache`, or `AdaCache` used alone, making it one of the cache acceleration algorithms with the best overall performance currently available. A combined sketch of the two mechanisms is given below.
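Under these assumptions, the combination can be sketched as a single stepping helper: a TeaCache-style accumulated relative-L1 threshold decides when to reuse, and a first-order Taylor extrapolation of the cached outputs decides what to return on reused steps. Names and defaults are illustrative, not the actual implementation:

```python
import torch

class CustomCacheSketch:
    """Combine threshold-based reuse decisions with Taylor-style reconstruction."""

    def __init__(self, threshold: float = 0.1):
        self.threshold = threshold
        self.accum = 0.0
        self.prev_input = None
        self.history = []          # [(step, output)] from the last two full computes

    def step(self, model, x: torch.Tensor, t: int) -> torch.Tensor:
        reuse = False
        if self.prev_input is not None and self.history:
            rel = ((x - self.prev_input).abs().mean()
                   / (self.prev_input.abs().mean() + 1e-8)).item()
            self.accum += rel
            reuse = self.accum < self.threshold    # TeaCache-style decision
        self.prev_input = x
        if reuse:
            t1, y1 = self.history[-1]
            if len(self.history) >= 2:
                t0, y0 = self.history[-2]
                deriv = (y1 - y0) / (t1 - t0)      # finite-difference first derivative
                return y1 + deriv * (t - t1)       # TaylorSeer-style reconstruction
            return y1
        y = model(x, t)                            # full compute: refresh the cache
        self.accum = 0.0
        self.history = (self.history + [(t, y)])[-2:]
        return y
```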
| Before Acceleration | After Acceleration |
|:------:|:------:|
| Single H200 inference time: 57.9s | Single H200 inference time: 16.6s |
|  |  |