    Remove vLLM dependency for CUDA (#2751) · 52e48739
    Daniël de Kok authored
    * Remove vLLM dependency for CUDA
    
    This change adds `attention-kernels` as a dependency for paged
    attention and cache reshaping. With that, we no longer use vLLM
    anywhere for CUDA.
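
    For illustration, here is a minimal sketch of how the CUDA path could call
    into the new package for cache reshaping. The import name
    `attention_kernels`, the `ops` alias, the `store_kv` wrapper, and the
    `reshape_and_cache` signature are assumptions for this sketch, not code
    taken from this change.

    ```python
    # Sketch only: hypothetical wrapper, not the actual TGI code path.
    import torch

    try:
        # Assumed import name for the `attention-kernels` package; it is
        # expected to supply the CUDA paged-attention and cache-reshaping
        # ops that previously came from vLLM.
        import attention_kernels as ops
    except ImportError as exc:
        raise ImportError(
            "Paged attention on CUDA requires the `attention-kernels` package"
        ) from exc


    def store_kv(
        key: torch.Tensor,
        value: torch.Tensor,
        key_cache: torch.Tensor,
        value_cache: torch.Tensor,
        slot_mapping: torch.Tensor,
    ) -> None:
        # Cache reshaping: scatter the new key/value tensors into the paged
        # KV cache at the given slots. The op name mirrors vLLM's
        # `reshape_and_cache` and is assumed here; the real signature may
        # differ.
        ops.reshape_and_cache(key, value, key_cache, value_cache, slot_mapping)
    ```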
    
    Tested manually (since we don't have paged attention in CI):
    
    ```
    ❯ ATTENTION=paged python -m pytest integration-tests -k "llama and awq" --release
    [...]
    5 snapshots passed.
    ```
    
    * Fix clippy warning