1. 04 Mar, 2024 (1 commit)
  2. 20 Dec, 2023 (1 commit)
  3. 10 Nov, 2023 (1 commit)
      TurboMind 2 (#590) · ab1767cf
      Li Zhang authored
      * refresh decoder attention kernel
      
      * block-level kv cache
      
      * `BlockManager` & `SequenceManager`
      
      * update
      
      * update
      
      * update
      
      * update
      
      * rename
      
      * GQA support
      
      * fix context length
      
      * GQA dispatch
      
      * kv8
      
      * tune
      
      * async stream cb
      
      * nvtx
      
      * config parsing
      
      * debug
      
      * optimize output cost
      
      * split-k decoding
      
      * minor
      
      * truncate `session_len` by available blocks
      
      * minor
      
      * license
      
      * fix
      
      * dispatch `cp.async`
      
      * fix linking
      
      * fix
      
      * fix deadlock
      
      * guard input length
      
      * correct start offset
      
      * fix prefill chunking
      
      * fix `cache_block_seq_len` param passing
      
      * fix `block_size` fmtstr
      
      * fix output tokens
      
      * fix batch resizing
      
      * fix masking of finished sequences
      
      * add debug util
      
      * free unused block early
      
      * add ntk scaling and logn scaling
      
      * cmake flags
      
      * fix typo
      
      * w4a16 for sm75
      
      * fix msvc build
      
      * fix msvc build
      
      * fix block verification
      
      * fix msvc build
      
      * use `std::shuffle`
      
      * fix lint
      
      * fix lint
      
      * fix lint
      
      * clear incoming buffer
      
      * clear finished requests
      
      * fix batch initialization
      
      * fix typo
      
      * fix typo
      
      * fix comparison
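The core of this merge is the block-level KV cache managed by the new `BlockManager` and `SequenceManager`, with unused blocks freed early and `session_len` truncated to the blocks actually available. As a rough conceptual sketch only (Python, with hypothetical names, pool sizes, and methods; TurboMind's real managers are C++), a block manager hands out fixed-size cache blocks from a bounded pool and a sequence manager tracks which blocks each running sequence owns, returning them as soon as the sequence finishes:

```python
# Conceptual sketch of block-level KV-cache bookkeeping; all names and sizes
# below are illustrative assumptions, not TurboMind's actual implementation.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Sequence:
    seq_id: int
    length: int = 0                                   # tokens held so far
    blocks: List[int] = field(default_factory=list)   # indices of owned cache blocks


class BlockManager:
    """Hands out fixed-size KV-cache blocks from a bounded pool."""

    def __init__(self, num_blocks: int, block_seq_len: int):
        self.block_seq_len = block_seq_len            # tokens per block
        self.free_blocks = list(range(num_blocks))    # simple free list

    def allocate(self, n: int) -> List[int]:
        if n > len(self.free_blocks):
            raise RuntimeError("KV cache exhausted; evict or reject the request")
        taken, self.free_blocks = self.free_blocks[:n], self.free_blocks[n:]
        return taken

    def release(self, blocks: List[int]) -> None:
        self.free_blocks.extend(blocks)


class SequenceManager:
    """Tracks which cache blocks each active sequence owns."""

    def __init__(self, block_manager: BlockManager):
        self.bm = block_manager
        self.sequences: Dict[int, Sequence] = {}

    def append_tokens(self, seq_id: int, n_tokens: int) -> None:
        seq = self.sequences.setdefault(seq_id, Sequence(seq_id))
        seq.length += n_tokens
        needed = -(-seq.length // self.bm.block_seq_len)          # ceil division
        if needed > len(seq.blocks):
            seq.blocks += self.bm.allocate(needed - len(seq.blocks))

    def finish(self, seq_id: int) -> None:
        seq = self.sequences.pop(seq_id)
        self.bm.release(seq.blocks)                   # free blocks as early as possible


if __name__ == "__main__":
    mgr = SequenceManager(BlockManager(num_blocks=8, block_seq_len=128))
    mgr.append_tokens(seq_id=0, n_tokens=300)   # needs ceil(300 / 128) = 3 blocks
    mgr.finish(0)                               # all 3 blocks return to the pool
```

In this picture, truncating `session_len` by available blocks amounts to capping an admitted request at roughly `len(free_blocks) * block_seq_len` tokens instead of failing mid-generation.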
  4. 11 Sep, 2023 (1 commit)
      Support codellama (#359) · 65c662f9
      Lyu Han authored
      * tmp
      
      * add demo for codellama inference
      
      * update
      
      * update
      
      * update
      
      * update codellama.md
      
      * export rope_theta
      
      * update
      
      * update doc
      
      * fix client.py
      
      * define SamplingParam
      
      * rollback 'end'
      
      * rotary_emb_base to rotary_embedding_base
      
      * change to baichuan2-7b
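The `export rope_theta` and `rotary_emb_base to rotary_embedding_base` commits are about making the rotary-embedding base a first-class, correctly named config value, since CodeLlama ships with a much larger base (1e6) than LLaMA's usual 10000. A minimal sketch of how that base enters the position encoding (NumPy, with illustrative shapes and names rather than lmdeploy's actual API):

```python
# Illustrative rotary position embedding with a configurable base ("rope_theta").
# Function name, shapes, and defaults are assumptions for this sketch only.
import numpy as np


def rotary_embedding(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply interleaved RoPE to x of shape (seq_len, head_dim); head_dim must be even."""
    _, head_dim = x.shape
    inv_freq = base ** (-np.arange(0, head_dim, 2) / head_dim)   # (head_dim/2,)
    angles = positions[:, None] * inv_freq[None, :]              # (seq_len, head_dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                              # even/odd feature pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out


if __name__ == "__main__":
    x = np.random.randn(16, 64)
    pos = np.arange(16, dtype=np.float64)
    y_llama = rotary_embedding(x, pos, base=10_000.0)      # LLaMA-style default
    y_code = rotary_embedding(x, pos, base=1_000_000.0)    # larger rope_theta: slower-rotating frequencies
```

A larger base stretches the rotation periods, which is why the value has to be exported with the model rather than hard-coded in the engine.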
  5. 18 Aug, 2023 (1 commit)
  6. 21 Jul, 2023 (1 commit)
  7. 01 Jul, 2023 (3 commits)
  8. 20 Jun, 2023 (1 commit)