- 18 Aug, 2023 1 commit
Li Zhang authored
* qwen support
* dynamic ntk & logn attn
* fix ntk & add chat template
* fix ntk scaling & stop words
* fix lint
* add tiktoken to requirements.txt
* fix tokenizer, set model format automatically
* update model.py
* update readme
* fix lint
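The "dynamic ntk & logn attn" items refer to two long-context techniques used by Qwen-style models: enlarging the RoPE base on the fly when the prompt exceeds the trained context, and scaling queries by log-n of their position. Below is a minimal NumPy sketch of both ideas, assuming the common continuous NTK rule `base * (seq_len / trained_len) ** (dim / (dim - 2))`; the function names are hypothetical and the real work happens inside TurboMind's CUDA kernels (Qwen's own code also picks the scaling factor slightly differently).

```python
import numpy as np

def dynamic_ntk_base(base, head_dim, seq_len, trained_len):
    # Assumption: continuous dynamic-NTK rule; only kicks in past the trained context.
    if seq_len <= trained_len:
        return base
    alpha = seq_len / trained_len
    return base * alpha ** (head_dim / (head_dim - 2))

def rope_angles(base, head_dim, positions):
    # Standard RoPE frequencies built from the (possibly rescaled) base.
    inv_freq = 1.0 / base ** (np.arange(0, head_dim, 2) / head_dim)
    return np.outer(positions, inv_freq)            # (seq, head_dim // 2)

def logn_query_scale(positions, trained_len):
    # logn attention: scale queries at positions beyond the trained context
    # by log(position) / log(trained_len); earlier positions are untouched.
    pos = np.asarray(positions, dtype=np.float64) + 1.0
    scale = np.log(pos) / np.log(trained_len)
    return np.where(pos > trained_len, scale, 1.0)
```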
- 17 Aug, 2023 1 commit
Chen Xin authored
* __PRETTY_FUNCTION__
* CASE_K
* uint
* remove not
* HALF_FLT_MAX
* struct init
* port utils
* better build pthread-win32
* port kernels
* port utils/gemm_test
* hide windows header
* port models
* port examples && triton_backend && unittests
* update build readme
* fix lint
* fix lint
* fix lint
* fix lint
* fix lint
* fix build
* fix build
* cmake version
* fix typos
* update ci
* port kernels/gemm_s_f16
* update ci
* fix ci
* use cudaStreamSynchronize instead of volatile check
* remove gettimeofday
* remove pthread-win32
* remove dirent.h
* update pre-commit
* update
* remove todo
* fix include
* fix build
* fix build
* fix build ci
* fix github action trigger
* update README
* fix linux-build ci
* remove windows folder
* fix lint
* update readme
- 24 Jul, 2023 1 commit
Li Zhang authored
* decode only forward pass
* fix lint
* batch embedding
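For context on "decode only forward pass" and "batch embedding": during decoding each sequence contributes one new token per step, so the embedding lookup reduces to a batched gather over the last token of every sequence. A tiny illustrative sketch with hypothetical names, not the actual lmdeploy API:

```python
import numpy as np

def batch_embedding(embed_table, last_token_ids):
    # One token per sequence during decode: (batch,) ids -> (batch, hidden).
    return embed_table[np.asarray(last_token_ids)]
```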
- 21 Jul, 2023 1 commit
Li Zhang authored
* add GQA for llama2
* fix model conversion
* fix lint & remove dev log
* update news
* minor
* fix allocation size
* fix split_dim for w_qkv.bias
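Grouped-query attention (GQA), used by the larger Llama 2 models, keeps fewer key/value heads than query heads and shares each KV head across a group of query heads; this is also why the fused w_qkv weight and bias now split into unequal Q/K/V parts rather than three equal chunks. A minimal NumPy reference of the idea, with hypothetical names; TurboMind performs this inside fused CUDA kernels:

```python
import numpy as np

def grouped_query_attention(q, k, v):
    # q: (seq, n_q_heads, d); k, v: (seq, n_kv_heads, d) with n_kv_heads < n_q_heads.
    n_q_heads, n_kv_heads, d = q.shape[1], k.shape[1], q.shape[-1]
    group = n_q_heads // n_kv_heads
    k = np.repeat(k, group, axis=1)                 # share each KV head within its group
    v = np.repeat(v, group, axis=1)
    scores = np.einsum('qhd,khd->hqk', q, k) / np.sqrt(d)
    causal = np.triu(np.full(scores.shape[-2:], -np.inf), k=1)
    scores = scores + causal
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)
    return np.einsum('hqk,khd->qhd', probs, v)      # back to (seq, n_q_heads, d)
```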
- 01 Jul, 2023 3 commits
- 28 Jun, 2023 1 commit
tpoisonooo authored
* feat(src): add int8 and compile passed
* feat(kernels): fix
* feat(llama): update kernel
* feat(src): add debug
* fix(kernel): k_cache use int8_t pointer
* style(llama): clean code
* feat(deploy.py): revert to enable fmha
* style(LlamaV2): clean code
* feat(deploy.py): add default quant policy
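The int8 work here stores the K/V cache as int8_t and dequantizes inside the attention kernel, selectable through the quant policy added to deploy.py. A rough sketch of symmetric int8 quantization of a cache block, assuming a single per-tensor scale; the actual kernel layout and scale granularity may differ:

```python
import numpy as np

def quantize_kv_int8(kv, scale=None):
    # Symmetric int8 quantization: store int8 values plus one float scale.
    if scale is None:
        scale = max(float(np.abs(kv).max()) / 127.0, 1e-8)
    q = np.clip(np.round(kv / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize_kv_int8(q, scale):
    # Recover approximate float values before the attention matmul.
    return q.astype(np.float32) * scale
```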
- 24 Jun, 2023 1 commit
Li Zhang authored
* support attention bias
* fix conflict
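"Attention bias" here means supporting a bias term on the fused QKV projection, which some models use even though the original Llama does not. An illustrative sketch with hypothetical names (for GQA models the split sizes would be unequal):

```python
import numpy as np

def qkv_projection(x, w_qkv, b_qkv=None):
    # x: (seq, hidden); w_qkv: (hidden, 3 * hidden); optional fused bias.
    qkv = x @ w_qkv
    if b_qkv is not None:
        qkv = qkv + b_qkv
    return np.split(qkv, 3, axis=-1)                # q, k, v
```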
- 20 Jun, 2023 1 commit
Li Zhang authored
* add ft code
* gitignore
* fix lint
* revert fmha