- 05 Feb, 2024 4 commits
  - Camille Zhong authored
  - Hongxin Liu authored
  - Hongxin Liu authored
    * [llama] update training script
    * [doc] polish docstr
  - Hongxin Liu authored
    * [plugin] refactor prepare dataloader
    * [plugin] update train script
- 25 Jan, 2024 1 commit
  - 李文军 authored
    [NFC] polish applications/Colossal-LLaMA-2/colossal_llama2/tokenizer/init_tokenizer.py code style (#5228)
- 22 Jan, 2024 1 commit
  - Desperado-Jia authored
- 07 Dec, 2023 1 commit
  - Yuanchen authored
    * Add finetuning Colossal-LLaMA-2 example
    * Add finetuning Colossal-LLaMA-2 example 2
    * Add finetuning Colossal-LLaMA-2 example and support NEFTune
    * Add inference example and refine NEFTune
    * Modify readme file
    * Update the imports
    Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
    Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
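    For context on the NEFTune commit above: NEFTune (Noisy Embedding Fine-Tuning) injects uniform noise into the token embeddings during training, with magnitude alpha / sqrt(L * d) for sequence length L and embedding dimension d. A minimal PyTorch sketch of the idea as a forward hook; the hook name and the noise_alpha default are illustrative, not taken from this repository:

    ```python
    import math

    import torch


    def neftune_forward_hook(module, inputs, output, noise_alpha=5.0):
        """Add uniform noise to embedding outputs while training (NEFTune).

        Noise magnitude is noise_alpha / sqrt(L * d), where L is the sequence
        length and d is the embedding dimension of the (B, L, d) output.
        """
        if module.training:
            seq_len, dim = output.size(1), output.size(2)
            mag_norm = noise_alpha / math.sqrt(seq_len * dim)
            # Returning a tensor from a forward hook replaces the module output.
            return output + torch.zeros_like(output).uniform_(-mag_norm, mag_norm)
        return output


    # Hypothetical usage on a causal LM's input embedding layer:
    # model.get_input_embeddings().register_forward_hook(neftune_forward_hook)
    ```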
- 17 Oct, 2023 1 commit
  - github-actions[bot] authored
    Co-authored-by: github-actions <github-actions@github.com>
- 16 Oct, 2023 1 commit
  - Zian (Andy) Zheng authored
    To stay compatible with a change in the Transformers library, where a new argument 'padding_mask' was added to the forward function of the attention layer. https://github.com/huggingface/transformers/pull/25598
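    For context on the compatibility commit above: after https://github.com/huggingface/transformers/pull/25598, the attention layers' forward receives an extra padding_mask keyword, so any replaced or monkey-patched attention forward must at least accept it. A hedged sketch of the signature shape (parameter list abridged in the style of LlamaAttention; not this commit's actual diff):

    ```python
    from typing import Optional, Tuple

    import torch


    def patched_attention_forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: bool = False,
        use_cache: bool = False,
        padding_mask: Optional[torch.Tensor] = None,  # added by transformers PR #25598
        **kwargs,  # absorb further keywords newer releases may pass
    ):
        # The attention computation itself is unchanged; padding_mask is
        # accepted (and may go unused) purely so this forward stays
        # call-compatible with newer Transformers versions.
        ...
    ```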
- 24 Sep, 2023 1 commit
  - Tong Li authored