1. 08 Dec, 2023 1 commit
  2. 27 Nov, 2023 1 commit
  3. 22 Nov, 2023 2 commits
  4. 20 Nov, 2023 2 commits
  5. 18 Nov, 2023 1 commit
  6. 16 Nov, 2023 1 commit
• [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading... · b2ad0d9e
  Elsa Granger authored
      
      [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017)
      
      * Use p2p
      
* Cannot send p2p bidirectionally
      
* Refactor tensor creation and serialization in P2P communication
      
      * Fix llama forward args in flash attention
      
* Add flop estimate from megatron (a rough sketch of the formula follows this commit message)
      
* Support loading weights not in weight_map when strict=False in hybrid_parallel (see the sketch after this commit message)
      
      * Use send_forward_recv_backward, etc in 1f1b
      
* Use dataclass for metadata
Remove torch.cuda.synchronize() as suggested
      
      * Add comment about the torch.cuda.synchronize for potential error
      
      * Typo
      
      * Update hybrid_parallel_checkpoint_io.py
      
      * Update p2p.py
      
      * Update one_f_one_b.py
      
      * Update p2p.py
      
      ---------
Co-authored-by: flybird11111 <1829166702@qq.com>
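
For reference, the Megatron-style estimate mentioned in the flop item above amounts to the approximation below. This is a minimal sketch based on the Megatron-LM paper's formula for decoder-only models with activation checkpointing; the function name and arguments are illustrative, not the benchmark's actual code.

    # Minimal sketch of a Megatron-style FLOP estimate for a decoder-only model
    # such as LLaMA, following the Megatron-LM approximation (with activation
    # checkpointing): 96 * B * s * l * h^2 * (1 + s/(6h) + V/(16*l*h)).
    # Names are illustrative, not the benchmark's actual code.

    def megatron_flops_per_iteration(
        batch_size: int,   # B: global batch size
        seq_len: int,      # s: sequence length
        num_layers: int,   # l: number of transformer layers
        hidden_size: int,  # h: hidden dimension
        vocab_size: int,   # V: vocabulary size
    ) -> float:
        return (
            96 * batch_size * seq_len * num_layers * hidden_size ** 2
            * (1 + seq_len / (6 * hidden_size)
                 + vocab_size / (16 * num_layers * hidden_size))
        )

    # Example: a LLaMA-7B-like configuration.
    flops = megatron_flops_per_iteration(4, 4096, 32, 4096, 32000)
    print(f"~{flops / 1e12:.1f} TFLOPs per iteration")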
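The strict=False behavior above amounts to tolerating parameters that are absent from a sharded checkpoint's index. A minimal sketch of that logic, assuming a Hugging-Face-style pytorch_model.bin.index.json and hypothetical helper names (not the actual hybrid_parallel CheckpointIO code):

    import json
    import os
    import torch

    def load_sharded_checkpoint(model: torch.nn.Module, ckpt_dir: str,
                                strict: bool = True) -> None:
        # The index file maps each parameter name to the shard file holding it.
        with open(os.path.join(ckpt_dir, "pytorch_model.bin.index.json")) as f:
            weight_map: dict[str, str] = json.load(f)["weight_map"]

        missing = [name for name, _ in model.named_parameters()
                   if name not in weight_map]
        if missing and strict:
            raise KeyError(f"parameters missing from weight_map: {missing}")
        # strict=False: weights not listed in weight_map are simply skipped.

        # Load each shard once and apply a partial (non-strict) update.
        for shard_file in sorted(set(weight_map.values())):
            shard_state = torch.load(os.path.join(ckpt_dir, shard_file),
                                     map_location="cpu")
            model.load_state_dict(shard_state, strict=False)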
  7. 19 Sep, 2023 1 commit
  8. 15 Sep, 2023 1 commit
• [example] llama2 add fine-tune example (#4673) · 4c4482f3
  flybird11111 authored
* [shardformer] update shardformer readme

* [shardformer] update llama2/opt finetune example and shardformer update to llama2

* [shardformer] change dataset

* [shardformer] fix CI

* [shardformer] fix

[example] update opt example

[example] resolve comments

* [example] llama2 add finetune example (a minimal sketch of the fine-tune loop follows this commit entry)

* fix

* update llama2 example

* Update requirements.txt
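As a rough illustration of the pattern the fine-tune example follows: boost the model with a parallel plugin, then run an ordinary training loop. This is a minimal sketch assuming ColossalAI's Booster API; the plugin arguments, model path, and synthetic batch are placeholders, not the example's actual code.

    import colossalai
    import torch
    from colossalai.booster import Booster
    from colossalai.booster.plugin import HybridParallelPlugin
    from transformers import LlamaForCausalLM

    colossalai.launch_from_torch(config={})  # expects a torchrun-style launch

    # Illustrative parallel sizes; the real example exposes these as CLI flags.
    plugin = HybridParallelPlugin(tp_size=2, pp_size=1, precision="fp16")
    booster = Booster(plugin=plugin)

    model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model, optimizer, *_ = booster.boost(model, optimizer)

    # Synthetic batch for illustration; the real example tokenizes a dataset.
    input_ids = torch.randint(0, 32000, (2, 512),
                              device=torch.cuda.current_device())
    batch = {"input_ids": input_ids, "labels": input_ids}

    for step in range(10):
        loss = model(**batch).loss
        booster.backward(loss, optimizer)  # plugin-aware backward
        optimizer.step()
        optimizer.zero_grad()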
  9. 14 Sep, 2023 1 commit
  10. 13 Sep, 2023 1 commit
  11. 04 Sep, 2023 1 commit
  12. 28 Aug, 2023 1 commit
• [example] add llama2 example (#4527) · 0b00def8
  Hongxin Liu authored
      * [example] transfer llama-1 example
      
      * [example] fit llama-2
      
      * [example] refactor scripts folder
      
      * [example] fit new gemini plugin
      
      * [cli] fix multinode runner
      
* [example] fit gemini optim checkpoint (a checkpointing sketch follows this commit entry)
      
      * [example] refactor scripts
      
* [example] update requirements
      
      * [example] rename llama to llama2
      
      * [example] update readme and pretrain script
      
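The gemini plugin and optimizer-checkpoint items above correspond to the Booster checkpoint I/O pattern sketched below. This is a minimal, assumption-laden sketch (the toy model, paths, and shard size are placeholders), not the example's actual code.

    import colossalai
    import torch
    from colossalai.booster import Booster
    from colossalai.booster.plugin import GeminiPlugin
    from colossalai.nn.optimizer import HybridAdam

    colossalai.launch_from_torch(config={})

    booster = Booster(plugin=GeminiPlugin())
    model = torch.nn.Sequential(torch.nn.Linear(1024, 1024),
                                torch.nn.GELU()).cuda()
    optimizer = HybridAdam(model.parameters(), lr=1e-4)  # Gemini pairs with HybridAdam
    model, optimizer, *_ = booster.boost(model, optimizer)

    # Sharded save: each checkpoint file stays under size_per_shard megabytes.
    booster.save_model(model, "ckpt/model", shard=True, size_per_shard=1024)
    booster.save_optimizer(optimizer, "ckpt/optim", shard=True)

    # Loading mirrors saving and restores the sharded optimizer states.
    booster.load_model(model, "ckpt/model")
    booster.load_optimizer(optimizer, "ckpt/optim")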