1. 20 Nov, 2023 1 commit
    • [npu] add npu support for gemini and zero (#5067) · e5ce4c8e
      Hongxin Liu authored
      * [npu] setup device utils (#5047)
      
      * [npu] add npu device support
      
      * [npu] support low level zero
      
      * [test] update npu zero plugin test
      
      * [hotfix] fix import
      
      * [test] recover tests
      
      * [npu] gemini support npu (#5052)
      
      * [npu] refactor device utils
      
      * [gemini] support npu
      
      * [example] llama2+gemini support npu
      
      * [kernel] add arm cpu adam kernel (#5065)
      
      * [kernel] add arm cpu adam
      
      * [optim] update adam optimizer
      
      * [kernel] arm cpu adam remove bf16 support
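The device-utils commits above boil down to probing which accelerator backend is present and routing all device handling through one helper instead of hard-coding `'cuda'`. A minimal sketch of that idea (hypothetical helper name, not ColossalAI's actual API), assuming the Ascend `torch_npu` plugin registers a `torch.npu` namespace when installed:

```python
def detect_device() -> str:
    """Probe for an available accelerator backend.

    Hypothetical sketch of the device-utils idea, not ColossalAI's actual API.
    Assumes the Ascend `torch_npu` plugin registers a `torch.npu` namespace
    on import; falls back to CUDA, then CPU.
    """
    try:
        import torch
    except ImportError:
        return "cpu"
    try:
        import torch_npu  # noqa: F401  (registers torch.npu when installed)
        if torch.npu.is_available():
            return "npu"
    except ImportError:
        pass
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"
```

Call sites can then build devices with `torch.device(detect_device())`, which is what lets gemini and low-level zero run unchanged on either backend.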
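For reference, the update that an ARM CPU Adam kernel vectorizes (with NEON intrinsics, as AVX kernels do on x86) is the standard Adam algorithm. A plain-Python scalar-loop sketch of one fused step; the function name and signature here are hypothetical, not the kernel's actual interface:

```python
import math

def adam_step(param, grad, exp_avg, exp_avg_sq, step,
              lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, weight_decay=0.0):
    """One Adam update over plain Python lists.

    Reference sketch of the math a fused CPU Adam kernel performs per element;
    a real kernel processes these lanes with SIMD intrinsics.
    """
    bias1 = 1 - beta1 ** step  # bias correction for the first moment
    bias2 = 1 - beta2 ** step  # bias correction for the second moment
    for i in range(len(param)):
        g = grad[i] + weight_decay * param[i]
        exp_avg[i] = beta1 * exp_avg[i] + (1 - beta1) * g
        exp_avg_sq[i] = beta2 * exp_avg_sq[i] + (1 - beta2) * g * g
        m_hat = exp_avg[i] / bias1
        v_hat = exp_avg_sq[i] / bias2
        param[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)
```

Note the bf16 path was dropped in the last bullet above, so such a kernel would operate on fp32 (and possibly fp16) buffers only.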
  2. 16 Nov, 2023 1 commit
    • [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading... · b2ad0d9e
      Elsa Granger authored
      
      [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017)
      
      * Use p2p
      
      * Cannot send p2p bidirectionally
      
      * Refactor tensor creation and serialization in P2P communication
      
      * Fix llama forward args in flash attention
      
      * Add flop estimate from megatron
      
      * Support loading weight not in weight_map when strict=False in hybrid_parallel
      
      * Use send_forward_recv_backward, etc in 1f1b
      
      * Use dataclass for metadata
      Remove torch.cuda.synchronize() as suggested
      
      * Add comment about the torch.cuda.synchronize for potential error
      
      * Typo
      
      * Update hybrid_parallel_checkpoint_io.py
      
      * Update p2p.py
      
      * Update one_f_one_b.py
      
      * Update p2p.py
      
      ---------
      Co-authored-by: flybird11111 <1829166702@qq.com>
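The flop estimate "from megatron" mentioned above is commonly the Megatron-LM formula (Narayanan et al., 2021): with full activation recomputation, model FLOPs per iteration ≈ 96·B·s·l·h²·(1 + s/(6h) + V/(16·l·h)), where B is batch size, s sequence length, l layers, h hidden size, and V vocabulary size. A sketch under that assumption (function name hypothetical, not the benchmark's exact code):

```python
def megatron_flops_per_iter(batch_size, seq_len, num_layers,
                            hidden_size, vocab_size):
    """Megatron-style model-FLOPs estimate for one training iteration of a
    GPT-like transformer, assuming full activation recomputation (forward +
    recompute + backward ~= 4x forward work, hence the 96 coefficient)."""
    b, s, l, h, v = batch_size, seq_len, num_layers, hidden_size, vocab_size
    return 96 * b * s * l * h ** 2 * (1 + s / (6 * h) + v / (16 * l * h))
```

Dividing this by measured iteration time gives the model-FLOPs throughput a llama benchmark would report, independent of how the frameworks overlap or recompute work internally.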
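"Use dataclass for metadata" refers to describing a tensor in a small typed record that is serialized and sent ahead of the raw buffer, so the receiving pipeline stage can pre-allocate before the payload arrives. A hedged sketch of the pattern; the class and field names here are hypothetical, not the actual types in p2p.py:

```python
import pickle
from dataclasses import dataclass
from typing import Tuple

@dataclass
class P2PTensorMetadata:
    """Shape/dtype record sent ahead of a raw tensor buffer so the receiver
    can pre-allocate storage. Hypothetical names; a sketch of the
    'dataclass for metadata' idea, not ColossalAI's actual p2p types."""
    shape: Tuple[int, ...]
    dtype: str
    requires_grad: bool = False

def encode_metadata(meta: P2PTensorMetadata) -> bytes:
    # Only this small header is pickled; the tensor payload is sent raw.
    return pickle.dumps(meta)

def decode_metadata(raw: bytes) -> P2PTensorMetadata:
    return pickle.loads(raw)
```

Compared with pickling whole tensors, sending a fixed header plus a raw buffer avoids an extra copy and lets both ranks agree on sizes before the expensive transfer.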
  3. 19 Sep, 2023 1 commit
  4. 28 Aug, 2023 1 commit
    • [example] add llama2 example (#4527) · 0b00def8
      Hongxin Liu authored
      * [example] transfer llama-1 example
      
      * [example] fit llama-2
      
      * [example] refactor scripts folder
      
      * [example] fit new gemini plugin
      
      * [cli] fix multinode runner
      
      * [example] fit gemini optim checkpoint
      
      * [example] refactor scripts
      
      * [example] update requirements
      
      * [example] update requirements
      
      * [example] rename llama to llama2
      
      * [example] update readme and pretrain script
      
      * [example] refactor scripts