  1. 22 Dec, 2023 1 commit
• [pipeline]: fix p2p comm, add metadata cache and support llama interleaved pp (#5134) · 4fa689fc
      Wenhao Chen authored
      * test: add more p2p tests
      
      * fix: remove send_forward_recv_forward as p2p op list need to use the same group
      
      * fix: make send and receive atomic
      
      * feat: update P2PComm fn
      
      * feat: add metadata cache in 1f1b
      
      * feat: add metadata cache in interleaved pp
      
      * feat: modify is_xx_stage fn
      
      * revert: add _broadcast_object_list
      
      * feat: add interleaved pp in llama policy
      
      * feat: set NCCL_BUFFSIZE in HybridParallelPlugin
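The "metadata cache" added in this commit can be sketched in a few lines. This is a minimal, illustrative stand-in and not ColossalAI's actual `P2PComm` API: the idea is that pipeline ranks exchange tensor shape/dtype metadata only on the first step (or when it changes), so steady-state 1F1B steps skip the metadata handshake. `TensorMeta` and `P2PMetadataCache` are hypothetical names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TensorMeta:
    """Shape/dtype descriptor that would normally be exchanged before a p2p send."""
    shape: tuple
    dtype: str

class P2PMetadataCache:
    """Skip the metadata exchange when the cached metadata is unchanged."""
    def __init__(self):
        self._send_meta = None

    def should_send_meta(self, meta: TensorMeta) -> bool:
        if self._send_meta == meta:
            return False          # cached: send raw tensor data only
        self._send_meta = meta    # new or changed: exchange metadata first
        return True

cache = P2PMetadataCache()
m = TensorMeta((4, 1024), "float16")
print(cache.should_send_meta(m))  # True: first step, metadata must be sent
print(cache.should_send_meta(m))  # False: cache hit, skip the handshake
```

A real implementation would also need to invalidate the cache whenever micro-batch shapes change mid-run, which is why only the warm-up step pays for the extra exchange.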
  2. 28 Nov, 2023 1 commit
  3. 19 Sep, 2023 1 commit
  4. 15 Sep, 2023 1 commit
• [example] llama2 add fine-tune example (#4673) · 4c4482f3
      flybird11111 authored
* [shardformer] update shardformer readme
      
* [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
* [shardformer] change dataset
      
      * [shardformer] fix CI
      
* [shardformer] fix
      
      [example] update opt example
      
      [example] resolve comments
      
fix
      
* [example] llama2 add finetune example
      
* fix

* update llama2 example

* Update requirements.txt
  5. 09 Sep, 2023 1 commit
• [shardformer] update llama2/opt finetune example and fix llama2 policy (#4645) · 7486ed7d
      flybird11111 authored
* [shardformer] update shardformer readme
      
* [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
* [shardformer] change dataset
      
      * [shardformer] fix CI
      
* [shardformer] fix
      
      [example] update opt example
      
      [example] resolve comments
      
fix
  6. 07 Sep, 2023 1 commit
  7. 04 Sep, 2023 1 commit
• [shardformer] update bert finetune example with HybridParallelPlugin (#4584) · 0a94fcd3
      flybird11111 authored
      
      
      * [shardformer] fix opt test hanging
      
      * fix
      
* test
      
* fix test
      
      * remove print
      
      * add fix
      
* [shardformer] add bert finetune example
      
      * [shardformer] fix epoch change
      
      * [shardformer] broadcast add pp group
      
      * [shardformer] fix opt test hanging
      
      * fix
      
* test
      
      * [shardformer] zero1+pp and the corresponding tests (#4517)
      
      * pause
      
      * finish pp+zero1
      
      * Update test_shard_vit.py
      
      * [shardformer/fix overlap bug] fix overlap bug, add overlap as an option in shardco… (#4516)
      
      * fix overlap bug and support bert, add overlap as an option in shardconfig
      
      * support overlap for chatglm and bloom
      
      * [shardformer] fix emerged bugs after updating transformers (#4526)
      
      * test
      
* fix test
      
      * remove print
      
      * add fix
      
* [shardformer] add bert finetune example
      
      * [shardformer] Add overlap support for gpt2 (#4535)
      
      * add overlap support for gpt2
      
* remove unused code
      
      * [shardformer] support pp+tp+zero1 tests (#4531)
      
      * [shardformer] fix opt test hanging
      
      * fix
      
* test
      
* fix test
      
      * remove print
      
      * add fix
      
* [shardformer] pp+tp+zero1
      
      * [shardformer] fix submodule replacement bug when enabling pp (#4544)
      
      * [shardformer] support sharded optimizer checkpointIO of HybridParallelPlugin (#4540)
      
      * implement sharded optimizer saving
      
      * add more param info
      
      * finish implementation of sharded optimizer saving
      
      * fix bugs in optimizer sharded saving
      
      * add pp+zero test
      
      * param group loading
      
      * greedy loading of optimizer
      
      * fix bug when loading
      
      * implement optimizer sharded saving
      
      * add optimizer test & arrange checkpointIO utils
      
      * fix gemini sharding state_dict
      
      * add verbose option
      
      * add loading of master params
      
      * fix typehint
      
      * fix master/working mapping in fp16 amp
      
* [shardformer] add bert finetune example
      
      * [shardformer] fix epoch change
      
      * [shardformer] broadcast add pp group
      
      * rebase feature/shardformer
      
      * update pipeline
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] bert finetune fix
      
* [shardformer] add all_reduce operation to loss
      
* [shardformer] make compatible with pytree.
      
* [shardformer] disable tp
      
      * [shardformer] add 3d plugin to ci test
      
      * [shardformer] update num_microbatches to None
      
      * [shardformer] update microbatchsize
      
      * [shardformer] update assert
      
* update scheduler
      
      ---------
Co-authored-by: Jianghai <72591262+CjhHa1@users.noreply.github.com>
Co-authored-by: Bin Jia <45593998+FoolPlayer@users.noreply.github.com>
Co-authored-by: Baizhou Zhang <eddiezhang@pku.edu.cn>
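Among the changes merged above is sharded optimizer checkpointIO for HybridParallelPlugin (#4540), where each rank saves only the optimizer states for the parameters it owns. The partitioning idea can be sketched as below, assuming a ZeRO-1-style round-robin assignment of parameters to ranks; `shard_states` is a hypothetical helper, not ColossalAI's checkpoint API.

```python
def shard_states(states: dict, rank: int, world_size: int) -> dict:
    """Return the slice of optimizer states this rank is responsible for saving.

    Round-robin assignment over sorted parameter names stands in for the
    real owner mapping a ZeRO-1 optimizer would already have.
    """
    names = sorted(states)
    return {n: states[n] for i, n in enumerate(names) if i % world_size == rank}

# Toy optimizer state dict: param name -> state blob.
states = {"w1": 1, "w2": 2, "w3": 3, "w4": 4}
print(shard_states(states, rank=0, world_size=2))  # {'w1': 1, 'w3': 3}
print(shard_states(states, rank=1, world_size=2))  # {'w2': 2, 'w4': 4}
```

On load, the "greedy loading" mentioned in the commit would do the reverse: each rank reads the shard files and keeps only the entries its owner mapping assigns to it.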
  8. 24 Aug, 2023 1 commit
• [gemini] improve compatibility and add static placement policy (#4479) · 27061426
      Hongxin Liu authored
      * [gemini] remove distributed-related part from colotensor (#4379)
      
      * [gemini] remove process group dependency
      
      * [gemini] remove tp part from colo tensor
      
      * [gemini] patch inplace op
      
      * [gemini] fix param op hook and update tests
      
* [test] remove useless tests
      
      * [misc] fix requirements
      
* [test] fix model zoo
      
      * [misc] update requirements
      
      * [gemini] refactor gemini optimizer and gemini ddp (#4398)
      
      * [gemini] update optimizer interface
      
      * [gemini] renaming gemini optimizer
      
      * [gemini] refactor gemini ddp class
      
* [example] update gemini related example
      
      * [plugin] fix gemini plugin args
      
      * [test] update gemini ckpt tests
      
      * [gemini] fix checkpoint io
      
      * [example] fix opt example requirements
      
* [example] fix opt example
      
      * [gemini] add static placement policy (#4443)
      
      * [gemini] add static placement policy
      
      * [gemini] fix param offload
      
      * [test] update gemini tests
      
      * [plugin] update gemini plugin
      
      * [plugin] update gemini plugin docstr
      
      * [misc] fix flash attn requirement
      
      * [test] fix gemini checkpoint io test
      
      * [example] update resnet example result (#4457)
      
      * [example] update bert example result (#4458)
      
      * [doc] update gemini doc (#4468)
      
      * [example] update gemini related examples (#4473)
      
      * [example] update gpt example
      
      * [example] update dreambooth example
      
      * [example] update vit
      
      * [example] update opt
      
      * [example] update palm
      
      * [example] update vit and opt benchmark
      
* [hotfix] fix bert in model zoo (#4480)
      
      * [test] remove chatglm gemini test
      
      * [test] remove sam gemini test
      
      * [test] remove vit gemini test
      
* [hotfix] fix opt tutorial example (#4497)
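The static placement policy added here (#4443) contrasts with Gemini's dynamic policies: instead of migrating parameter chunks between CPU and GPU at runtime, placement is decided once at initialization from a fixed GPU-memory ratio. A rough sketch of that idea follows; `static_placement` and its greedy fill are illustrative, not Gemini's actual heuristic.

```python
def static_placement(chunk_sizes, gpu_ratio=0.6):
    """Assign each parameter chunk to 'cuda' or 'cpu' once, up front.

    Greedily fill the GPU until the configured fraction of total chunk
    memory is used; everything else stays on the CPU for the whole run.
    """
    budget = gpu_ratio * sum(chunk_sizes)
    placement, used = [], 0
    for size in chunk_sizes:
        if used + size <= budget:
            placement.append("cuda")
            used += size
        else:
            placement.append("cpu")
    return placement

print(static_placement([4, 4, 4, 4, 4], gpu_ratio=0.6))
# ['cuda', 'cuda', 'cuda', 'cpu', 'cpu']
```

The appeal of a static policy is predictability: peak GPU memory is known at init time, at the cost of giving up the adaptivity that dynamic chunk migration provides.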
  9. 07 Jun, 2023 1 commit
  10. 06 May, 2023 1 commit