1. 23 Nov, 2023 1 commit
    • [Feature] Add document retrieval QA (#5020) · e53e729d
      YeAnbang authored
      
      
      * add langchain
      
      * add langchain
      
      * Add files via upload
      
      * add langchain
      
      * fix style
      
      * fix style: remove extra space
      
      * add pytest; modified retriever
      
      * add pytest; modified retriever
      
      * add tests to build_on_pr.yml
      
      * fix build_on_pr.yml
      
      * fix build on pr; fix environ vars
      
      * separate unit tests for colossalqa from build on pr
      
      * fix container setting; fix environ vars
      
      * commented dev code
      
      * add incremental update
      
      * remove stale code
      
      * fix style
      
      * change to sha3 224
      
      * fix retriever; fix style; add unit test for document loader
      
      * fix ci workflow config
      
      * fix ci workflow config
      
      * add set cuda visible device script in ci
      
      * fix doc string
      
      * fix style; update readme; refactored
      
      * add force log info
      
      * change build on pr, ignore colossalqa
      
      * fix docstring, capitalize all initial letters
      
      * fix indexing; fix text-splitter
      
      * remove debug code, update reference
      
      * reset previous commit
      
      * update LICENSE, update README, add key-value mode, fix bugs
      
      * add files back
      
      * revert force push
      
      * remove junk file
      
      * add test files
      
      * fix retriever bug, add intent classification
      
      * change conversation chain design
      
      * rewrite prompt and conversation chain
      
      * add ui v1
      
      * ui v1
      
      * fix avatar
      
      * add header
      
      * Refactor the RAG Code and support Pangu
      
      * Refactor the ColossalQA chain and the UI demo to an object-oriented design.
      
      * resolved conversation. tested scripts under examples. web demo still buggy
      
      * fix ci tests
      
      * Some modifications to add ChatGPT api
      
      * modify llm.py and remove unnecessary files
      
      * Delete applications/ColossalQA/examples/ui/test_frontend_input.json
      
      * Remove OpenAI api key
      
      * add colossalqa
      
      * move files
      
      * move files
      
      * move files
      
      * move files
      
      * fix style
      
      * Add Readme and fix some bugs.
      
      * Add something to readme and modify some code
      
      * modify a directory name for clarity
      
      * remove redundant directory
      
      * Correct a typo in llm.py
      
      * fix AI prefix
      
      * fix test_memory.py
      
      * fix conversation
      
      * fix some errors and typos
      
      * Fix a missing import in RAG_ChatBot.py
      
      * add colossalcloud LLM wrapper, correct issues in code review
      
      ---------
      Co-authored-by: YeAnbang <anbangy2@outlook.com>
      Co-authored-by: Orion-Zheng <zheng_zian@u.nus.edu>
      Co-authored-by: Zian(Andy) Zheng <62330719+Orion-Zheng@users.noreply.github.com>
      Co-authored-by: Orion-Zheng <zhengzian@u.nus.edu>
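
      The "add incremental update" and "change to sha3 224" commits above suggest content hashing so that only changed documents are re-indexed. A minimal sketch of that idea in plain Python — the helper names (`doc_digest`, `incremental_update`) are illustrative, not the actual ColossalQA retriever API:

```python
import hashlib

def doc_digest(text: str) -> str:
    """Stable content fingerprint using SHA3-224 (the algorithm named above)."""
    return hashlib.sha3_224(text.encode("utf-8")).hexdigest()

def incremental_update(index: dict, docs: dict) -> list:
    """Return ids of documents whose content changed since the last run.

    `index` maps doc id -> stored digest; `docs` maps doc id -> current text.
    Unchanged documents are skipped, which is the point of an incremental
    update: only the returned ids need re-embedding. Hypothetical structure.
    """
    changed = []
    for doc_id, text in docs.items():
        digest = doc_digest(text)
        if index.get(doc_id) != digest:
            index[doc_id] = digest
            changed.append(doc_id)
    return changed
```

      Running the update twice on unchanged documents yields an empty list the second time, so the vector store is only touched for edited documents.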
  2. 28 Mar, 2023 1 commit
  3. 14 Feb, 2023 1 commit
  4. 06 Jan, 2023 1 commit
  5. 19 Aug, 2022 1 commit
  6. 02 Aug, 2022 1 commit
  7. 26 Apr, 2022 1 commit
  8. 09 Dec, 2021 1 commit
    • Develop/experiments (#59) · da01c234
      Frank Lee authored
      
      
      * Add gradient accumulation, fix lr scheduler
      
      * fix FP16 optimizer and adapted torch amp with tensor parallel (#18)
      
      * fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
      
      * fixed trainer
      
      * Revert "fixed trainer"
      
      This reverts commit 2e0b0b76990e8d4e337add483d878c0f61cf5097.
      
      * improved consistency between trainer, engine and schedule (#23)
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      
      * Split conv2d, class token, positional embedding in 2d, Fix random number in ddp
      Fix convergence in cifar10, Imagenet1000
      
      * Integrate 1d tensor parallel in Colossal-AI (#39)
      
      * fixed 1D and 2D convergence (#38)
      
      * optimized 2D operations
      
      * fixed 1D ViT convergence problem
      
      * Feature/ddp (#49)
      
      * remove redundancy func in setup (#19) (#20)
      
      * use env to control the language of doc (#24) (#25)
      
      * Support TP-compatible Torch AMP and Update trainer API (#27)
      
      * Add gradient accumulation, fix lr scheduler
      
      * fix FP16 optimizer and adapted torch amp with tensor parallel (#18)
      
      * fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
      
      * fixed trainer
      
      * Revert "fixed trainer"
      
      This reverts commit 2e0b0b76990e8d4e337add483d878c0f61cf5097.
      
      * improved consistency between trainer, engine and schedule (#23)
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: ver217 <lhx0217@gmail.com>
      
      * add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)
      
      * add explanation for ViT example (#35) (#36)
      
      * support torch ddp
      
      * fix loss accumulation
      
      * add log for ddp
      
      * change seed
      
      * modify timing hook
      Co-authored-by: Frank Lee <somerlee.9@gmail.com>
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
      
      * Feature/pipeline (#40)
      
      * remove redundancy func in setup (#19) (#20)
      
      * use env to control the language of doc (#24) (#25)
      
      * Support TP-compatible Torch AMP and Update trainer API (#27)
      
      * Add gradient accumulation, fix lr scheduler
      
      * fix FP16 optimizer and adapted torch amp with tensor parallel (#18)
      
      * fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
      
      * fixed trainer
      
      * Revert "fixed trainer"
      
      This reverts commit 2e0b0b76990e8d4e337add483d878c0f61cf5097.
      
      * improved consistency between trainer, engine and schedule (#23)
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: ver217 <lhx0217@gmail.com>
      
      * add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)
      
      * add explanation for ViT example (#35) (#36)
      
      * optimize communication of pipeline parallel
      
      * fix grad clip for pipeline
      Co-authored-by: Frank Lee <somerlee.9@gmail.com>
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
      
      * optimized 3d layer to fix slow computation ; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers (#51)
      
      * Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset
      
      * update api for better usability (#58)
      
      update api for better usability
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: ver217 <lhx0217@gmail.com>
      Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
      Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
      Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
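
      The recurring "Add gradient accumulation" item in this commit refers to averaging gradients over several micro-batches before applying a single optimizer step, which lets small-memory devices emulate large batch sizes. A plain-Python sketch of the idea — the names (`train_with_accumulation`, `grad_fn`) are illustrative, not the Colossal-AI engine API:

```python
def train_with_accumulation(params, micro_batches, grad_fn, lr=0.1, accum_steps=4):
    """Accumulate gradients over `accum_steps` micro-batches, then apply one
    SGD-style update with the averaged gradient. `grad_fn(params, batch)`
    returns one gradient per parameter. Hypothetical, dependency-free sketch."""
    accum = [0.0] * len(params)
    for i, batch in enumerate(micro_batches, start=1):
        grads = grad_fn(params, batch)
        accum = [a + g for a, g in zip(accum, grads)]
        if i % accum_steps == 0:  # step only every accum_steps micro-batches
            params = [p - lr * (a / accum_steps) for p, a in zip(params, accum)]
            accum = [0.0] * len(params)  # reset between optimizer steps
    return params
```

      The key detail the sketch captures is dividing the accumulated gradient by `accum_steps`, so the effective update matches a single large batch rather than scaling with the number of micro-batches.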
  9. 28 Oct, 2021 1 commit