- 25 Jul, 2023 5 commits
-
-
Sangam Lee authored
* docs: ko: perf_hardware.md * feat: nmt draft * fix: manual edits * fix: resolve suggestions Co-authored-by:
Hyeonseo Yun <0525yhs@gmail.com> * fix: resolve suggestions Co-authored-by:
Hyeonseo Yun <0525yhs@gmail.com> * fix: resolve suggestions Co-authored-by:
Hyeonseo Yun <0525yhs@gmail.com> * fix: resolve suggestions Co-authored-by:
Hyeonseo Yun <0525yhs@gmail.com> * fix: resolve suggestions Co-authored-by:
Hyeonseo Yun <0525yhs@gmail.com> * fix: resolve suggestions Co-authored-by:
Hyeonseo Yun <0525yhs@gmail.com> * fix: resolve suggestions Co-authored-by:
Hyeonseo Yun <0525yhs@gmail.com> * fix: resolve suggestions Co-authored-by:
Haewon Kim <ehdvkf02@naver.com> * Fix: manual edits * fix: manual edits * fix: manual edits * fix: manual edits * fix: fix rendering error of perf_hardware.md --------- Co-authored-by:
Hyeonseo Yun <0525yhs@gmail.com> Co-authored-by:
Haewon Kim <ehdvkf02@naver.com>
-
Haewon Kim authored
* docs: ko: tf_xla.md * feat: chatgpt draft * fix: manual edits * fix: manual edits * fix: manual edits * fix: resolve suggestions
-
Kashif Rasul authored
fix rope_scaling doc string
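For context, `rope_scaling` is passed as a small dict on RoPE-based model configs; a minimal sketch (the values below are illustrative, not defaults):

```python
from transformers import LlamaConfig

# Sketch: rope_scaling is a dict naming a scaling strategy and a factor > 1.0.
# The factor of 2.0 is illustrative; "dynamic" is the other documented strategy.
config = LlamaConfig(rope_scaling={"type": "linear", "factor": 2.0})
print(config.rope_scaling)
```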
-
Joao Gante authored
-
Arthur authored
* Add note in doc on `RwkvStoppingCriteria` * give some breathing space to the code
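For readers landing here from the doc note: a minimal sketch of a custom stopping criterion built on the public `StoppingCriteria` API (the class below is illustrative, not the `RwkvStoppingCriteria` from the doc):

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnToken(StoppingCriteria):
    """Illustrative criterion: stop generation once a chosen token id is emitted."""

    def __init__(self, stop_token_id: int):
        self.stop_token_id = stop_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # Stop as soon as the most recently generated token matches the stop token.
        return input_ids[0, -1].item() == self.stop_token_id

# Usage sketch: model.generate(..., stopping_criteria=StoppingCriteriaList([StopOnToken(0)]))
```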
-
- 24 Jul, 2023 17 commits
-
-
Sylvain Gugger authored
* Better error message when signal is not supported on OS * Address review comments
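The kind of guard this error message implies, as a rough sketch (not the Trainer's actual code): verify the signal exists on the current OS before registering a handler.

```python
import signal

def register_handler(signal_name: str, handler) -> None:
    """Register handler for signal_name, raising a clear error if the OS lacks that signal."""
    sig = getattr(signal, signal_name, None)
    if sig is None:
        raise RuntimeError(f"{signal_name} is not supported on this operating system.")
    signal.signal(sig, handler)

# Example: SIGUSR1 exists on Unix but not on Windows, so this fails loudly there.
# register_handler("SIGUSR1", lambda signum, frame: print("received"))
```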
-
seank021 authored
* docs: ko: perf_train_cpu.md * feat: chatgpt draft * fix: manual edits * fix: resolve suggestions * fix: manual edits Co-authored-by:
Haewon Kim <ehdvkf02@naver.com> --------- Co-authored-by:
Haewon Kim <ehdvkf02@naver.com>
-
Younes Belkada authored
fix 8bit corner case with Blip2 8bit
-
Nate Brake authored
Fix `compute_loss` in the Trainer failing to shift labels for PEFT models when label smoothing is enabled. (#25044) * added PeftModelForCausalLM to MODEL_FOR_CAUSAL_LM_MAPPING_NAMES dict * check for PEFT model in compute_loss section --------- Co-authored-by: Nathan Brake <nbrake3@mmm.com>
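In spirit, the fix widens the Trainer's causal-LM check so PEFT wrappers also get shifted labels under label smoothing; a simplified sketch (the class-name set is illustrative, the real code consults MODEL_FOR_CAUSAL_LM_MAPPING_NAMES):

```python
# Illustrative stand-in for the causal-LM registry; the actual change also covers
# PeftModelForCausalLM so PEFT-wrapped causal LMs are recognized.
CAUSAL_LM_CLASS_NAMES = {"LlamaForCausalLM", "GPT2LMHeadModel", "PeftModelForCausalLM"}

def needs_shifted_labels(model) -> bool:
    """True if label smoothing should shift labels for this (possibly PEFT-wrapped) model."""
    return model.__class__.__name__ in CAUSAL_LM_CLASS_NAMES
```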
-
Rinat authored
* pull and push updates * add docs * fix modeling * Add and run test * make copies * add task * fix tests and fix small issues * Checks on a Pull Request * fix docs * add desc pvt.md
-
Sylvain Gugger authored
-
Sylvain Gugger authored
* Make more test models tiny * Make more test models tiny * More models * More models
-
Sören Brunk authored
-
Zach Mueller authored
* Dispatch batches * Copy items
-
Sunmin Cho authored
* docs: ko: testing.md * feat: draft * fix: manual edits * fix: edit ko/_toctree.yml * fix: manual edits * fix: manual edits * fix: manual edits * fix: manual edits * fix: resolve suggestions
-
Sangam Lee authored
* docs: ko: performance.md * feat: chatgpt draft * fix: manual edits * fix: manual edits * Update docs/source/ko/performance.md Co-authored-by:
Kihoon Son <75935546+kihoon71@users.noreply.github.com> * Update docs/source/ko/performance.md --------- Co-authored-by:
Kihoon Son <75935546+kihoon71@users.noreply.github.com>
-
Iskren Ivov Chernev authored
* Better handling of a missing SYS in the Llama conversation tokenizer. The existing code failed to add SYS when the conversation had history without SYS, yet still modified the passed conversation as if it had. Rearrange the code so modifications to the conversation object are taken into account for token id generation. * Fix formatting with black * Avoid one-liners * Also fix fast tokenizer * Drop List decl
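For reference, a simplified sketch of the intended behavior (constants abridged; the default prompt text is a placeholder): when the first user turn carries no `<<SYS>>` block, fold the default system prompt into a copy of the conversation rather than mutating the caller's object.

```python
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = "You are a helpful assistant."  # placeholder, not the real prompt

def with_system_prompt(user_turns: list) -> list:
    """Return a copy of the user turns with a SYS block prepended to the first one if missing."""
    turns = list(user_turns)  # work on a copy so the passed conversation is left untouched
    if turns and B_SYS not in turns[0]:
        turns[0] = B_SYS + DEFAULT_SYSTEM_PROMPT + E_SYS + turns[0]
    return turns
```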
-
Lucain authored
* Support GatedRepoError + use raise from * Apply suggestions from code review Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Use token instead of use_auth_token in error messages --------- Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Maria Khalusova authored
* first pass at the single gpu doc * overview: improved clarity and navigation * WIP * updated intro and deepspeed sections * improved torch.compile section * more improvements * minor improvements * make style * Apply suggestions from code review Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * feedback addressed * mdx -> md * link fix * feedback addressed --------- Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
Bharat Ramanathan authored
fix: store training args to wandb config without sanitization. Allows resuming runs by reusing the wandb config. Co-authored-by: Bharat Ramanathan <ramanathan.parameshwaran@gohuddl.com>
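The gist, sketched outside the actual `WandbCallback` (the project name and arguments below are illustrative): log the raw `TrainingArguments` dict to `wandb.config` and allow value changes so a resumed run can reuse it.

```python
import wandb
from transformers import TrainingArguments

args = TrainingArguments(output_dir="out")               # illustrative arguments
run = wandb.init(project="my-project", resume="allow")   # illustrative project name
# Store the unsanitized training args; allow_val_change lets a resumed run overwrite them.
run.config.update(args.to_dict(), allow_val_change=True)
```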
-
Arthur authored
set default logger
-
Stas Bekman authored
* [check_config_docstrings.py] improve diagnostics * style * rephrase * fix
-
- 21 Jul, 2023 16 commits
-
-
Wonhyeong Seo authored
fix: update ko/serialization.md * chatgpt draft
-
Sylvain Gugger authored
-
Ivan Sorokin authored
* improve from_pretrained for ZeRO-3 multi-GPU mode * Add check for torch.distributed.is_initialized * Revert torch.distributed --------- Co-authored-by: Stas Bekman <stas@stason.org>
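The added check boils down to something like this sketch (not the exact `from_pretrained` code): take the rank-aware path only when torch.distributed is both available and initialized.

```python
import torch

def is_distributed() -> bool:
    """True only when torch.distributed is usable and a process group has been initialized."""
    return torch.distributed.is_available() and torch.distributed.is_initialized()

# Under DeepSpeed ZeRO-3 with several GPUs, rank-dependent loading should run only when
# is_distributed() is True; otherwise fall back to the single-process path.
```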
-
Arthur authored
remove persistent tensor
-
Younes Belkada authored
add simple check for bnb
-
Yih-Dar authored
fix Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Sylvain Gugger authored
-
Sylvain Gugger authored
-
Sylvain Gugger authored
* Avoid importing all models when instantiating a pipeline * Remove sums that don't work
-
Sylvain Gugger authored
-
Arthur authored
* pad token should be None by default * fix tests * nits
-
Joya Chen authored
* Update tokenization_llama.py * Update tokenization_llama_fast.py * Update src/transformers/models/llama/tokenization_llama_fast.py Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update src/transformers/models/llama/tokenization_llama.py Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update src/transformers/models/llama/tokenization_llama.py Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update src/transformers/models/llama/tokenization_llama_fast.py Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com> --------- Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com>
-
Sourab Mangrulkar authored
* fix fsdp prepare to remove the warnings and fix excess memory usage * Update training_args.py * parity for FSDP+XLA * Update trainer.py
-
Wonhyeong Seo authored
* fix: english/korean quicktour.md * fix: resolve suggestions Co-authored-by:
Hyeonseo Yun <0525yhs@gmail.com> Co-authored-by:
Sohyun Sim <96299403+sim-so@users.noreply.github.com> Co-authored-by:
Kihoon Son <75935546+kihoon71@users.noreply.github.com> * fix: follow glossary * 파인튜닝 -> 미세조정 --------- Co-authored-by:
Hyeonseo Yun <0525yhs@gmail.com> Co-authored-by:
Sohyun Sim <96299403+sim-so@users.noreply.github.com> Co-authored-by:
Kihoon Son <75935546+kihoon71@users.noreply.github.com>
-
Jim Allanson authored
* fix: cast input pixels to appropriate dtype for image_to_text tasks * fix: add casting to pixel inputs of additional models after running copy checks
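In essence, a sketch of the cast (assuming the model may be loaded in float16/bfloat16): move pipeline pixel inputs to the model's parameter dtype before the forward pass.

```python
import torch

def cast_pixel_values(pixel_values: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
    """Cast image tensors to the dtype of the model's parameters (e.g. torch.float16)."""
    target_dtype = next(model.parameters()).dtype
    return pixel_values.to(dtype=target_dtype)
```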
-
Sourab Mangrulkar authored
* fix fsdp load * Update trainer.py * remove saving duplicate state_dict
-
- 20 Jul, 2023 2 commits
-
-
Apoorv Khandelwal authored
* [trainer] fallback for deepspeed param count * [trainer] more readable numel count
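A small sketch of the fallback (assuming DeepSpeed ZeRO-3, where a partitioned parameter's `numel()` can be 0 and its true size is exposed as `ds_numel`):

```python
def count_parameters(model) -> int:
    """Count parameters, preferring DeepSpeed's ds_numel for ZeRO-3 partitioned tensors."""
    return sum(
        p.ds_numel if hasattr(p, "ds_numel") else p.numel()
        for p in model.parameters()
    )
```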
-
Benjamin Badger authored
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
-