- 19 Apr, 2023 7 commits
-
-
Yih-Dar authored
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Yih-Dar authored
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
amyeroberts authored
-
Yih-Dar authored
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Elabonga Atuo authored
moved logits for m2m_100
-
Liu Chenyang authored
* move preprocess_logits_for_metrics before _nested_gather in trainer.evaluation_loop
* fix
* Update src/transformers/trainer.py
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* fix
* fix
---------
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
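A minimal sketch of the evaluation hook this commit reorders, assuming a classification-style model: `Trainer` calls `preprocess_logits_for_metrics(logits, labels)` during evaluation, and running it before the cross-device gather means only the reduced tensor (here, argmax'd class ids) gets collected. The model and dataset below are hypothetical placeholders.

```python
from transformers import Trainer, TrainingArguments

def preprocess_logits_for_metrics(logits, labels):
    # Shrink full logits to predicted class ids before they are gathered,
    # keeping evaluation memory proportional to batch size, not vocab size.
    if isinstance(logits, tuple):  # some models return (logits, extras, ...)
        logits = logits[0]
    return logits.argmax(dim=-1)

def compute_metrics(eval_pred):
    preds, labels = eval_pred  # preds are already argmax'd class ids
    return {"accuracy": float((preds == labels).mean())}

# trainer = Trainer(
#     model=model,                      # hypothetical classification model
#     args=TrainingArguments("out"),
#     eval_dataset=eval_dataset,        # hypothetical eval dataset
#     compute_metrics=compute_metrics,
#     preprocess_logits_for_metrics=preprocess_logits_for_metrics,
# )
# metrics = trainer.evaluate()
```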
-
Matthijs Hollemans authored
fix doc comments
-
- 18 Apr, 2023 10 commits
-
-
Youssef Adarrab authored
-
Zachary Mueller authored
* Add warning about accelerate
* Version block Accelerate
* Include parse
* Apply suggestions from code review
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Check partial state
* Update param
---------
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
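Not the actual transformers change, but a rough sketch of the kind of minimum-version guard the "Version block Accelerate" / "Include parse" bullets describe; the version floor and function name here are made up for illustration.

```python
import importlib.metadata
import importlib.util

from packaging import version

MIN_ACCELERATE_VERSION = "0.18.0"  # hypothetical floor, for illustration only

def require_accelerate(min_version: str = MIN_ACCELERATE_VERSION) -> None:
    # Warn/raise early instead of failing deep inside training code.
    if importlib.util.find_spec("accelerate") is None:
        raise ImportError(
            f"This feature requires accelerate>={min_version}: pip install accelerate"
        )
    installed = importlib.metadata.version("accelerate")
    if version.parse(installed) < version.parse(min_version):
        raise ImportError(
            f"accelerate>={min_version} is required, but version {installed} is installed."
        )
```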
-
Sylvain Gugger authored
-
Sylvain Gugger authored
* initial work
* Add other classes
* Refactor code
* Move warning and fix dynamic pipeline
* Issue warning when necessary
* Add test
* Do not skip auto tests
* Fix failing tests
* Refactor and address review comments
* Address review comments
-
Zachary Mueller authored
-
Joao Gante authored
* working mvp
* remove breakpoint
* fix commit
* standardize outputs
* tmp commit
* tests almost ready
* tmp commit
* skip a few models
* Add streaming; Docs and examples
* document limitations
* PR commits
* Amy PR comments
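One of the bullets above adds streaming support. As a hedged sketch (not necessarily the exact feature this PR ships), this is what streaming generation looks like with the library's `TextStreamer`; the gpt2 checkpoint is only an illustrative choice.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name = "gpt2"  # small illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("An increasing sequence: one,", return_tensors="pt")
streamer = TextStreamer(tokenizer)  # prints decoded text to stdout as tokens arrive

# generate() forwards each new token to the streamer instead of only
# returning the finished sequence at the end.
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
```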
-
Yih-Dar authored
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Yih-Dar authored
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Gabriel Yang authored
docs: ko: fix anchor links for docs (auto_tutorial, training)
Co-authored-by: Hyeonseo Yun <0525_hhgus@naver.com>
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
Co-authored-by: Na Yeon Han <nayeon2.han@gmail.com>
Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
Co-authored-by: Jungnerd <46880056+jungnerd@users.noreply.github.com>
-
Matthijs Hollemans authored
* wrong argument name
* append eos_token_id
* all tokenizers need mask and ctc_blank tokens
* remove reduction factor from feature extractor
* add proper TTS loss
* did shifting the wrong way around
* mask out padded portions
* remove logits again (don't really need it)
* fix unit tests
* fixup
* pad also returns the decoder attention mask, since that's useful to have
* clean up feature extractor logic
* pad can handle TTS task too
* remove stop_labels from loss calculation
* simplify logic
* fixup
* do -100 masking properly
* small STFT optimization (calculate mel filterbanks only once)
* replace torchaudio fbanks with audio_utils
* remove torchaudio dependency
* simplify & speed up the STFT
* don't serialize window and mel filters
* output cross attentions when generating speech
* add guided attention loss
* fix failing test
* Update src/transformers/models/speecht5/feature_extraction_speecht5.py
  Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/speecht5/modeling_speecht5.py
  Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* change type annotation of attention_mask to LongTensor
* extract loss into class
* remove unused frame_signal_scale argument
* use config object in loss class
* fix type annotations in doc comments
* change optional to just bool
* implement missing tokenizer method
* add deprecation warning
* Update src/transformers/models/speecht5/feature_extraction_speecht5.py
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/speecht5/feature_extraction_speecht5.py
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* add deprecation warning for stop_labels
---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
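The bulk of this change is SpeechT5's text-to-speech training path (TTS loss, padding, feature extraction). For orientation, a hedged inference sketch using the SpeechT5 classes in transformers; the public Microsoft checkpoints are used as an example and the random speaker embedding is only a placeholder for a real 512-dim x-vector.

```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")
# Real usage loads a 512-dim speaker x-vector; a random vector keeps the
# sketch self-contained (the output will not sound like a real speaker).
speaker_embeddings = torch.randn(1, 512)

speech = model.generate_speech(
    inputs["input_ids"], speaker_embeddings, vocoder=vocoder
)
print(speech.shape)  # 1-D waveform tensor at 16 kHz
```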
-
- 17 Apr, 2023 14 commits
-
-
Sylvain Gugger authored
* Mark auto models as important
* Annoying file with bad line endings
-
Zachary Mueller authored
* Use accelerate for device management
* Add accelerate to setup
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
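One way accelerate exposes device management is `PartialState`; a hedged sketch of that pattern follows (not necessarily the exact mechanism this commit adopts), with a toy model standing in for a real one.

```python
import torch
from accelerate import PartialState

# PartialState picks the right device (CUDA, MPS, CPU, or the local
# distributed rank) without the caller re-implementing the detection logic.
state = PartialState()
device = state.device

model = torch.nn.Linear(4, 4).to(device)   # toy model for illustration
batch = torch.randn(2, 4, device=device)
print(state.process_index, model(batch).shape)
```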
-
Sylvain Gugger authored
Revert "Use code on the Hub from another repo (#22698)" This reverts commit ea7b0a53.
-
Sylvain Gugger authored
* Simplify update metadata job
* Match more branch names
* Install all what is necessary
* Install all what is necessary
* Forgot the dev
* Install less stuff
* This syntax?
-
Zachary Mueller authored
Remove accelerate from tf
-
Kunhao ZHENG authored
fix-squeeze-tuple
-
Yih-Dar authored
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Sylvain Gugger authored
* initial work
* Add other classes
* Refactor code
* Move warning and fix dynamic pipeline
* Issue warning when necessary
* Add test
-
Wonhyeong Seo authored
docs: ko: tasks/translation.mdx
-
Matt authored
-
fpgaminer authored
-
Jungnerd authored
fix: docs: ko: sagemaker anchors and `_toctree.yml`
Co-authored-by: Hyeonseo Yun <0525_hhgus@naver.com>
Co-authored-by: Gabriel Yang <gabrielwithhappy@gmail.com>
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
Co-authored-by: Na Yeon Han <nayeon2.han@gmail.com>
Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
-
Na Yeon Han authored
docs: ko: translated `custom_models.mdx`
Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
Co-authored-by: Gabriel Yang <gabrielwithhappy@gmail.com>
Co-authored-by: Jungnerd <46880056+jungnerd@users.noreply.github.com>
-
Yih-Dar authored
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 15 Apr, 2023 1 commit
-
-
bcol authored
-
- 14 Apr, 2023 8 commits
-
-
oscar-garzon authored
-
amyeroberts authored
* Indexing fix - CLIP checkpoint conversion
* Fix up
-
Joao Gante authored
-
Mayank Agarwal authored
* Fix word_ids hyperlink
* Add suggested fix
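For context on the method the fixed link points to, a small hedged illustration of `word_ids()`: it maps each token of a fast tokenizer's output back to the index of the word it came from (`None` for special tokens). The bert-base-cased checkpoint is just an example.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # any fast tokenizer
encoding = tokenizer("Transformers tokenizers are fast")

print(encoding.tokens())    # subword tokens, including [CLS]/[SEP]
print(encoding.word_ids())  # word index per token, e.g. [None, 0, 0, 1, ...], None for specials
```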
-
Matt authored
* If EOS is None, don't add it to sequences
* If EOS is None, don't add it to sequences
-
Sohyun Sim authored
* add ko preprocessing
* translate preprocessing.mdx to korean
* translate preprocessing.mdx
* Update preprocessing.mdx
  Fixed line 273 as follows: "Also, we recommend adding the `sampling_rate` argument to the feature extractor to better debug any silent errors that may occur."
* translate Image part
* translated preprocess.mdx
* Update docs/source/ko/preprocessing.mdx
  Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
* Update docs/source/ko/preprocessing.mdx
  Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
* Update docs/source/ko/preprocessing.mdx
  Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
* Update docs/source/ko/preprocessing.mdx
  Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
* Update docs/source/ko/preprocessing.mdx
  Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
* Update docs/source/ko/preprocessing.mdx
  Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
* Update docs/source/ko/preprocessing.mdx
  Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
* Update docs/source/ko/preprocessing.mdx
  Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
* Update docs/source/ko/preprocessing.mdx
* Update docs/source/ko/preprocessing.mdx
* Update docs/source/ko/preprocessing.mdx
* Update docs/source/ko/preprocessing.mdx
* Update docs/source/ko/preprocessing.mdx
* Update docs/source/ko/preprocessing.mdx
* fixed translation
---------
Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
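The line-273 note above recommends passing `sampling_rate` explicitly; a small hedged example of that tip, with an illustrative wav2vec2 checkpoint and a silent dummy waveform.

```python
import numpy as np
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
audio = np.zeros(16_000, dtype=np.float32)  # one second of silence at 16 kHz

# Passing sampling_rate lets the feature extractor complain about a mismatch
# instead of silently processing audio recorded at the wrong rate.
features = feature_extractor(audio, sampling_rate=16_000, return_tensors="pt")
print(features["input_values"].shape)
```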
-
Yih-Dar authored
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Alexander Ljungberg authored
Fixed string format; better tokenizer message.
Before: `Saving a {tokenizer_class} to {tokenizer_path}`
After: `Saving a LlamaTokenizerFast to outdir.`
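A tiny illustration of the bug class this commit fixes: without the `f` prefix, Python prints the braces literally instead of interpolating the variables. The variable values here are just the ones from the commit message.

```python
tokenizer_class = "LlamaTokenizerFast"
tokenizer_path = "outdir"

print("Saving a {tokenizer_class} to {tokenizer_path}")   # before: literal braces
print(f"Saving a {tokenizer_class} to {tokenizer_path}")  # after: actual values
```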
-