- 25 Apr, 2023 1 commit
Lingepumpe authored
* Avoid invalid escape sequences, use raw strings (sketched below)
* Integrate PR feedback
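The escape-sequence fix is easiest to see on a regex pattern; a minimal sketch, with a made-up pattern for illustration:

```python
import re

# "\d" and "\s" are not valid string escapes, so a plain literal like "\d+\s*"
# triggers a DeprecationWarning (a SyntaxWarning or error on newer Pythons).
# A raw string keeps the backslashes literal, which is what the regex engine expects.
pattern = r"\d+\s*"

print(re.findall(pattern, "12  34"))  # ['12  ', '34']
```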
- 06 Feb, 2023 1 commit
Sylvain Gugger authored
* Result of black 23.1
* Update target to Python 3.7
* Switch flake8 to ruff
* Configure isort
* Apply isort with line limit
* Put the right black version
* adapt black in check copies
* Fix copies
- 03 Aug, 2022 1 commit
LSinev authored
Comparisons like `version.parse(torch.__version__) > version.parse("1.6")` are True for torch==1.6.0+cu101 or torch==1.6.0+cpu; comparisons on `version.parse(version.parse(torch.__version__).base_version)` are preferred (and available in pytorch_utils.py).
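A minimal sketch of the difference, assuming `packaging` is installed and using an illustrative torch version string:

```python
from packaging import version

torch_version = "1.6.0+cu101"  # illustrative installed build

# Local version suffixes (+cu101, +cpu) sort above the bare release, so this is True
# even though the installed torch is not newer than 1.6:
print(version.parse(torch_version) > version.parse("1.6"))  # True

# Comparing on base_version strips the local suffix and gives the intended answer:
base = version.parse(version.parse(torch_version).base_version)  # 1.6.0
print(base > version.parse("1.6"))  # False
```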
- 29 Jul, 2022 1 commit
Sylvain Gugger authored
* Preliminary work on tokenizers
* Quality + fix tests
* Treat processors
* Fix pad
* Remove all uses of in tests, docs and examples
* Replace all as_target_tokenizer (see the sketch after this list)
* Fix tests
* Fix quality
* Update examples/flax/image-captioning/run_image_captioning_flax.py
* Style

Co-authored-by: amyeroberts <amy@huggingface.co>
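A minimal sketch of the API change this commit rolls out, using t5-small as an illustrative seq2seq tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
inputs = ["translate English to German: Hello"]
targets = ["Hallo"]

# Old pattern, removed by this commit:
# with tokenizer.as_target_tokenizer():
#     labels = tokenizer(targets, padding=True)

# New pattern: pass the targets directly via text_target.
batch = tokenizer(inputs, text_target=targets, padding=True, return_tensors="pt")
print(batch["labels"])
```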
- 12 May, 2022 1 commit
Sylvain Gugger authored
* Black preview
* Fixup too!
* Fix check copies
* Use the same version as the CI
* Bump black
- 11 Nov, 2021 1 commit
Stas Bekman authored
- 30 Jul, 2021 1 commit
21jun authored
The help for `ModelArguments.gradient_checkpointing` should be "If True, use gradient checkpointing to save memory at the expense of slower backward pass.", not "Whether to freeze the feature extractor layers of the model.", which was duplicated from the `freeze_feature_extractor` arg.
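A minimal sketch of the corrected field, assuming the usual dataclass-plus-HfArgumentParser layout of the example scripts (defaults here are illustrative):

```python
from dataclasses import dataclass, field


@dataclass
class ModelArguments:
    freeze_feature_extractor: bool = field(
        default=True,
        metadata={"help": "Whether to freeze the feature extractor layers of the model."},
    )
    gradient_checkpointing: bool = field(
        default=False,
        metadata={
            "help": "If True, use gradient checkpointing to save memory at the expense of slower backward pass."
        },
    )
```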
- 25 Jun, 2021 1 commit
Stas Bekman authored
- 14 Jun, 2021 1 commit
Stas Bekman authored
* consistent nn. and nn.functional: p4 examples (see the sketch below)
* restore
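A minimal sketch of the convention being applied, under the assumption that the goal is a single `from torch import nn` import with functional ops reached as `nn.functional.*` rather than a separate `F` alias:

```python
import torch
from torch import nn

x = torch.randn(2, 4)

probs = nn.functional.softmax(x, dim=-1)  # rather than: import torch.nn.functional as F; F.softmax(...)
layer = nn.Linear(4, 3)                   # module classes are still reached through nn.
print(layer(x).shape)                     # torch.Size([2, 3])
```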
- 12 May, 2021 1 commit
Philip May authored
- 18 Mar, 2021 1 commit
Mohamed El-Geish authored
* wav2vec2: support datasets other than LibriSpeech
* Formatting run_asr.py to pass code quality test
* bundled orthography options and added verbose logs
* fixing a typo in timit fine-tuning script
* update comment for clarity
* resize_lm_head and load custom vocab from file
* adding a max_duration_in_seconds filter (see the sketch after this list)
* do not assign `duration_filter` lambda, use a def
* log untransliterated text as well
* fix base model for arabic
* fix duration filter when target_sr is not set
* drop duration_in_seconds when unneeded
* script for wav2vec2-large-lv60-timit-asr
* fix for "tha" in arabic corpus (huggingface#10581)
* adding more options to work with common_voice
* PR feedback (huggingface#10581)
* small README change
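A minimal sketch of the max_duration_in_seconds filter described above; the dataset, column names, and threshold are illustrative, not the exact code from run_asr.py:

```python
from datasets import load_dataset

max_duration_in_seconds = 20.0

# Illustrative tiny ASR dataset with a decoded "audio" column.
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")


def add_duration(example):
    audio = example["audio"]
    example["duration_in_seconds"] = len(audio["array"]) / audio["sampling_rate"]
    return example


def duration_filter(duration):
    # a named def instead of an assigned lambda, per the commit bullets
    return duration <= max_duration_in_seconds


dataset = dataset.map(add_duration)
dataset = dataset.filter(duration_filter, input_columns=["duration_in_seconds"])
dataset = dataset.remove_columns("duration_in_seconds")  # drop it once it is no longer needed
```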
- 05 Mar, 2021 1 commit
Patrick von Platen authored
- 01 Mar, 2021 1 commit
Patrick von Platen authored
* add encode labels function to tokenizer
* start adding finetuning
* init dropout
* upload
* correct convert script
* apply changes
* fix second typo
* make first dummy training run
* adapt convert script
* push config for comparison
* remove conf
* finish training
* adapt data collator
* add research folder
* update according to fairseq feedback
* some minor corrections
* refactor masking indices a bit
* some minor changes
* clean tokenizer
* finish clean-up
* remove previous logic
* update run script
* correct training
* finish changes
* finish model
* correct bug
* fix training a bit more
* add some tests
* finish gradient checkpointing
* finish example
* correct gradient checkpointing
* improve tokenization method
* revert changes in tokenizer
* revert general change
* adapt fine-tuning
* update
* save intermediate test
* Update README.md
* finish finetuning
* delete conversion script
* Update src/transformers/models/wav2vec2/configuration_wav2vec2.py
* Update src/transformers/models/wav2vec2/processing_wav2vec2.py
* finish wav2vec2 script
* finish wav2vec2 fine-tuning (see the sketch after this list)
* finalize test
* correct test
* adapt tests
* finish
* remove test file

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
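A minimal sketch of the fine-tuning setup this commit introduces; the checkpoint name and the individual calls reflect the current transformers API, not the exact example script:

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

model.freeze_feature_extractor()       # common choice when fine-tuning on small datasets
model.gradient_checkpointing_enable()  # trade compute for memory, as in the commit above

# dummy forward pass: one second of silence at 16 kHz
inputs = processor(torch.zeros(16_000).numpy(), sampling_rate=16_000, return_tensors="pt")
logits = model(input_values=inputs.input_values).logits
print(logits.shape)  # (batch, time, vocab_size)
```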