- 03 Oct, 2023 12 commits
-
-
Arthur authored
* remove unprotected import to PIL
* cleanup

Co-authored-by: Lysandre <lysandre@huggingface.co>
-
Younes Belkada authored
* fix issues with PEFT
* logger warning futurewarning issues
* fixup
* adapt from suggestions
* oops
* rm test
-
Younes Belkada authored
* add FA-2 support for mistral
* fixup
* add sliding windows
* fixing few nits
* v1 slicing cache - logits do not match
* add comment
* fix bugs
* more mem efficient
* add warning once
* add warning once
* oops
* fixup
* more comments
* copy
* add safety checker
* fixup
* Update src/transformers/models/mistral/modeling_mistral.py
* copied from
* up
* raise when padding side is right
* fixup
* add doc + few minor changes
* fixup

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
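The "sliding windows" cache slicing mentioned in the Mistral FA-2 commit can be sketched in plain Python; the list-based cache and `window` size are illustrative stand-ins for the real tensor KV cache:

```python
def update_sliding_cache(cache, new_entry, window=4096):
    # Append the newest token's KV entry, then keep only the most
    # recent `window` entries, as in sliding-window attention.
    cache.append(new_entry)
    if len(cache) > window:
        del cache[: len(cache) - window]
    return cache


cache = []
for t in range(10):
    update_sliding_cache(cache, t, window=4)
# cache now holds only the last 4 entries: [6, 7, 8, 9]
```

This keeps memory bounded by the window size rather than the full sequence length.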
-
Arthur authored
* fix stripping
* nits
* fix another test
* styling
* fix?
* update
* revert bad merge
* found the bug
* YES SIR
* is that change really required?
* make fast even faster
* re order functions
-
Srijan Sahay Srivastava authored
* [Doctest] Add configuration_encoder_decoder.py

  Added configuration_encoder_decoder.py to utils/documentation_tests.txt for doctest

* Revert "[Doctest] Add configuration_encoder_decoder.py"

  This reverts commit bd653535a4356dc3c9f43e65883819079a2053b0.

* [Doctest] Add configuration_encoder_decoder.py

  add configuration_encoder_decoder.py to utils/documentation_tests.txt

* [Doctest] Add configuration_encoder_decoder.py

  add configuration_encoder_decoder.py to utils/documentation_tests.txt

* [Doctest] Add configuration_encoder_decoder.py

  add configuration_encoder_decoder.py to utils/documentation_tests.txt

* changed as per request
* fixed line 46
-
Funtowicz Morgan authored
* Add initial version for run_tests_multi_gpu
* Trigger change in BERT
* fix typo setup -> setup_gpu
* Add tag mi210
* Enable multi-gpu jobs
* One more
* Use dynamic device allocation
* Attempt to fix syntax for docker create
* fix script path
* fix
* temp machine type
* fix label
* Enable multi-gpu tests
* Rename multi-amd-gpu to multi-gpu
* Let's not be lazy dude
* Update rocm-smi output
* Add gpu_flavour in the matrix
* Fix typos
* merge single/multi dispatch into the matrix
* Format.
* Revert BERT's change

Co-authored-by: Guillaume LEGENDRE <glegendre01@gmail.com>
-
Sanchit Gandhi authored
-
Nathan Cahill authored
* add tokenizer kwarg inputs
* Adding tokenizer_kwargs to _sanitize_parameters
* Add truncation=True example to tests
* Update test_pipelines_fill_mask.py
* Update test_pipelines_fill_mask.py
* make fix-copies and make style
* Update fill_mask.py: replace single tick with double
* make fix-copies
* Style

Co-authored-by: Lysandre <lysandre@huggingface.co>
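The `_sanitize_parameters` plumbing referenced above routes caller kwargs to the right pipeline stage; a simplified standalone sketch of that split (not the actual pipeline code, and `top_k` is just a second illustrative parameter):

```python
def sanitize_parameters(tokenizer_kwargs=None, top_k=None, **unused):
    # Split call-time kwargs into preprocess / forward / postprocess
    # dicts, mirroring how pipelines forward `tokenizer_kwargs` to
    # the tokenizer during preprocessing.
    preprocess_params = {}
    if tokenizer_kwargs is not None:
        preprocess_params["tokenizer_kwargs"] = tokenizer_kwargs
    postprocess_params = {}
    if top_k is not None:
        postprocess_params["top_k"] = top_k
    return preprocess_params, {}, postprocess_params


pre, fwd, post = sanitize_parameters(
    tokenizer_kwargs={"truncation": True}, top_k=5
)
# pre  == {"tokenizer_kwargs": {"truncation": True}}
# post == {"top_k": 5}
```

This is why `truncation=True` can now reach the tokenizer from a fill-mask pipeline call.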
-
Patrick von Platen authored
[Logging] Change warning to info
-
dependabot[bot] authored
Bump urllib3 in /examples/research_projects/decision_transformer

Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.9 to 1.26.17.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.9...1.26.17)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
dependabot[bot] authored
Bump urllib3 in /examples/research_projects/visual_bert

Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.5 to 1.26.17.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.5...1.26.17)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
dependabot[bot] authored
Bump urllib3 in /examples/research_projects/lxmert

Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.5 to 1.26.17.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.5...1.26.17)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
- 02 Oct, 2023 15 commits
-
-
Florian Zimmermeister authored
* start working on next chapter
* finish testing
* Update docs/source/de/testing.md
* Update docs/source/de/testing.md
* Update docs/source/de/testing.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
Wonhyeong Seo authored
* docs: ko: tokenizer_summary.md
* update review
* fix: resolve suggestions
* fix: resolve suggestions

Co-authored-by: HanNayeoniee <nayeon2.han@gmail.com>
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
Co-authored-by: Juntae <79131091+sronger@users.noreply.github.com>
Co-authored-by: Injin Paek <71638597+eenzeenee@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Hyeonseo Yun <0525yhs@gmail.com>
-
Arthur authored
* add build_inputs_with_special_tokens to LlamaFast
* fixup
* Update src/transformers/models/llama/tokenization_llama_fast.py
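A minimal sketch of what `build_inputs_with_special_tokens` does for a Llama-style tokenizer. The BOS/EOS ids and the `add_bos`/`add_eos` flags are illustrative; the real method reads them from the tokenizer's configuration:

```python
def build_inputs_with_special_tokens(ids_0, ids_1=None, bos_id=1, eos_id=2,
                                     add_bos=True, add_eos=False):
    # Wrap one (or a pair of) token-id sequences with special tokens.
    bos = [bos_id] if add_bos else []
    eos = [eos_id] if add_eos else []
    output = bos + ids_0 + eos
    if ids_1 is not None:
        output += bos + ids_1 + eos
    return output


build_inputs_with_special_tokens([5, 6, 7])
# → [1, 5, 6, 7]
build_inputs_with_special_tokens([5], [8], add_eos=True)
# → [1, 5, 2, 1, 8, 2]
```

Having this on the fast tokenizer keeps single-sequence and pair encoding consistent with the slow tokenizer.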
-
Arthur authored
* fix encoding when the fill token is None
* add tests and edge cases
* fixup
* Update tests/models/code_llama/test_tokenization_code_llama.py
-
Adithya Hegde Kota authored
* [Doctest] Add configuration_roformer.py
* [Doctest] Add configuration_roformer.py
* [Doctest] Add configuration_roformer.py
* [Doctest] Add configuration_roformer.py
* Removed documentation_test.txt
* Removed configuration_roformer.py
* Update not_doctested.txt
-
Arthur authored
* fix stripping
* remove some warnings and update some warnings
* revert changes for other PR
-
Younes Belkada authored
Update modeling_utils.py
-
Arthur authored
* fix wav2vec2
* nit
* stash
* one more file to update
* fix byt5
* vocab size is 256, don't change that!
* use other revision
* test persimmon in smaller size
* style
* tests
* nits
* update add tokens from pretrained
* test tokenization
* nits
* potential fnet fix?
* more nits
* nits
* correct test
* assert close
* update
* ouch
* fix it
* some more nits
* FINALLY
* use `adept` checkpoints
* more adept checkpoints
* that was involved!
-
Younes Belkada authored
* fix bnb test with code revision
* fix test
* Apply suggestions from code review
* Update src/transformers/models/auto/auto_factory.py
* Update src/transformers/models/auto/auto_factory.py
* Update src/transformers/models/auto/auto_factory.py
-
Younes Belkada authored
* try
* nit
* nits
-
HelgeS authored
-
marcmk6 authored
* fix issue of canine forward requires input_ids anyway

  The `forward` requires `input_ids` for deriving other variables in all cases. Change this to use the given one between `input_ids` and `inputs_embeds`.

* fix canine forward

  The current `forward` requires (the shape of) `input_ids` for deriving other variables whenever `input_ids` or `inputs_embeds` is provided. Change this to use the given one instead of `input_ids` all the time.

* fix format
* fix format
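The CANINE fix above amounts to deriving the input shape from whichever argument was actually passed, rather than unconditionally reading `input_ids`. A plain-Python sketch of that dispatch (nested lists stand in for tensors):

```python
def resolve_input_shape(input_ids=None, inputs_embeds=None):
    # Use whichever input was given; embeddings carry an extra
    # hidden dimension that is dropped from the (batch, seq_len) shape.
    if input_ids is not None and inputs_embeds is not None:
        raise ValueError("Specify either input_ids or inputs_embeds, not both")
    if input_ids is not None:
        return (len(input_ids), len(input_ids[0]))
    if inputs_embeds is not None:
        return (len(inputs_embeds), len(inputs_embeds[0]))
    raise ValueError("You have to specify either input_ids or inputs_embeds")


resolve_input_shape(inputs_embeds=[[[0.1, 0.2]] * 3] * 2)
# → (2, 3): batch of 2, sequence length 3
```

Before the fix, the `inputs_embeds`-only path would fail because the shape came from the absent `input_ids`.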
-
Jan Philipp Harries authored
fix requests connection error

Co-authored-by: Jan Philipp Harries <jphme@users.noreply.github.com>
-
Florian Seiler authored
* Fix num_heads in _upad_input

  The variable num_key_value_heads has falsely been named num_heads, which led to reshaping the query_layer using the wrong attention head count. (It would have been enough to use the correct variable self.num_heads instead of num_heads, but I renamed num_heads to num_key_value_heads for clarity.)

* fixed copies using make fix-copies and ran make fixup

Co-authored-by: fseiler <f.seiler@jerocom.de>
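The bug above is the classic grouped-query-attention shape mix-up: queries use the full `num_heads`, while keys/values use the smaller `num_key_value_heads`. A shape-only sketch with illustrative sizes (plain tuples, no tensors):

```python
def attention_shapes(total_tokens, hidden_size, num_heads, num_key_value_heads):
    head_dim = hidden_size // num_heads
    # Queries keep the full head count...
    query_shape = (total_tokens, num_heads, head_dim)
    # ...but keys/values use the (smaller) KV head count under GQA,
    # which is why reshaping queries with num_key_value_heads breaks.
    kv_shape = (total_tokens, num_key_value_heads, head_dim)
    return query_shape, kv_shape


q, kv = attention_shapes(total_tokens=10, hidden_size=4096,
                         num_heads=32, num_key_value_heads=8)
# q == (10, 32, 128), kv == (10, 8, 128)
```

Mixing the two counts in a reshape silently produces the wrong layout, which is exactly what the renamed variable guards against.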
-
Lysandre Debut authored
* Revert "Falcon: fix revision propagation (#26006)"

  This reverts commit 118c676ef3124423e5d062b665f05cde55bc9a90.

* Revert "Put Falcon back (#25960)"

  This reverts commit 22a69f1d.
-
- 29 Sep, 2023 6 commits
-
-
Sanchit Gandhi authored
* improve docs/errors
* why whisper
* Update docs/source/en/pipeline_tutorial.md
* specify pt only

Co-authored-by: Lysandre Debut <hi@lysand.re>
-
Sanchit Gandhi authored
* from seq2seq speech
* [Flax] Example script for speech seq2seq
* tests and fixes
* make style
* fix: label padding tokens
* fix: label padding tokens over list
* update ln names for Whisper
* try datasets iter loader
* create readme and append results
* style
* make style
* adjust lr
* use pt dataloader
* make fast
* pin gen max len
* finish
* add pt to requirements for test
* fix pt -> torch
* add accelerate
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Yih-Dar authored
skip

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Maria Khalusova authored
* navigation improvement between text generation pipelines and text generation docs
* make style
-
Steven Liu authored
update
-
- 28 Sep, 2023 7 commits
-
-
Sanchit Gandhi authored
make decoding faster
-
Amelie Schreiber authored
* Fixed in-place operation error in EsmEmbeddings
* Fixed in-place operation error in EsmEmbeddings again

Co-authored-by: Schreiber-Finance <amelie.schreiber.finance@gmail.com>
-
Marc Sun authored
* fix_mbart_tied_weights
* add test
-
fleance authored
Do not warn about unexpected decoder weights when loading T5EncoderModel and LongT5EncoderModel (#26211)

Both T5EncoderModel and LongT5EncoderModel do not have any decoder layers, so loading a pretrained model checkpoint such as t5-small will give warnings about keys found in the model checkpoint that are not in the model itself. To prevent this log warning, r"decoder" has been added to _keys_to_ignore_on_load_unexpected for both T5EncoderModel and LongT5EncoderModel.
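The mechanism behind `_keys_to_ignore_on_load_unexpected` is a regex filter over checkpoint keys that have no match in the model; a simplified standalone sketch (not the actual weight-loading code):

```python
import re

# Patterns of checkpoint keys an encoder-only model is allowed
# to silently skip, as in the T5EncoderModel fix.
_keys_to_ignore_on_load_unexpected = [r"decoder"]


def filter_unexpected(unexpected_keys, patterns):
    # Drop checkpoint keys matching any ignore pattern, so no
    # warning is logged for weights the model never uses.
    return [k for k in unexpected_keys
            if not any(re.search(p, k) for p in patterns)]


unexpected = [
    "decoder.block.0.layer.0.SelfAttention.q.weight",
    "lm_head.weight",
]
filter_unexpected(unexpected, _keys_to_ignore_on_load_unexpected)
# → ["lm_head.weight"]
```

Only the keys surviving the filter would still trigger an "unexpected weights" warning.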
-
Younes Belkada authored
[`PEFT`] introducing `adapter_kwargs` for loading adapters from different Hub location (`subfolder`, `revision`) than the base model (#26270)

* make use of adapter_revision
* v1 adapter kwargs
* fix CI
* fix CI
* fix CI
* fixup
* add BC
* Update src/transformers/integrations/peft.py
* fixup
* change it to error
* Update src/transformers/modeling_utils.py
* Update src/transformers/modeling_utils.py
* fixup
* change
* Update src/transformers/integrations/peft.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
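The feature boils down to letting the adapter resolve from a different `revision`/`subfolder` than the base weights. A hypothetical sketch of that resolution step (function and defaults are illustrative, not the actual peft-integration API):

```python
def resolve_adapter_location(base_revision="main", adapter_kwargs=None):
    # The adapter may live at a different Hub revision/subfolder than
    # the base model; fall back to base settings when unspecified.
    adapter_kwargs = dict(adapter_kwargs or {})
    adapter_kwargs.setdefault("revision", base_revision)
    adapter_kwargs.setdefault("subfolder", None)
    return adapter_kwargs


resolve_adapter_location("main", {"revision": "v2-adapter"})
# → {"revision": "v2-adapter", "subfolder": None}
```

Without a separate kwargs dict, the base model's `revision` would be forced onto the adapter download as well.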
-
Fakhir Ali authored
* [VITS] Fix speaker_embed device mismatch - pass device arg to speaker_id tensor
* [VITS] put speaker_embed on device when int
* [VITS] device=self.device instead of self.embed_speaker.weight.device
* [VITS] make tensor directly on device using torch.full()
-
Tanishq Abraham authored
* change mention of decoder_input_ids to input_ids and same with decoder_input_embeds
* Style

Co-authored-by: Lysandre <lysandre@huggingface.co>
-