- 17 Jul, 2023 6 commits
-
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Samin Yasar authored
* add multimodal heading and docqa
* fix sentence
* task_summary data type = modality clarification
* change the multimodal example to a smaller model
-
dependabot[bot] authored
Bump cryptography from 41.0.0 to 41.0.2 in /examples/research_projects/decision_transformer (#24833)

Bump cryptography in /examples/research_projects/decision_transformer

Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.0 to 41.0.2.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.0...41.0.2)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
namespace-Pt authored
1
-
Sohyun Sim authored
* docs: ko: custom_tools.mdx
* feat: deepl draft
* fix: change .mdx to .md
* fix: resolve suggestions
* fix: resolve suggestions
-
statelesshz authored
* deprecate fairscale's ShardedDDP
* fix code style
* roll back
* deprecate the `sharded_ddp` training argument
---------
Co-authored-by: jihuazhong <jihuazhong1@huawei.com>
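As a rough illustration of what deprecating a training argument looks like, here is a minimal sketch; the exact warning text and its placement in `training_args.py` are assumptions, not taken from the PR:

```python
import warnings

# Hypothetical parsed value of the deprecated argument.
sharded_ddp = "simple"

if sharded_ddp:
    # Assumed wording; the PR's actual message may differ.
    warnings.warn(
        "`sharded_ddp` (fairscale) is deprecated and will be removed in a "
        "future release; consider the `fsdp` options instead.",
        FutureWarning,
    )
```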
-
- 14 Jul, 2023 4 commits
-
-
Kadir Nar authored
* [🔗 Docs] Fixed Incorrect Migration Link
* Update README.md

  Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
-
Sylvain Gugger authored
* First models
* Conditional DETR
* Treat DETR models, skip others
* Skip LayoutLMv2 as well
* Fix last tests
-
Dario Sučić authored
-
Nicolas Patry authored
Fixing double `use_auth_token.pop` (preventing private models from being visible).

Should fix: https://github.com/huggingface/transformers/issues/14334#issuecomment-1634527833

Repro: Have a private repo with `vocab.json` (spread-out files for the tokenizer) and use `AutoTokenizer.from_pretrained(..., use_auth_token="token")`.
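A minimal sketch of the repro, assuming a hypothetical private repo name and a placeholder token:

```python
from transformers import AutoTokenizer

# Before the fix, a second internal `use_auth_token.pop` consumed the token,
# so the private repo's tokenizer files appeared to be missing.
tokenizer = AutoTokenizer.from_pretrained(
    "your-org/private-tokenizer",  # hypothetical private repo
    use_auth_token="token",        # placeholder token string
)
```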
-
- 13 Jul, 2023 16 commits
-
-
Sylvain Gugger authored
* Copy code when using local trust remote code
* Remote upgrade strategy
* Revert "Remote upgrade strategy"

  This reverts commit 4f0392f5d747bcbbcf7211ef9f9b555a86778297.
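For context, a minimal sketch of the loading path this change touches; the local path is a placeholder:

```python
from transformers import AutoModel

# With trust_remote_code=True, the custom modeling code shipped alongside the
# checkpoint is executed; this change copies that code when it is loaded from
# a local directory.
model = AutoModel.from_pretrained(
    "/path/to/local/checkpoint",  # placeholder path to a repo with custom code
    trust_remote_code=True,
)
```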
-
Sylvain Gugger authored
* Run hub tests
* [all-test] Run tests please!
* [all-test] Add vision dep for hub tests
* Fix tests
-
Fady Nakhla authored
Switch `_BaseAutoModelClass.from_pretrained` and `from_config` to use the `register` classmethod the class defines, rather than calling the `_LazyAutoMapping` register method directly. This makes use of the additional consistency check within the base class's `register`.
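A minimal sketch of the consistency check this routes through; all class names below are hypothetical:

```python
from transformers import AutoConfig, AutoModel, PretrainedConfig, PreTrainedModel

class MyConfig(PretrainedConfig):
    model_type = "my-model"

class MyModel(PreTrainedModel):
    config_class = MyConfig

AutoConfig.register("my-model", MyConfig)
# _BaseAutoModelClass.register verifies that the model's config_class matches
# the config being registered, raising a ValueError on a mismatch.
AutoModel.register(MyConfig, MyModel)
```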
-
Georgie Mathews authored
-
Matt authored
* Remove Falcon docs for the release until TGI is ready
* Update toctree
-
dymil authored
-
amyeroberts authored
* Add accelerate version in transformers-cli env
* Add accelerate config
-
Joao Gante authored
* add rope_scaling
* tmp commit
* add gptneox
* add tests
* GPTNeoX can now handle long inputs, so the pipeline test was wrong
* Update src/transformers/models/open_llama/configuration_open_llama.py

  Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* remove ntk
* remove redundant validation
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
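A minimal sketch of the new `rope_scaling` argument; the scaling factor here is an illustrative assumption:

```python
from transformers import GPTNeoXConfig

# rope_scaling takes a dict with a strategy name and a factor > 1.0, letting
# the model attend beyond its originally trained context length.
config = GPTNeoXConfig(
    rope_scaling={"type": "linear", "factor": 2.0},
)
```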
-
Sylvain Gugger authored
* Deprecate some models
* Fix imports
* Fix inits too
* Remove tests
* Add deprecated banner to documentation
* Remove from init
* Fix auto classes
* Style
* Remote upgrade strategy 1
* Remove site package cache
* Revert this part
* Fix typo...
* Update utils
* Update docs/source/en/model_doc/bort.md

  Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
* Address review comments
* With all files saved
---------
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
amyeroberts authored
* Fix doctest checkpoint
* Add import torch for mobilevit
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Bram Vanroy authored
* Update training_args.py

  Clarify the relationship between `load_best_model_at_end` and `save_total_limit`.
* fix: faulty quotes
* make quality
* Update src/transformers/training_args.py

  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* DOCS: add explicit `True`
* DOCS: make style/quality
---------
Co-authored-by: Bram Vanroy <Bram.Vanroy@UGent.be>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
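A minimal sketch of the interaction being documented, with placeholder values:

```python
from transformers import TrainingArguments

# With load_best_model_at_end=True, the best checkpoint is always retained,
# so save_total_limit=2 keeps the best checkpoint plus the most recent one.
args = TrainingArguments(
    output_dir="out",             # placeholder
    evaluation_strategy="steps",  # must match save_strategy for best-model tracking
    save_strategy="steps",
    save_total_limit=2,
    load_best_model_at_end=True,
)
```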
-
SeongBeomLEE authored
[fix] Change the condition of ValueError in "convert_checkpoint_from_transformers_to_megatron" (#24769)

* fix: half inference error

  norm_factor was still torch.float32 after calling model.half, so it is now registered as a buffer, letting it be cast to torch.float16 along with the model.
* fix: added persistent=False to the buffer
* run make style
* [fix] Change the condition of ValueError in convert_checkpoint_from_transformers_to_megatron
* [fix] error wording: layers -> attention heads
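A minimal sketch of the buffer pattern referenced in the first fix; the module and dimension are simplified stand-ins for the real attention code:

```python
import torch
from torch import nn

class Attention(nn.Module):
    def __init__(self, head_dim: int):
        super().__init__()
        # As a buffer, norm_factor is cast by model.half() together with the
        # parameters; persistent=False keeps it out of the state dict.
        self.register_buffer(
            "norm_factor",
            torch.sqrt(torch.tensor(head_dim, dtype=torch.float32)),
            persistent=False,
        )
```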
-
Liyang90 authored
* Update modeling_llama.py

  Removing unnecessary `device=device`
* fix in all occurrences of _make_causal_mask
-
- 12 Jul, 2023 9 commits
-
-
Zach Mueller authored
Rm duplicate
-
Lysandre Debut authored
-
Pedro Cuenca authored
gpt-bigcode: avoid `zeros_` to support Core ML. In-place `zeros_` is not supported by the Core ML conversion process. This PR replaces it with `zeros_like` so conversion can proceed. The change only affects a workaround for a PyTorch bug on the `cpu` device.
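A minimal sketch of the substitution, using a placeholder tensor rather than the real gpt-bigcode attention code:

```python
import torch

x = torch.randn(2, 4)

# In-place zero fill, e.g. x.zero_(), is rejected by the Core ML converter.
# The functional form below produces the same values and traces cleanly:
out = torch.zeros_like(x)
```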
-
Zach Mueller authored
* dim, and rm copy
* Don't rm copy for now
* Oops
* pad index
* Should be a working test
* Tickle down ddp timeout
* Put fix back in now that testing locally is done
* Better comment specifying timeout

  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
---------
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Yih-Dar authored
* fix
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Bauke Brenninkmeijer authored
* initial replacements of asserts with errors/exceptions
* replace assert with exception in generation, align and bart
* reset formatting change
* reset another formatting issue
* Apply suggestion

  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* don't touch this file
* change to 'is not False'
* fix type
---------
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
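A minimal sketch of the pattern applied throughout; the condition and message are placeholders:

```python
def check_max_length(max_length: int) -> int:
    # Before: assert max_length > 0, "max_length must be positive"
    # Asserts are stripped under `python -O`, so user-facing validation
    # is raised as a proper exception instead:
    if max_length <= 0:
        raise ValueError("`max_length` must be a positive integer")
    return max_length
```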
-
Joao Gante authored
* tmp commit
* __call__ docs
* kwargs documented; shorter input_ids doc
* nit
* Update src/transformers/generation/logits_process.py
-
amyeroberts authored
* Add to doctests
* Alphabetical order
-
Zach Mueller authored
Fix eval steps
-
- 11 Jul, 2023 5 commits
-
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Sylvain Gugger authored
-
Gaurav Kumbhat authored
* 🐛 Handle empty gen_kwargs for seq2seq trainer prediction_step fn

  Signed-off-by: gkumbhat <kumbhat.gaurav@gmail.com>
* Update src/transformers/trainer_seq2seq.py

  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
---------
Signed-off-by: gkumbhat <kumbhat.gaurav@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
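A minimal sketch of the guard, in a simplified `prediction_step`; `self._gen_kwargs` mirrors how Seq2SeqTrainer stashes generation arguments, but the surrounding structure is assumed:

```python
def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None, **gen_kwargs):
    # Fall back to the kwargs captured at evaluate()/predict() time when the
    # caller passes none, rather than generating with an empty dict.
    if len(gen_kwargs) == 0 and hasattr(self, "_gen_kwargs"):
        gen_kwargs = self._gen_kwargs.copy()
    ...
```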
-
Zach Mueller authored
* Try this
* Solved!
* Rm extraneous
* Rm extraneous
* self
* Args'
* Check if we created the lr scheduler
* Move comment
* Clean
-
Yih-Dar authored
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-