"examples/legacy/vscode:/vscode.git/clone" did not exist on "ad28ca291bf851b48d7f2d4becf96ca90c98f8f1"
- 13 Oct, 2023 4 commits
-
-
Younes Belkada authored
* fix fa-2 import * nit
-
dekomori_sanae09 authored
* fix docstring dpr config
* fix style
* Update descp
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
---------
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
-
Yih-Dar authored
* fix * [skip-ci] fix * [skip-ci] fix * [skip-ci] fix * [skip-ci] fix * fix * fix --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Bojun-Feng authored
* update check_docstrings * update docstring
-
- 12 Oct, 2023 9 commits
-
-
Yih-Dar authored
* fix * fix --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Heinz-Alexander Fuetterer authored
-
Adwait authored
* [docstring] Remove 'BertGenerationConfig' from OBJECTS_TO_IGNORE * [docstring] Fix docstring for 'BertGenerationConfig' (#26638)
-
Joseph McDonnell authored
* [DOCS] Update docstrings for and tokenizer * [DOCS] add pad_token argument to whisper tokenizer docstring * [FIX] Reword pad_token description * [CHORE] Apply style formatting --------- Co-authored-by: jmcdonnell <jmcdonnell@fieldbox.ai>
-
Gizem authored
* Remove UniSpeechConfig
* Remove , at the end otherwise check_docstring changes order
* Auto add new docstring
* Update docstring for UniSpeechConfig
* Remove from check_docstrings
* Remove UniSpeechSatConfig and UniSpeechSatForCTC from check_docstrings
* Remove , at the end
* Fix docstring
* Update docstring for Wav2Vec2ForCTC
* Update Wav2Vec2ForCTC docstring
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
* fix style
---------
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
-
William Horton authored
* Fix backward compatibility of Conversation

  I ran into a case where an external library was depending on the `new_user_input` field of Conversation. https://github.com/SeldonIO/MLServer/blob/release/1.4.x/runtimes/huggingface/mlserver_huggingface/codecs/utils.py#L37

  This field was deprecated as part of the refactor, but if `transformers` wants to maintain backwards compatibility for now (which is mentioned in a few comments) then there's a good argument for supporting it. Some comments referred to it as an "internal" property, but it didn't start with `_` as is Python convention, so I think it's reasonable that other libraries were referencing it directly. It's not difficult to add it to the other supported backwards-compatible properties.

  In addition, the implementation of `past_user_inputs` didn't actually match the past behavior (it would contain the most recent message as well) so I updated that as well.

* make style
---------
Co-authored-by: Matt <rocketknight1@gmail.com>
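A minimal sketch of the kind of backward-compatible properties described above, assuming the refactored Conversation stores messages as role/content dicts; the attribute names here are illustrative and may not match the actual transformers implementation:

```python
class Conversation:
    """Illustrative stand-in for the refactored Conversation class."""

    def __init__(self):
        # Messages stored as {"role": ..., "content": ...} dicts after the refactor.
        self.messages = []

    def add_user_input(self, text):
        self.messages.append({"role": "user", "content": text})

    @property
    def new_user_input(self):
        # Deprecated alias kept for backwards compatibility:
        # return the most recent user message, as older callers expect.
        user_messages = [m["content"] for m in self.messages if m["role"] == "user"]
        return user_messages[-1] if user_messages else None

    @property
    def past_user_inputs(self):
        # Exclude the most recent user message to match the pre-refactor behavior.
        user_messages = [m["content"] for m in self.messages if m["role"] == "user"]
        return user_messages[:-1]
```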
-
Lysandre Debut authored
* Logger level
Co-authored-by: Sahil Bhosale <sahilbhosale63@live.com>
Co-authored-by: Adithya4720 <hegdeadithyak@gmail.com>
Co-authored-by: Sachin Singh <sachinishu02@gmail.com>
Co-authored-by: Riya Dhanduke <113622644+riiyaa24@users.noreply.github.com>
* More comprehensive documentation
---------
Co-authored-by: Sahil Bhosale <sahilbhosale63@live.com>
Co-authored-by: Adithya4720 <hegdeadithyak@gmail.com>
Co-authored-by: Sachin Singh <sachinishu02@gmail.com>
Co-authored-by: Riya Dhanduke <113622644+riiyaa24@users.noreply.github.com>
-
Tom Aarsen authored
Add missing spaces in adjacent strings
-
Yih-Dar authored
* fix * fix * fix * fix * fix * fix * fix * fix * fix --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 11 Oct, 2023 9 commits
-
-
Bojun-Feng authored
* update check_docstrings * update docstring
-
Minho Ryang authored
* [docstring] Fix docstring for `LlamaTokenizer` and `LlamaTokenizerFast` * [docstring] Fix docstring typo at `LlamaTokenizer` and `LlamaTokenizerFast`
-
Yih-Dar authored
* fix --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Sourab Mangrulkar authored
-
Shivanand authored
* remove from utils
* updated doc string
* only in the model
* Update src/transformers/models/swin/modeling_swin.py
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
* Update src/transformers/models/swin/modeling_swin.py
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
---------
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
-
Patrick von Platen authored
* [Assistant Generation] Improve enc dec * save more * Fix logit processor checks * Clean * make style * fix deprecation * fix generation test * Apply suggestions from code review * fix biogpt * make style
-
Ben Gubler authored
* feat: update callback doc to explain disabling callbacks using report_to * docs: update report_to docstring
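For reference, a small usage sketch of the behaviour the updated docstring describes: passing report_to="none" to TrainingArguments disables the integration callbacks (the output directory here is a placeholder):

```python
from transformers import TrainingArguments

# "none" disables integration reporters (TensorBoard, W&B, etc.);
# "all" enables every installed integration.
args = TrainingArguments(output_dir="out", report_to="none")
```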
-
Billy Bradley authored
In assisted decoding, pass model_kwargs to model's forward call (fix prepare_inputs_for_generation in all models) (#25242)

* In assisted decoding, pass model_kwargs to model's forward call

  Previously, assisted decoding would ignore any additional kwargs that it doesn't explicitly handle. This was inconsistent with other generation methods, which pass the model_kwargs through prepare_inputs_for_generation and forward the returned dict to the model's forward call. The prepare_inputs_for_generation method needs to be amended in all models, as previously it only kept the last input ID when a past_key_values was passed.

* Improve variable names in _extend_attention_mask
* Refactor extending token_type_ids into a function
* Replace deepcopy with copy to optimize performance
* Update new persimmon model with llama changes for assisted generation
* Update new mistral model for assisted generation with prepare_inputs_for_generation
* Update position_ids creation in falcon prepare_inputs_for_generation to support assisted generation
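A simplified sketch of the flow this change describes, not the actual generate() code: assisted decoding now routes model_kwargs through prepare_inputs_for_generation and forwards the returned dict to the model, like the other generation methods:

```python
def assisted_decoding_step(model, input_ids, model_kwargs):
    # Route the extra kwargs through the model's own input preparation...
    model_inputs = model.prepare_inputs_for_generation(input_ids, **model_kwargs)
    # ...and forward the returned dict to the model, as other generation methods do.
    return model(**model_inputs, return_dict=True)
```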
-
Thien Tran authored
* set encoder's PE as non-trainable
* freeze flax
* init sinusoids
* add test for non-trainable embed positions
* simplify TF encoder embed_pos
* revert tf
* clean up
* add sinusoidal init for jax
* make consistent sinusoidal function
* fix dtype
* add default dtype
* use numpy for sinusoids. fix jax
* add sinusoid init for TF
* fix
* use custom embedding
* use specialized init for each impl
* fix sinusoids init. add test for pytorch
* fix TF dtype
* simplify sinusoid init for flax and tf
* add tests for TF
* change default dtype to float32
* add sinusoid test for flax
* Update src/transformers/models/whisper/modeling_flax_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* move sinusoidal init to _init_weights
---------
Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
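For context, a rough NumPy sketch of a standard sinusoidal position-embedding table like the one this change initializes across the PyTorch, TF, and Flax Whisper encoders; the exact formula and column layout used in the repository may differ:

```python
import numpy as np

def sinusoidal_positions(num_positions: int, embedding_dim: int) -> np.ndarray:
    # Classic sinusoid table (assumes an even embedding_dim):
    # even columns hold sin, odd columns hold cos.
    positions = np.arange(num_positions)[:, None]
    dims = np.arange(embedding_dim // 2)[None, :]
    angles = positions / np.power(10000.0, 2 * dims / embedding_dim)
    table = np.zeros((num_positions, embedding_dim), dtype=np.float32)
    table[:, 0::2] = np.sin(angles)
    table[:, 1::2] = np.cos(angles)
    return table
```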
-
- 10 Oct, 2023 4 commits
-
-
Roy Hvaara authored
`jnp.array` is a function, not a type: https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.array.html so it never makes sense to use `jnp.array` in a type annotation. Presumably the intent was to write `jnp.ndarray` aka `jax.Array`.
Co-authored-by: Peter Hawkins <phawkins@google.com>
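A minimal illustration of the fix described above (the function name is made up for the example):

```python
import jax.numpy as jnp

# Wrong: `jnp.array` is the array constructor, not a type,
# so it does not belong in annotations.
# def scale(hidden_states: jnp.array) -> jnp.array: ...

# Right: annotate with the array type, `jnp.ndarray` (aka `jax.Array`).
def scale(hidden_states: jnp.ndarray) -> jnp.ndarray:
    return hidden_states * 2.0
```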
-
théo gigant authored
* fix a typo in flax t5 attention * fix the typo in flax longt5 attention
-
Pavarissy authored
* Your commit message here * fix LlamaConfig docstring * run make fixup * fix formatting after review reformat of the file to prevent script issues * rerun make fixup after reformat
-
jiqing-feng authored
* control first downsample stride * reduce first only works for ResNetBottleNeckLayer * fix param name * fix style
-
- 09 Oct, 2023 5 commits
-
-
Isaac Chung authored
fix docstrings for vanilla clip
-
Alex Bzdel authored
* removed donutimageprocessor from objects_to_ignore * added docstring for donutimageprocessor * readding donut file * moved docstring to correct location
-
Isaac Chung authored
fix docstring for CLIPImageProcessor
-
Isaac Chung authored
* fix docstrings for CLIP configs * black formatted
-
NielsRogge authored
* Convert checkpoints * Update doc test * Address comment
-
- 06 Oct, 2023 8 commits
-
-
Yih-Dar authored
example fix docstring Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Arthur authored
* make sure eos and bos are properly handled for fast tokenizer * fix code llama as well * nits * fix the conversion script as well * fix failing test
-
statelesshz authored
* remove SharedDDP as it was deprecated * apply review suggestion * make style * Oops, forgot to remove the compute_loss context manager in Seq2SeqTrainer. * remove the unnecessary conditional statement * keep the logic of IPEX * clean code * mix precision setup & make fixup --------- Co-authored-by: statelesshz <jihuazhong1@huawei.com>
-
rui-ren authored
-
fxmarty authored
* remove unnecessary unsqueeze-squeeze in llama * correct other models * fix * revert gpt_neox_japanese * fix copies * fix test
-
Tianqi Liu authored
* Update tokenization_code_llama_fast.py * Update test_tokenization_code_llama.py * Update test_tokenization_code_llama.py
-
Towdo authored
-
Ramiro Leal-Cavazos authored
* Remove unnecessary `view` of `position_ids` in `modeling_llama` When `position_ids` is `None`, its value is generated using `torch.arange`, which creates a tensor of size `(seq_length + past_key_values_length) - past_key_values_length = seq_length`. The tensor is then unsqueezed, resulting in a tensor of shape `(1, seq_length)`. This means that the last `view` to a tensor of shape `(-1, seq_length)` is a no-op. This commit removes the unnecessary view. * Remove no-op `view` of `position_ids` in rest of transformer models
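A small sketch reproducing the reasoning above, using illustrative sequence lengths:

```python
import torch

seq_length, past_key_values_length = 8, 4

# position_ids as built when the caller passes None:
position_ids = torch.arange(
    past_key_values_length, seq_length + past_key_values_length
).unsqueeze(0)  # shape: (1, seq_length)

# The removed `.view(-1, seq_length)` was therefore a no-op.
assert position_ids.view(-1, seq_length).shape == position_ids.shape
```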
-
- 05 Oct, 2023 1 commit
-
-
eajechiloae authored
don't close clearml task if it was created externally
-