"tests/tokenization_roberta_test.py" did not exist on "bc1715c1e0872187a8b76d2f258d43815dcf6067"
- 04 Mar, 2024 13 commits
-
-
NielsRogge authored
* First draft * More improvements * More improvements * More fixes * Fix copies * More improvements * More fixes * More improvements * Convert checkpoint * More improvements, set up tests * Fix more tests * Add UdopModel * More improvements * Fix equivalence test * More fixes * Redesign model * Extend conversion script * Use real inputs for conversion script * Add image processor * Improve conversion script * Add UdopTokenizer * Add fast tokenizer * Add converter * Update README's * Add processor * Add fully fledged tokenizer * Add fast tokenizer * Use processor in conversion script * Add tokenizer tests * Fix one more test * Fix more tests * Fix tokenizer tests * Enable fast tokenizer tests * Fix more tests * Fix additional_special_tokens of fast tokenizer * Fix tokenizer tests * Fix more tests * Fix equivalence test * Rename image to pixel_values * Rename seg_data to bbox * More renamings * Remove vis_special_token * More improvements * Add docs * Fix copied from * Update slow tokenizer * Update fast tokenizer design * Make text input optional * Add first draft of processor tests * Fix more processor tests * Fix decoder_start_token_id * Fix test_initialization * Add integration test * More improvements * Improve processor, add test * Add more copied from * Add more copied from * Add more copied from * Add more copied from * Remove print statement * Update README and auto mapping * Delete files * Delete another file * Remove code * Fix test * Fix docs * Remove asserts * Add doc tests * Include UDOP in exotic model tests * Add expected tesseract decodings * Add sentencepiece * Use same design as T5 * Add UdopEncoderModel * Add UdopEncoderModel to tests * More fixes * Fix fast tokenizer * Fix one more test * Remove parallelisable attribute * Fix copies * Remove legacy file * Copy from T5Tokenizer * Fix rebase * More fixes, copy from T5 * More fixes * Fix init * Use ArthurZ/udop for tests * Make all model tests pass * Remove UdopForConditionalGeneration from auto mapping * Fix more tests * fixups * more fixups * fix the tokenizers * remove un-necessary changes * nits * nits * replace truncate_sequences_boxes with truncate_sequences for fix-copies * nit current path * add a test for input ids * ids that we should get taken from c9f7a32f57440d90ff79890270d376a1cc0acb68 * nits converting * nits * apply ruff * nits * nits * style * fix slow order of addition * fix udop fast range as well * fixup * nits * Add docstrings * Fix gradient checkpointing * Update code examples * Skip tests * Update integration test * Address comment * Make fixup * Remove extra ids from tokenizer * Skip test * Apply suggestions from code review Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update year * Address comment * Address more comments * Address comments * Add copied from * Update CI * Rename script * Update model id * Add AddedToken, skip tests * Update CI * Fix doc tests * Do not use Tesseract for the doc tests * Remove kwargs * Add original inputs * Update casting * Fix doc test * Update question * Update question * Use LayoutLMv3ImageProcessor * Update organization * Improve docs * Update forward signature * Make images optional * Remove deprecated device argument * Add comment, add add_prefix_space * More improvements * Remove kwargs --------- Co-authored-by:
ArthurZucker <arthur.zucker@gmail.com> Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
-
Donggeun Yu authored
* Update ms_deform_attn_cuda.cu * Update ms_deform_attn_cuda.cuh * Update modeling_deformable_detr.py * Update src/transformers/models/deformable_detr/modeling_deformable_detr.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Update modeling_deformable_detr.py * python utils/check_copies.py --fix_and_overwrite * Fix dtype mismatch error * Update test_modeling_deformable_detr.py * Update test_modeling_deformable_detr.py * Update modeling_deformable_detr.py * Update modeling_deformable_detr.py * Support DeformableDETR with bfloat16 * Add test code * Use AT_DISPATCH_FLOATING_TYPES_AND2 * Update tests/models/deformable_detr/test_modeling_deformable_detr.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Update tests/models/deformable_detr/test_modeling_deformable_detr.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Fix missing require_torch_bf16 function --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
-
Yoach Lacombe authored
-
Sven Schultze authored
* Fix grad_norm unserializable tensor log failure * Fix origin of grad_norm logs to be in deepspeed get_global_grad_norm()
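A rough sketch of the kind of guard this change implies (illustrative helper, not the actual Trainer code): the value returned by DeepSpeed's `get_global_grad_norm()` can be a tensor, which is not JSON-serializable, so it is converted to a plain Python float before being logged.

```python
import torch

def _as_serializable_grad_norm(grad_norm):
    """Hypothetical helper: make a grad-norm value safe to put in the log dict."""
    if isinstance(grad_norm, torch.Tensor):
        # A 0-dim tensor breaks JSON serialization of the logs; .item() yields a float.
        return grad_norm.detach().item()
    return grad_norm

# logs["grad_norm"] = _as_serializable_grad_norm(grad_norm)
```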
-
Zach Mueller authored
Fully revert atomic checkpointing
-
Nick DeGroot authored
* 🐛 Fix oneformer instance post processing when using panoptic task type * ✅ Add unit test for oneformer instance post processing panoptic bug --------- Co-authored-by: Nick DeGroot <1966472+nickthegroot@users.noreply.github.com>
-
Sean (Seok-Won) Yi authored
Fix: Fixed the previous tracking URI setting logic to prevent clashes with original MLflow code. (#29096) * Changed logic for setting the tracking URI. The previous code was calling the `mlflow.set_tracking_uri` function regardless of whether or not the environment variable `MLFLOW_TRACKING_URI` is even set. This led to clashes with the original MLflow implementation and therefore the logic was changed to only calling the function when the environment variable is explicitly set. * Check if tracking URI has already been set. The previous code did not consider the possibility that the tracking URI may already be set elsewhere and was therefore (erroneously) overriding previously set tracking URIs using the environment variable. * Removed redundant parentheses. Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Fix docstring to reflect library convention properly. Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Fix docstring to reflect library convention properly. "Unset by default" is the correct expression rather than "Default to `None`." Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
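The described behaviour boils down to something like the sketch below (a simplified, assumed shape, not the actual `MLflowCallback` code): `mlflow.set_tracking_uri` is only called when `MLFLOW_TRACKING_URI` is explicitly set, so a URI configured elsewhere is left untouched.

```python
import os
import mlflow

# Illustrative sketch of the setup logic described in the commit message.
tracking_uri = os.environ.get("MLFLOW_TRACKING_URI")
if tracking_uri is not None:
    # Only override the tracking URI when the environment variable is explicitly set;
    # otherwise keep whatever was configured elsewhere (or MLflow's default).
    mlflow.set_tracking_uri(tracking_uri)
```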
-
NielsRogge authored
* First commit * Improve conversion script * Convert more checkpoints * Update src/transformers/models/sam/convert_sam_original_to_hf_format.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * Rename file * More updates * Update docstring * Update script --------- Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
-
Traun Leyden authored
-
Y4hL authored
* Add mlx support * Fix import order and use def instead of lambda * Another fix for ruff format :) * Add detecting mlx from repr, add is_mlx_array
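The detection presumably looks something like the sketch below (the `is_mlx_array` name comes from the commit message, the body is illustrative; the repr-based fallback mentioned above is not shown): the helper avoids importing `mlx` unless it is actually installed.

```python
def is_mlx_array(obj):
    """Illustrative sketch: return True if obj is an MLX array, without requiring mlx."""
    try:
        import mlx.core as mx  # MLX exposes its array type as mlx.core.array
    except ImportError:
        return False
    return isinstance(obj, mx.array)
```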
-
Siming Dai authored
Fix mixtral load balancing loss Co-authored-by: dingkunbo <dingkunbo@baidu.com>
-
Poedator authored
Update path to Hub files: need to add `tree/` to the path to files at the HF Hub. See example path: `https://huggingface.co/meta-llama/Llama-2-7b-hf/tree/main`
-
Fanli Lin authored
* use require_torch_gpu * enable on XPU
-
- 01 Mar, 2024 10 commits
-
-
David Valente authored
* Correct zero division error in inverse sqrt scheduler * default timescale to 10_000
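For context, an inverse square-root schedule has roughly the shape sketched below (an illustrative formulation, not the library's exact code): if the timescale falls back to a zero warmup-step count, the division blows up, hence the default of `10_000`.

```python
import math

def inverse_sqrt_lambda(step, num_warmup_steps=0, timescale=None):
    """Illustrative inverse square-root learning-rate factor (assumed, simplified)."""
    if timescale is None:
        # Defaulting the timescale to num_warmup_steps divides by zero when warmup is 0,
        # hence a fallback such as 10_000.
        timescale = num_warmup_steps if num_warmup_steps > 0 else 10_000
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)  # linear warmup
    shift = timescale - num_warmup_steps
    return 1.0 / math.sqrt((step + shift) / timescale)
```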
-
Zach Mueller authored
* Fix deprecated arg issue * Trainer check too * Check for dict or dataclass * Simplify, make config always AcceleratorConfig * Upstream to Trainer
-
Marc Sun authored
-
Jingya HUANG authored
enable subfolder
-
amyeroberts authored
* Fix yolos processing * Add back slow marker - protects for pycocotools in slow * Slow decorator goes above copied from header
-
Sanchit Gandhi authored
* [Whisper Tok] Update integration test * make style
-
Arthur authored
* use the generation config 🫠 * fixup
-
Younes Belkada authored
* fix ESM 8bit * Apply suggestions from code review Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * fixup --------- Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
-
Leon Engländer authored
* LlamaForQuestionAnswering self.transformer->self.model * fix "Copied from" string * Llama QA model: set base_model_prefix = "transformer"
-
Song Fuchang authored
Expose `offload_buffers` parameter of `accelerate` to `PreTrainedModel.from_pretrained` method (#28755)
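Usage presumably looks like the sketch below (model id and folder are placeholders): the keyword is forwarded to `accelerate`, which then offloads non-parameter buffers as well as weights when parts of the model are dispatched off the GPU.

```python
from transformers import AutoModelForCausalLM

# Placeholder model id and offload folder; offload_buffers is forwarded to accelerate.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    device_map="auto",
    offload_folder="offload",
    offload_buffers=True,  # offload buffers (non-parameter tensors) too, not only parameters
)
```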
-
- 29 Feb, 2024 6 commits
-
-
Lucain authored
-
NielsRogge authored
Fix issue
-
Yih-Dar authored
* more fixes * more fixes --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Younes Belkada authored
Update test_modeling_llama.py
-
Younes Belkada authored
fix failing tests for peft integration
-
Younes Belkada authored
change starcoder2 path to correct one
-
- 28 Feb, 2024 11 commits
-
-
Michael authored
* [i18n-zh] Sync source/zh/index.md * apply review comments
-
fxmarty authored
* better unmask implementation * comment * typo * bug report pytorch * cleanup * fix import * add back example * retrigger ci * come on
-
Marc Sun authored
* [CI] Quantization workflow * build dockerfile * fix dockerfile * update self-scheduled.yml * test build dockerfile on push * fix torch install * update to python 3.10 * update aqlm version * uncomment build dockerfile * tests if the scheduler works * fix docker * do not trigger on push again * add additional runs * test again * all good * style * Update .github/workflows/self-scheduled.yml Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com> * test build dockerfile with torch 2.2.0 * fix extra * clean * revert changes * Revert "revert changes" This reverts commit 4cb52b8822da9d1786a821a33e867e4fcc00d8fd. * revert correct change --------- Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
-
jiqing-feng authored
Co-authored-by: Joao Gante <joao@huggingface.co>
-
Daniel Han authored
* Update modeling_llama.py Llama - Force float32 since bfloat16 loses precision on long contexts * Update modeling_llama.py * Update modeling_gemma.py Fix RoPE and logits.float() * @torch.no_grad() * @torch.no_grad() * Cos, Sin to float32 * cos, sin to float32 * Update src/transformers/models/gemma/modeling_gemma.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update src/transformers/models/llama/modeling_llama.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * Resolve PR conflicts * Fix RoPE for llama * Revert "Fix RoPE for llama" This reverts commit b860a22dab9bb01cd15cb9a3220abeaefad3e458. * Fix RoPE for llama * RoPE device * Autocast device type * RoPE * RoPE isinstance --------- Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
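The underlying idea, sketched below in hedged form (not the actual modeling code): the rotary-embedding cos/sin tables are computed in float32 with autocast disabled for that region, and only cast back to the working dtype at the end, because large position-times-frequency products lose precision in bfloat16 at long context lengths.

```python
import torch

def rope_cos_sin(inv_freq, position_ids, dtype):
    """Illustrative sketch: compute RoPE cos/sin in float32, then cast back."""
    device_type = position_ids.device.type
    # Some PyTorch versions do not accept "mps" for autocast, so fall back to "cpu" there.
    device_type = device_type if device_type != "mps" else "cpu"
    with torch.autocast(device_type=device_type, enabled=False):
        # Keep the angle computation in float32 for numerical accuracy.
        freqs = position_ids[..., None].float() * inv_freq.float()  # (batch, seq_len, dim/2)
        emb = torch.cat((freqs, freqs), dim=-1)
        cos, sin = emb.cos(), emb.sin()
    # Cast back to the model's working dtype (e.g. bfloat16) only at the end.
    return cos.to(dtype=dtype), sin.to(dtype=dtype)
```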
-
Joao Gante authored
-
Leonardo Emili authored
* Set output_router_logits=False in prepare_inputs_for_generation for mixtral * Add output_router_logits=False to prepare_inputs_for_generation for mixtral * Fix style
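In effect (hypothetical sketch, not the literal diff), the inputs prepared for generation explicitly disable router logits, since the load-balancing auxiliary loss they feed is only needed during training and computing them at every decoding step wastes memory.

```python
def prepare_generation_inputs(input_ids, past_key_values=None):
    """Hypothetical sketch of the shape of the fix in prepare_inputs_for_generation."""
    return {
        "input_ids": input_ids,
        "past_key_values": past_key_values,
        "use_cache": True,
        # Router logits only matter for the load-balancing aux loss at training time,
        # so they are switched off for generation.
        "output_router_logits": False,
    }
```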
-
Arthur authored
* remove control flow * update gptneox * update .... * nits * Actually let's just break. Otherwise we are silently failing which imo is not optimal * version BC * fix tests * fix eager causal * nit * add a test * style * nits * nits * more nits for the test * update and fix * make sure cuda graphs are not skipped * read token is needed for meta llama * update! * fixup * compile test should be slow * fix the fix-copies * style 🫠
-
Arthur authored
* remove warning * add co-author * update --------- Co-authored-by: hiaoxui <hiaoxui@users.noreply.github.com>
-
Arthur authored
fix wrapper
-
fxmarty authored
* remove numpy usage from owlvit * fix init owlv2 * style
-