"vscode:/vscode.git/clone" did not exist on "5e620a92cf7e6c312435db86ec55e13b75dece75"
- 22 Sep, 2021 6 commits
-
Lysandre Debut authored
* Patch training arguments issue
* Update src/transformers/training_args.py
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Gunjan Chhablani authored
-
Yih-Dar authored
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
MocktaiLEngineer authored
* Raise exceptions instead of using assertions for control flow #12789
* # coding=utf-8
* Raise exceptions instead of using assertions for control flow
* Raise exceptions instead of using assertions for control flow
* Update src/transformers/tokenization_utils.py
  Raise exceptions instead of using assertions for control flow
  Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Update src/transformers/tokenization_utils.py
  Raise exceptions instead of using assertions for control flow
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Raise exceptions instead of using assertions for control flow
* test
* Raise exceptions instead of using assertions for control flow

Co-authored-by: MocktaiLEngineer <kavinarasu22@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
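For context, the pattern applied throughout this PR replaces `assert` statements (which are stripped when Python runs with `-O`) with explicit exceptions. A minimal sketch of the before/after shape, with hypothetical names:

```python
def truncate(ids, max_length):
    # Before: control flow via assertion; silently skipped under `python -O`.
    # assert len(ids) <= max_length, "Sequence too long"

    # After: raise an explicit exception so the check always runs.
    if len(ids) > max_length:
        raise ValueError(
            f"Sequence length {len(ids)} exceeds max_length ({max_length})."
        )
    return ids
```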
-
Sylvain Gugger authored
* Make gradient_checkpointing a training argument
* Update src/transformers/modeling_utils.py
  Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Update src/transformers/configuration_utils.py
  Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Fix tests
* Style
* document Gradient Checkpointing as a performance feature
* Small rename
* PoC for not using the config
* Adapt BC to new PoC
* Forgot to save
* Rollout changes to all other models
* Fix typo

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas@stason.org>
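With this change, gradient checkpointing is switched on from `TrainingArguments` rather than the model config. A minimal sketch of the two entry points (the checkpoint name is just an example):

```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# gradient_checkpointing is now a training argument instead of a config flag;
# it trades extra compute for memory by re-running forward passes in backward.
args = TrainingArguments(output_dir="out", gradient_checkpointing=True)

# The equivalent imperative toggle on the model itself:
model.gradient_checkpointing_enable()
```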
-
Anton Lozhkov authored
* Force dtype, add tests
* Local torch imports
* Remove unused logic (always ndarray)
-
- 21 Sep, 2021 12 commits
-
Patrick von Platen authored
* up
* up
-
Patrick von Platen authored
-
Kamal Raj authored
Convert conv kernel_size to a tuple; Flax version 0.3.5 made this a breaking change: https://github.com/google/flax/releases/tag/v0.3.5
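For context: from Flax 0.3.5 on, `flax.linen.Conv` expects `kernel_size` as a sequence of ints rather than a bare int. A minimal sketch of the change:

```python
import flax.linen as nn

# Before Flax 0.3.5, an int was accepted:
# conv = nn.Conv(features=256, kernel_size=3)

# From 0.3.5 on, kernel_size must be a tuple (or other sequence):
conv = nn.Conv(features=256, kernel_size=(3,))
```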
-
Sylvain Gugger authored
-
Nishant Prabhu authored
* Add support for exporting PyTorch LayoutLM to ONNX
* Added tests for converting LayoutLM to ONNX
* Add support for exporting PyTorch LayoutLM to ONNX
* Added tests for converting LayoutLM to ONNX
* cleanup
* Removed regression/ folder
* Add support for exporting PyTorch LayoutLM to ONNX
* Added tests for converting LayoutLM to ONNX
* cleanup
* Fixed import error
* Remove unnecessary import statements
* Changed max_2d_positions from class variable to instance variable of the config class
* Add support for exporting PyTorch LayoutLM to ONNX
* Added tests for converting LayoutLM to ONNX
* cleanup
* Add support for exporting PyTorch LayoutLM to ONNX
* cleanup
* Fixed import error
* Changed max_2d_positions from class variable to instance variable of the config class
* Use super class generate_dummy_inputs method
  Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
* Add support for Masked LM, sequence classification and token classification
  Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
* Removed unnecessary import and method
* Fixed code styling
* Raise error if PyTorch is not installed
* Remove unnecessary import statement

Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
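A rough usage sketch of the export this PR enables, assuming the `transformers.onnx` API of that era (the exact config class name and export signature are taken on good faith from the PR description):

```python
from pathlib import Path

from transformers import AutoTokenizer, LayoutLMModel
from transformers.models.layoutlm import LayoutLMOnnxConfig
from transformers.onnx import export

tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

# Build the ONNX config from the model config, then export with the default opset.
onnx_config = LayoutLMOnnxConfig(model.config)
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("layoutlm.onnx")
)
```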
-
Sylvain Gugger authored
* Add push_to_hub to no_trainer examples
* Quality
* Document integration
* Roll out to other examples
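The `no_trainer` scripts don't go through `Trainer`, so they push via `huggingface_hub` directly. A rough sketch of the pattern, assuming the `Repository` API of that era (the repo id is hypothetical):

```python
from huggingface_hub import Repository

# Clone (or reuse) the target Hub repo into the output directory, then push
# whatever the training loop saved there at the end of training.
repo = Repository("out", clone_from="my-user/my-finetuned-model")
# ... training loop saves model/tokenizer files into "out" ...
repo.push_to_hub(commit_message="End of training")
```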
-
Stas Bekman authored
-
Anton Lozhkov authored
* Test np padding
* Pass feature extraction tests
* Update type hints
* Fix flaky integration tests
* Try a more stable waveform
* Add to_numpy jax support
* int32 attention masks
* Refactor normalization tests
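For context, the padding exercised here is NumPy-side padding in the speech feature extractors. A small illustration using `Wav2Vec2FeatureExtractor` (chosen as a representative example; the commit touches the feature-extraction tests generally):

```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0
)

# Two raw waveforms of different lengths; padding=True pads to the longest,
# and return_tensors="np" keeps everything as NumPy arrays end to end.
speech = [np.zeros(8000, dtype=np.float32), np.zeros(16000, dtype=np.float32)]
batch = extractor(speech, sampling_rate=16000, padding=True, return_tensors="np")
print(batch["input_values"].shape)  # (2, 16000)
```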
-
Kamal Raj authored
-
Kamal Raj authored
* flax qa example
* Updated README: Added Large model
* added utils_qa.py FULL_COPIES
* Updates:
  1. Copyright Year updated
  2. added dtype arg
  3. passing seed and dtype to load model
  4. Check eval flag before running eval
* updated README
* updated code comment
-
Kamal Raj authored
* beit-flax
* updated FLAX_BEIT_MLM_DOCSTRING
* removed bool_masked_pos from classification
* updated Copyright
* code refactoring: x -> embeddings
* updated test: rm from_pt
* Update docs/source/model_doc/beit.rst
* model code dtype updates and other changes according to review
* relative_position_bias revert back to pytorch design
-
Patrick von Platen authored
* upload
* correct
* correct
* correct
* finish
* up
* up
* up again
-
- 20 Sep, 2021 10 commits
-
flozi00 authored
-
Lowin authored
Fix incorrect variables in the run example
-
Sylvain Gugger authored
* Dynamic model
* Use defensive flag
* Style
* Doc and arg rename
* Arg rename
* Add tests
* Apply suggestions from code review
  Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Apply suggestions from code review
  Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Address review comments
* Apply suggestions from code review
  Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
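This is the PR that lets a checkpoint ship its own modeling code on the Hub, executed only behind an explicit opt-in. A sketch of the user-facing side (the repo id is hypothetical):

```python
from transformers import AutoModel

# trust_remote_code=True is the defensive flag mentioned above: the custom
# modeling code stored in the repo is only downloaded and executed when the
# user explicitly opts in.
model = AutoModel.from_pretrained("my-user/my-custom-model", trust_remote_code=True)
```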
-
flozi00 authored
-
Stas Bekman authored
* [megatron_gpt2] checkpoint v3
* bug fix
* fixes
* switch to default from - which is what the current megatron-lm uses
* cleanup
* back compat
-
Kamal Raj authored
Fixed expand_dims axis
-
Ayaka Mikazuki authored
* Fix MT5 documentation
  The abstract is incomplete
* MT5 -> mT5
-
Chengjiang Li authored
-
Gunjan Chhablani authored
* Init FNet
* Update config
* Fix config
* Update model classes
* Update tokenizers to use sentencepiece
* Fix errors in model
* Fix defaults in config
* Remove position embedding type completely
* Fix typo and take only real numbers
* Fix type vocab size in configuration
* Add projection layer to embeddings
* Fix position ids bug in embeddings
* Add minor changes
* Add conversion script and remove CausalLM vestiges
* Fix conversion script
* Fix conversion script
* Remove CausalLM Test
* Update checkpoint names to dummy checkpoints
* Add tokenizer mapping
* Fix modeling file and corresponding tests
* Add tokenization test file
* Add PreTraining model test
* Make style and quality
* Make tokenization base tests work
* Update docs
* Add FastTokenizer tests
* Fix fast tokenizer special tokens
* Fix style and quality
* Remove load_tf_weights vestiges
* Add FNet to main README
* Fix configuration example indentation
* Comment tokenization slow test
* Fix style
* Add changes from review
* Fix style
* Remove bos and eos tokens from tokenizers
* Add tokenizer slow test, TPU transforms, NSP
* Add scipy check
* Add scipy availability check to test
* Fix tokenizer and use correct inputs
* Remove remaining TODOs
* Fix tests
* Fix tests
* Comment Fourier Test
* Uncomment Fourier Test
* Change to google checkpoint
* Add changes from review
* Fix activation function
* Fix model integration test
* Add more integration tests
* Add comparison steps to MLM integration test
* Fix style
* Add masked tokenization fix
* Improve mask tokenization fix
* Fix index docs
* Add changes from review
* Fix issue
* Fix failing import in test
* some more fixes
* correct fast tokenizer
* finalize
* make style
* Remove additional tokenization logic
* Set do_lower_case to False
* Allow keeping accents
* Fix tokenization test
* Fix FNet Tokenizer Fast
* fix tests
* make style
* Add tips to FNet docs

Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
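Usage follows the standard transformers pattern; a minimal sketch against the `google/fnet-base` checkpoint the PR switches to:

```python
from transformers import FNetModel, FNetTokenizer

tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
model = FNetModel.from_pretrained("google/fnet-base")

# FNet replaces self-attention with a Fourier mixing layer, so the forward
# pass takes the usual input_ids but produces no attention weights.
inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```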
-
Suraj Patil authored
-
- 17 Sep, 2021 9 commits
-
calpt authored
-
Lysandre Debut authored
-
Yih-Dar authored
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Alessandro Suglia authored
Co-authored-by: Alessandro Suglia <asuglia@fb.com>
-
Alex Hedges authored
-
Matt authored
* Removed misfiring warnings
* Revert "Removed misfiring warnings"
  This reverts commit cea90de325056b9c1cbcda2bd2613a785c1639ce.
* Retain the warning, but only when the user actually overrides things
* Fix accidentally breaking just about every model on the hub simultaneously
* Style pass
-
Li-Huai (Allan) Lin authored
* Fix special tokens not correctly tokenized
* Add testing
* Fix
* Fix
* Use user workflows instead of directly assigning variables
* Enable test of fast tokenizers
* Update test of canine tokenizer
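For context, the behavior under test: a token registered as special should come back as a single piece rather than being split. A small check, with a hypothetical added token:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": ["<new_token>"]})

# A correctly handled special token survives tokenization as one piece.
print(tokenizer.tokenize("hello <new_token> world"))
# expected: ['hello', '<new_token>', 'world']
```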
-
Patrick von Platen authored
* finish
* add test
* push
* remove unnecessary code
* up
* correct test
* Update src/transformers/training_args.py
-
Ibraheem Moosa authored
* Optimize token classification models for TPU
  As per the XLA documentation, XLA cannot handle masked indexing well, so token classification models for BERT and others use an implementation based on `torch.where`, which works well on TPU. The ALBERT token classification model uses masked indexing, which causes performance issues on TPU. This PR fixes the issue by following the BERT implementation.
* Same fix for ELECTRA
* Same fix for LayoutLM
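The fix swaps masked indexing for a `torch.where` that keeps tensor shapes static, which XLA compiles efficiently. A sketch of the pattern described above (shapes and variable names illustrative):

```python
import torch
from torch.nn import CrossEntropyLoss

loss_fct = CrossEntropyLoss()
logits = torch.randn(2, 5, 3)          # (batch, seq_len, num_labels)
labels = torch.randint(0, 3, (2, 5))   # token-level labels
attention_mask = torch.ones(2, 5)

active_loss = attention_mask.view(-1) == 1
# Instead of logits[active_loss] (a dynamic shape, slow under XLA), keep the
# full tensor and route inactive positions to ignore_index via torch.where.
active_labels = torch.where(
    active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)
)
loss = loss_fct(logits.view(-1, 3), active_labels)
```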
-
- 16 Sep, 2021 3 commits
-
Benjamin Davidson authored
* made tokenizer fully picklable
* remove whitespace
* added testcase
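The property the new test case guards is a round-trip through `pickle`, which matters for multiprocessing data loaders. A minimal sketch (the checkpoint is illustrative):

```python
import pickle

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# A fully picklable tokenizer survives dumps/loads and behaves identically.
restored = pickle.loads(pickle.dumps(tokenizer))
assert restored("hello world")["input_ids"] == tokenizer("hello world")["input_ids"]
```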
-
Sylvain Gugger authored
* Properly use test_fetcher for examples
* Fake example modification
* Fake modeling file modification
* Clean fake modifications
* Run example tests for any modification.
-
Stas Bekman authored
* [deepspeed] replaced deprecated init arg
* Trigger CI
-