"vscode:/vscode.git/clone" did not exist on "e30bbb268589d21923646238033a7046018004c2"
- 26 Aug, 2021 9 commits
-
Nicolas Patry authored
-
Nicolas Patry authored
-
Nicolas Patry authored
* Moving `summarization` pipeline to new testing format.
* Remove generate_kwargs from __init__ args.
-
Nicolas Patry authored
Moving question_answering tests to the new testing scheme; had to slightly tweak some ModelTesterConfig settings for pipelines (#13277)
* Moving question_answering tests to the new testing scheme; had to slightly tweak some ModelTesterConfig settings for pipelines.
* Removing commented code.
-
Nicolas Patry authored
-
Nicolas Patry authored
- Enforce `test_small_models_{tf,pt}` methods to exist (enforce checking actual values in small tests)
- Add support for non-RGB images in the pipeline.
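A minimal sketch of what non-RGB support implies, assuming the standard PIL conversion; the helper name here is illustrative, not the pipeline's actual code:

```python
from PIL import Image

def ensure_rgb(image: Image.Image) -> Image.Image:
    # Grayscale ("L") or RGBA inputs are coerced to 3-channel RGB so the
    # pipeline's feature extractor always receives the shape it expects.
    if image.mode != "RGB":
        image = image.convert("RGB")
    return image

image = ensure_rgb(Image.open("photo.png"))
```
-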
Bram Vanroy authored
* add error message concerning revision
* Update src/transformers/configuration_utils.py
* re-add double line endings
* is not None instead of implicit bool casting

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
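For context, `revision` pins `from_pretrained` to a specific git tag, branch, or commit of the model repo; a hedged sketch of the call the new error message concerns (checkpoint and revision are placeholders):

```python
from transformers import AutoConfig

# If the model or revision cannot be found, the error now explicitly
# points at the `revision` argument.
config = AutoConfig.from_pretrained("bert-base-uncased", revision="main")
```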
-
Stas Bekman authored
* fix tokenizer_class_from_name
* Update src/transformers/models/auto/tokenization_auto.py
* add test

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
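`tokenizer_class_from_name` is an internal helper in `tokenization_auto.py` that maps a class-name string to the tokenizer class; a sketch of what the added test presumably exercises (usage only, and the return behavior shown is an assumption):

```python
from transformers.models.auto.tokenization_auto import tokenizer_class_from_name

# Resolves the string name to the actual class via the auto mappings.
cls = tokenizer_class_from_name("BertTokenizerFast")
print(cls.__name__)  # BertTokenizerFast
```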
-
Nicolas Patry authored
* New test format for conversational.
* Putting back old mixin.
* Re-enabling auto tests with LazyLoading.
* Feature extraction tests.
* Remove feature-extraction.
* Feature extraction with feature_extractor (No pun intended).
* Update check_model_type for fill-mask.
-
- 25 Aug, 2021 8 commits
-
Lysandre Debut authored
-
Lysandre Debut authored
* Some tokenizers cannot be in the mapping
* Style
-
Lysandre Debut authored
-
Lysandre Debut authored
-
Lysandre Debut authored
-
Nishant Prabhu authored
-
Lysandre authored
-
Lysandre Debut authored
-
- 24 Aug, 2021 6 commits
-
Will Frey authored
If you're using type hints, then passing an `int` where a `float` is annotated is acceptable as per [PEP 484](https://www.python.org/dev/peps/pep-0484/#the-numeric-tower). This makes life a little nicer.
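A minimal illustration of the PEP 484 rule this relies on: `int` is treated as an implicit subtype of `float`, so callers need not write `1.0` where `1` is natural:

```python
def scale(value: float) -> float:
    # Per PEP 484's numeric tower, type checkers accept an int here.
    return value * 0.5

print(scale(2))    # 1.0, int argument is fine
print(scale(2.0))  # 1.0
```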
-
dependabot[bot] authored
Bumps [notebook](http://jupyter.org) from 6.1.5 to 6.4.1.

---
updated-dependencies:
- dependency-name: notebook
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
Ambesh Shekhar authored
* Adding custom errors and BatchSizeError for GPT2
* Changing Exception to BaseException, then back to Exception
* Adding args to the custom exception
* Changing conditional loop syntax
* Adding copyright info
* Handling check_code_quality (several iterations: black, isort, style_doc.py)
* Removing indentation
* Used ValueError instead of the custom exception
* Ran style doc on modeling_gpt2.py
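A hedged sketch of the pattern the entry converges on, raising `ValueError` rather than using `assert` or a custom exception; the check shown is illustrative, not the exact code in modeling_gpt2.py:

```python
def check_batch_size(batch_size: int) -> None:
    # An assert can be stripped by `python -O`; an explicit ValueError
    # always fires and carries a descriptive message.
    if batch_size <= 0:
        raise ValueError(f"batch_size has to be defined and > 0, got {batch_size}")

check_batch_size(8)  # passes silently
```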
-
Ori Ram authored
-
Stas Bekman authored
* fix AutoModel.from_pretrained(..., torch_dtype=...)
* fix to_diff_dict
* add better test
* torch is not always available when a model has self.torch_dtype
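The API being fixed, as a brief sketch (checkpoint name is a placeholder):

```python
import torch
from transformers import AutoModel

# Load the weights directly in half precision rather than the config default.
model = AutoModel.from_pretrained("bert-base-uncased", torch_dtype=torch.float16)
print(next(model.parameters()).dtype)  # torch.float16
```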
-
Bram Vanroy authored
* allow local_files_only for fast pretrained tokenizers
* make style
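A sketch of the now-supported call for fast tokenizers (checkpoint is a placeholder; the files must already be in the local cache):

```python
from transformers import AutoTokenizer

# With local_files_only=True no network request is made.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", local_files_only=True)
```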
-
- 23 Aug, 2021 9 commits
-
Lysandre Debut authored
-
Allan Lin authored
-
Kamal Raj authored
-
Yih-Dar authored
* make flax gpt2 working with cross attention
* Remove encoder->decoder projection layer
* A draft (incomplete) for FlaxEncoderDecoderModel
* Add the method from_encoder_decoder_pretrained + the docstrings
* Fix the mistakes of using EncoderDecoderModel
* Fix style
* Add FlaxEncoderDecoderModel to the library
* Fix cyclic imports
* Add FlaxEncoderDecoderModel to modeling_flax_auto.py
* Remove question comments
* add tests for FlaxEncoderDecoderModel
* add flax_encoder_decoder to the lists of ignored entries in check_repo.py
* fix missing required positional arguments
* Remove **kwargs when creating FlaxEncoderDecoderModel in from_encoder_decoder_pretrained(); also fix generation eos/pad tokens issue
* Fix: Use sequences from the generated_output
* Change a check from assert to raise ValueError
* Fix examples and token ids issues
* Fix missing all_cross_attentions when outputting tuple in modeling_gpt2
* Remove the changes in configuration docstrings.
* allow for bert 2 gpt2
* make fix-copies
* Apply suggestions from code review
* Change remaining examples to bert2gpt2
* Change the test to Bert2GPT2
* Fix examples
* Fix import
* Fix unpack bug
* Rename to FlaxEncoderDecoderModelTest and change the test to bert2gpt2
* Apply suggestions from code review
* Fix: NotImplentedError -> NotImplementedError
* Apply suggestions from code review
* up
* finalize

Co-authored-by: ydshieh <ydshieh@user.noreply>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
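A sketch of the new entry point, mirroring the bert2gpt2 examples the PR settles on (checkpoint choices are assumptions):

```python
from transformers import FlaxEncoderDecoderModel

# Combine a pretrained BERT encoder with a pretrained GPT-2 decoder; the
# decoder's cross-attention weights are newly initialized.
model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-cased", "gpt2"
)
```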
-
SaulLu authored
Change how "additional_special_tokens" argument in the ".from_pretrained" method of the tokenizer is taken into account (#13056) * add test * add change in PretrainedTokenizerBase * change Luke * deactivate * add the possibility to add additional special tokens for M2M100 * format * add special test for canine * proposed changes for mbart * proposed changes for mbart50 * proposed changes for byt5 * proposed changes for canine * proposed changes for t5 * test fast and slow * remove comment * remove comment * add fast version for all tests * replace break by continue * add more comments * add check to avoid duplicates * remove comment * format * proposed change for wave2vec2 * reverse changes mbart * uncomment * format
-
sourabh112 authored
-
Philipp Schmid authored
* Barrier -> barrier
* added logger for metrics
* removed stream handler in trainer
* moved handler
* removed streamhandler from trainer
* updated test image and instance type, added datasets version to test
* Update tests/sagemaker/scripts/pytorch/requirements.txt

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
-
NielsRogge authored
* Add min and max question length option to the tokenizer
* Add corresponding test
-
NielsRogge authored
-
- 20 Aug, 2021 1 commit
-
StevenTang1998 authored
* Fix the loss calculation of ProphetNet
* Fix the loss calculation of ProphetNet and remove warning
-
- 19 Aug, 2021 1 commit
-
Allan Lin authored
* Update torch.utils.data namespaces to the latest.
* Format
* Update Dataloader.
* Style
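The namespace change, sketched: import from the public `torch.utils.data` package instead of its private submodules.

```python
# Old, private-submodule style:
#   from torch.utils.data.dataset import Dataset
#   from torch.utils.data.dataloader import DataLoader

# Updated, public namespace:
from torch.utils.data import DataLoader, Dataset
```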
-
- 18 Aug, 2021 2 commits
-
Jannis Vamvas authored
-
Patrick von Platen authored
* up
* up
-
- 17 Aug, 2021 2 commits
-
Ori Ram authored
* splinter template
* initialize splinter classes
* Splinter Tokenizer
* splinter.rst
* tokenization fixes
* Documentation & some minor variable name changes
* bug fix (added back question_token_id to config) + variable names
* Minor bug fixes + variable name changes
* Fix Splinter references after merge with new transformers
* changes after running make style & quality
* Fix documentation unindent
* Fix doc indentation in tokenization_splinter
* Fix also SplinterTokenizerFast
* Add Splinter to index.rst and README
* Fix double whitespace in index.rst
* Fixed index.rst with 'make fix-copies'
* Update docs/source/model_doc/splinter.rst (several review suggestions)
* Update src/transformers/models/splinter/__init__.py
* Added "copied from BERT" comments
* Removing unnecessary code from modeling_splinter
* Update README.md
* Update src/transformers/models/splinter/configuration_splinter.py
* Remove references to TF modeling from splinter
* Update src/transformers/models/splinter/modeling_splinter.py (several review suggestions)
* Remove unnecessary check
* Add differences between Splinter and Bert tokenizers
* Update src/transformers/models/splinter/tokenization_splinter_fast.py
* Doc formatting
* Update src/transformers/models/splinter/tokenization_splinter.py
* bug fix: remove load_tf_weights attribute
* Some minor quality changes
* Change FullyConnectedLayer to SplinterFullyConnectedLayer
* Variable naming
* Remove gather_positions function
* Remove ClassificationHead as it's outdated
* Remove hardcoded 102 token id
* Minor style change
* Added "tau" organization to all model identifiers & URLs
* Added tau to the tests as well
* Copy-from comments
* Removed all unnecessary classes (e.g. SplinterForMaskedLM)
* Running make fix-copies
* Bug fix: further removed unnecessary classes
* Add Splinter to AutoTokenization
* Add an integration test for Splinter
* Removed initialize_new_qass from config - it will be done through different checkpoints
* Removed `initialize_new_qass` from documentation as well
* Added new checkpoint names (`tau/splinter-base-qass` and same for large) in the code
* Minor change to test
* SplinterTokenizer now doesn't abstract from BertTokenizer
* SplinterTokenizerFast also doesn't abstract from Bert
* style and quality
* bug fix: importing torch in tests only if it's available
* Auto mappings
* Changed copyrights in Splinter's files

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: yuvalkirstain <kirstain.yuval@gmail.com>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
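A sketch of loading the new model with the checkpoint named above; SplinterForQuestionAnswering matches the QA focus of the PR:

```python
from transformers import SplinterForQuestionAnswering, SplinterTokenizer

# tau/splinter-base-qass ships with the pretrained QASS head, so no
# initialize_new_qass flag is needed (it was removed from the config).
tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base-qass")
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-base-qass")
```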
-
Nicolas Patry authored
* Starting to optimize ByT5.
* Making ByT5Tokenizer faster.
* Even faster.
* Cleaning up.
-
- 16 Aug, 2021 2 commits
-
sararb authored
-
Lysandre Debut authored
* Continue on error
* Specific
* Temporary patch
-