- 11 Mar, 2021 16 commits
-
-
Sylvain Gugger authored
-
WybeKoper authored
* Fixed broken link * fixed max length violation Co-authored-by: WybeKoper <WybeKoper@users.noreply.github.com>
-
jeswan authored
* add deberta to pretraining mapping * add deberta_v2 to PRETRAINING_MAPPING
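A minimal sketch of what registering DeBERTa in the pretraining auto-mapping enables; the checkpoint name below is illustrative, not taken from this commit:

```python
from transformers import AutoModelForPreTraining

# With DeBERTa (and DeBERTa-v2) in the pretraining mapping, the auto class
# can resolve the right head from a checkpoint name.
model = AutoModelForPreTraining.from_pretrained("microsoft/deberta-base")
```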
-
Lysandre Debut authored
-
Sylvain Gugger authored
* PoC * Fix slow tests for the PT1.8 Embedding problem
-
Funtowicz Morgan authored
* Allow passing kwargs to the model's from_pretrained when using pipeline. * Disable the use of past_key_values for GPT2 when exporting to ONNX. * style * Remove comment. * Appease the documentation gods * Fix style Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
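A rough sketch of the new behaviour, assuming the `model_kwargs` argument exposed by the pipeline factory; the checkpoint and the forwarded kwargs are illustrative:

```python
from transformers import pipeline

# Entries in model_kwargs are forwarded to the underlying model's
# from_pretrained call when the pipeline instantiates it.
generator = pipeline(
    "text-generation",
    model="gpt2",
    model_kwargs={"output_attentions": True},
)
print(generator("Hello, my name is", max_length=20)[0]["generated_text"])
```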
-
Lysandre Debut authored
-
Lysandre Debut authored
-
Lysandre Debut authored
* Adds a @require_torch to a test that requires it * Tokenizer too * Style
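For reference, a sketch of the decorator pattern these tests rely on (the test name is made up):

```python
from transformers.testing_utils import require_torch

@require_torch
def test_needs_torch():
    # Skipped automatically when PyTorch is not installed.
    import torch

    assert torch.tensor([1]).item() == 1
```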
-
Suraj Patil authored
-
Sylvain Gugger authored
* Remove special path for custom vocab files * Update src/transformers/tokenization_utils_base.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Expand error message Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Lysandre Debut authored
* S2S + M2M100 should be available in tokenization_auto * Requires sentencepiece * SentencePiece for S2T as well :)
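As a sketch, once the mapping is in place the tokenizer resolves through `AutoTokenizer`; the checkpoint name is an assumption and the sentencepiece extra must be installed:

```python
from transformers import AutoTokenizer

# Resolves to the M2M100 tokenizer via tokenization_auto; requires sentencepiece.
tokenizer = AutoTokenizer.from_pretrained("facebook/m2m100_418M")
print(tokenizer("Hello world")["input_ids"])
```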
-
Patrick von Platen authored
* add conversion script * add wav2vec2 xlsr models * finish * Update docs/source/model_doc/xlsr_wav2vec2.rst Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
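A minimal sketch of loading one of the XLSR checkpoints this adds; the checkpoint identifier follows the model docs and is an assumption:

```python
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53")

# Dummy 16 kHz audio; real use would load a waveform with torchaudio or soundfile.
inputs = feature_extractor([[0.0] * 16000], sampling_rate=16000, return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state
```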
-
Sylvain Gugger authored
-
ArvidYin authored
correct spelling error: 'nether'
-
Lysandre Debut authored
-
- 10 Mar, 2021 8 commits
-
-
Sylvain Gugger authored
-
Philipp Schmid authored
* renamed logging to hf_logging * changed logging from hf_logging to logging and logging to native_logging * removed everything trying to fix import Trainer error * adding imports again * added custom add_handler function to logging.py * make style * added remove_handler * added another conditional to assert
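A short sketch of the handler helpers this adds to `transformers.utils.logging`; the handler choice is illustrative:

```python
import logging

from transformers.utils import logging as hf_logging

# Attach a plain StreamHandler to the library's root logger, then detach it.
handler = logging.StreamHandler()
hf_logging.add_handler(handler)
hf_logging.get_logger("transformers").info("handler attached")
hf_logging.remove_handler(handler)
```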
-
Sylvain Gugger authored
-
Sylvain Gugger authored
* Move tokenizer files in each repo * Fix mBART50 tests * Fix mBART tests * Fix Marian tests * Update templates
-
Suraj Patil authored
* s2t * fix config * conversion script * fix import * add tokenizer * fix tok init * fix tokenizer * first version working * fix embeds * fix lm head * remove extra heads * fix convert script * handle encoder attn mask * style * better enc attn mask * override _prepare_attention_mask_for_generation * handle attn_masks in encoder and decoder * input_ids => input_features * enable use_cache * remove old code * expand embeddings if needed * remove logits bias * masked_lm_loss => loss * hack tokenizer to support feature processing * fix model_input_names * style * fix error message * doc * remove inputs_embeds * remove input_embeds * remove unnecessary docstring * quality * SpeechToText => Speech2Text * style * remove shared_embeds * subsample => conv * remove Speech2TextTransformerDecoderWrapper * update output_lengths formula * fix table * remove max_position_embeddings * update conversion scripts * add possibility to do upper case for now * add FeatureExtractor and Processor * add tests for extractor * require_torch_audio => require_torchaudio * add processor test * update import * remove classification head * attention mask is now 1D * update docstrings * attention mask should be of type long * handle attention mask from generate * always return attention_mask * fix test * style * doc * Speech2TextTransformer => Speech2Text * Speech2TextTransformerConfig => Speech2TextConfig * remove dummy_inputs * nit * style * multilingual tok * fix tokenizer * add tgt_lang setter * save lang_codes * fix tokenizer * add forced_bos_token_id to tokenizer * apply review suggestions * add torchaudio to extra deps * add speech deps to CI * fix dep * add libsndfile to ci * libsndfile1 * add speech to extras all * libsndfile1 -> libsndfile1 * libsndfile * libsndfile1-dev * apt update * add sudo to install * update deps table * install libsndfile1-dev on CI * tuple to list * init conv layer * add model tests * quality * add integration tests * skip_special_tokens * add speech_to_text_transformer in toctree * fix tokenizer * fix fp16 tests * add tokenizer tests * fix copyright * input_values => input_features * doc * add model in readme * doc * change checkpoint names * fix copyright * fix code example * add max_model_input_sizes in tokenizer * fix integration tests * add do_lower_case to tokenizer * remove clamp trick * fix "Add modeling imports here" * fix copyrights * fix tests * SpeechToTextTransformer => SpeechToText * fix naming * fix table formatting * fix typo * style * fix typos * remove speech dep from extras[testing] * fix copies * rename doc file * put imports under is_torch_available * run feat extract tests when torch is available * dummy objects for processor and extractor * fix imports in tests * fix import in modeling test * fix imports * fix torch import * fix imports again * fix positional embeddings * fix typo in import * adapt new extractor refactor * style * fix torchscript test * doc * doc * Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * fix docs, copied from, style * fix docstring * handle imports * remove speech from all extra deps * remove s2t from seq2seq lm mapping * better names * skip training tests * add install instructions * List => Tuple * doc * fix conversion script * fix urls * add instruction for libsndfile * fix fp16 test Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
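A hedged usage sketch of the new Speech2Text model and processor; the LibriSpeech checkpoint name and the dummy audio are assumptions, and torchaudio is needed for feature extraction:

```python
import numpy as np
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor

processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")

# One second of silence at 16 kHz stands in for a real waveform (load real audio
# with torchaudio or soundfile); the processor turns it into log-mel input_features.
speech = np.zeros(16000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```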
-
Sylvain Gugger authored
* Add new GLUE example with no Trainer. * Style * Address review comments
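The gist of the "no Trainer" pattern, as a simplified sketch; the dataset handling and loop are pared down and are not the exact contents of the new example script:

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
optimizer = AdamW(model.parameters(), lr=2e-5)

# A toy batch stands in for a GLUE DataLoader.
batch = tokenizer(["a first sentence", "a second sentence"], padding=True, return_tensors="pt")
labels = torch.tensor([0, 1])

model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```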
-
Suraj Patil authored
-
Allen Wang authored
Fixes an issue in `text-classification` where MNLI eval/test datasets are not being preprocessed. (#10621) * Fix MNLI tests * Linter fix
-
- 09 Mar, 2021 9 commits
-
-
Sylvain Gugger authored
* Fix tests of TrainerCallback * Update tests/test_trainer_callback.py Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
Sylvain Gugger authored
* Hotfix fairscale FSDP * Evaluation works * Save on process zero
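For reference, a sketch of how fairscale sharding is switched on through `TrainingArguments` in this period; the exact option string is an assumption:

```python
from transformers import TrainingArguments

# "zero_dp_3" selects fairscale's fully sharded data parallel wrapper;
# fairscale must be installed and the script launched with torch.distributed.
args = TrainingArguments(
    output_dir="output",
    sharded_ddp="zero_dp_3",
    per_device_train_batch_size=8,
)
```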
-
Bhadresh Savani authored
-
Philipp Schmid authored
* added sm to ua * update id * removed id * removed comments * added env variable * changed variable name * make quality happy * added sguggers feedback * make styling happy and remove brackets
-
Suraj Patil authored
-
Lysandre authored
-
Patrick von Platen authored
* save first version * finish refactor * finish refactor * correct naming * correct naming * shorter names * Update src/transformers/feature_extraction_common_utils.py Co-authored-by: Lysandre Debut <lysandre@huggingface.co> * change name * finish Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
Stas Bekman authored
* How to solve: Title level inconsistent * list chars
-
Lysandre Debut authored
* Pipeline tests should be slow * Temporarily mark some tests as slow * Temporarily mark Barthez tests as slow
-
- 08 Mar, 2021 7 commits
-
-
Ratthachat (Jung) authored
* Create modeling_tf_dpr.py * Add TFDPR * Add back TFPegasus, TFMarian, TFMBart, TFBlenderBot last commit accidentally deleted these 4 lines, so I recovered them * Add TFDPR * Add TFDPR * clean up some comments, add TF input-style doc string * Add TFDPR * Make return_dict=False as default * Fix return_dict bug (in .from_pretrained) * Add get_input_embeddings() * Create test_modeling_tf_dpr.py The current version already passes all 27 tests! Please see the test run at: https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing * fix quality * delete init weights * run fix copies * fix repo consis * del config_class, load_tf_weights They should be 'pytorch only' * add config_class back after removing it, test failed ... so totally only removing "use_tf_weights = None" on Lysandre suggestion * newline after .. note:: * import tf, np (Necessary for ModelIntegrationTest) * slow_test from_pretrained with from_pt=True At the moment we don't have TF weights (since we don't have an official TF model) Previously, I did not run slow tests, so I missed this bug * Add simple TFDPRModelIntegrationTest Note that this is just a test that TF and PyTorch give approx. the same output. However, I could not test with the official DPR repo's output yet * upload correct tf model * remove position_ids as missing keys * create modeling_tf_rag * add tests for tf * add tf tests * revert wrong pt commit * further refactor * further refactor * refactor * Update modeling_tf_rag.py - input_processing - fix prepare_input_for_generation (mostly fix generate bug) - bring back from_pretrained hack in order to test generate * delete colab pieces of code * Show case of greedy "generate" Temporarily change from beam_search test to greedy_search test to show that TF and PT do get equivalent output. * cosmetic update * correct typos * update * push some progress * make easy check * fix rag save from pretrained * Update src/transformers/modeling_tf_utils.py * remove commented out lines * delete unnecessary lines * add simple test case for nq_checkpoint Add nq_checkpoint test to show that current version without hack still fails * temporarily put ugly hack back again * Add TFRagSequenceForGeneration!! * __init__.py , import TFRagSequenceForGeneration * Add TFRagSequence tests! * rag init.py - add TFRagSequenceForGeneration * fix from_pretrained * fix prepare_inputs_for_generation * Beam search for RagToken! * minor clean up * add tf.cast in TFRagModel * More tf.cast * Add all remaining tests (still have issues) * delete all T5 related * make style * fix load weight prefix * fix bart * fix return_dict for tf_rag make all tests pass .. Hooray * fix some tests * fix code quality * fix quality check * finish tests tf rag * add tf rag to docs * remove TFT5 from docstring Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * remove TFT5 from docstring Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Delete outdated comments Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * improve doc strings * add generative model classes * fix adjust token logic * refactor generate for TFRag * using shape_list, not _get_shape Co-authored-by: Julien Plu <plu.julien@gmail.com> * axis=[1]->axis=1 * delete NEED_HELP comment * improve readability Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * improve readability Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * improve readability Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Indicating model is in a developing state in docstrings As suggested by Julien * small last changes * apply sylvains suggestions * finish tf rag Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: patrickvonplaten <patrick@huggingface.co> Co-authored-by: Julien Plu <plu.julien@gmail.com> Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
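A rough usage sketch of the TF RAG classes added here, mirroring the PyTorch RAG examples; the checkpoint names, the dummy index, and the `from_pt=True` conversion are assumptions (the retriever also needs `datasets` and `faiss` installed):

```python
from transformers import RagRetriever, RagTokenizer, TFRagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
# from_pt=True converts the published PyTorch weights on the fly.
model = TFRagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever, from_pt=True
)

# The question-encoder tokenizer prepares the query for retrieval and generation.
inputs = tokenizer.question_encoder("who holds the record in 100m freestyle", return_tensors="tf")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```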
-
Sylvain Gugger authored
* Check layer types for Optimizer construction * Duplicate class
-
Sylvain Gugger authored
This reverts commit b35e7b68.
-
Sylvain Gugger authored
This reverts commit a8ec52ef.
-
Sylvain Gugger authored
-
Sylvain Gugger authored
-
Stas Bekman authored
* batch 1 * this is tpu * deebert attempt * the rest
-