- 09 Mar, 2021 3 commits
-
-
Patrick von Platen authored
* save first version
* finish refactor
* finish refactor
* correct naming
* correct naming
* shorter names
* Update src/transformers/feature_extraction_common_utils.py
* change name
* finish

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
Stas Bekman authored
* How to solve: Title level inconsistent * list chars
-
Lysandre Debut authored
* Pipeline tests should be slow * Temporarily mark some tests as slow * Temporarily mark Barthez tests as slow
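Marking tests as slow follows the library's convention of gating expensive tests behind an environment variable. A minimal sketch of that pattern (`RUN_SLOW` matches the transformers convention; the test case itself is hypothetical):

```python
import os
import unittest

def slow(test_case):
    """Skip the decorated test unless RUN_SLOW is set to a truthy value."""
    run_slow = os.environ.get("RUN_SLOW", "0").lower() in ("1", "true", "yes")
    return unittest.skipUnless(run_slow, "test is slow; set RUN_SLOW=1 to run it")(test_case)

class ExamplePipelineTest(unittest.TestCase):
    @slow
    def test_heavy_pipeline(self):
        # Stands in for a test that downloads a model and runs a full pipeline.
        self.assertTrue(True)
```

Because the environment variable is read at decoration time, flipping `RUN_SLOW` after the module is imported has no effect, which is the usual caveat with this pattern.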
-
- 08 Mar, 2021 20 commits
-
-
Ratthachat (Jung) authored
* Create modeling_tf_dpr.py
* Add TFDPR
* Add back TFPegasus, TFMarian, TFMBart, TFBlenderBot (the last commit accidentally deleted these 4 lines, so I recovered them)
* Add TFDPR
* Add TFDPR
* clean up some comments, add TF input-style doc string
* Add TFDPR
* Make return_dict=False the default
* Fix return_dict bug (in .from_pretrained)
* Add get_input_embeddings()
* Create test_modeling_tf_dpr.py (the current version already passes all 27 tests; see the test run at https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing)
* fix quality
* delete init weights
* run fix copies
* fix repo consistency
* del config_class, load_tf_weights (they should be PyTorch-only)
* add config_class back (after removing it a test failed, so only "use_tf_weights = None" is removed, on Lysandre's suggestion)
* newline after .. note::
* import tf, np (necessary for ModelIntegrationTest)
* slow test from_pretrained with from_pt=True (at the moment we don't have TF weights, since there is no official TF model; previously I did not run the slow tests, so I missed this bug)
* Add simple TFDPRModelIntegrationTest (this only checks that TF and PyTorch give approximately the same output; I could not test against the official DPR repo's output yet)
* upload correct tf model
* remove position_ids from missing keys
* create modeling_tf_rag
* add tests for tf
* add tf tests
* revert wrong pt commit
* further refactor
* further refactor
* refactor
* Update modeling_tf_rag.py (input_processing; fix prepare_inputs_for_generation, mostly fixing a generate bug; bring back the from_pretrained hack in order to test generate)
* delete colab pieces of code
* Showcase greedy "generate" (temporarily change the beam_search test to a greedy_search test to show that TF and PT do produce equivalent output)
* cosmetic update
* correct typos
* update
* push some progress
* make easy check
* fix rag save from pretrained
* Update src/transformers/modeling_tf_utils.py
* remove commented-out lines
* delete unnecessary lines
* add simple test case for nq_checkpoint (shows that the current version without the hack still fails)
* temporarily put the ugly hack back
* Add TFRagSequenceForGeneration
* __init__.py: import TFRagSequenceForGeneration
* Add TFRagSequence tests
* rag __init__.py: add TFRagSequenceForGeneration
* fix from_pretrained
* fix prepare_inputs_for_generation
* Beam search for RagToken
* minor clean up
* add tf.cast in TFRagModel
* More tf.cast
* Add all remaining tests (still have issues)
* delete all T5-related code
* make style
* fix load weight prefix
* fix bart
* fix return_dict for tf_rag (make all tests pass)
* fix some tests
* fix code quality
* fix quality check
* finish tests tf rag
* add tf rag to docs
* remove TFT5 from docstring
* remove TFT5 from docstring
* Delete outdated comments
* improve doc strings
* add generative model classes
* fix adjust token logic
* refactor generate for TFRag
* use shape_list, not _get_shape
* axis=[1] -> axis=1
* delete NEED_HELP comment
* improve readability
* improve readability
* improve readability
* Indicate in the docstrings that the model is still in development (as suggested by Julien)
* small last changes
* apply Sylvain's suggestions
* finish tf rag

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: patrickvonplaten <patrick@huggingface.co>
Co-authored-by: Julien Plu <plu.julien@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
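Several steps in this TF port boil down to checking that the TensorFlow version and the PyTorch original produce approximately the same output. A framework-free sketch of that kind of tolerance check (the logits below are made-up numbers, not real model output):

```python
import math

def assert_allclose(a, b, atol=1e-4):
    """Elementwise comparison of two flat sequences of floats within an absolute tolerance."""
    assert len(a) == len(b), f"length mismatch: {len(a)} vs {len(b)}"
    for i, (x, y) in enumerate(zip(a, b)):
        assert math.isclose(x, y, abs_tol=atol), f"element {i}: {x} vs {y}"

# Hypothetical logits from the PyTorch and TensorFlow versions of the same checkpoint.
pt_logits = [0.1312, -0.5521, 2.0043]
tf_logits = [0.1311, -0.5520, 2.0044]
assert_allclose(pt_logits, tf_logits, atol=1e-3)
```

In practice the comparison runs over full output tensors, but the pass/fail criterion is the same elementwise tolerance.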
-
Sylvain Gugger authored
* Check layer types for Optimizer construction * Duplicate class
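The layer-type check relates to how the Trainer builds optimizer parameter groups: bias and LayerNorm parameters are exempted from weight decay. A name-based sketch of that grouping (the parameter names are hypothetical, and the real code inspects module types rather than name suffixes):

```python
# Parameters whose names end with these suffixes are exempt from weight decay.
NO_DECAY = ("bias", "LayerNorm.weight")

def group_parameters(param_names, weight_decay=0.01):
    """Split parameter names into a decayed group and a non-decayed group."""
    decay = [n for n in param_names if not any(n.endswith(nd) for nd in NO_DECAY)]
    no_decay = [n for n in param_names if any(n.endswith(nd) for nd in NO_DECAY)]
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]

names = [
    "encoder.layer.0.attention.weight",
    "encoder.layer.0.attention.bias",
    "encoder.layer.0.LayerNorm.weight",
]
groups = group_parameters(names)
```

Checking layer types instead of name suffixes is more robust, since a custom normalization layer may not contain "LayerNorm" in its parameter names.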
-
Sylvain Gugger authored
This reverts commit b35e7b68.
-
Sylvain Gugger authored
This reverts commit a8ec52ef.
-
Sylvain Gugger authored
-
Sylvain Gugger authored
-
Stas Bekman authored
* batch 1 * this is tpu * deebert attempt * the rest
-
Bhadresh Savani authored
* reverted changes of logging and saving metrics * added max_sample arguments * fixed code * white space diff * reformatting code * reformatted code
-
Stas Bekman authored
* fix sharded ddp enum * test fixes * stronger validation + apex breaks other tests
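The sharded-DDP fix concerns parsing the option string into an enum with stricter validation. A sketch of that shape (the option values mirror transformers' `ShardedDDPOption` names, but the specific validation rule shown here is illustrative):

```python
from enum import Enum

class ShardedDDPOption(Enum):
    SIMPLE = "simple"
    ZERO_DP_2 = "zero_dp_2"
    ZERO_DP_3 = "zero_dp_3"

def parse_sharded_ddp(arg):
    """Parse a space-separated option string, rejecting unknown or conflicting values."""
    opts = set()
    for token in arg.split():
        try:
            opts.add(ShardedDDPOption(token))
        except ValueError:
            raise ValueError(f"unknown sharded ddp option: {token!r}")
    if ShardedDDPOption.ZERO_DP_2 in opts and ShardedDDPOption.ZERO_DP_3 in opts:
        raise ValueError("zero_dp_2 and zero_dp_3 are mutually exclusive")
    return opts
```

Validating at parse time surfaces a bad `--sharded_ddp` combination immediately instead of failing deep inside training setup.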
-
Stas Bekman authored
* more readable test * add all the missing places * one more nltk * better exception check * revert
-
Sylvain Gugger authored
* Fix version control with anchors * Simplify
-
Stas Bekman authored
-
Mehrad Moradshahi authored
* Fix Marian decoding: the tokenizer's decode and batch_decode now accept a new argument (use_source_tokenizer) which indicates whether the source spm should be used to decode ids. This is useful for Marian models specifically when decoding source input ids.
* Adapt docstrings

Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
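The use_source_tokenizer flag selects which of Marian's two sentencepiece vocabularies decodes the ids. A toy sketch of the dispatch (the two vocabularies are made-up stand-ins for the real spm models):

```python
# Marian keeps separate source- and target-language vocabularies; decode()
# picks one based on the flag. These dicts are hypothetical stand-ins.
source_vocab = {0: "Hallo", 1: "Welt"}
target_vocab = {0: "Hello", 1: "world"}

def decode(ids, use_source_tokenizer=False):
    vocab = source_vocab if use_source_tokenizer else target_vocab
    return " ".join(vocab[i] for i in ids)

translated = decode([0, 1])                              # target-side decoding
round_trip = decode([0, 1], use_source_tokenizer=True)   # source-side decoding
```

Without the flag, decoding source input ids through the target vocabulary silently produces wrong text, which is the bug this commit addresses.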
-
Lysandre authored
-
Lysandre Debut authored
* Enable torch 1.8.0 in GPU CI * Disable torch-scatter
-
Suraj Patil authored
* fix tests * emb should be a parameter * fix positional embeddings * fix make_weights * don't save pos embeds * add comment to describe the clamping
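The positional-embedding fixes involve the standard fixed sinusoidal table and clamping of positions. A pure-Python sketch of both (the sizes are arbitrary, and the clamping rule shown is one plausible reading of the commit, not the exact implementation):

```python
import math

def sinusoidal_table(num_positions, dim):
    """Classic fixed sinusoidal position embeddings: sin on even dims, cos on odd."""
    table = []
    for pos in range(num_positions):
        row = []
        for i in range(dim):
            angle = pos / (10000 ** (2 * (i // 2) / dim))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        table.append(row)
    return table

def lookup(table, pos):
    # Clamp out-of-range positions to the last row instead of indexing past the table.
    return table[min(pos, len(table) - 1)]

emb = sinusoidal_table(num_positions=4, dim=8)
```

Since the table is deterministic, there is no need to save it in the checkpoint, which matches the "don't save pos embeds" step above.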
-
Oren Amsalem authored
-
Eunhyuk Shin authored
-
Stas Bekman authored
-
Yu authored
-
- 06 Mar, 2021 3 commits
-
-
Suraj Patil authored
* m2m_100
* no layernorm_embedding
* sinusoidal positional embeddings
* update pos embeddings
* add default config values
* tokenizer
* add conversion script
* fix config
* fix pos embed
* remove _float_tensor
* update tokenizer
* update lang codes
* handle lang codes
* fix pos embeds
* fix spm key
* put embedding weights on device
* remove qa and seq classification heads
* fix convert script
* lang codes on one line
* fix embeds
* fix tokenizer
* fix tokenizer
* add fast tokenizer
* style
* M2M100MT => M2M100
* fix copyright, style
* tokenizer converter
* vocab file
* remove fast tokenizer
* fix embeds
* fix tokenizer
* fix tests
* add tokenizer tests
* add integration test
* quality
* fix model name
* fix test
* doc
* doc
* fix doc
* add copied from statements
* fix tokenizer tests
* apply review suggestions
* fix urls
* fix shift_tokens_right
* apply review suggestions
* fix
* fix doc
* add lang code to id
* remove unused function
* update checkpoint names
* fix copy
* fix tokenizer
* fix checkpoint names
* fix merge issue
* style
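Among the M2M100 fixes above is shift_tokens_right, the standard seq2seq trick of deriving decoder inputs from the labels. A list-based sketch (the token ids and the decoder start-token id are hypothetical):

```python
# Decoder inputs are the labels shifted one position to the right, with the
# decoder start token prepended, so the model predicts token t from tokens < t.
def shift_tokens_right(input_ids, decoder_start_token_id):
    return [[decoder_start_token_id] + row[:-1] for row in input_ids]

labels = [[5, 6, 7, 2]]  # 2 stands in for the end-of-sequence token
decoder_input_ids = shift_tokens_right(labels, decoder_start_token_id=0)
```

For M2M100 the shift interacts with the language-code tokens mentioned above, which is why getting it right needed its own fix.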
-
Lysandre authored
-
Stas Bekman authored
* offline mode start
* add specific values
* fix fallback
* add test
* better values check and range
* test that actually works
* document the offline mode
* Apply suggestions from code review
* more strict check
* cleaner test
* pt-only test
* style

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
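The offline mode documented here is driven by an environment variable; when it is set, loaders fall back to the local cache instead of reaching the network. A sketch of the check (TRANSFORMERS_OFFLINE is the variable the feature documents; the fallback loader below is hypothetical):

```python
import os

def is_offline_mode():
    """Treat any truthy TRANSFORMERS_OFFLINE value as a request to avoid network calls."""
    return os.environ.get("TRANSFORMERS_OFFLINE", "0").upper() in ("1", "ON", "YES", "TRUE")

def load_resource(name):
    # Hypothetical loader: use the local cache when offline, otherwise download.
    if is_offline_mode():
        return f"cache/{name}"
    return f"downloaded/{name}"
```

Checking against a small set of accepted truthy values (rather than mere presence of the variable) is the "better values check" the commit mentions.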
-
- 05 Mar, 2021 11 commits
-
-
Daniel Hug authored
* Refactor checkpoint name in ALBERT and ALBERT_tf
* Refactor checkpoint name in BART and BART_tf
* Refactor checkpoint name in BERT generation
* Refactor checkpoint name in Blenderbot_tf
* Refactor checkpoint name in Blenderbot_small_tf
* Refactor checkpoint name in ConvBERT and ConvBERT_tf
* Refactor checkpoint name in CTRL and CTRL_tf
* Refactor checkpoint name in DistilBERT and DistilBERT_tf
* Refactor checkpoint name in DistilBERT redo
* Refactor checkpoint name in Electra and Electra_tf
* Refactor checkpoint name in FlauBERT and FlauBERT_tf
* Refactor checkpoint name in FSMT
* Refactor checkpoint name in GPT2 and GPT2_tf
* Refactor checkpoint name in IBERT
* Refactor checkpoint name in LED and LED_tf
* Refactor checkpoint name in Longformer and Longformer_tf
* Refactor checkpoint name in Lxmert and Lxmert_tf
* Refactor checkpoint name in Marian_tf
* Refactor checkpoint name in MBART and MBART_tf
* Refactor checkpoint name in MobileBERT and MobileBERT_tf
* Refactor checkpoint name in mpnet and mpnet_tf
* Refactor checkpoint name in openai and openai_tf
* Refactor checkpoint name in pegasus_tf
* Refactor checkpoint name in reformer
* Refactor checkpoint name in Roberta and Roberta_tf
* Refactor checkpoint name in SqueezeBert
* Refactor checkpoint name in Transformer_xl and Transformer_xl_tf
* Refactor checkpoint name in XLM and XLM_tf
* Refactor checkpoint name in XLNET and XLNET_tf
* Refactor checkpoint name in BERT_tf
* run make tests, style, quality, fixup
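The refactor above replaces a checkpoint string repeated across every docstring of a model file with a single module-level constant that gets interpolated in. A sketch of the pattern (the constant name, placeholder, and decorator are illustrative, not the library's exact helpers):

```python
# One shared constant instead of the same checkpoint string in every docstring.
_CHECKPOINT_FOR_DOC = "bert-base-uncased"

def add_checkpoint_to_docstring(fn):
    """Fill a {checkpoint} placeholder in the docstring with the shared constant."""
    if fn.__doc__:
        fn.__doc__ = fn.__doc__.format(checkpoint=_CHECKPOINT_FOR_DOC)
    return fn

@add_checkpoint_to_docstring
def forward(inputs):
    """Example usage loads the `{checkpoint}` checkpoint."""
    return inputs
```

With the string defined once, changing the documented checkpoint for a model touches one line instead of dozens of docstrings.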
-
Lysandre Debut authored
* Add stale bot to Github Actions * Update message * Message for assignee * Update scripts/stale.py * Uncomment & stop testing
-
Sylvain Gugger authored
* Fix embeddings for PyTorch 1.8 * Try with PyTorch 1.8.0 * Fix embeddings init * Fix copies * Typo * More typos
-
Chen Liang authored
DEBERTA_PRETRAINED_MODEL_ARCHIVE_LIST => DEBERTA_V2_PRETRAINED_MODEL_ARCHIVE_LIST in line 31.
-
Joakim Warholm authored
-
Lysandre Debut authored
* Only run one test * Patch segfault * Fix summarization pipeline * Ready for merge
-
Patrick von Platen authored
-
Nicolas Patry authored
-
Lysandre authored
-
lewtun authored
-
Lysandre authored
-
- 04 Mar, 2021 3 commits
-
-
Patrick von Platen authored
* first step to refactor * make all fast tests pass * make all slow tests pass * save intermediate * correct cache * finish PR * make fp16 work
-
Sylvain Gugger authored
* Rework TPU checkpointing in Trainer * Wraps the barrier in a dist test * Address review comments * Remove line
-
Philipp Schmid authored
* removed overwrites * remove default value for output_dir * adjusted typing
-