- 16 Jun, 2020 1 commit
-
-
Yacine Jernite authored
* add eli5 examples * add dense query script * query_di * merging * merging * add_utils * adds nearest neighbor wikipedia * batch queries * training_retriever * new notebooks * moved retriever training script * finished wiki40b * max_len_fix * train_s2s * retriever_batch_checkpointing * cleanup * merge * dim_fix * fix_indexer * fix_wiki40b_snippets * fix_embed_for_r * fp32 index * fix_sparse_q * joint_training * remove obsolete datasets * add_passage_nn_results * add_passage_nn_results * add_batch_nn * add_batch_nn * add_data_scripts * notebook * notebook * notebook * fix_multi_gpu * add_app * full_caching * full_caching * notebook * sparse_done * images * notebook * add_image_gif * with_Gif * add_contr_image * notebook * notebook * notebook * train_functions * notebook * min_retrieval_length * pandas_option * notebook * min_retrieval_length * notebook * notebook * eval_Retriever * notebook * images * notebook * add_example * add_example * notebook * fireworks * notebook * notebook * Joe's notebook comments * app_update * notebook * notebook_link * captions * notebook * adding RetriBert model * add RetriBert to Auto * change AutoLMHead to AutoSeq2Seq * notebook downloads from hf models * style_black * style_black * app_update * app_update * fix_app_update * style * style * isort * Delete WikiELI5training.ipynb * Delete evaluate_eli5.py * Delete WikiELI5explore.ipynb * Delete ExploreWikiELI5Support.html * Delete explainlikeimfive.py * Delete wiki_snippets.py * children before parent * children before parent * style_black * style_black_only * isort * isort_new * Update src/transformers/modeling_retribert.py Co-authored-by: Julien Chaumond <chaumond@gmail.com> * typo fixes * app_without_asset * cleanup * Delete ELI5animation.gif * Delete ELI5contrastive.svg * Delete ELI5wiki_index.svg * Delete choco_bis.svg * Delete fireworks.gif * Delete huggingface_logo.jpg * Delete huggingface_logo.svg * Delete Long_Form_Question_Answering_with_ELI5_and_Wikipedia.ipynb * Delete eli5_app.py * Delete eli5_utils.py * readme * Update README.md * unused imports * moved_info * default_beam * fine-tuned model * disclaimer * Update src/transformers/modeling_retribert.py Co-authored-by: Lysandre Debut <lysandre@huggingface.co> * black * add_doc * names * isort_Examples * isort_Examples * Add doc to index
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
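A minimal dense-retrieval sketch with the new RetriBertModel. The checkpoint name and the embed_questions/embed_answers methods are assumptions drawn from this PR, not a documented API guarantee:

```python
import torch
from transformers import RetriBertModel, RetriBertTokenizer

# Assumed checkpoint name; embed_questions/embed_answers per modeling_retribert.py.
tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")
model = RetriBertModel.from_pretrained("yjernite/retribert-base-uncased")

question = "Why does the sky change color at sunset?"
passages = [
    "Rayleigh scattering makes the sky appear blue during the day.",
    "At sunset, light crosses more atmosphere, scattering short wavelengths away.",
]

q = tokenizer(question, return_tensors="pt")
p = tokenizer(passages, padding=True, return_tensors="pt")

with torch.no_grad():
    q_emb = model.embed_questions(q.input_ids, q.attention_mask)  # (1, proj_dim)
    p_emb = model.embed_answers(p.input_ids, p.attention_mask)    # (2, proj_dim)

# Rank passages by inner product with the question embedding.
scores = (q_emb @ p_emb.T).squeeze(0)
print(scores.argsort(descending=True))
```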
-
- 15 Jun, 2020 1 commit
-
-
Sylvain Gugger authored
* Add `DistilBertForMultipleChoice`
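A hedged sketch of the multiple-choice API: inputs are shaped (batch_size, num_choices, seq_len) and the label is the index of the correct choice. Attribute-style outputs assume a recent transformers version:

```python
import torch
from transformers import DistilBertForMultipleChoice, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForMultipleChoice.from_pretrained("distilbert-base-uncased")

prompt = "The sky is blue because"
choices = ["of Rayleigh scattering.", "it reflects the ocean."]

# Encode each (prompt, choice) pair, then add the batch dimension:
# every tensor becomes (batch_size=1, num_choices=2, seq_len).
encoding = tokenizer([prompt] * len(choices), choices, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

labels = torch.tensor([0])  # index of the correct choice
outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits.shape)  # logits: (1, num_choices)
```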
-
- 12 Jun, 2020 3 commits
-
-
Suraj Patil authored
-
Sylvain Gugger authored
* Add AlbertForMultipleChoice * Bring up to date and add all models to common tests
-
Patrick von Platen authored
* first commit * add new auto models * better naming * fix bert automodel * fix automodel for pretraining * add models to init * fix name typo * fix typo * better naming * future warning instead of deprecation warning
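The message does not name the new auto classes; assuming this is the split of AutoModelWithLMHead into the task-specific classes found in current transformers, usage looks like:

```python
from transformers import (
    AutoModelForCausalLM,
    AutoModelForMaskedLM,
    AutoModelForSeq2SeqLM,
)

# Each auto class dispatches on the config of the checkpoint it is given.
clm = AutoModelForCausalLM.from_pretrained("gpt2")               # decoder-only LM
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")  # masked LM
s2s = AutoModelForSeq2SeqLM.from_pretrained("t5-small")          # encoder-decoder LM
```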
-
- 10 Jun, 2020 1 commit
-
-
Suraj Patil authored
* ElectraForQuestionAnswering * update __init__ * add test for electra qa model * add ElectraForQuestionAnswering in auto models * add ElectraForQuestionAnswering in all_model_classes * fix outputs, input_ids defaults to None * add ElectraForQuestionAnswering in docs * remove commented line
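A minimal span-extraction sketch with the new head. The base discriminator checkpoint is illustrative, and its QA head is randomly initialized until fine-tuned:

```python
import torch
from transformers import ElectraForQuestionAnswering, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForQuestionAnswering.from_pretrained("google/electra-small-discriminator")

question = "What does ELECTRA predict during pre-training?"
context = "ELECTRA trains a discriminator to detect replaced tokens."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely answer span from the start/end logits.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits)
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```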
-
- 09 Jun, 2020 1 commit
-
-
Sylvain Gugger authored
* Add XLMRobertaForQuestionAnswering * Formatting * Make test happy
-
- 02 Jun, 2020 1 commit
-
-
Julien Chaumond authored
* Kill model archive maps * Fixup * Also kill model_archive_map for MaskedBertPreTrainedModel * Unhook config_archive_map * Tokenizers: align with model id changes * make style && make quality * Fix CI
-
- 01 Jun, 2020 1 commit
-
-
Lysandre authored
-
- 29 May, 2020 1 commit
-
-
Patrick von Platen authored
* add multiple choice for longformer * add models to docs * adapt docstring * add test to longformer * add longformer for mc in init and modeling auto * fix tests
-
- 28 May, 2020 1 commit
-
-
Suraj Patil authored
-
- 27 May, 2020 1 commit
-
-
Suraj Patil authored
* LongformerForSequenceClassification * better naming x => hidden_states, fix typo in doc * Update src/transformers/modeling_longformer.py * Update src/transformers/modeling_longformer.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 25 May, 2020 1 commit
-
-
Suraj Patil authored
* added LongformerForQuestionAnswering * add LongformerForQuestionAnswering * fix import for LongformerForMaskedLM * add LongformerForQuestionAnswering * hardcoded sep_token_id * compute attention_mask if not provided * combine global_attention_mask with attention_mask when provided * update example in docstring * add assert error messages, better attention combine * add test for longformerForQuestionAnswering * typo * cast global_attention_mask to long * make style * Update src/transformers/configuration_longformer.py * Update src/transformers/configuration_longformer.py * fix the code quality * Merge branch 'longformer-for-question-answering' of https://github.com/patil-suraj/transformers into longformer-for-question-answering
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
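A hedged sketch; the TriviaQA checkpoint name is illustrative. Per the commit, a global_attention_mask covering the question tokens is computed automatically when none is passed, and is combined with attention_mask when provided:

```python
import torch
from transformers import LongformerForQuestionAnswering, LongformerTokenizer

ckpt = "allenai/longformer-large-4096-finetuned-triviaqa"  # illustrative checkpoint
tokenizer = LongformerTokenizer.from_pretrained(ckpt)
model = LongformerForQuestionAnswering.from_pretrained(ckpt)

question = "Who wrote Crime and Punishment?"
context = "Crime and Punishment is a novel by Fyodor Dostoevsky, published in 1866."
inputs = tokenizer(question, context, return_tensors="pt")

# No explicit global_attention_mask: the model assigns global attention to the
# question tokens (everything up to the first sep_token_id) on its own.
with torch.no_grad():
    outputs = model(**inputs)

start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits)
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```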
-
- 23 May, 2020 1 commit
-
-
Bharat Raghunathan authored
-
- 22 May, 2020 1 commit
-
-
Frankie Liuzzi authored
* added functionality for electra classification head * unneeded dropout * Test ELECTRA for sequence classification * Style
Co-authored-by: Frankie <frankie@frase.io>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
-
- 19 May, 2020 1 commit
-
-
Iz Beltagy authored
* first commit * bug fixes * better examples * undo padding * remove wrong VOCAB_FILES_NAMES * License * make style * make isort happy * unit tests * integration test * make `black` happy by undoing `isort` changes!! * lint * no need for the padding value * batch_size not bsz * remove unused type casting * seqlen not seq_len * staticmethod * `bert` selfattention instead of `n2` * uint8 instead of bool + lints * pad inputs_embeds using embeddings not a constant * black * unit test with padding * fix unit tests * remove redundant unit test * upload model weights * resolve todo * simpler _mask_invalid_locations without lru_cache + backward compatible masked_fill_ * increase unittest coverage
-
- 10 May, 2020 2 commits
-
-
flozi00 authored
-
Sam Shleifer authored
- MarianSentencepieceTokenizer -> MarianTokenizer - Start using unk token. - add docs page - add better generation params to MarianConfig - more conversion utilities
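A hedged translation sketch with the renamed MarianTokenizer; the Helsinki-NLP checkpoint name and the tokenizer call style assume a current transformers version:

```python
from transformers import MarianMTModel, MarianTokenizer

ckpt = "Helsinki-NLP/opus-mt-en-de"  # one of the converted Marian checkpoints
tokenizer = MarianTokenizer.from_pretrained(ckpt)
model = MarianMTModel.from_pretrained(ckpt)

batch = tokenizer(["I love reading documentation."], return_tensors="pt", padding=True)
# MarianConfig now ships with sensible generation defaults (beam search etc.).
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```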
-
- 07 May, 2020 2 commits
-
-
Jared T Nielsen authored
* Add AlbertForPreTraining and TFAlbertForPreTraining models. * PyTorch conversion * TensorFlow conversion * style
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
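A brief sketch of the two pre-training heads; the output attribute names (prediction_logits, sop_logits) are as in current transformers versions:

```python
import torch
from transformers import AlbertForPreTraining, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForPreTraining.from_pretrained("albert-base-v2")

text = "ALBERT pre-trains with masked LM and sentence-order prediction."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Two heads: masked-LM logits over the vocabulary, and a binary
# sentence-order-prediction (SOP) head.
print(outputs.prediction_logits.shape)  # (1, seq_len, vocab_size)
print(outputs.sop_logits.shape)         # (1, 2)
```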
-
Patrick von Platen authored
* first copy & paste commit from Bert and Morgan's LSH code * add easy way to compare to trax original code * translate most of function * make trax lsh self attention deterministic with numpy seed + copy paste code * add same config * add same config * make layer init work * implemented hash_vectors function for lsh attention * continue reformer translation * hf LSHSelfAttentionLayer gives same output as trax layer * refactor code * refactor code * refactor code * refactor * refactor + add reformer config * delete bogus file * split reformer attention layer into two layers * save intermediate step * save intermediate step * make test work * add complete reformer block layer * finish reformer layer * implement causal and self mask * clean reformer test and refactor code * fix merge conflicts * fix merge conflicts * update init * fix device for GPU * fix chunk length init for tests * include Morgan's optimization * improve memory a bit * improve comment * factorize num_buckets * better testing parameters * make whole model work * make lm model work * add t5 copy paste tokenizer * add chunking feed forward * clean config * add improved assert statements * make tokenizer work * improve test * correct typo * extend config * add more complex test * add new axial position embeddings * add local block attention layer * clean tests * refactor * better testing * save intermediate progress * clean test file * make shorter input length work for model * allow variable input length * refactor * make forward pass for pretrained model work * add generation possibility * finish dropout and init * make style * refactor * add first version of RevNet Layers * make forward pass work and add convert file * make uploaded model forward pass work * make uploaded model forward pass work * refactor code * add namedtuples and cache buckets * correct head masks * refactor * made reformer more flexible * make style * remove set max length * add attention masks * fix up tests * fix lsh attention mask * make random seed optional for the moment * improve memory in reformer * add tests * make style * make sure masks work correctly * detach gradients * save intermediate * correct backprop through gather * make style * change back num hashes * rename to labels * fix rotation shape * fix detach * update * fix trainer * fix backward dropout * make reformer more flexible * fix conflict * fix * fix * add tests for fixed seed in reformer layer * fix trainer typo * fix typo in activations * add fp16 tests * add fp16 training * support fp16 * correct gradient bug in reformer * add fast gelu * re-add dropout for embedding dropout * better naming * better naming * renaming * finalize test branch * finalize tests * add more tests * finish tests * fix * fix type trainer * fix fp16 tests * fix tests * fix tests * fix tests * fix issue with dropout * fix dropout seeds * correct random seed on gpu * finalize random seed for dropout * finalize random seed for dropout * remove duplicate line * correct half precision bug * make style * refactor * refactor * docstring * remove sinusoidal position encodings for reformer * move chunking to modeling_utils * make style * clean config * make style * fix tests * fix auto tests * pretrained models * fix docstring * update conversion file * Update pretrained_models.rst * fix rst * fix rst * update copyright * fix test path * fix test path * fix small issue in test * include reformer in generation tests * add docs for axial position encoding * finish docs * Update convert_reformer_trax_checkpoint_to_pytorch.py * remove isort * include Sam's comments * remove wrong comment in utils * correct typos * fix typo * Update reformer.rst * applied Morgan's optimization * make style * make gpu compatible * remove bogus file * big test refactor * add example for chunking * fix typo * add to README
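A hedged configuration sketch of the pieces this PR mentions (local/LSH attention layers, axial position embeddings, chunked feed forward); parameter names follow the current ReformerConfig and the values are illustrative:

```python
from transformers import ReformerConfig, ReformerModel

config = ReformerConfig(
    attn_layers=["local", "lsh", "local", "lsh"],  # mix local block and LSH attention layers
    axial_pos_shape=(64, 64),       # factors a 4096-token sequence for axial position embeddings
    axial_pos_embds_dim=(64, 192),  # the two axial embedding dims must sum to hidden_size
    hidden_size=256,
    num_hashes=2,                   # more hashing rounds -> more accurate but slower LSH attention
    chunk_size_feed_forward=64,     # apply the feed-forward layer in chunks to save memory
)
model = ReformerModel(config)
print(model.config.attn_layers)
```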
-
- 28 Apr, 2020 1 commit
-
-
Patrick von Platen authored
* change encoder decoder style to bart & t5 style * make encoder decoder generation dummy work for bert * make style * clean init config in encoder decoder * add tests for encoder decoder models * refactor and add last tests * refactor and add last tests * fix attn masks for bert encoder decoder * make style * refactor prepare inputs for Bert * refactor * finish encoder decoder * correct typo * add docstring to config * finish * add tests * better naming * make style * fix flake8 * clean docstring * make style * rename
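A minimal sketch of warm-starting an encoder-decoder in the new style; cross-attention weights are freshly initialized, so generations are meaningless until fine-tuning:

```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Warm-start both sides from BERT; the decoder gets causal masking and
# newly initialized cross-attention.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

inputs = tokenizer("An example sentence to encode.", return_tensors="pt")
generated = model.generate(
    inputs.input_ids, decoder_start_token_id=tokenizer.cls_token_id, max_length=20
)
print(tokenizer.decode(generated[0]))
```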
-
- 22 Apr, 2020 1 commit
-
-
Julien Chaumond authored
* doc * [tests] Add sample files for a regression task * [HUGE] Trainer * Feedback from @sshleifer * Feedback from @thomwolf + logging tweak * [file_utils] when downloading concurrently, get_from_cache will use the cached file for subsequent processes * [glue] Use default max_seq_length of 128 like before * [glue] move DataTrainingArguments around * [ner] Change interface of InputExample, and align run_{tf,pl} * Re-align the pl scripts a little bit * ner * [ner] Add integration test * Fix language_modeling with API tweak * [ci] Tweak loss target * Don't break console output * amp.initialize: model must be on right device before * [multiple-choice] update for Trainer * Re-align to 827d6d6e
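A self-contained toy sketch of the new Trainer API; per_device_train_batch_size assumes a recent transformers version (earlier releases used per_gpu_train_batch_size):

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

texts, labels = ["great movie", "terrible movie"], [1, 0]
enc = tokenizer(texts, padding=True, truncation=True)

class ToyDataset(torch.utils.data.Dataset):
    """Tiny in-memory dataset yielding dicts of tensors plus a 'labels' key."""
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

args = TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=2)
trainer = Trainer(model=model, args=args, train_dataset=ToyDataset())
trainer.train()
```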
-
- 03 Apr, 2020 1 commit
-
-
Lysandre Debut authored
* Electra wip * helpers * Electra wip * Electra v1 * ELECTRA may be saved/loaded * Generator & Discriminator * Embedding size instead of halving the hidden size * ELECTRA Tokenizer * Revert BERT helpers * ELECTRA Conversion script * Archive maps * PyTorch tests * Start fixing tests * Tests pass * Same configuration for both models * Compatible with base + large * Simplification + weight tying * Archives * Auto + Renaming to standard names * ELECTRA is uncased * Tests * Slight API changes * Update tests * wip * ElectraForTokenClassification * temp * Simpler arch + tests Removed ElectraForPreTraining which will be in a script * Conversion script * Auto model * Update links to S3 * Split ElectraForPreTraining and ElectraForTokenClassification * Actually test PreTraining model * Remove num_labels from configuration * wip * wip * From discriminator and generator to electra * Slight API changes * Better naming * TensorFlow ELECTRA tests * Accurate conversion script * Added to conversion script * Fast ELECTRA tokenizer * Style * Add ELECTRA to README * Modeling Pytorch Doc + Real style * TF Docs * Docs * Correct links * Correct model initialized * random fixes * style * Addressing Patrick's and Sam's comments * Correct links in docs
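A sketch of the replaced-token-detection objective with ElectraForPreTraining: positive logits flag tokens the discriminator believes were replaced by the generator:

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

# "ate" stands in for a generator replacement of "jumped".
sentence = "the quick brown fox ate over the lazy dog"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per token

# Print the tokens the discriminator flags as replaced.
tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])
for tok, score in zip(tokens, logits[0]):
    if score > 0:
        print(tok)
```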
-
- 26 Mar, 2020 1 commit
-
-
sakares saengkaew authored
* Add the missing token classification for XLM * fix styling * Add XLMForTokenClassification to AutoModelForTokenClassification class * Fix docstring typo for non-existing class * Add the missing token classification for XLM * fix styling * fix styling * Add XLMForTokenClassification to AutoModelForTokenClassification class * Fix docstring typo for non-existing class * Add missing description for AlbertForTokenClassification * fix styling * Add missing docstring for Albert * Slow tests should be slow
Co-authored-by: Sakares Saengkaew <s.sakares@gmail.com>
Co-authored-by: LysandreJik <lysandre.debut@reseau.eseo.fr>
-
- 19 Mar, 2020 1 commit
-
-
Patrick von Platen authored
* fix conflicts * update bart max length test * correct spelling mistakes * implemented model specific encode function * fix merge conflicts * better naming * save intermediate state -> need to rethink structure a bit * leave tf problem as it is for now * current version * add layers.pop * remove ipdb * make style * clean return cut decoding * remove ipdbs * Fix restoring layers in the decoder that don't exist. * push good intermediate solution for now * fix conflicts * always good to refuse to merge conflicts when rebasing * fix small bug * improve function calls * remove unused file * add correct scope behavior for t5_generate
Co-authored-by: Morgan Funtowicz <funtowiczmo@gmail.com>
-
- 06 Mar, 2020 1 commit
-
-
Funtowicz Morgan authored
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co> * Format & quality. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co> * Again. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
-
- 05 Mar, 2020 1 commit
-
-
Sam Shleifer authored
* improved documentation
-
- 24 Feb, 2020 1 commit
-
-
Lysandre Debut authored
-
- 23 Feb, 2020 1 commit
-
-
Martin Malmsten authored
* Added support for Albert in NER pipeline * Added command-line options to examples/ner/run_ner.py to better control tokenization * Added class AlbertForTokenClassification * Changed output for NerPipeline to use .convert_ids_to_tokens(...) instead of .decode(...) to better reflect tokens
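A minimal pipeline sketch; the BERT checkpoint below is illustrative, and any token-classification model, including an ALBERT one now that AlbertForTokenClassification exists, can be substituted:

```python
from transformers import pipeline

# Entities come back as dicts; tokens are recovered via convert_ids_to_tokens,
# as the commit describes.
ner = pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
for entity in ner("Hugging Face is based in New York City"):
    print(entity["word"], entity["entity"], round(entity["score"], 3))
```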
-
- 20 Feb, 2020 1 commit
-
-
Sam Shleifer authored
* Results same as fairseq * Wrote a ton of tests * Struggled with api signatures * added some docs
-
- 31 Jan, 2020 1 commit
-
-
Lysandre authored
The FlauBERT configuration class inherits from XLMConfig, so when loading through the Auto classes it was recognized as an XLM model: XLMConfig was checked before FlaubertConfig. Changing the order solves the problem, but a test should be added.
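A toy illustration of the ordering bug (stand-in classes, not the real configs): when a subclass configuration is checked after its parent, the parent branch always wins.

```python
class XLMConfig: ...
class FlaubertConfig(XLMConfig): ...

config = FlaubertConfig()

# Wrong: the parent class matches first, so the FlauBERT branch is unreachable.
for cls, name in [(XLMConfig, "xlm"), (FlaubertConfig, "flaubert")]:
    if isinstance(config, cls):
        print(name)  # prints "xlm"
        break

# Right: check children before parents.
for cls, name in [(FlaubertConfig, "flaubert"), (XLMConfig, "xlm")]:
    if isinstance(config, cls):
        print(name)  # prints "flaubert"
        break
```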
-
- 30 Jan, 2020 1 commit
-
-
Lysandre authored
-
- 27 Jan, 2020 4 commits
-
-
thomwolf authored
-
thomwolf authored
-
Julien Chaumond authored
-
Malte Pietsch authored
-
- 24 Jan, 2020 2 commits
- 22 Jan, 2020 1 commit
-
-
Julien Chaumond authored
Hat tip to @stefan-it
-
- 13 Jan, 2020 1 commit
-
-
Julien Chaumond authored
-