1. 07 May, 2021 1 commit
    • Add BigBirdPegasus (#10991) · dc3f6758
      Vasudev Gupta authored
      
      
      * init bigbird pegasus
      
      * add debugging nb ; update config
      
      * init conversion
      
      * update conversion script
      
      * complete conversion script
      
      * init forward()
      
      * complete forward()
      
      * add tokenizer
      
      * add some slow tests
      
      * commit current
      
      * fix copies
      
      * add docs
      
      * add conversion script for bigbird-roberta-summarization
      
      * remove TODO
      
      * small fixups
      
      * correct tokenizer
      
      * add bigbird core for now
      
      * fix config
      
      * fix more
      
      * revert pegasus-tokenizer back
      
      * make style
      
      * everything working for pubmed; yayy
      
      * complete tests finally
      
      * remove bigbird pegasus tok
      
      * correct tokenizer
      
      * correct tests
      
      * add tokenizer files
      
      * finish make style
      
      * fix test
      
      * update
      
      * make style
      
      * fix tok utils base file
      
      * make fix-copies
      
      * clean a bit
      
      * small update
      
      * fix some suggestions
      
      * add to readme
      
      * fix a bit, clean tests
      
      * fix more tests
      
      * Update src/transformers/__init__.py
      
      * Update src/transformers/__init__.py
      
      * make fix-copies
      
      * complete attn switching, auto-padding left
      
      * make style
      
      * fix auto-padding test
      
      * make style
      
      * fix batched attention tests
      
      * put tolerance at 1e-1 for stand-alone decoder test
      
      * fix docs
      
      * fix tests
      
      * correct slow tokenizer conversion
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * complete remaining suggestions
      
      * fix test
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      dc3f6758
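      For context, a minimal usage sketch of the model this commit adds (not part of the commit itself); the google/bigbird-pegasus-large-pubmed checkpoint name is assumed from the PubMed tests mentioned above.

        # Minimal sketch, assuming the google/bigbird-pegasus-large-pubmed checkpoint.
        from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration

        tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-pubmed")
        model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed")

        # Long inputs use block-sparse attention; short ones fall back to full attention
        # (the "attn switching, auto-padding" mentioned in the commit).
        inputs = tokenizer("Replace this with a long biomedical article ...", return_tensors="pt")
        summary_ids = model.generate(**inputs, max_length=64)
        print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))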
  2. 04 May, 2021 1 commit
  3. 03 May, 2021 1 commit
    • Add LUKE (#11223) · f3cf8ae7
      NielsRogge authored
      
      
      * Rebase with master
      
      * Minor bug fix in docs
      
      * Copy files from adding_luke_v2 and improve docs
      
      * change the default value of use_entity_aware_attention to True
      
      * remove word_hidden_states
      
      * fix head models
      
      * fix tests
      
      * fix the conversion script
      
      * add integration tests for the pretrained large model
      
      * improve docstring
      
      * Improve docs, make style
      
      * fix _init_weights for pytorch 1.8
      
      * improve docs
      
      * fix tokenizer to construct entity sequence with [MASK] entity when entities=None
      
      * Make fix-copies
      
      * Make style & quality
      
      * Bug fixes
      
      * Add LukeTokenizer to init
      
      * Address most comments by @patil-suraj and @LysandreJik
      
      * rename _compute_extended_attention_mask to get_extended_attention_mask
      
      * add comments to LukeSelfAttention
      
      * fix the documentation of the tokenizer
      
      * address comments by @patil-suraj, @LysandreJik, and @sgugger
      
      * improve docs
      
      * Make style, quality and fix-copies
      
      * Improve docs
      
      * fix docs
      
      * add "entity_span_classification" task
      
      * update example code for LukeForEntitySpanClassification
      
      * improve docs
      
      * improve docs
      
      * improve the code example in luke.rst
      
      * rename the classification layer in LukeForEntityClassification from typing to classifier
      
      * add bias to the classifier in LukeForEntitySpanClassification
      
      * update docs to use fine-tuned hub models in code examples of the head models
      
      * update the example sentences
      
      * Make style & quality
      
      * Add require_torch to tokenizer tests
      
      * Add require_torch to tokenizer tests
      
      * Address comments by @sgugger and add community notebooks
      
      * Make fix-copies
      Co-authored-by: Ikuya Yamada <ikuya@ikuya.net>
      f3cf8ae7
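      For context, a minimal sketch of the entity-aware tokenizer and head-model API this commit adds; the studio-ousia/luke-large-finetuned-open-entity checkpoint and the example sentence are assumptions, not part of the commit.

        # Minimal sketch, assuming the studio-ousia/luke-large-finetuned-open-entity checkpoint.
        from transformers import LukeTokenizer, LukeForEntityClassification

        tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-open-entity")
        model = LukeForEntityClassification.from_pretrained("studio-ousia/luke-large-finetuned-open-entity")

        text = "Beyoncé lives in Los Angeles."
        # Character span of "Beyoncé"; with entities=None the tokenizer inserts [MASK] entities for the spans.
        entity_spans = [(0, 7)]
        inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
        outputs = model(**inputs)
        predicted_class_idx = outputs.logits.argmax(-1).item()
        print(model.config.id2label[predicted_class_idx])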
  4. 26 Apr, 2021 1 commit
  5. 13 Apr, 2021 1 commit
  6. 08 Apr, 2021 2 commits
  7. 07 Apr, 2021 1 commit
  8. 10 Mar, 2021 1 commit
    • Speech2TextTransformer (#10175) · d26b37e7
      Suraj Patil authored
      
      
      * s2t
      
      * fix config
      
      * conversion script
      
      * fix import
      
      * add tokenizer
      
      * fix tok init
      
      * fix tokenizer
      
      * first version working
      
      * fix embeds
      
      * fix lm head
      
      * remove extra heads
      
      * fix convert script
      
      * handle encoder attn mask
      
      * style
      
      * better enc attn mask
      
      * override _prepare_attention_mask_for_generation
      
      * handle attn_maks in encoder and decoder
      
      * input_ids => input_features
      
      * enable use_cache
      
      * remove old code
      
      * expand embeddings if needed
      
      * remove logits bias
      
      * masked_lm_loss => loss
      
      * hack tokenizer to support feature processing
      
      * fix model_input_names
      
      * style
      
      * fix error message
      
      * doc
      
      * remove inputs_embeds
      
      * remove input_embeds
      
      * remove unnecessary docstring
      
      * quality
      
      * SpeechToText => Speech2Text
      
      * style
      
      * remove shared_embeds
      
      * subsample => conv
      
      * remove Speech2TextTransformerDecoderWrapper
      
      * update output_lengths formula
      
      * fix table
      
      * remove max_position_embeddings
      
      * update conversion scripts
      
      * add possibility to do upper case for now
      
      * add FeatureExtractor and Processor
      
      * add tests for extractor
      
      * require_torch_audio => require_torchaudio
      
      * add processor test
      
      * update import
      
      * remove classification head
      
      * attention mask is now 1D
      
      * update docstrings
      
      * attention mask should be of type long
      
      * handle attention mask from generate
      
      * always return attention_mask
      
      * fix test
      
      * style
      
      * doc
      
      * Speech2TextTransformer => Speech2Text
      
      * Speech2TextTransformerConfig => Speech2TextConfig
      
      * remove dummy_inputs
      
      * nit
      
      * style
      
      * multilingual tok
      
      * fix tokenizer
      
      * add tgt_lang setter
      
      * save lang_codes
      
      * fix tokenizer
      
      * add forced_bos_token_id to tokenizer
      
      * apply review suggestions
      
      * add torchaudio to extra deps
      
      * add speech deps to CI
      
      * fix dep
      
      * add libsndfile to ci
      
      * libsndfile1
      
      * add speech to extras all
      
      * libsndfile1 -> libsndfile1
      
      * libsndfile
      
      * libsndfile1-dev
      
      * apt update
      
      * add sudo to install
      
      * update deps table
      
      * install libsndfile1-dev on CI
      
      * tuple to list
      
      * init conv layer
      
      * add model tests
      
      * quality
      
      * add integration tests
      
      * skip_special_tokens
      
      * add speech_to_text_transformer in toctree
      
      * fix tokenizer
      
      * fix fp16 tests
      
      * add tokenizer tests
      
      * fix copyright
      
      * input_values => input_features
      
      * doc
      
      * add model in readme
      
      * doc
      
      * change checkpoint names
      
      * fix copyright
      
      * fix code example
      
      * add max_model_input_sizes in tokenizer
      
      * fix integration tests
      
      * add do_lower_case to tokenizer
      
      * remove clamp trick
      
      * fix "Add modeling imports here"
      
      * fix copyrights
      
      * fix tests
      
      * SpeechToTextTransformer => SpeechToText
      
      * fix naming
      
      * fix table formatting
      
      * fix typo
      
      * style
      
      * fix typos
      
      * remove speech dep from extras[testing]
      
      * fix copies
      
      * rename doc file,
      
      * put imports under is_torch_available
      
      * run feat extract tests when torch is available
      
      * dummy objects for processor and extractor
      
      * fix imports in tests
      
      * fix import in modeling test
      
      * fix imports
      
      * fix torch import
      
      * fix imports again
      
      * fix positional embeddings
      
      * fix typo in import
      
      * adapt new extractor refactor
      
      * style
      
      * fix torchscript test
      
      * doc
      
      * doc
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix docs, copied from, style
      
      * fix docstring
      
      * handle imports
      
      * remove speech from all extra deps
      
      * remove s2t from seq2seq lm mapping
      
      * better names
      
      * skip training tests
      
      * add install instructions
      
      * List => Tuple
      
      * doc
      
      * fix conversion script
      
      * fix urls
      
      * add instruction for libsndfile
      
      * fix fp16 test
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      d26b37e7
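      For context, a minimal sketch of the feature extractor / processor / model pipeline this commit adds; the facebook/s2t-small-librispeech-asr checkpoint and the dummy waveform are assumptions, and torchaudio (the dependency added to CI above) is required.

        # Minimal sketch, assuming the facebook/s2t-small-librispeech-asr checkpoint and torchaudio installed.
        import torch
        from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration

        processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
        model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")

        waveform = torch.randn(16000)  # stand-in for 1 second of 16 kHz speech
        inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
        generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
        print(processor.batch_decode(generated_ids, skip_special_tokens=True))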
  9. 08 Mar, 2021 1 commit
    • Add TFRag (#9002) · 696e8a43
      Ratthachat (Jung) authored
      * Create modeling_tf_dpr.py
      
      * Add TFDPR
      
      * Add back TFPegasus, TFMarian, TFMBart, TFBlenderBot
      
      the last commit accidentally deleted these 4 lines, so I recovered them
      
      * Add TFDPR
      
      * Add TFDPR
      
      * clean up some comments, add TF input-style doc string
      
      * Add TFDPR
      
      * Make return_dict=False as default
      
      * Fix return_dict bug (in .from_pretrained)
      
      * Add get_input_embeddings()
      
      * Create test_modeling_tf_dpr.py
      
      The current version already passes all 27 tests!
      Please see the test run at:
      https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing
      
      
      
      * fix quality
      
      * delete init weights
      
      * run fix copies
      
      * fix repo consis
      
      * del config_class, load_tf_weights
      
      They should be 'PyTorch only'
      
      * add config_class back
      
      after removing it, a test failed ... so only removing "use_tf_weights = None", per Lysandre's suggestion
      
      * newline after .. note::
      
      * import tf, np (Necessary for ModelIntegrationTest)
      
      * slow_test from_pretrained with from_pt=True
      
      At the moment we don't have TF weights (since we don't have an official TF model).
      Previously, I did not run the slow tests, so I missed this bug
      
      * Add simple TFDPRModelIntegrationTest
      
      Note that this just tests that TF and PyTorch give approximately the same output.
      However, I could not test against the official DPR repo's output yet
      
      * upload correct tf model
      
      * remove position_ids as missing keys
      
      * create modeling_tf_rag
      
      * add tests for tf
      
      * add tf tests
      
      * revert wrong pt commit
      
      * further refactor
      
      * further refactor
      
      * refactor
      
      * Update modeling_tf_rag.py
      
      - input_processing
      - fix prepare_input_for_generation (mostly fix generate bug)
      - bring back from_pretrained hack in order to test generate
      
      * delete colab pieces of code
      
      * Show case of greedy "generate"
      
      Temporarily change from the beam_search test to the greedy_search test to showcase that TF and PT produce equivalent output.
      
      * cosmetic update
      
      * correct typos
      
      * update
      
      * push some progress
      
      * make easy check
      
      * fix rag save from pretrained
      
      * Update src/transformers/modeling_tf_utils.py
      
      * remove commented out lines
      
      * delete unnecessary lines
      
      * add simple test case for nq_checkpoint
      
      Add nq_checkpoint test to show that current version without hack still fails
      
      * temporarily put ugly hack back again
      
      * Add TFRagSequenceForGeneration!!
      
      * __init__.py , import TFRagSequenceForGeneration
      
      * Add TFRagSequence tests!
      
      * rag init.py - add TFRagSequenceForGeneration
      
      * fix from_pretrained
      
      * fix prepare_inputs_for_generation
      
      * Beam search for RagToken!
      
      * minor clean up
      
      * add tf.cast in TFRagModel
      
      * More tf.cast
      
      * Add all remaining tests (still have issues)
      
      * delete all T5 related
      
      * make style
      
      * fix load weight prefix
      
      * fix bart
      
      * fix return_dict for tf_rag
      
      make all tests pass .. Hooray
      
      * fix some tests
      
      * fix code quality
      
      * fix quality check
      
      * finish tests tf rag
      
      * add tf rag to docs
      
      * remove TFT5 from docstring
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * remove TFT5 from docstring
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Delete outdated comments
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * improve doc strings
      
      * add generative model classes
      
      * fix adjust token logic
      
      * refactor generate for TFRag
      
      * using shape_list, not _get_shape
      Co-authored-by: Julien Plu <plu.julien@gmail.com>
      
      * axis=[1]->axis=1
      
      * delete NEED_HELP comment
      
      * improve readability
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * improve readability
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * improve readability
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Indicating model is in a developing state in docstrings
      
      As suggested by Julien
      
      * small last changes
      
      * apply sylvains suggestions
      
      * finish tf rag
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patrickvonplaten <patrick@huggingface.co>
      Co-authored-by: Julien Plu <plu.julien@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      696e8a43
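      For context, a minimal sketch of the TF RAG classes this commit adds; the facebook/rag-token-nq checkpoint and dummy index settings are assumptions (the retriever also needs the datasets and faiss packages), and from_pt=True follows the slow tests above.

        # Minimal sketch, assuming the facebook/rag-token-nq checkpoint and a dummy retrieval index.
        from transformers import RagTokenizer, RagRetriever, TFRagTokenForGeneration

        tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
        retriever = RagRetriever.from_pretrained(
            "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
        )
        model = TFRagTokenForGeneration.from_pretrained(
            "facebook/rag-token-nq", retriever=retriever, from_pt=True
        )

        inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="tf")
        generated = model.generate(input_ids=inputs["input_ids"])
        print(tokenizer.batch_decode(generated, skip_special_tokens=True))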
  10. 06 Mar, 2021 1 commit
    • Add m2m100 (#10236) · f6e74a63
      Suraj Patil authored
      * m2m_100
      
      * no layernorm_embedding
      
      * sinusoidal positional embeddings
      
      * update pos embeddings
      
      * add default config values
      
      * tokenizer
      
      * add conversion script
      
      * fix config
      
      * fix pos embed
      
      * remove _float_tensor
      
      * update tokenizer
      
      * update lang codes
      
      * handle lang codes
      
      * fix pos embeds
      
      * fix spm key
      
      * put embedding weights on device
      
      * remove qa and seq classification heads
      
      * fix convert script
      
      * lang codes on one line
      
      * fix embeds
      
      * fix tokenizer
      
      * fix tokenizer
      
      * add fast tokenizer
      
      * style
      
      * M2M100MT => M2M100
      
      * fix copyright, style
      
      * tokenizer converter
      
      * vocab file
      
      * remove fast tokenizer
      
      * fix embeds
      
      * fix tokenizer
      
      * fix tests
      
      * add tokenizer tests
      
      * add integration test
      
      * quality
      
      * fix model name
      
      * fix test
      
      * doc
      
      * doc
      
      * fix doc
      
      * add copied from statements
      
      * fix tokenizer tests
      
      * apply review suggestions
      
      * fix urls
      
      * fix shift_tokens_right
      
      * apply review suggestions
      
      * fix
      
      * fix doc
      
      * add lang code to id
      
      * remove unused function
      
      * update checkpoint names
      
      * fix copy
      
      * fix tokenizer
      
      * fix checkpoint names
      
      * fix merge issue
      
      * style
      f6e74a63
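      For context, a minimal sketch of the tokenizer workflow described above (src_lang/tgt_lang, forced_bos_token_id); the facebook/m2m100_418M checkpoint is an assumption.

        # Minimal sketch, assuming the facebook/m2m100_418M checkpoint.
        from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

        tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="fr")
        model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

        inputs = tokenizer("Life is like a box of chocolates.", return_tensors="pt")
        # Generation must be forced to start with the target language code.
        generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("fr"))
        print(tokenizer.batch_decode(generated, skip_special_tokens=True))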
  11. 25 Feb, 2021 1 commit
    • [PretrainedFeatureExtractor] + Wav2Vec2FeatureExtractor, Wav2Vec2Processor, Wav2Vec2Tokenizer (#10324) · cb38ffcc
      Patrick von Platen authored
      
      * push to show
      
      * small improvement
      
      * small improvement
      
      * Update src/transformers/feature_extraction_utils.py
      
      * Update src/transformers/feature_extraction_utils.py
      
      * implement base
      
      * add common tests
      
      * make all tests pass for wav2vec2
      
      * make padding work & add more tests
      
      * finalize feature extractor utils
      
      * add call method to feature extraction
      
      * finalize feature processor
      
      * finish tokenizer
      
      * finish general processor design
      
      * finish tests
      
      * typo
      
      * remove bogus file
      
      * finish docstring
      
      * add docs
      
      * finish docs
      
      * small fix
      
      * correct docs
      
      * save intermediate
      
      * load changes
      
      * apply changes
      
      * apply changes to doc
      
      * change tests
      
      * apply surajs recommendation
      
      * final changes
      
      * Apply suggestions from code review
      
      * fix typo
      
      * fix import
      
      * correct docstring
      cb38ffcc
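      For context, a minimal sketch of the new Wav2Vec2Processor wrapping the feature extractor and CTC tokenizer; the facebook/wav2vec2-base-960h checkpoint and the dummy waveform are assumptions.

        # Minimal sketch, assuming the facebook/wav2vec2-base-960h checkpoint and a random waveform.
        import torch
        from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

        processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
        model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

        waveform = torch.randn(16000)  # stand-in for 1 second of 16 kHz speech
        inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(inputs.input_values).logits
        predicted_ids = torch.argmax(logits, dim=-1)
        print(processor.batch_decode(predicted_ids))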
  12. 09 Feb, 2021 1 commit
  13. 04 Feb, 2021 1 commit
    • BartForCausalLM analogs to `ProphetNetForCausalLM` (#9128) · 00031785
      demSd authored
      
      
      * initialize bart4causalLM
      
      * create BartDecoderWrapper, setters/getters
      
      * delete spaces
      
      * forward and additional methods
      
      * update cache function, loss function, remove ngram* params in data class.
      
      * add bartcausallm, bartdecoder testing
      
      * correct bart for causal lm
      
      * remove at
      
      * add mbart as well
      
      * up
      
      * fix typo
      
      * up
      
      * correct
      
      * add pegasusforcausallm
      
      * add blenderbotforcausallm
      
      * add blenderbotsmallforcausallm
      
      * add marianforcausallm
      
      * add test for MarianForCausalLM
      
      * add Pegasus test
      
      * add BlenderbotSmall test
      
      * add blenderbot test
      
      * fix a fail
      
      * fix an import fail
      
      * a fix
      
      * fix
      
      * Update modeling_pegasus.py
      
      * fix models
      
      * fix inputs_embeds setting getter
      
      * adapt tests
      
      * correct repo utils check
      
      * finish test improvement
      
      * fix tf models as well
      
      * make style
      
      * make fix-copies
      
      * fix copies
      
      * run all tests
      
      * last changes
      
      * fix all tests
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      00031785
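      For context, a minimal sketch of the new standalone causal-LM decoder class; the facebook/bart-base checkpoint is an assumption (its encoder weights are simply ignored).

        # Minimal sketch, assuming the facebook/bart-base checkpoint.
        from transformers import AutoTokenizer, BartForCausalLM

        tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
        model = BartForCausalLM.from_pretrained("facebook/bart-base", add_cross_attention=False)
        assert model.config.is_decoder  # configured as a decoder-only LM

        inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
        outputs = model(**inputs)
        print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)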
  14. 27 Jan, 2021 1 commit
  15. 07 Jan, 2021 1 commit
  16. 06 Jan, 2021 2 commits
  17. 05 Jan, 2021 2 commits
    • [PyTorch Bart] Split Bart into different models (#9343) · eef66035
      Patrick von Platen authored
      * first try
      
      * remove old template
      
      * finish bart
      
      * finish mbart
      
      * delete unnecessary line
      
      * init pegasus
      
      * save intermediate
      
      * correct pegasus
      
      * finish pegasus
      
      * remove cookie cutter leftover
      
      * add marian
      
      * finish blenderbot
      
      * replace in file
      
      * correctly split blenderbot
      
      * delete "old" folder
      
      * correct "add statement"
      
      * adapt config for tf comp
      
      * correct configs for tf
      
      * remove ipdb
      
      * fix more stuff
      
      * fix mbart
      
      * push pegasus fix
      
      * fix mbart
      
      * more fixes
      
      * fix research projects code
      
      * finish docs for bart, mbart, and marian
      
      * delete unnecessary file
      
      * correct attn typo
      
      * correct configs
      
      * remove pegasus for seq class
      
      * correct peg docs
      
      * correct peg docs
      
      * finish configs
      
      * further improve docs
      
      * add copied from statements to mbart
      
      * fix copied from in mbart
      
      * add copy statements to marian
      
      * add copied from to marian
      
      * add pegasus copied from
      
      * finish pegasus
      
      * finish copied from
      
      * Apply suggestions from code review
      
      * make style
      
      * backward comp blenderbot
      
      * apply lysandres and sylvains suggestions
      
      * apply suggestions
      
      * push last fixes
      
      * fix docs
      
      * fix tok tests
      
      * fix imports code style
      
      * fix doc
      eef66035
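      For context, a minimal sketch of the per-model classes and modules after this split; the checkpoint name is the usual hub one, not from the commit.

        # Minimal sketch: each seq2seq model now has its own module and classes.
        from transformers import (
            BartForConditionalGeneration,
            MBartForConditionalGeneration,
            MarianMTModel,
            PegasusForConditionalGeneration,
            BlenderbotForConditionalGeneration,
        )
        # Model code no longer lives in a shared modeling_bart.py:
        from transformers.models.marian.modeling_marian import MarianMTModel as MarianFromModule

        model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")  # assumed checkpoint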
    • LED (#9278) · 189387e9
      Patrick von Platen authored
      * create model
      
      * add integration
      
      * save current state
      
      * make integration tests pass
      
      * add one more test
      
      * add explanation to tests
      
      * remove from bart
      
      * add padding
      
      * remove unnecessary test
      
      * make all tests pass
      
      * re-add cookie cutter tests
      
      * finish PyTorch
      
      * fix attention test
      
      * Update tests/test_modeling_common.py
      
      * revert change
      
      * remove unused file
      
      * add string to doc
      
      * save intermediate
      
      * make tf integration tests pass
      
      * finish tf
      
      * fix doc
      
      * fix docs again
      
      * add led to doctree
      
      * add to auto tokenizer
      
      * added tips for led
      
      * make style
      
      * apply jplus statements
      
      * correct tf longformer
      
      * apply lysandres suggestions
      
      * apply sylvains suggestions
      
      * Apply suggestions from code review
      189387e9
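      For context, a minimal sketch of LED usage with a global attention mask; the allenai/led-base-16384 checkpoint is an assumption.

        # Minimal sketch, assuming the allenai/led-base-16384 checkpoint.
        import torch
        from transformers import LEDTokenizer, LEDForConditionalGeneration

        tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
        model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

        inputs = tokenizer("Replace this with a very long document ...", return_tensors="pt")
        global_attention_mask = torch.zeros_like(inputs["input_ids"])
        global_attention_mask[:, 0] = 1  # global attention on the first token (<s>)
        summary_ids = model.generate(
            inputs["input_ids"], global_attention_mask=global_attention_mask, max_length=64
        )
        print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))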
  18. 04 Jan, 2021 2 commits
  19. 23 Dec, 2020 1 commit
  20. 22 Dec, 2020 1 commit
  21. 11 Dec, 2020 1 commit
  22. 10 Dec, 2020 1 commit
  23. 09 Dec, 2020 1 commit
    • [Bart] Refactor - fix issues, consistency with the library, naming (#8900) · 06971ac4
      Patrick von Platen authored
      * remove make on the fly linear embedding
      
      * start refactor
      
      * big first refactor
      
      * save intermediate
      
      * save intermediate
      
      * correct mask issue
      
      * save tests
      
      * refactor padding masks
      
      * make all tests pass
      
      * further refactor
      
      * make pegasus test pass
      
      * fix bool if
      
      * fix leftover tests
      
      * continue
      
      * bart renaming
      
      * delete torchscript test hack
      
      * fix imports in tests
      
      * correct shift
      
      * fix docs and repo cons
      
      * re-add fix for FSTM
      
      * typo in test
      
      * fix typo
      
      * fix another typo
      
      * continue
      
      * hot fix 2 for tf
      
      * small fixes
      
      * refactor types linting
      
      * continue
      
      * finish refactor
      
      * fix import in tests
      
      * better bart names
      
      * further refactor and add test
      
      * delete hack
      
      * apply sylvains and lysandres comments
      
      * small perf improv
      
      * further perf improv
      
      * improv perf
      
      * fix typo
      
      * make style
      
      * small perf improv
      06971ac4
  24. 30 Nov, 2020 1 commit
    • Add T5 Encoder for Feature Extraction (#8717) · 40ecaf0c
      Ahmed Elnaggar authored
      
      
      * Add T5 Encoder class for feature extraction
      
      * fix T5 encoder add_start_docstrings indent
      
      * update init with T5 encoder
      
      * update init with TFT5ModelEncoder
      
      * remove TFT5ModelEncoder
      
      * change T5ModelEncoder order in init
      
      * add T5ModelEncoder to transformers init
      
      * clean T5ModelEncoder
      
      * update init with TFT5ModelEncoder
      
      * add TFModelEncoder for Tensorflow
      
      * update init with TFT5ModelEncoder
      
      * Update src/transformers/models/t5/modeling_t5.py
      
      change output from Seq2SeqModelOutput to BaseModelOutput
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * remove encoder_outputs
      
      1. remove encoder_outputs from the function call.
      2. remove the encoder_outputs If statement.
      3. remove isinstance from return_dict.
      
      * Authorize missing decoder keys
      
      * remove unnecessary input parameters
      
      remove past_key_values and use_cache
      
      * remove use_cache
      
      remove use_cache from the forward method
      
      * add docstring for T5 encoder
      
      add docstring for T5 encoder with T5_ENCODER_INPUTS_DOCSTRING
      
      * change return_dict to dot access
      
      * add T5_ENCODER_INPUTS_DOCSTRING for TF T5
      
      * change TFT5Encoder output type to BaseModelOutput
      
      * remove unnecessary parameters for TFT5Encoder
      
      * remove unnecessary if statement
      
      * add import BaseModelOutput
      
      * fix BaseModelOutput typo to TFBaseModelOutput
      
      * update T5 doc with T5ModelEncoder
      
      * add T5ModelEncoder to tests
      
      * finish pytorch
      
      * finish docs and mt5
      
      * add mt5 to init
      
      * fix init
      
      * remove n_positions
      
      * finish PR
      
      * Update src/transformers/models/mt5/modeling_mt5.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/models/t5/modeling_t5.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/models/t5/modeling_tf_t5.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/models/mt5/modeling_tf_mt5.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * make style
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      40ecaf0c
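      For context, a minimal sketch of encoder-only feature extraction with the new class; the t5-small checkpoint is an assumption.

        # Minimal sketch, assuming the t5-small checkpoint.
        from transformers import T5Tokenizer, T5EncoderModel

        tokenizer = T5Tokenizer.from_pretrained("t5-small")
        model = T5EncoderModel.from_pretrained("t5-small")

        inputs = tokenizer("Studies have shown that owning a dog is good for you.", return_tensors="pt")
        outputs = model(input_ids=inputs["input_ids"])
        # BaseModelOutput: encoder hidden states only, no decoder or cross-attention.
        print(outputs.last_hidden_state.shape)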
  25. 25 Nov, 2020 1 commit
  26. 17 Nov, 2020 3 commits
    • Fix check repo utils (#8600) · 7f3b41a3
      Sylvain Gugger authored
      7f3b41a3
    • T5 & mT5 (#8552) · 86822a35
      Patrick von Platen authored
      * add mt5 and t5v1_1 model
      
      * fix tests
      
      * correct some imports
      
      * add tf model
      
      * finish tf t5
      
      * improve examples
      
      * fix copies
      
      * clean doc
      86822a35
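      For context, a minimal sketch of loading the new mT5 and T5v1.1 checkpoints; the hub names below follow the usual google/mt5-* and google/t5-v1_1-* conventions and are assumptions, and the untuned checkpoints will not produce meaningful text (this only exercises the API).

        # Minimal sketch, assuming the google/mt5-small and google/t5-v1_1-small checkpoints.
        from transformers import MT5ForConditionalGeneration, T5ForConditionalGeneration, T5Tokenizer

        mt5 = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
        t5_v1_1 = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-small")
        tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")

        inputs = tokenizer("summarize: transformers now ships T5v1.1 and mT5", return_tensors="pt")
        print(tokenizer.decode(mt5.generate(**inputs, max_length=20)[0], skip_special_tokens=True))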
    • Reorganize repo (#8580) · c89bdfbe
      Sylvain Gugger authored
      * Put models in subfolders
      
      * Styling
      
      * Fix imports in tests
      
      * More fixes in test imports
      
      * Sneaky hidden imports
      
      * Fix imports in doc files
      
      * More sneaky imports
      
      * Finish fixing tests
      
      * Fix examples
      
      * Fix path for copies
      
      * More fixes for examples
      
      * Fix dummy files
      
      * More fixes for example
      
      * More model import fixes
      
      * Is this why you're unhappy GitHub?
      
      * Fix imports in convert command
      c89bdfbe
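      For context, a minimal sketch of the import paths after this reorganization; BertModel is used only as an example.

        # Top-level imports are unchanged, while model code now lives under transformers.models.<name>.
        from transformers import BertModel                                      # unchanged public API
        from transformers.models.bert import BertModel as BertFromSubpackage    # new subfolder layout
        from transformers.models.bert.modeling_bert import BertModel as BertFromModule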
  27. 13 Nov, 2020 1 commit
    • Model templates encoder only (#8509) · 826f0457
      Lysandre Debut authored
      
      
      * Model templates
      
      * TensorFlow
      
      * Remove pooler
      
      * CI
      
      * Tokenizer + Refactoring
      
      * Encoder-Decoder
      
      * Let's go testing
      
      * Encoder-Decoder in TF
      
      * Let's go testing in TF
      
      * Documentation
      
      * README
      
      * Fixes
      
      * Better names
      
      * Style
      
      * Update docs
      
      * Choose to skip either TF or PT
      
      * Code quality fixes
      
      * Add to testing suite
      
      * Update file path
      
      * Cookiecutter path
      
      * Update `transformers` path
      
      * Handle rebasing
      
      * Remove seq2seq from model templates
      
      * Remove s2s config
      
      * Apply Sylvain and Patrick comments
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Last fixes from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      826f0457
  28. 12 Nov, 2020 1 commit
  29. 11 Nov, 2020 1 commit
    • Add TFDPR (#8203) · 026a2ff2
      Ratthachat (Jung) authored
      * Create modeling_tf_dpr.py
      
      * Add TFDPR
      
      * Add back TFPegasus, TFMarian, TFMBart, TFBlenderBot
      
      the last commit accidentally deleted these 4 lines, so I recovered them
      
      * Add TFDPR
      
      * Add TFDPR
      
      * clean up some comments, add TF input-style doc string
      
      * Add TFDPR
      
      * Make return_dict=False as default
      
      * Fix return_dict bug (in .from_pretrained)
      
      * Add get_input_embeddings()
      
      * Create test_modeling_tf_dpr.py
      
      The current version already passes all 27 tests!
      Please see the test run at:
      https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing
      
      
      
      * fix quality
      
      * delete init weights
      
      * run fix copies
      
      * fix repo consis
      
      * del config_class, load_tf_weights
      
      They should be 'PyTorch only'
      
      * add config_class back
      
      after removing it, a test failed ... so only removing "use_tf_weights = None", per Lysandre's suggestion
      
      * newline after .. note::
      
      * import tf, np (Necessary for ModelIntegrationTest)
      
      * slow_test from_pretrained with from_pt=True
      
      At the moment we don't have TF weights (since we don't have an official TF model).
      Previously, I did not run the slow tests, so I missed this bug
      
      * Add simple TFDPRModelIntegrationTest
      
      Note that this just tests that TF and PyTorch give approximately the same output.
      However, I could not test against the official DPR repo's output yet
      
      * upload correct tf model
      
      * remove position_ids as missing keys
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patrickvonplaten <patrick@huggingface.co>
      026a2ff2
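      For context, a minimal sketch of the TF DPR question encoder this commit adds; the facebook/dpr-question_encoder-single-nq-base checkpoint is an assumption, and from_pt=True mirrors the slow test above (drop it if TF weights are available on the hub).

        # Minimal sketch, assuming the facebook/dpr-question_encoder-single-nq-base checkpoint.
        from transformers import DPRQuestionEncoderTokenizer, TFDPRQuestionEncoder

        tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
        model = TFDPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base", from_pt=True)

        inputs = tokenizer("What is the capital of France?", return_tensors="tf")
        embeddings = model(inputs["input_ids"]).pooler_output
        print(embeddings.shape)  # (1, 768)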
  30. 10 Nov, 2020 1 commit
  31. 09 Nov, 2020 2 commits
  32. 30 Oct, 2020 1 commit
    • TFMarian, TFMbart, TFPegasus, TFBlenderbot (#7987) · 566b083e
      Sam Shleifer authored
      
      
      * Start plumbing
      
      * Marian close
      
      * Small stubs for all children
      
      * Fixed bart
      
      * marian working
      
      * pegasus test is good, but failing
      
      * Checkin tests
      
      * More model files
      
      * Subtle marian, pegasus integration test failures
      
      * Works well
      
      * rm print
      
      * boom boom
      
      * Still failing model2doc
      
      * merge master
      
      * Equivalence test failing, all others fixed
      
      * cleanup
      
      * Fix embed_scale
      
      * Cleanup marian pipeline test
      
      * Undo extra changes
      
      * Smaller delta
      
      * Cleanup model testers
      
      * undo delta
      
      * fix tests import structure
      
      * cross test decorator
      
      * Cleaner set_weights
      
      * Respect authorized_unexpected_keys
      
      * No warnings
      
      * No warnings
      
      * style
      
      * Nest tf import
      
      * black
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * functional dropout
      
      * fixup
      
      * Fixup
      
      * style_doc
      
      * embs
      
      * shape list
      
      * delete slow force_token_id_to_be_generated func
      
      * fixup
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      566b083e
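      For context, a minimal sketch of one of the TF seq2seq models this commit adds; the Helsinki-NLP/opus-mt-en-de checkpoint is an assumption, and from_pt=True loads the PyTorch weights (drop it if the checkpoint already ships TF weights).

        # Minimal sketch, assuming the Helsinki-NLP/opus-mt-en-de Marian checkpoint.
        from transformers import MarianTokenizer, TFMarianMTModel

        tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
        model = TFMarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de", from_pt=True)

        inputs = tokenizer(["I love music."], return_tensors="tf", padding=True)
        generated = model.generate(**inputs)
        print(tokenizer.batch_decode(generated, skip_special_tokens=True))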
  33. 20 Oct, 2020 1 commit