1. 25 Aug, 2023 1 commit
  2. 27 Jul, 2023 1 commit
    • Add bloom flax (#25094) · e9310363
      Sanchit Gandhi authored
      
      
      * First commit
      
      * step 1 working
      
      * add alibi
      
      * placeholder for `scan`
      
      * add matrix mult alibi
      
      * beta scaling factor for bmm
      
      * working v1 - simple forward pass
      
      * move layer_number from attribute to arg in call
      
      * partial functioning scan
      
      * hacky working scan
      
      * add more modifs
      
      * add test
      
      * update scan for new kwarg order
      
      * fix position_ids problem
      
      * fix bug in attention layer
      
      * small fix
      
      - do the alibi broadcasting only once
      
      * prelim refactor
      
      * finish refactor
      
      * alibi shifting
      
      * incorporate dropout_add to attention module
      
      * make style
      
      * make padding work again
      
      * update
      
      * remove bogus file
      
      * up
      
      * get generation to work
      
      * clean code a bit
      
      * added small tests
      
      * adding alibi test
      
      * make CI tests pass:
      
      - change init weight
      - add correct tuple for output attention
      - add scan test
      - make CI tests work
      
      * fix few nits
      
      * fix nit onnx
      
      * fix onnx nit
      
      * add missing dtype args to nn.Modules
      
      * remove debugging statements
      
      * fix scan generate
      
      * Update modeling_flax_bloom.py
      
      * Update test_modeling_flax_bloom.py
      
      * Update test_modeling_flax_bloom.py
      
      * Update test_modeling_flax_bloom.py
      
      * fix small test issue + make style
      
      * clean up
      
      * Update tests/models/bloom/test_modeling_flax_bloom.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * fix function name
      
      * small fix test
      
      * forward contrib credits from PR17761
      
      * Fix failing test
      
      * fix small typo in documentation
      
      * fix non passing test
      
      - remove device from build alibi
      
      * refactor call
      
      - refactor `FlaxBloomBlockCollection` module
      
      * make style
      
      * upcast to fp32
      
      * cleaner way to upcast
      
      * remove unused args
      
      * remove layer number
      
      * fix scan test
      
      * make style
      
      * fix i4 casting
      
      * fix slow test
      
      * Update src/transformers/models/bloom/modeling_flax_bloom.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * remove `layer_past`
      
      * refactor a bit
      
      * fix `scan` slow test
      
      * remove useless import
      
      * major changes
      
      - remove unused code
      - refactor a bit
      - revert import `torch`
      
      * major refactoring
      
      - change build alibi
      
      * remove scan
      
      * fix tests
      
      * make style
      
      * clean-up alibi
      
      * add integration tests
      
      * up
      
      * fix batch norm conversion
      
      * style
      
      * style
      
      * update pt-fx cross tests
      
      * update copyright
      
      * Update src/transformers/modeling_flax_pytorch_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * per-weight check
      
      * style
      
      * line formats
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: haileyschoelkopf <haileyschoelkopf@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      e9310363
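
      A rough sketch of the ALiBi idea the commit messages above refer to ("add alibi", "do the alibi broadcasting only once"), written as standalone JAX code rather than the helper actually merged in the PR; it assumes a power-of-two number of attention heads for the slope formula.

      import math
      import jax.numpy as jnp

      def build_alibi_bias(num_heads: int, seq_len: int) -> jnp.ndarray:
          # Per-head slopes follow the geometric sequence from the ALiBi paper
          # (1/2, 1/4, ..., 1/256 for 8 heads); this simple form assumes num_heads
          # is a power of two.
          start = 2 ** (-(2 ** -(math.log2(num_heads) - 3)))
          slopes = jnp.array([start * (start ** i) for i in range(num_heads)])
          # One linear penalty per key position, broadcast over batch and query
          # length, so the bias is built once and added to every attention score matrix.
          key_positions = jnp.arange(seq_len)[None, None, None, :]   # (1, 1, 1, seq_len)
          return slopes[None, :, None, None] * key_positions         # (1, heads, 1, seq_len)

      bias = build_alibi_bias(num_heads=16, seq_len=8)
      print(bias.shape)  # (1, 16, 1, 8)
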
  3. 21 Jun, 2023 1 commit
  4. 05 May, 2023 1 commit
  5. 04 May, 2023 2 commits
  6. 04 Apr, 2023 1 commit
    • Flax Regnet (#21867) · 90067748
      Shubhamai authored
      * initial commit
      
      * review changes
      
      * post model PR merge
      
      * updating doc
      90067748
  7. 24 Mar, 2023 1 commit
    • Resnet flax (#21472) · a0cbbba3
      Shubhamai authored
      
      
      * [WIP] flax resnet
      
      * added pretrained flax models, results reproducible
      
      * Added pretrained flax models, results reproducible
      
      * working on tests
      
      * no real code change, just some comments
      
      * [flax] adding support for batch norm layers
      
      * fixing bugs related to pt+flax integration
      
      * removing loss from modeling flax output class
      
      * fixing classifier tests
      
      * fixing comments, model output
      
      * cleaning comments
      
      * review changes
      
      * review changes
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * renaming Flax to PyTorch
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      a0cbbba3
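
      The batch-norm work in this commit (and the "fix batch norm conversion" follow-up in the Bloom PR above) revolves around the fact that flax.linen.BatchNorm keeps its running statistics outside the "params" collection; a minimal standalone sketch of that behaviour, not code taken from the PR:

      import jax
      import jax.numpy as jnp
      import flax.linen as nn

      class TinyConvBlock(nn.Module):
          @nn.compact
          def __call__(self, x, train: bool = False):
              x = nn.Conv(features=8, kernel_size=(3, 3))(x)
              # Running mean/var live in the separate "batch_stats" collection,
              # which PyTorch<->Flax weight conversion has to handle explicitly.
              x = nn.BatchNorm(use_running_average=not train)(x)
              return nn.relu(x)

      variables = TinyConvBlock().init(jax.random.PRNGKey(0), jnp.ones((1, 32, 32, 3)))
      print(sorted(variables.keys()))  # ['batch_stats', 'params']
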
  8. 20 Feb, 2023 1 commit
    • add flax whisper implementation (#20479) · 2840272c
      Andy Ehrenberg authored
      
      
      * add flax whisper implementation
      
      * revert change to setup
      
      * remove unused imports
      
      * revert generation changes
      
      * flax whisper docs
      
      * docs
      
      * import order
      
      * import sorting
      
      * isort
      
      * add dummy objects
      
      * doc formatting
      
      * formatting
      
      * remove trailing whitespaces
      
      * fix flax whisper docs
      
      * add generation logic to unlock flax whisper
      
      * remove scans
      
      * give credits to Flax Bart implementation
      
      * remove unused imports
      
      * add license
      
      * remove assert
      
      * more credits to Bart
      
      * fix style
      
      * formatting
      
      * support left padding
      
      * add flax whisper generation test
      
      * remove copied from comments whenever not a full copy
      
      * fix docstrings for logits processors
      
      * revert change to FlaxForceTokensLogitsProcessor
      
      * revert doc changes
      
      * improve generation docs
      
      * reorganize
      
      * formatting
      
      * cleanup docs
      
      * add tests
      
      * handle empty list case
      
      * fix forced decoder ids in flax tests
      
      * add flax whisper to inits
      
      * update dummy objects
      
      * docs for FlaxAutoModelForSpeechSeq2Seq
      
      * fix decoder_position_ids computation in pretrained model decode/__call__ fns
      
      * add Copied from statements as necessary
      
      * compute position_ids only in __call__ and decode methods of pretrained model subclasses
      
      * improve readability of compute positional embeddings
      
      * check dimensionality of input_features instead of hidden_states
      
      * copied from statement for init_cache
      
      * formatting
      
      * fix copies
      
      * fix copies
      
      * pass attention mask to encoder layers
      
      * fix decoder module outputs
      
      * set dtype
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * smaller flax model for whisper test
      
      * Update src/transformers/generation/flax_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/modeling_flax_whisper.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update tests/models/whisper/test_modeling_flax_whisper.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * cleanup
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/modeling_flax_whisper.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * bias cleanup
      
      * doc fix
      
      * align style for force tokens processor
      
      * readability
      
      * fix input shape in tests
      
      * revert FlaxGenerationMixin docstring
      
      * formatting
      
      * fix tests
      
      * fix imports
      
      * consistent encoder hidden states
      
      * consistent hidden states
      
      * input shapes
      
      * typo
      
      * partial class trick
      
      * partial class for input shape
      
      * base_class with correct input shape
      
      * partial base classes
      
      * match by name
      
      * set main_input_name
      
      * compare on names
      
      * formatting
      
      * remove unused import
      
      * safer position ids computation
      
      * safer position id computation
      
      * Update src/transformers/models/whisper/modeling_flax_whisper.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/modeling_flax_whisper.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * remove identical inherited tests
      
      * fix prompt ids in tests
      
      * use generation config
      
      * use jnp array
      
      * better var names
      
      * more explicit bias use
      
      * import transformers
      
      * formatting
      
      * test formatting
      
      * remove unused imports
      
      * remove unused imports
      
      * formatting
      
      * isort
      
      * docs
      
      * fix ln orders for encoder hidden states
      
      * whisper unique generation stuff
      
      * flake
      
      * use finfo for attention bias
      
      * docs
      
      * Update src/transformers/generation/flax_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * docs
      
      * add timestamp flax test
      
      * jit for timestamps
      
      * formatting
      
      * clean up timestamps processor
      
      * formatting
      
      * remove if_true
      
      * cleanup
      
      ---------
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      2840272c
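
      A short usage sketch for the Flax Whisper class this PR introduces; the checkpoint name is only an example, and from_pt=True may be needed if a given checkpoint ships without Flax weights.

      import numpy as np
      from transformers import WhisperProcessor, FlaxWhisperForConditionalGeneration

      processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
      model = FlaxWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

      audio = np.zeros(16000, dtype=np.float32)  # one second of silence as a stand-in input
      inputs = processor(audio, sampling_rate=16000, return_tensors="np")
      generated = model.generate(inputs.input_features, max_length=32)
      print(processor.batch_decode(generated.sequences, skip_special_tokens=True))
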
  9. 07 Feb, 2023 1 commit
    • Cleanup quality (#21493) · 67d07487
      Sylvain Gugger authored
      * Remove mentions of flake8/isort
      
      * Clean up inits
      
      * Deal with all other inits
      
      * Last special rule for dummy files
      67d07487
  10. 18 Jan, 2023 1 commit
    • Remove Roberta Dependencies from XLM Roberta Flax and Tensorflow models (#21047) · defdcd28
      Samuel Xu authored
      * Added flax model code
      
      * Added tf changes
      
      * missed some
      
      * Added copy comments
      
      * Added style hints
      
      * Fixed copy statements
      
      * Added suggested fixes
      
      * Made some fixes
      
      * Style fixup
      
      * Added necessary copy statements
      
      * Fixing copy statements
      
      * Added more copies
      
      * Final copy fix
      
      * Some bugfixes
      
      * Adding imports to init
      
      * Fixed up all make fixup errors
      
      * Fixed doc errors
      
      * Auto model changes
      defdcd28
  11. 19 Dec, 2022 1 commit
  12. 09 Nov, 2022 1 commit
  13. 29 Jun, 2022 1 commit
    • Flax t5 Encoder (#17784) · 692e61e9
      Crystina authored
      
      
      * first draft adding Flax-t5-encoder and Flax-mt5-encoder
      
      * imports
      
      * after make fixup
      
      * flax t5 encoder test
      
      * black on test
      
      * make fix-copies
      
      * clean
      
      * all_model_classes -> tuple
      
      * clean test
      
      * is_encoder_decoder=False in t5-enc tester
      
      * remove file docstring before FlaxT5Encoder
      
      * black
      
      * isort
      
      * commit suggestions on src/transformers/models/t5/modeling_flax_t5.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * commit suggestions on src/transformers/models/t5/modeling_flax_t5.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * remove _get_encoder_module
      
      * self.decoder_seq_length -> self.encoder_seq_length as t5-enc does not have decoder
      
      * bugfix - self.module_class is class itself, not instance;
      
      * docs for mt5 and t5
      
      * call -> __call__ in t5 doc
      
      * FlaxMT5EncoderModel to TYPE_HINT
      
      * run doc-builder to allow change the files
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      692e61e9
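
      A minimal usage sketch for the encoder-only T5 class added here; the checkpoint name is illustrative, and the unused decoder weights of a full checkpoint are simply skipped when loading.

      from transformers import AutoTokenizer, FlaxT5EncoderModel

      tokenizer = AutoTokenizer.from_pretrained("t5-small")
      model = FlaxT5EncoderModel.from_pretrained("t5-small")

      inputs = tokenizer("Studies have shown that owning a dog is good for you.", return_tensors="np")
      outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
      print(outputs.last_hidden_state.shape)  # (1, sequence_length, 512) for t5-small
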
  14. 13 Jun, 2022 1 commit
    • Add `LongT5` model (#16792) · a72f1c9f
      Daniel Stancl authored
      
      
      * Initial commit
      
      * Make some fixes
      
      * Make PT model full forward pass
      
      * Drop TF & Flax implementation, fix copies etc
      
      * Add Flax model and update some corresponding stuff
      
      * Drop some TF things
      
      * Update config and flax local attn
      
      * Add encoder_attention_type to config
      
      * .
      
      * Update docs
      
      * Do some cleansing
      
      * Fix some issues -> make style; add some docs
      
      * Fix position_bias + mask addition + Update tests
      
      * Fix repo consistency
      
      * Fix model consistency by removing flax operation over attn_mask
      
      * [WIP] Add PT TGlobal LongT5
      
      * .
      
      * [WIP] Add flax tglobal model
      
      * [WIP] Update flax model to use the right attention type in the encoder
      
      * Fix flax tglobal model forward pass
      
      * Make use of global_relative_attention_bias
      
      * Add test suites for TGlobal model
      
      * Fix minor bugs, clean code
      
      * Fix pt-flax equivalence though not convinced with correctness
      
      * Fix LocalAttn implementation to match the original impl. + update READMEs
      
      * Few updates
      
      * Update: [Flax] improve large model init and loading #16148
      
      * Add ckpt conversion script according to #16853 + handle torch device placement
      
      * Minor updates to conversion script.
      
      * Typo: AutoModelForSeq2SeqLM -> FlaxAutoModelForSeq2SeqLM
      
      * gpu support + dtype fix
      
      * Apply some suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * * Remove (de)parallelize stuff
      * Edit shape comments
      * Update README.md
      * make fix-copies
      
      * Remove caching logic for local & tglobal attention
      
      * Apply another batch of suggestions from code review
      
      * Add missing checkpoints
      * Format converting scripts
      * Drop (de)parallelize links from longT5 mdx
      
      * Fix converting script + revert config file change
      
      * Revert "Remove caching logic for local & tglobal attention"
      
      This reverts commit 2a619828f6ddc3e65bd9bb1725a12b77fa883a46.
      
      * Stash caching logic in Flax model
      
      * Make side relative bias used always
      
      * Drop caching logic in PT model
      
      * Return side bias as it was
      
      * Drop all remaining model parallel logic
      
      * Remove clamp statements
      
      * Move test files to the proper place
      
      * Update docs with new version of hf-doc-builder
      
      * Fix test imports
      
      * Make some minor improvements
      
      * Add missing checkpoints to docs
      * Make TGlobal model compatible with torch.onnx.export
      * Replace some np.ndarray with jnp.ndarray
      
      * Fix TGlobal for ONNX conversion + update docs
      
      * fix _make_global_fixed_block_ids and masked neg value
      
      * update flax model
      
      * style and quality
      
      * fix imports
      
      * remove load_tf_weights_in_longt5 from init and fix copies
      
      * add slow test for TGlobal model
      
      * typo fix
      
      * Drop obsolete is_parallelizable and one warning
      
      * Update __init__ files to fix repo-consistency
      
      * fix pipeline test
      
      * Fix some device placements
      
      * [wip]: Update tests -- need to generate summaries to update expected_summary
      
      * Fix quality
      
      * Update LongT5 model card
      
      * Update (slow) summarization tests
      
      * make style
      
      * rename checkpoints
      
      * finish
      
      * fix flax tests
      Co-authored-by: phungvanduy <pvduy23@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      a72f1c9f
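
      A usage sketch for the Flax side of LongT5; the checkpoint name is an example, and from_pt=True may be required if a checkpoint only ships PyTorch weights.

      from transformers import AutoTokenizer, FlaxLongT5ForConditionalGeneration

      tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
      model = FlaxLongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

      text = "summarize: " + "LongT5 extends T5 with local and transient-global attention. " * 40
      inputs = tokenizer(text, return_tensors="np")
      summary_ids = model.generate(inputs["input_ids"], max_length=48).sequences
      print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
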
  15. 31 May, 2022 1 commit
    • Opt in flax and tf (#17388) · 7822a9b7
      Arthur authored
      
      
      * initial commit
      
      * add init file
      
      * update global init
      
      * update index and dummy objects
      
      * style
      
      * update modelling auto
      
      * fix init typo in src/transformers
      
      * fix typo in modeling tf auto, opt was in wrong mapping name
      
      * fixed a slow test : saved_model
      
      * style
      
      * fix positional embedding if no position id is provided
      
      * update tf test
      
      * update test flax requirements
      
      * fixed serialization
      
      * update
      
      * update tf name to allow smooth conversion
      
      * update flax tests
      
      * style
      
      * fix test typo
      
      * fix tf typo test
      
      * add xla for generate support in causal LM
      
      * fixed bug
      
      * cleaned tf tests
      
      * style
      
      * removed from PT for slow tests
      
      * fix typo
      
      * opt test as slow
      
      * trying to fix GPT2 undefined
      
      * correct documentation and add to test doc
      
      * update tf doc
      
      * fix doc
      
      * fake commit
      
      * Apply suggestions from code review
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * update test based on review
      
      * merged main layer for functioning test
      
      * fixup + quality
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * update long comment
      
      * make fix copies
      Co-authored-by: Arthur <arthur@huggingface.co>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      7822a9b7
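
      A usage sketch for the Flax OPT support added here; the checkpoint is an example, and from_pt=True may be required if no Flax weights are hosted for it.

      from transformers import AutoTokenizer, FlaxOPTForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
      model = FlaxOPTForCausalLM.from_pretrained("facebook/opt-125m")

      inputs = tokenizer("Flax support for OPT means", return_tensors="np")
      output_ids = model.generate(inputs["input_ids"], max_length=24).sequences
      print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
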
  16. 03 May, 2022 1 commit
    • [FlaxBert] Add ForCausalLM (#16995) · cd9274d0
      Sanchit Gandhi authored
      * [FlaxBert] Add ForCausalLM
      
      * make style
      
      * fix output attentions
      
      * Add RobertaForCausalLM
      
      * remove comment
      
      * fix fx-to-pt model loading
      
      * remove comment
      
      * add modeling tests
      
      * add enc-dec model tests
      
      * add big_bird
      
      * add electra
      
      * make style
      
      * make repo-consistency
      
      * add to docs
      
      * remove roberta test
      
      * quality
      
      * amend cookiecutter
      
      * fix attention_mask bug in flax bert model tester
      
      * tighten pt-fx thresholds to 1e-5
      
      * add 'copied from' statements
      
      * amend 'copied from' statements
      
      * amend 'copied from' statements
      
      * quality
      cd9274d0
  17. 23 Mar, 2022 1 commit
    • Reorganize file utils (#16264) · 4975002d
      Sylvain Gugger authored
      * Split file_utils in several submodules
      
      * Fixes
      
      * Add back more objects
      
      * More fixes
      
      * Who exactly decided to import that from there?
      
      * Second suggestion from code review
      
      * Revert wrong move
      
      * Fix imports
      
      * Adapt all imports
      
      * Adapt all imports everywhere
      
      * Revert this import, will fix in a separate commit
      4975002d
  18. 09 Mar, 2022 1 commit
    • Add FlaxBartForCausalLM (#15995) · b256f351
      Sanchit Gandhi authored
      * add causal lm
      
      * add CausalLM tests
      
      * Add FlaxBartForCausalLM
      
      * Add EncoderDecoder model tests
      
      * change docstring
      
      * make repo-consistency
      
      * suggested changes
      
      * remove jax ops
      
      * correction
      
      * rename pre-trained decoder model
      b256f351
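
      The decoder-only BART class added here is mainly intended as the decoder in Flax encoder-decoder setups; a minimal standalone sketch that instantiates it from a small, made-up config with random weights, so no particular checkpoint is assumed.

      import jax.numpy as jnp
      from transformers import BartConfig, FlaxBartForCausalLM

      config = BartConfig(
          vocab_size=1000, d_model=64, decoder_layers=2, decoder_attention_heads=4,
          decoder_ffn_dim=128, is_decoder=True, is_encoder_decoder=False,
      )
      model = FlaxBartForCausalLM(config, seed=0)  # randomly initialised toy model

      input_ids = jnp.array([[2, 10, 20, 30]])
      logits = model(input_ids).logits
      print(logits.shape)  # (1, 4, 1000)
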
  19. 04 Mar, 2022 1 commit
  20. 28 Feb, 2022 1 commit
  21. 28 Jan, 2022 1 commit
    • Add XGLM models (#14876) · d25e25ee
      Suraj Patil authored
      
      
      * add xglm
      
      * update vocab size
      
      * fix model name
      
      * style and tokenizer
      
      * typo
      
      * no mask token
      
      * fix pos embed compute
      
      * fix args
      
      * fix tokenizer
      
      * fix positions
      
      * fix tokenization
      
      * style and dic fixes
      
      * fix imports
      
      * add fast tokenizer
      
      * update names
      
      * add pt tests
      
      * fix tokenizer
      
      * fix typo
      
      * fix tokenizer import
      
      * fix fast tokenizer
      
      * fix tokenizer
      
      * fix converter
      
      * add tokenizer test
      
      * update checkpoint names
      
      * fix tokenizer tests
      
      * fix slow tests
      
      * add copied from comments
      
      * rst -> mdx
      
      * flax model
      
      * update flax tests
      
      * quality
      
      * style
      
      * doc
      
      * update index and readme
      
      * fix copies
      
      * fix doc
      
      * update toctree
      
      * fix indent
      
      * minor fixes
      
      * fix config doc
      
      * don't save embed_pos weights
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * address Sylvain's comments, few doc fixes
      
      * fix check_repo
      
      * align order of arguments
      
      * fix copies
      
      * fix labels
      
      * remove unnecessary mapping
      
      * fix saving tokenizer
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      d25e25ee
  22. 14 Jan, 2022 1 commit
  23. 04 Jan, 2022 1 commit
  24. 02 Dec, 2021 1 commit
    • [Flax] Add FlaxBlenderbotSmall (#14576) · 50d909be
      Daniel Stancl authored
      
      
      * [WIP] Add FlaxBlenderbotSmall
      
      * Revert some unintentionally changed files
      
      Revert some files unintentionally changed by improperly filled cookiecutter instructions.
      
      * Fix repo consistency
      
      * Fix Flax-PT equivalence
      
      * Apply suggestions from code review
      
      * Update index.mdx
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      50d909be
  25. 01 Dec, 2021 1 commit
    • FlaxGPTJ (#14396) · 4c0dd199
      Suraj Patil authored
      * add flax gptj
      
      * no bias in attention dense
      
      * no wpe
      
      * fix rotary embeddings
      
      * fix rotary embeds
      
      * fix rotary embeds
      
      * quality
      
      * doc and quality
      
      * fix equivalence tests
      4c0dd199
  26. 30 Nov, 2021 2 commits
    • VisionTextDualEncoder (#13511) · fc1d97f2
      Suraj Patil authored
      
      
      * init vision_text_dual_encoder
      
      * fix merge
      
      * remove extra heads
      
      * fix tests
      
      * remove VISION_TEXT_DUAL_ENCODER_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * remove archive map
      
      * fix imports
      
      * fix more imports
      
      * fix init
      
      * delete tokenizers
      
      * fix imports
      
      * clean
      
      * support clip's vision model
      
      * handle None config
      
      * begin tests
      
      * more test and few fixes
      
      * warn about newly init weights
      
      * more tests
      
      * add loss to model
      
      * remove extra classes from doc
      
      * add processor
      
      * doc and small fixes
      
      * add start docstr
      
      * update flax model
      
      * flax tests
      
      * more flax tests
      
      * doc
      
      * quality
      
      * doc and quality
      
      * fix doc
      
      * doc
      
      * remove comments
      
      * update warning
      
      * quality
      
      * fix docs
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * replace asserts, fix imports
      
      * update imports
      
      * fix import
      
      * address some review comments
      
      * fix check
      
      * reduce tolerance
      
      * fix test
      
      * add flax integration test
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * address Sylvain's comments
      
      * fix style
      
      * add pt_flax_equivalence test in PT tests
      
      * add pt integration test
      
      * update test
      
      * use pre-trained checkpoint in examples
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      fc1d97f2
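
      A sketch of how the Flax dual encoder introduced here is assembled from two pretrained towers; the model names are examples, and the projection layers on top are newly initialised, which is what the "warn about newly init weights" commit refers to.

      import numpy as np
      from transformers import (
          AutoImageProcessor,
          AutoTokenizer,
          FlaxVisionTextDualEncoderModel,
          VisionTextDualEncoderProcessor,
      )

      model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
          "openai/clip-vit-base-patch32", "bert-base-uncased"
      )
      processor = VisionTextDualEncoderProcessor(
          AutoImageProcessor.from_pretrained("openai/clip-vit-base-patch32"),
          AutoTokenizer.from_pretrained("bert-base-uncased"),
      )

      image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in image
      inputs = processor(text=["a photo of a cat"], images=image, return_tensors="np", padding=True)
      outputs = model(
          input_ids=inputs["input_ids"],
          attention_mask=inputs["attention_mask"],
          pixel_values=inputs["pixel_values"],
      )
      print(outputs.logits_per_image.shape)  # (1, 1)
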
    • [Flax] Add FlaxBlenderbot (#13633) · faacd747
      Daniel Stancl authored
      
      
      * Init Flax implementation for Blenderbot
      
      * Add a majority of stuff except for tests
      
      * make style quality
      
      * Add tests and fix some bugs
      
      * Add tests
      
      * Clean source code and fix some bugs
      
      * Fix copies and docs
      
      * Fix jax device condition for tests
      
      * Fix layer norm in the encoder
      
      * Fix a few typos in the test file
      
      * make fix-copies
      
      * make fix-copies
      
      * fix layer norm
      
      * Fix Flax params dtype (#13090)
      
      * Fix PR reference (#13098)
      
      * make fix-copies
      
      * Update tests/test_modeling_flax_blenderbot.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      faacd747
  27. 21 Nov, 2021 1 commit
  28. 16 Nov, 2021 1 commit
  29. 09 Nov, 2021 1 commit
    • Add FlaxVisionEncoderDecoderModel (#13359) · 95b3ec3b
      Yih-Dar authored
      
      
      * Start the work on FlaxVisionEncoderDecoderModel
      
      * Add FlaxVisionEncoderDecoderModel
      
      * Add VisionEncoderDecoderConfig
      
      * Make FlaxVisionEncoderDecoderModel visible to transformers
      
      * Add test
      
      * Fix wrong getattr usage
      
      * Fix tests
      
      * Add FlaxAutoModelForVision2Seq
      
      * Expose FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING
      
      * clean-up
      
      * add integration test
      
      * update expected logits
      
      * update expected scores
      
      * Add ViT2GPT2ModelIntegrationTest + some cleaning
      
      * Add projection layer + PT/Flax equivalence tests
      
      * Fix import
      
      * minor changes
      
      * make test slow again
      
      * Apply suggestions
      
      * Add modeling_flax_vision_encoder_decoder to _ignore_modules in get_model_modules()
      
      * fix copies
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * split long strings in multiple lines
      
      * decoder_input_ids can't be None
      
      * Add back test_configuration_tie
      
      * Remove attention_mask parameter
      
      * fix test - encoder_last_hidden_state should be encoder_outputs.last_hidden_state instead of the projected vector
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Remove more encoder_attention_mask
      
      * remove encoder_attention_mask when calling self.decode (in FlaxVisionEncoderDecoderModule)
      
      * Fix style + pass 1s instead of None as encoder_attention_mask
      
      * fix init_weights
      
      * pass None for encoder_attention_mask
      
      * pass 1s instead of None as encoder_attention_mask
      
      * Fix doc style
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      95b3ec3b
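
      A short sketch of the composition API this PR adds; the encoder/decoder names are examples, and the cross-attention weights that connect them start out randomly initialised.

      import jax.numpy as jnp
      from transformers import FlaxVisionEncoderDecoderModel

      # ViT encoder + GPT-2 decoder, wired together through newly added cross-attention.
      model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
          "google/vit-base-patch16-224-in21k", "gpt2"
      )

      pixel_values = jnp.ones((1, 3, 224, 224), dtype=jnp.float32)  # stand-in for a processed image
      decoder_input_ids = jnp.array([[model.config.decoder.bos_token_id]])
      outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids)
      print(outputs.logits.shape)  # (1, 1, vocab size of the GPT-2 decoder)
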
  30. 21 Sep, 2021 1 commit
    • beit-flax (#13515) · a2dec768
      Kamal Raj authored
      * beit-flax
      
      * updated FLAX_BEIT_MLM_DOCSTRING
      
      * removed bool_masked_pos from classification
      
      * updated Copyright
      
      * code refactoring: x -> embeddings
      
      * updated test: rm from_pt
      
      * Update docs/source/model_doc/beit.rst
      
      * model code dtype updates and
      other changes according to review
      
      * relative_position_bias
      revert back to pytorch design
      a2dec768
  31. 14 Sep, 2021 1 commit
    • [Flax] Addition of FlaxPegasus (#13420) · c1e47bf4
      Bhadresh Savani authored
      
      
      * added initial files
      
      * fixes pipeline
      
      * fixes style and quality
      
      * fixes doc issue and positional encoding
      
      * fixes layer norm and test
      
      * fixes quality issue
      
      * fixes code quality
      
      * removed extra layer norm
      
      * added layer norm back in encoder and decoder
      
      * added more code copy quality checks
      
      * update tests
      
      * Apply suggestions from code review
      
      * fix import
      
      * fix test
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      c1e47bf4
  32. 30 Aug, 2021 2 commits
    • albert flax (#13294) · 98e409ab
      Kamal Raj authored
      * albert flax
      
      * year -> 2021
      
      * docstring updated for flax
      
      * removed head_mask
      
      * removed from_pt
      
      * removed passing attention_mask to embedding layer
      98e409ab
    • distilbert-flax (#13324) · 774760e6
      Kamal Raj authored
      * distilbert-flax
      
      * added missing self
      
      * docs fix
      
      * removed tied kernel extra init
      
      * updated docs
      
      * x -> hidden states
      
      * removed head_mask
      
      * removed from_pt, +FLAX
      
      * updated year
      774760e6
  33. 23 Aug, 2021 1 commit
    • Make Flax GPT2 working with cross attention (#13008) · 2e20c0f3
      Yih-Dar authored
      
      
      * make flax gpt2 working with cross attention
      
      * Remove encoder->decoder projection layer
      
      * A draft (incomplete) for FlaxEncoderDecoderModel
      
      * Add the method from_encoder_decoder_pretrained + the docstrings
      
      * Fix the mistakes of using EncoderDecoderModel
      
      * Fix style
      
      * Add FlaxEncoderDecoderModel to the library
      
      * Fix cyclic imports
      
      * Add FlaxEncoderDecoderModel to modeling_flax_auto.py
      
      * Remove question comments
      
      * add tests for FlaxEncoderDecoderModel
      
      * add flax_encoder_decoder to the lists of ignored entries in check_repo.py
      
      * fix missing required positional arguments
      
      * Remove **kwargs when creating FlaxEncoderDecoderModel in from_encoder_decoder_pretrained()
      
      Also fix generation eos/pad tokens issue
      
      * Fix: Use sequences from the generated_output
      
      * Change a check from assert to raise ValueError
      
      * Fix examples and token ids issues
      
      * Fix missing all_cross_attentions when outputting tuple in modeling_gpt2
      
      * Remove the changes in configuration docstrings.
      
      * allow for bert 2 gpt2
      
      * make fix-copies
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Change remaining examples to bert2gpt2
      
      * Change the test to Bert2GPT2
      
      * Fix examples
      
      * Fix import
      
      * Fix unpack bug
      
      * Rename to FlaxEncoderDecoderModelTest and change the test to bert2gpt2
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Fix: NotImplentedError -> NotImplementedError
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * up
      
      * finalize
      Co-authored-by: ydshieh <ydshieh@user.noreply>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      2e20c0f3
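
      A sketch of the bert2gpt2 composition this work enables once Flax GPT-2 gained cross-attention; model names are examples, and the new cross-attention weights start out randomly initialised.

      import jax.numpy as jnp
      from transformers import AutoTokenizer, FlaxEncoderDecoderModel

      model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
      tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

      inputs = tokenizer("The tower is 324 metres tall.", return_tensors="np")
      decoder_input_ids = jnp.array([[model.config.decoder.bos_token_id]])
      outputs = model(
          input_ids=inputs["input_ids"],
          attention_mask=inputs["attention_mask"],
          decoder_input_ids=decoder_input_ids,
      )
      print(outputs.logits.shape)  # (1, 1, vocab size of the GPT-2 decoder)
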
  34. 04 Aug, 2021 1 commit
  35. 09 Jul, 2021 1 commit
  36. 07 Jul, 2021 1 commit
    • [Flax] Add FlaxMBart (#12236) · 61400e1e
      Daniel Stancl authored
      
      
      * Copy BART to MBart and rename some stuff
      
      * Add copy statements pointing to FlaxBart
      
      * Update/add some common files
      
      * Update shift_tokens_right + fix imports
      
      * Fix shift_tokens_right method according to MBart implementation
      
      * Update shift_tokens_right in tests accordingly
      
      * Fix the import issue and update docs file
      * make style quality
      
      * Do some minor changes according to patil-suraj suggestions
      
      * Change the order of normalization layer and attention
      
      * Add some copy statements
      
      * Update generate method and add integration test for mBart
      
      * Make a few updates after a review
      
      Besides, add `lang_code_to_id` to MBartTokenizerFast
      
      * fix-copies; make style quality
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * fix output type, style
      
      * add copied from
      
      * resolve conflicts
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      61400e1e
  37. 06 Jul, 2021 1 commit
    • FlaxGPTNeo (#12493) · 7a259c19
      Suraj Patil authored
      * flax gpt neo
      
      * fix query scaling
      
      * update generation test
      
      * use flax model for test
      7a259c19