1. 21 Dec, 2022 1 commit
  2. 19 Dec, 2022 1 commit
  3. 16 Dec, 2022 2 commits
    • Add Swin2SR (#19784) · 26dd041c
      NielsRogge authored
      
      
      * First draft
      
      * Add more improvements
      
      * Improve forward pass
      
      * Fix layernorm
      
      * Add upscaler
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * Improve conversion script
      
      * Add preprocessing
      
      * Make output match original implementation
      
      * Add additional attributes
      
      * Add support for more models
      
      * Support more models
      
      * Add support for real world sr
      
      * Add initial Swin2SRFeatureExtractor
      
      * Add ImageSuperResolutionOutput
      
      * Make more tests pass
      
      * Use BaseModelOutput
      
      * Fix one more test
      
      * Fix more tests
      
      * Fix another test
      
      * Fix all tests
      
      * Rename to Swin2SRImageProcessor
      
      * Fix toctree
      
      * Fix toctree
      
      * Fix rebase
      
      * Improve Swin2SRImageProcessor
      
      * Remove feature extractor file
      
      * Improve model
      
      * Improve conversion script
      
      * Fix integration test
      
      * Fix init
      
      * Fix conversion script
      
      * Address comments
      
      * Improve upsampler
      
      * Add NearestConvUpsampler
      
      * Improve pixel shuffle upsampler
      
      * Improve auxiliary upsampler
      
      * Improve conversion script
      
      * Rename conv_last to final_convolution
      
      * Fix rebase
      
      * Improve upsample module
      
      * Add padding to image processor
      
      * Fix bug
      
      * Update padding
      
      * Remove print statement and fix integration test
      
      * Improve docs
      
      * Add image processor tests
      
      * Convert all checkpoints, fix tests
      
      * Remove print statements
      
      * Fix import
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      26dd041c
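A minimal usage sketch for the Swin2SR addition above. `Swin2SRImageProcessor` and `ImageSuperResolutionOutput` are named in the commit; the model class name, checkpoint and image URL are assumptions.

```python
import requests
import torch
from PIL import Image
from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor

checkpoint = "caidas/swin2SR-classical-sr-x2-64"  # assumed checkpoint name
processor = Swin2SRImageProcessor.from_pretrained(checkpoint)
model = Swin2SRForImageSuperResolution.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any RGB image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The ImageSuperResolutionOutput mentioned above carries the upscaled image tensor.
print(outputs.reconstruction.shape)
```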
    • Add Universal Segmentation class + mapping (#20766) · 7f998612
      NielsRogge authored
      
      
      * Add mapping
      
      * Add mapping to pipeline
      
      * Apply suggestions
      
      * Fix feature extractor tests
      
      * Use ForInstance, add model to universal mapping
      
      * More fixes
      
      * Remove model from deprecated objects
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      7f998612
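Since this commit wires the new universal segmentation mapping into the pipeline, a hedged sketch of how that might be exercised; the checkpoint name is an assumption, and which models sit in the mapping is not spelled out above.

```python
from transformers import pipeline

# Assumed checkpoint; any model registered in the universal segmentation mapping should work here.
segmenter = pipeline("image-segmentation", model="facebook/maskformer-swin-base-coco")

results = segmenter("http://images.cocodataset.org/val2017/000000039769.jpg")
for result in results:
    # Each prediction carries a label, an optional score and a PIL mask.
    print(result["label"], result.get("score"))
```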
  4. 14 Dec, 2022 1 commit
  5. 13 Dec, 2022 1 commit
  6. 12 Dec, 2022 1 commit
    • Add gpt-sw3 model to transformers (#20209) · 5f94855d
      Ariel Ekgren authored
      
      
      * Add templates for gpt-sw3
      
      * Add templates for gpt-sw3
      
      * Added sentencepiece tokenizer
      
      * intermediate commit with many changes
      
      * fixed conflicts
      
      * Init commit for tokenization port
      
      * Tokenization progress
      
      * Remove fast tokenizer
      
      * Clean up and rename spm.model -> spiece.model
      
      * Remove TF -> PT conversion script template, Clean up Megatron -> PT script
      
      * Optimize encode & decode performance
      
      * added new attention
      
      * added new attention
      
      * attention for gpt-sw3 working
      
      * attention good
      
      * Cache is now working
      
      * fixed attention mask so that it works with causal attention
      
      * fixed badbmm bug for cpu and caching
      
      * updated config with correct parameters
      
      * Refactor and leave optimizations as separate functions to avoid breaking expected functionality
      
      * Fix special tokens mapping for both tokenizers
      
      * cleaning up of code and comments
      
      * HF compatible attention outputs
      
      * Tokenizer now passing tests, add documentation
      
      * Update documentation
      
      * reverted back to base implementation after checking that it is identical to pretrained model
      
      * updated gpt-sw3 config
      
      * updated conversion script
      
      * aligned parameters with gpt-sw3 config
      
      * changed default scale_attn_by_inverse_layer_idx to true
      
      * removed flag from conversion script
      
      * added temporary model path
      
      * reverted back to functioning convert script
      
      * small changes to default config
      
      * updated tests for gpt-sw3
      
      * make style, make quality, minor cleanup
      
      * Change local paths to testing online repository
      
      * Change name: GptSw3 -> GPTSw3
      
      * Remove GPTSw3TokenizerFast references
      
      * Use official model repository and add more model sizes
      
      * Added reference to 6.7b model
      
      * Add GPTSw3DoubleHeadsModel to IGNORE_NON_AUTO_CONFIGURED, like GPT2DoubleHeadsModel
      
      * Remove pointers to non-existing TFGPTSw3
      
      * Add GPTSw3 to docs/_toctree.yml
      
      * Remove TF artifacts from GPTSw3 in __init__ files
      
      * Update README:s with 'make fix-copies'
      
      * Add 20b model to archive list
      
      * Add documentation for GPT-Sw3
      
      * Fix typo in documentation for GPT-Sw3
      
      * Do 'make fix-copies' again after having updated docs
      
      * Fix some typos in docs
      
      * Update src/transformers/models/gpt_sw3/configuration_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_sw3/configuration_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_sw3/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_sw3/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_sw3/convert_megatron_to_pytorch.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_sw3/modeling_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update tests/models/gpt_sw3/test_tokenization_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_sw3/modeling_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_sw3/modeling_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Resolve comments from PR feedback
      
      * Resolve more comments from PR feedback, also set use_cache=True in convert script
      
      * Add '# Copied from' comments for GPTSw3 modeling
      
      * Set 'is_parallelizable = False'
      
      * Remove '# Copied from' where code was modified and add 'with x->y' when appropriate
      
      * Remove parallelize in mdx
      
      * make style, make quality
      
      * Update GPTSw3Config default values and corresponding documentation
      
      * Update src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_sw3/__init__.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Clean up and protect GPTSw3Tokenizer imports with is_sentencepiece_available
      
      * Make style, make quality
      
      * Add dummy object for GPTSw3Tokenizer via 'make fix-copies'
      
      * make fix-copies
      
      * Remove GPTSw3 modeling classes
      
      * make style, make quality
      
      * Add GPTSw3 auto-mappings for other GPT2 heads
      
      * Update docs/source/en/model_doc/gpt-sw3.mdx
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_sw3/convert_megatron_to_pytorch.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Remove old TODO-comment
      
      * Add example usage to GPTSw3Tokenizer docstring
      
      * make style, make quality
      
      * Add implementation details and example usage to gpt-sw3.mdx
      Co-authored-by: JoeyOhman <joeyoh@kth.se>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      5f94855d
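A minimal generation sketch for GPT-SW3, which pairs the new SentencePiece GPTSw3Tokenizer with the existing GPT-2 model classes; the checkpoint name and the Swedish prompt are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "AI-Sweden-Models/gpt-sw3-126m"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # resolves to the SentencePiece GPTSw3Tokenizer
model = AutoModelForCausalLM.from_pretrained(checkpoint)  # GPT-2 architecture under the hood

inputs = tokenizer("Träd är fina för att", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```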
  7. 08 Dec, 2022 1 commit
    • Add video classification pipeline (#20151) · 9e56aff5
      Nathan Raw authored
      * 🚧 wip video classification pipeline
      
      * 🚧 wip - add is_decord_available check
      
      * 🐛 add missing import
      
      *  add tests
      
      * 🔧 add decord to setup extras
      
      * 🚧 add is_decord_available
      
      *  add video-classification pipeline
      
      * 📝 add video classification pipe to docs
      
      * 🐛 add missing VideoClassificationPipeline import
      
      * 📌 add decord install in test runner
      
      *  fix url inputs to video-classification pipeline
      
      *  updates from review
      
      * 📝 add video cls pipeline to docs
      
      * 📝 add docstring
      
      * 🔥 remove unused import
      
      * 🔥 remove some code
      
      * 📝 docfix
      9e56aff5
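A sketch of the new video-classification pipeline; the optional decord dependency from this commit must be installed, and the checkpoint name and video path are assumptions.

```python
from transformers import pipeline

# Requires the decord extra added in this commit: pip install decord
classifier = pipeline(
    "video-classification",
    model="MCG-NJU/videomae-base-finetuned-kinetics",  # assumed checkpoint name
)

# Accepts a local path or a URL to a video file.
predictions = classifier("path/to/video.mp4", top_k=3)
for prediction in predictions:
    print(prediction["label"], prediction["score"])
```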
  8. 07 Dec, 2022 2 commits
  9. 06 Dec, 2022 1 commit
  10. 05 Dec, 2022 1 commit
  11. 30 Nov, 2022 1 commit
    • Add Chinese-CLIP implementation (#20368) · 72176402
      Yang An authored
      
      
      * init chinese-clip model from clip
      
      * init model tests and docs
      
      * implement chinese-clip into hf
      
      * implement chinese-clip into hf
      
      * implement chinese-clip into hf
      
      * implement chinese-clip into hf
      
      * implement chinese-clip into hf
      
      * update usecase example in model implementation
      
      * fix codestyle
      
      * fix model_type typo in readme
      
      * add placeholder in doc
      
      * add placeholder in doc
      
      * update the init script
      
      * update usecase
      
      * fix codestyle
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * forward the convert_rgb
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * merge the recent update from clip about model_input_name property
      
      * update the doc
      
      * update the doc
      
      * update the doc
      
      * update the doc
      
      * remove unused imports
      
      * reformat code style
      
      * update the doc
      
      * fix isort style
      
      * bypass a weird failed unit test which is unrelated with my PR
      
      * update the doc
      
      * implement independent vision config class
      
      * implement independent vision model class
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * make style
      
      * fix refactor bug
      
      * make style
      
      * fix refactor bug
      
      * fix refactor bug
      
      * make style
      
      * fix refactor bug
      
      * fix refactor bug
      
      * doc-build restyle
      
      * implement independent text config class
      
      * implement independent text model class
      
      * implement independent text model class
      
      * make style
      
      * make fix-copies
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * make style
      
      * update doc
      
      * black and isort
      
      * update doc
      
      * Update src/transformers/models/chinese_clip/configuration_chinese_clip.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/auto/tokenization_auto.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * modify the model type from chinese-clip to chinese_clip
      
      * format the example comment of ChineseCLIPVisionConfig
      
      * correct the copyright comment
      
      * fix the tokenizer specification
      
      * add copied from for loss function
      
      * remove unused class
      
      * update CHINESE_CLIP_TEXT_INPUTS_DOCSTRING
      
      * update CHINESE_CLIP_INPUTS_DOCSTRING
      
      * update doc
      
      * update doc
      
      * update code comment in config
      
      * update copied from statement
      
      * make style
      
      * rename the doc file
      
      * add copied statement
      
      * remove unused attention_mask, causal_attention_mask in ChineseCLIPVisionEncoder
      
      * remove ChineseCLIPTextPreTrainedModel
      
      * fix bug
      
      * fix bug
      
      * fix bug
      
      * update doc
      
      * make style
      
      * Update src/transformers/models/chinese_clip/configuration_chinese_clip.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/chinese_clip/configuration_chinese_clip.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * update ChineseCLIPImageProcessor in image_processing_auto
      
      * fix config_class of chinesecliptextmodel
      
      * fix the test case
      
      * update the docs
      
      * remove the copied from comment for ChineseCLIPTextModel, since it has diverged from BertModel with customed config_class
      
      * update the testcase
      
      * final fix
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      72176402
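A minimal zero-shot sketch for the Chinese-CLIP classes added above; the checkpoint name, image URL and prompts are assumptions.

```python
import requests
import torch
from PIL import Image
from transformers import ChineseCLIPModel, ChineseCLIPProcessor

checkpoint = "OFA-Sys/chinese-clip-vit-base-patch16"  # assumed checkpoint name
model = ChineseCLIPModel.from_pretrained(checkpoint)
processor = ChineseCLIPProcessor.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["两只猫", "一只狗"]  # "two cats", "a dog"

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# CLIP-style contrastive scores: image-to-text similarity per prompt.
print(outputs.logits_per_image.softmax(dim=-1))
```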
  12. 29 Nov, 2022 2 commits
  13. 28 Nov, 2022 4 commits
  14. 23 Nov, 2022 1 commit
  15. 21 Nov, 2022 3 commits
    • Add Audio Spectogram Transformer (#19981) · 4973d2a0
      NielsRogge authored
      
      
      * First draft
      
      * Make conversion script work
      
      * Add id2label mapping, run code quality
      
      * Fix copies
      
      * Add first draft of feature extractor
      
      * Update conversion script to use feature extractor
      
      * Make more tests pass
      
      * Add docs
      
      * update input_features to input_values + pad by default to max length
      
      * Fix doc tests
      
      * Add feature extractor tests
      
      * Add proper padding/truncation to feature extractor
      
      * Add support for conversion of all audioset checkpoints
      
      * Improve docs and extend conversion script
      
      * Fix README
      
      * Rename spectogram to spectrogram
      
      * Fix copies
      
      * Add integration test
      
      * Remove dummy conv
      
      * Update to ast
      
      * Update organization
      
      * Fix init
      
      * Rename model to AST
      
      * Add require_torchaudio annotator
      
      * Move import of ASTFeatureExtractor under a is_speech_available
      
      * Fix rebase
      
      * Add pipeline config
      
      * Update name of classifier head
      
      * Rename time_dimension and frequency_dimension for clarity
      
      * Remove print statement
      
      * Fix pipeline test
      
      * Fix pipeline test
      
      * Fix index table
      
      * Fix init
      
      * Fix conversion script
      
      * Rename to ForAudioClassification
      
      * Fix index table
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      4973d2a0
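A minimal audio-classification sketch for the AST classes above; the checkpoint name is an assumption and random noise stands in for a real waveform.

```python
import numpy as np
import torch
from transformers import ASTFeatureExtractor, ASTForAudioClassification

checkpoint = "MIT/ast-finetuned-audioset-10-10-0.4593"  # assumed checkpoint name
feature_extractor = ASTFeatureExtractor.from_pretrained(checkpoint)  # needs torchaudio installed
model = ASTForAudioClassification.from_pretrained(checkpoint)

# One second of noise at 16 kHz as a stand-in waveform.
waveform = np.random.randn(16000).astype(np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```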
    • add MobileNetV1 model (#17799) · d21c97cc
      Matthijs Hollemans authored
      * add model files etc for MobileNetV2
      
      rename files for MobileNetV1
      
      initial implementation of MobileNetV1
      
      fix conversion script
      
      cleanup
      
      write docs
      
      tweaks
      
      fix conversion script
      
      extract hidden states
      
      fix test cases
      
      make fixup
      
      fixup it all
      
      remove main from doc link
      
      fixes
      
      fix tests
      
      fix up
      
      use google org
      
      fix weird assert
      
      * fixup
      
      * use google organization for checkpoints
      d21c97cc
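A minimal image-classification sketch for the MobileNetV1 addition; the checkpoint follows the google organization mentioned above, but the exact name is an assumption.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV1ForImageClassification

checkpoint = "google/mobilenet_v1_1.0_224"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = MobileNetV1ForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```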
  16. 18 Nov, 2022 1 commit
    • Add Neighborhood Attention Transformer (NAT) and Dilated NAT (DiNAT) models (#20219) · fc4a993e
      Ali Hassani authored
      * Add DiNAT
      
      * Adds DiNAT + tests
      
      * Minor fixes
      
      * Added HF model
      
      * Add natten to dependencies.
      
      * Cleanup
      
      * Minor fixup
      
      * Reformat
      
      * Optional NATTEN import.
      
      * Reformat & add doc to _toctree
      
      * Reformat (finally)
      
      * Dummy objects for DiNAT
      
      * Add NAT + minor changes
      
      Adds NAT as its own independent model + docs, tests
      Adds NATTEN to ext deps to ensure ci picks it up.
      
      * Remove natten from `all` and `dev-torch` deps, add manual pip install to ci tests
      
      * Minor fixes.
      
      * Fix READMEs.
      
      * Requested changes to docs + minor fixes.
      
      * Requested changes.
      
      * Add NAT/DiNAT tests to layoutlm_job
      
      * Correction to Dinat doc.
      
      * Requested changes.
      fc4a993e
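A hedged sketch for the DiNAT half of this addition; NATTEN has to be installed manually as the commit notes, and the class and checkpoint names are assumptions.

```python
# DiNAT (and NAT) depend on the optional NATTEN package: pip install natten
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DinatForImageClassification

checkpoint = "shi-labs/dinat-mini-in1k-224"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = DinatForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```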
  17. 17 Nov, 2022 3 commits
  18. 16 Nov, 2022 2 commits
    • [Doctest] Add configuration_deformable_detr.py (#20273) · 5e012f8e
      Saad Mahmud authored
      * Update configuration_deformable_detr.py comment
      
      * Add DeformableDetrConfig to documentation_tests.txt
      5e012f8e
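The kind of configuration example covered by this doctest addition looks roughly like the sketch below; building the model from a default config also needs the timm backbone dependency.

```python
from transformers import DeformableDetrConfig, DeformableDetrModel

# Default configuration; instantiating the model from it pulls in a timm ResNet backbone.
configuration = DeformableDetrConfig()
model = DeformableDetrModel(configuration)

# The configuration can be read back from the model.
configuration = model.config
print(configuration.d_model, configuration.num_queries)
```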
    • Adding ASR pipeline example. (#20226) · 443aaaa1
      Nicolas Patry authored
      * Adding ASR pipeline example.
      
      * De indent.
      
      * Example deindent.
      
      * Fixing example ?
      
      * Putting the example in a more prominent place.
      
      * Fixup.
      
      * Adding the file.
      
      * Adding the doctest to the daily test.
      
      * Fixing comments.
      
      * transcriber name.
      
      * Adding `>>>`.
      
      * Removing assert.
      443aaaa1
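A hedged sketch in the spirit of the ASR pipeline example added here; the checkpoint name and audio path are assumptions, not necessarily what the docs use.

```python
from transformers import pipeline

# Any CTC or sequence-to-sequence speech recognition checkpoint works; this one is an assumption.
transcriber = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

result = transcriber("sample.flac")  # local path or URL to an audio file
print(result["text"])
```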
  19. 15 Nov, 2022 2 commits
    • [CLIP] allow loading projection layer in vision and text model (#18962) · 7f744338
      Suraj Patil authored
      
      
      * allow loading projection in text and vision model
      
      * begin tests
      
      * finish test for CLIPTextModelTest
      
      * style
      
      * add slow tests
      
      * add new classes for projection heads
      
      * remove with_projection
      
      * add in init
      
      * add in doc
      
      * fix tests
      
      * fix some more tests
      
      * fix copies
      
      * fix docs
      
      * remove leftover from fix-copies
      
      * add the head models in IGNORE_NON_AUTO_CONFIGURED
      
      * fix docstr
      
      * fix tests
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * add docstr for models
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      7f744338
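A minimal sketch of the new projection-head text class (a matching vision-side class is added as well); the exact class name and checkpoint are not spelled out in the commit message, so treat them as assumptions.

```python
import torch
from transformers import AutoTokenizer, CLIPTextModelWithProjection

checkpoint = "openai/clip-vit-base-patch32"  # standard OpenAI CLIP checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = CLIPTextModelWithProjection.from_pretrained(checkpoint)

inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
with torch.no_grad():
    text_embeds = model(**inputs).text_embeds  # embeddings after the projection layer

print(text_embeds.shape)  # (batch_size, projection_dim)
```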
    • Add Switch transformers (#19323) · 163ac3d3
      Younes Belkada authored
      
      
      * first commit
      
      * add more comments
      
      * add router v1
      
      * clean up
      
      - remove `tf` modeling files
      
      * clean up
      
      - remove `tf` modeling files
      
      * clean up
      
      * v0 routers
      
      * added more router
      
      - Implemented `ExpertsChooseMaskedRouter`
      
      - added tests
      - 2 more routers to implement
      
      * last router
      
      * improved docstring
      
      - completed the docstring in `router.py`
      - added more args in the config
      
      * v0 sparse mlp
      
      * replace wrong naming
      
      * forward pass run
      
      * update MOE layer
      
      * small router update
      
      * fixup
      
      * consistency
      
      * remove scatter router
      
      * remove abstract layer
      
      * update test and model for integration testing
      
      * v1 conversion
      
      * update
      
      * hardcode hack
      
      * all keys match
      
      * add gin conversion, without additional libraries
      
      * update conversion sctipy
      
      * delete router file
      
      * update tests wrt router deletion
      
      * fix router issues
      
      * update expert code
      
      * update, logits match, code needsREFACTORING
      
      * Refactor code
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
      
      * add generate tests
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      
      * add support for router loss
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
      
      * fix forward error
      
      * refactor a bit
      
      * remove `FlaxSwitchTransformers` modules
      
      * more tests pass
      
      * Update code
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
      
      * fixup
      
      * fix tests
      
      * fix doc
      
      * fix doc + tokenization
      
      * fix tokenizer test
      
      * fix test
      
      * fix loss output
      
      * update code for backward pass
      
      * add loss support
      
      * update documentation
      
      * fix documentation, clean tokenizer
      
      * more doc fix, cleanup example_switch
      
      * fix failing test
      
      * fix test
      
      * fix test
      
      * fix loss issue
      
      * move layer
      
      * update doc and fix router capacity usage
      
      * fixup
      
      * add sparse mlp index for documentation on hub
      
      * fixup
      
      * test sparse mix architecture
      
      * Apply suggestions from code review
      
      * Update docs/source/en/model_doc/switch_transformers.mdx
      
      * fixup on update
      
      * fix tests
      
      * fix another test
      
      * attempt fix
      
      * Update src/transformers/models/switch_transformers/configuration_switch_transformers.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/switch_transformers/convert_switch_transformers_original_flax_checkpoint_to_pytorch.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * try
      
      * all tests pass
      
      * fix jitter noise
      
      * Apply suggestions from code review
      
      * doc tests pass
      
      * Update src/transformers/models/switch_transformers/modeling_switch_transformers.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/switch_transformers/modeling_switch_transformers.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove assert
      
      * change config order
      
      * fix readme japanese
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * remove parallelizable tests + add one liners
      
      * remove ONNX config
      
      * fix nits
      
      - add `T5Tokenizer` in auto mapping
      - remove `Switch Transformers` from ONNX supported models
      
      * remove `_get_router`
      
      * remove asserts
      
      * add check in test for `router_dtype`
      
      * add `SwitchTransformersConfig` in `run_pipeline_test`
      
      * Update tests/pipelines/test_pipelines_summarization.py
      
      * add huge model conversion script
      
      * fix slow tests
      
      - add better casting for `Linear8bitLt`
      - remove `torchscript` tests
      
      * add make dir
      
      * style on new script
      
      * fix nits
      
      - doctest
      - remove `_keys_to_ignore_on_load_unexpected`
      
      * Update src/transformers/models/switch_transformers/configuration_switch_transformers.py
      
      * add google as authors
      
      * fix year
      
      * remove last `assert` statements
      
      * standardize vertical spaces
      
      * fix failing import
      
      * fix another failing test
      
      * Remove strange `authorized_keys`
      
      * removing todo and padding that is never used
      Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
      Co-authored-by: ybelkada <younes@huggingface.co>
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Arthur Zucker <arthur@huggingface.co>
      163ac3d3
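A minimal generation sketch for Switch Transformers; the commit maps the model to the T5 tokenizer, while the conditional-generation class name and checkpoint are assumptions.

```python
import torch
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

checkpoint = "google/switch-base-8"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # T5Tokenizer via the auto mapping noted above
model = SwitchTransformersForConditionalGeneration.from_pretrained(checkpoint)

# T5-style span-corruption prompt; the sparse MLP / router layers are used transparently.
inputs = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```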
  20. 14 Nov, 2022 1 commit
    • add MobileNetV2 model (#17845) · f711d683
      Matthijs Hollemans authored
      * add model files etc for MobileNetV2
      
      * rename files for MobileNetV1
      
      * initial implementation of MobileNetV1
      
      * fix conversion script
      
      * cleanup
      
      * write docs
      
      * tweaks
      
      * fix conversion script
      
      * extract hidden states
      
      * fix test cases
      
      * make fixup
      
      * fixup it all
      
      * rename V1 to V2
      
      * fix checkpoints
      
      * fixup
      
      * implement first block + weight conversion
      
      * add remaining layers
      
      * add output stride and dilation
      
      * fixup
      
      * add tests
      
      * add deeplabv3+ head
      
      * a bit of fixup
      
      * finish deeplab conversion
      
      * add link to doc
      
      * fix issue with JIT trace
      
      in_height and in_width would be Tensor objects during JIT trace, which caused Core ML conversion to fail on the remainder op. By making them ints, the result of the padding calculation becomes a constant value.
      
      * cleanup
      
      * fix order of models
      
      * fix rebase error
      
      * remove main from doc link
      
      * add image processor
      
      * remove old feature extractor
      
      * fix converter + other issues
      
      * fixup
      
      * fix unit test
      
      * add to onnx tests (but these appear broken now)
      
      * add post_process_semantic_segmentation
      
      * use google org
      
      * remove unused imports
      
      * move args
      
      * replace weird assert
      f711d683
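A semantic-segmentation sketch exercising the DeepLabV3+ head and the post_process_semantic_segmentation helper added above; the checkpoint name is an assumption.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV2ForSemanticSegmentation

checkpoint = "google/deeplabv3_mobilenet_v2_1.0_513"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = MobileNetV2ForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn the low-resolution logits into a per-pixel class map at the original image size.
segmentation = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(segmentation.shape)
```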
  21. 10 Nov, 2022 2 commits
  22. 09 Nov, 2022 2 commits
  23. 08 Nov, 2022 3 commits
    • AutoImageProcessor (#20111) · 4eb918e6
      amyeroberts authored
      * AutoImageProcessor skeleton
      
      * Update references
      
      * Add mapping in init
      
      * Add model image processors to __init__ for importing
      
      * Add AutoImageProcessor tests
      
      * Fix up
      
      * Image Processor documentation
      
      * Remove pdb
      
      * Update docs/source/en/model_doc/mobilevit.mdx
      
      * Update docs
      
      * Don't add whitespace on json files
      
      * Remove fixtures
      
      * Move checking model config down
      
      * Fix up
      
      * Add check for image processor
      
      * Remove FeatureExtractorMixin in docstrings
      
      * Rename model_tmpfile to config_tmpfile
      
      * Don't make None if not in image processor map
      4eb918e6
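A minimal sketch of the new AutoImageProcessor entry point; the ViT checkpoint is just one assumed example of a model that ships a preprocessor config.

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor

# Assumed checkpoint; any vision model with a preprocessor config on the Hub works.
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
print(type(processor).__name__, inputs["pixel_values"].shape)
```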
    • Add RocBert (#20013) · efa889d2
      Weiwe Shi authored
      
      
      * add roc_bert
      
      * update roc_bert readme
      
      * code style
      
      * change name and delete unuse file
      
      * udpate model file
      
      * delete unuse log file
      
      * delete tokenizer fast
      
      * reformat code and change model file path
      
      * add RocBertForPreTraining
      
      * update docs
      
      * delete wrong notes
      
      * fix copies
      
      * fix make repo-consistency error
      
      * fix files are not present in the table of contents error
      
      * change RocBert -> RoCBert
      
      * add doc, add detail test
      Co-authored-by: weiweishi <weiweishi@tencent.com>
      efa889d2
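A hedged masked-LM sketch for RoCBert; the checkpoint name is an assumption, and the tokenizer is expected to also emit the shape and pronunciation ids the model consumes.

```python
import torch
from transformers import RoCBertForMaskedLM, RoCBertTokenizer

checkpoint = "weiweishi/roc-bert-base-zh"  # assumed checkpoint name
tokenizer = RoCBertTokenizer.from_pretrained(checkpoint)
model = RoCBertForMaskedLM.from_pretrained(checkpoint)

# Besides input_ids, the tokenizer returns shape and pronunciation ids used by the model.
inputs = tokenizer("这是一个[MASK]。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_positions].argmax(dim=-1)))
```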
    • Add CLIPSeg (#20066) · 25896306
      NielsRogge authored
      
      
      * Add first draft
      
      * Update conversion script
      
      * Improve conversion script
      
      * Improve conversion script some more
      
      * Add conditional embeddings
      
      * Add initial decoder
      
      * Fix activation function of decoder
      
      * Make decoder outputs match original implementation
      
      * Make decoder outputs match original implementation
      
      * Add more copied from statements
      
      * Improve model outputs
      
      * Fix auto tokenizer file
      
      * Fix more tests
      
      * Add test
      
      * Improve README and docs, improve conditional embeddings
      
      * Fix more tests
      
      * Remove print statements
      
      * Remove initial embeddings
      
      * Improve conversion script
      
      * Add interpolation of position embeddings
      
      * Finish addition of interpolation of position embeddings
      
      * Add support for refined checkpoint
      
      * Fix refined checkpoint
      
      * Remove unused parameter
      
      * Improve conversion script
      
      * Add support for training
      
      * Fix conversion script
      
      * Add CLIPSegFeatureExtractor
      
      * Fix processor
      
      * Fix CLIPSegProcessor
      
      * Fix conversion script
      
      * Fix most tests
      
      * Fix equivalence test
      
      * Fix README
      
      * Add model to doc tests
      
      * Use better variable name
      
      * Convert other checkpoint as well
      
      * Update config, add link to paper
      
      * Add docs
      
      * Update organization
      
      * Replace base_model_prefix with clip
      
      * Fix base_model_prefix
      
      * Fix checkpoint of config
      
      * Fix config checkpoint
      
      * Remove file
      
      * Use logits for output
      
      * Fix tests
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      25896306
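A minimal sketch for prompt-based segmentation with the new CLIPSeg classes; the checkpoint name, image URL and prompts are assumptions.

```python
import requests
import torch
from PIL import Image
from transformers import CLIPSegForImageSegmentation, CLIPSegProcessor

checkpoint = "CIDAS/clipseg-rd64-refined"  # assumed checkpoint name
processor = CLIPSegProcessor.from_pretrained(checkpoint)
model = CLIPSegForImageSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompts = ["a cat", "a remote control"]

# One copy of the image per text prompt, since the model predicts one mask per prompt.
inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits.shape)  # one low-resolution mask logit map per prompt
```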
  24. 07 Nov, 2022 1 commit