1. 29 Sep, 2022 12 commits
    • Improve DETR post-processing methods (#19205) · 01eb34ab
      Alara Dirik authored
      * Ensures consistent arguments and outputs with other post-processing methods
      * Adds post_process_semantic_segmentation, post_process_instance_segmentation, post_process_panoptic_segmentation, post_process_object_detection methods to DetrFeatureExtractor
      * Adds deprecation warnings to post_process, post_process_segmentation and post_process_panoptic
      01eb34ab
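For orientation, the DETR-style object-detection post-processing that these methods standardize can be sketched in plain Python: softmax the per-query class logits, drop the trailing "no-object" class, keep detections above a threshold, and rescale normalized (cx, cy, w, h) boxes to absolute (x0, y0, x1, y1) pixel corners. The function below is an illustrative stand-in, not the library's actual implementation:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of floats.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def post_process_object_detection(class_logits, boxes_cxcywh, image_size, threshold=0.5):
    """Sketch of DETR-style detection post-processing.

    class_logits: per-query class scores, last index = 'no-object' class.
    boxes_cxcywh: per-query normalized (cx, cy, w, h) boxes in [0, 1].
    image_size:   (height, width) of the original image.
    """
    height, width = image_size
    results = []
    for logits, (cx, cy, w, h) in zip(class_logits, boxes_cxcywh):
        probs = softmax(logits)[:-1]  # drop the trailing no-object class
        score = max(probs)
        label = probs.index(score)
        if score < threshold:
            continue
        # Convert center format to corner format and rescale to pixels.
        box = [(cx - w / 2) * width, (cy - h / 2) * height,
               (cx + w / 2) * width, (cy + h / 2) * height]
        results.append({"score": score, "label": label, "box": box})
    return results
```

The real methods additionally take batched tensors and return one dict per image; the shape of the output (`scores`, `labels`, `boxes`) is what the PR makes consistent across post-processing methods.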
    • Fix test fetching for examples (#19237) · 655f72a6
      Sylvain Gugger authored
      * Fix test fetching for examples
      
      * Fake example modif
      
      * Debug statements
      
      * Typo
      
      * You need to persist the file...
      
      * Revert change in example
      
      * Remove debug statements
      655f72a6
    • atturaioe
      b79028f0
    • Use `hf_raise_for_status` instead of deprecated `_raise_for_status` (#19244) · 902d30b3
      Lucain authored
      * Use `hf_raise_for_status` instead of deprecated `_raise_for_status` from huggingface_hub
      
      * bump huggingface_hub to 0.10.0 + make deps_table_update
      902d30b3
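The pattern behind this change can be sketched without the library: raise on 4xx/5xx responses while surfacing the server-side request id in the error, so failures are reportable. `HfHubHTTPError` is the error type `huggingface_hub` raises, but the body below (and the plain-dict response stand-in) is illustrative, not the library's code:

```python
class HfHubHTTPError(Exception):
    """Illustrative stand-in for huggingface_hub's enriched HTTP error."""
    def __init__(self, message, request_id=None):
        super().__init__(message)
        self.request_id = request_id

def hf_raise_for_status_sketch(response):
    # Sketch only: the real helper wraps a requests.Response object.
    # Raise on 4xx/5xx and attach the X-Request-Id header, which is
    # what made the public helper preferable to the private one.
    if response["status_code"] >= 400:
        raise HfHubHTTPError(
            f"HTTP {response['status_code']} for {response['url']}",
            request_id=response["headers"].get("x-request-id"),
        )
```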
    • Fix opt softmax small nit (#19243) · 3a27ba3d
      Younes Belkada authored
      * fix opt softmax nit
      
      - Use the same logic as 1eb09537550734a783c194e416029cb9bc4cb119 for consistency
      
      * Update src/transformers/models/opt/modeling_opt.py
      3a27ba3d
    • Fix `m2m_100.mdx` doc example missing `labels` (#19149) · ba9e336f
      mustapha ajeghrir authored
      The `labels` variable is not defined, the `model_inputs` already contain this information.
      ba9e336f
    • [TensorFlow] Adding GroupViT (#18020) · 0dc7b3a7
      Aritra Roy Gosthipaty authored
      
      
      * chore: initial commit
      
      * chore: adding util methods
      
      yet to work on the nn.functional.interpolate port with align_corners=True
      
      * chore: refactor the utils
      
      * used tf.compat.v1.image.resize to align with the F.interpolate function
      * added type hints to the method signatures
      * added references to the gists where one-to-one alignment of torch and tf has been shown
      
      * chore: adding the layers
      
      * chore: porting all the layers from torch to tf
      
      This is the initial draft, nothing is tested yet.
      
      * chore: aligning the layers with reference to tf clip
      
      * chore: aligning the modules
      
      * added demarcation comments
      * added "copied from" and "adapted from" comments
      
      * chore: aligning with CLIP
      
      * chore: wrangling the layers to keep it tf compatible
      
      * chore: aligning the names of the layers for porting
      
      * chore: style changes
      
      * chore: adding docs and inits
      
      * chore: adding tfp dependencies
      
      the code is taken from TAPAS
      
      * chore: initial commit for testing
      
      * chore: aligning the vision embeddings with the ViT implementation
      
      * chore: changing model prefix
      
      * chore: fixing the name of the model and the layer normalization test case
      
      * chore: every test passes but the slow ones
      
      * chore: fix style and integration test
      
      * chore: moving comments below decorators
      
      * chore: make fixup and fix-copies changes
      
      * chore: adding the Vision and Text Model to check_repo
      
      * chore: modifying the prefix name to align it with the torch implementation
      
      * chore: fix typo in configuration
      
      * chore: changing the name of the model variable
      
      * chore: adding segmentation flag
      
      * chore: gante's review
      
      * chore: style refactor
      
      * chore: amy review
      
      * chore: adding shape_list to parts that have been copied from other snippets
      
      * chore: init batchnorm with torch defaults
      
      * chore: adding shape_list to pass the tests
      
      * test fix: adding seed as 0
      
      * set seed
      
      * chore: changing the straight-through trick to fix negative dimensions
      
      * chore: adding a dimension to the loss
      
      * chore: adding reviewers and contributors names to the docs
      
      * chore: added changes after review
      
      * chore: code quality fixup
      
      * chore: fixing the segmentation snippet
      
      * chore: adding  to the layer calls
      
      * chore: changing int32 to int64 for inputs of serving
      
      * chore: review changes
      
      * chore: style changes
      
      * chore: remove from_pt=True
      
      * fix: repo consistency
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      0dc7b3a7
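A recurring theme in this port is matching `F.interpolate(..., align_corners=True)` with `tf.compat.v1.image.resize`. The semantics of align_corners can be shown in one dimension: the first and last input samples map exactly onto the first and last outputs, and everything in between is linearly interpolated. The helper below is a pure-Python illustration of that convention, not code from the PR:

```python
import math

def interpolate_align_corners(values, new_len):
    """1-D linear interpolation with align_corners=True semantics:
    input endpoints land exactly on output endpoints."""
    old_len = len(values)
    if new_len == 1:
        return [values[0]]
    scale = (old_len - 1) / (new_len - 1)  # endpoints line up exactly
    out = []
    for i in range(new_len):
        pos = i * scale
        lo = int(math.floor(pos))
        hi = min(lo + 1, old_len - 1)
        frac = pos - lo
        out.append(values[lo] * (1 - frac) + values[hi] * frac)
    return out
```

With align_corners=False (the TF2 `tf.image.resize` default), sample positions are treated as pixel centers instead, which is why the port falls back to the TF1 resize op to match PyTorch numerically.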
    • Michael Benayoun
    • XGLM - Fix Softmax NaNs when using FP16 (#18057) · 9d732fd2
      Gabriele Sarti authored
      
      
      * fix fp16 for xglm
      
      * Removed misleading comment
      
      * Fix undefined variable
      Co-authored-by: Gabriele Sarti <gsarti@amazon.com>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      9d732fd2
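This XGLM fix (like the OPT softmax nit above) addresses NaNs from computing softmax in half precision: exp() of moderately large attention scores overflows fp16's ~65504 range to inf, and inf/inf is NaN. The remedy is to do the softmax in float32 (or subtract the row max first). Below is a minimal sketch modeling fp16 overflow with a saturation helper; `FP16_MAX` and `to_fp16` are illustrative devices, not the actual patch:

```python
import math

FP16_MAX = 65504.0  # largest finite float16 value

def to_fp16(x):
    # Crude overflow model: values beyond the fp16 range become +/-inf.
    if x > FP16_MAX:
        return math.inf
    if x < -FP16_MAX:
        return -math.inf
    return x

def softmax_fp16(scores):
    """Naive softmax with exp 'in fp16': overflows for large scores."""
    exps = [to_fp16(math.exp(s)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]  # inf / inf -> nan

def softmax_fp32(scores):
    """The fix: upcast / subtract the max before exponentiating."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```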
    • Yih-Dar
      99c32493
    • Focus doc around preprocessing classes (#18768) · 6957350c
      Steven Liu authored
      * 📝 reframe docs around preprocessing classes
      
      * small edits
      
      * edits and review
      
      * fix typo
      
      * apply review
      
      * clarify processor
      6957350c
    • Move AutoClasses under Main Classes (#19163) · 990936a8
      Steven Liu authored
      * move autoclasses to main classes
      
      * keep auto.mdx in model_doc
      990936a8
  2. 28 Sep, 2022 8 commits
  3. 27 Sep, 2022 8 commits
  4. 26 Sep, 2022 9 commits
  5. 23 Sep, 2022 3 commits