1. 18 Nov, 2022 14 commits
    • Add Neighborhood Attention Transformer (NAT) and Dilated NAT (DiNAT) models (#20219) · fc4a993e
      Ali Hassani authored
      * Add DiNAT
      
      * Adds DiNAT + tests
      
      * Minor fixes
      
      * Added HF model
      
      * Add natten to dependencies.
      
      * Cleanup
      
      * Minor fixup
      
      * Reformat
      
      * Optional NATTEN import.
      
      * Reformat & add doc to _toctree
      
      * Reformat (finally)
      
      * Dummy objects for DiNAT
      
      * Add NAT + minor changes
      
      Adds NAT as its own independent model, plus docs and tests.
      Adds NATTEN to the extra deps to ensure CI picks it up.
      
      * Remove natten from `all` and `dev-torch` deps, add manual pip install to ci tests
      
      * Minor fixes.
      
      * Fix READMEs.
      
      * Requested changes to docs + minor fixes.
      
      * Requested changes.
      
      * Add NAT/DiNAT tests to layoutlm_job
      
      * Correction to Dinat doc.
      
      * Requested changes.
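      A minimal usage sketch for the new models, assuming the `DinatForImageClassification` class and the `shi-labs/dinat-mini-in1k-224` checkpoint referenced by this PR's docs; NATTEN is an optional dependency and must be installed separately.

```python
# Hedged sketch: image classification with the newly added DiNAT model.
# Assumes the shi-labs/dinat-mini-in1k-224 checkpoint exists and that NATTEN
# is installed (e.g. `pip install natten`), since it is an optional dependency.
import torch
from PIL import Image
from transformers import AutoImageProcessor, DinatForImageClassification

checkpoint = "shi-labs/dinat-mini-in1k-224"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = DinatForImageClassification.from_pretrained(checkpoint)

image = Image.open("cat.png").convert("RGB")  # any local RGB image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```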
    • TF: future proof our keras imports (#20317) · 8d6de0b9
      Joao Gante authored
      * future proof our tf code
      
      * parse tf versions
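      A sketch of the version-gated import pattern the commit title describes (parse the TF version before deciding how to reach Keras). The 2.11 cut-off and the standalone `keras` fallback are assumptions for illustration, not necessarily what the repo does.

```python
# Hedged sketch of a "future proof" Keras import: choose the import path based
# on the installed TensorFlow version instead of assuming tf.keras never moves.
import tensorflow as tf
from packaging import version

if version.parse(tf.__version__).release >= (2, 11):
    # Assumption: newer TF versions ship Keras as a standalone package.
    import keras
else:
    # Older TF versions bundle Keras under tf.keras.
    keras = tf.keras

print(keras.__version__)
```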
    • Remove double brackets (#20307) · b2c863a3
      Steven Liu authored
      * remove double brackets
      
      * oops get other bracket
    • Pin TF 2.10.1 for Push CI (#20319) · f10cdba2
      Yih-Dar authored
      
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
    • Fix flakey test with seed (#20318) · 9d1ef009
      Zachary Mueller authored
    • [Proposal] Breaking change `zero-shot-object-detection` for improved consistency. (#20280) · 8e777b3b
      Nicolas Patry authored
      * [Proposal] Breaking change `zero-shot-object-detection` for improved
      consistency.
      
      This is a proposal to modify the output of `zero-shot-object-detection`
      to provide better alignment with other pipelines.
      
      The output is now strictly the same as `object-detection` whereas before
      it would output lists of lists.
      
      The name `candidate_labels` is used throughout for consistency with
      other `zero-shot` pipelines.
      
      The pipeline is changed to `ChunkPipeline` to support batching cleanly.
      
      This removes all the lists and list of lists shenanigans, it's now a
      matter of the base pipeline handling all this not this specific one.
      
      **Breaking change**: complex calls such as `pipe(images=[image1, image2],
      text_queries=[candidates1, candidates2])` are no longer supported; when dealing
      with lists and/or datasets, only
      `pipe([{"image": image1, "candidate_labels": candidates1}, {"image": image2, "candidate_labels": candidates2}])`
      is accepted (see the usage sketch after this entry).
      We could keep the old form, but it would add a lot of complexity to the code
      base. Since the pipeline is rather young, I'd rather break it to keep the
      code simpler, but we can revert this.
      
      **Breaking change**: The name of the argument is now `image` instead of
      `images`, since it expects a single image by default. This is revertible
      like the previous one.
      
      **Breaking change**: The output type is now simplified and flattened:
      
      `pipe(inputs) == [{**object1}, {**object2}]`
      instead of the previous
      `pipe(inputs) == [[{**object1}, {**object1}], [{**object2}]]`
      where the detections were grouped by candidate label within nested lists.
      IMHO this is not really desirable, since it could output empty lists and
      only adds superfluous indirection compared to `object-detection`.
      
      The results are largely unchanged, but the computation does change since
      batching is now handled by the pipeline itself. It **did** change the
      results for the small models, so there seems to be a real difference in
      how the models handle this.
      
      * Fixing the doctests.
      
      * Behind is_torch_available.
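      A minimal sketch of the calling convention described above; the `google/owlvit-base-patch32` checkpoint and the placeholder image paths are assumptions for illustration.

```python
# Hedged sketch of the new zero-shot-object-detection input/output format.
from transformers import pipeline

detector = pipeline(
    "zero-shot-object-detection", model="google/owlvit-base-patch32"  # assumed checkpoint
)

# Single example: one image plus its candidate labels.
detections = detector("street.png", candidate_labels=["car", "bicycle", "person"])
# Flat list of detections, each a dict with "score", "label" and "box",
# matching the object-detection pipeline output.
print(detections[0])

# Several examples: one dict per image, as the new input format requires.
batched = detector(
    [
        {"image": "street.png", "candidate_labels": ["car", "bicycle"]},
        {"image": "park.png", "candidate_labels": ["dog", "tree"]},
    ]
)
```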
    • Add AnyPrecisionAdamW optimizer (#18961) · 84c9cc6d
      atturaioe authored
      * Add AnyPrecisionAdamW optimizer
      
      * Add optim_args argument to TrainingArgs
      
      * Add tests for AnyPrecisionOptimizer
      
      * Change AnyPrecisionAdam default params to float32
      
      * Move default_anyprecision_kwargs in trainer test
      
      * Rename AnyPrecisionAdamW
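      A hedged sketch of selecting the new optimizer through training arguments; the `"adamw_anyprecision"` optim value and the key=value syntax of `optim_args` are assumptions based on this PR's description, not verified API.

```python
# Hedged sketch: pick AnyPrecisionAdamW via the new optim_args field.
# AnyPrecisionAdamW itself is expected to come from the torchdistx package,
# which must be installed separately; all names below are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    optim="adamw_anyprecision",  # assumed optimizer name
    # Extra optimizer arguments passed as a comma-separated string;
    # this PR changed the defaults to float32.
    optim_args="use_kahan_summation=True,momentum_dtype=bfloat16,variance_dtype=bfloat16",
)

# `args` would then be passed to a Trainer together with a model and dataset.
```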
    • Also pin TensorFlow CPU · 37e01633
      Sylvain Gugger authored
    • Pin to the right version... · a3f74580
      Sylvain Gugger authored
    • Pin TensorFlow (#20313) · f7ab8c42
      Sylvain Gugger authored
    • Add padding image transformation (#19838) · b9826942
      amyeroberts authored
      * Add padding transformation
      
      * Add in upstream changes
      
      * Update tests & docs
      
      * Code formatting tuples in docstring
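      The padding transformation amounts to growing an image array with constant values, e.g. so variable-sized images can be batched. A generic numpy sketch of that operation; it illustrates the idea only and does not reproduce the exact transformers helper signature.

```python
# Generic sketch of an image padding transformation: pad an (H, W, C) array
# on the bottom/right with a constant value until it reaches a target size.
import numpy as np


def pad_to_size(image: np.ndarray, target_height: int, target_width: int,
                constant_value: float = 0.0) -> np.ndarray:
    """Pad an (H, W, C) image on the bottom/right to the target size."""
    height, width = image.shape[:2]
    pad_h = max(target_height - height, 0)
    pad_w = max(target_width - width, 0)
    return np.pad(
        image,
        ((0, pad_h), (0, pad_w), (0, 0)),  # (height, width, channels)
        mode="constant",
        constant_values=constant_value,
    )


image = np.ones((3, 5, 3), dtype=np.float32)
print(pad_to_size(image, 4, 6).shape)  # (4, 6, 3)
```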
    • [ASR Examples] Update README for Whisper (#20230) · c29a2f7c
      Sanchit Gandhi authored
      * [ASR Examples] Update README for seq2seq
      
      * add language info
      
      * add training results
      
      * re-word
    • 95754b47
      Arthur authored
    • Fix blender bot missleading doc (#20301) · 532e60be
      Arthur authored
      * fix the doc to specify that add_prefix_space = False
      
      * add correct expected output
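      A small sketch of the tokenizer setting the doc fix refers to; the `facebook/blenderbot-400M-distill` checkpoint is an assumption for illustration, and the expected token ids are not reproduced here.

```python
# Hedged sketch: instantiate the Blenderbot tokenizer with
# add_prefix_space=False, as the corrected docstring specifies.
from transformers import BlenderbotTokenizer

tokenizer = BlenderbotTokenizer.from_pretrained(
    "facebook/blenderbot-400M-distill",  # assumed checkpoint
    add_prefix_space=False,
)
print(tokenizer("Hello world")["input_ids"])
```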
  2. 17 Nov, 2022 14 commits
  3. 16 Nov, 2022 12 commits