1. 13 Dec, 2022 4 commits
    • 
      in the resize() function in image_transforms.py, the line 267: (#20728) · 30d8919a
      dhansmair authored
      `image = to_channel_dimension_format(image, ChannelDimension.LAST)`
      is redundant as this same conversion is also applied in to_pil_image().
      
      This redundant call actually makes the training fail in rare cases.
      The problem can be reproduced with the following code snippet:
      ```python
      import torch
      from transformers.models.clip import CLIPFeatureExtractor

      vision_processor = CLIPFeatureExtractor.from_pretrained('openai/clip-vit-large-patch14')
      images = [
          torch.rand(size=(3, 2, 10), dtype=torch.float),
          torch.rand(size=(3, 10, 1), dtype=torch.float),
          torch.rand(size=(3, 1, 10), dtype=torch.float)
      ]
      for image in images:
          processed_image = vision_processor(images=image, return_tensors="pt")['pixel_values']
          print(processed_image.shape)
          assert processed_image.shape == torch.Size([1, 3, 224, 224])
      ```
      
      The last image has a height of 1 pixel.
      The second call to to_channel_dimension_format() transposes the image, and the
      height dimension is then wrongly treated as the channels dimension.
      As a result, the subsequent normalize() step raises an exception.
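      The failure mode can be sketched outside of transformers. Below is a minimal, hypothetical re-implementation of a size-based channel-inference heuristic (the helpers `infer_channel_axis` and `to_channels_last` are illustrative names, not the library's actual API); it shows how applying a channels-last conversion twice mangles a height-1 image:

      ```python
      import numpy as np

      # Hypothetical heuristic for illustration: assume channels are the
      # first axis whose size is 1 or 3 (a common inference strategy).
      def infer_channel_axis(image):
          for axis, size in enumerate(image.shape):
              if size in (1, 3):
                  return axis
          raise ValueError("could not infer channel axis")

      def to_channels_last(image):
          """Move the inferred channel axis to the last position."""
          return np.moveaxis(image, infer_channel_axis(image), -1)

      # A 1-pixel-tall image in channels-first layout:
      # (channels=3, height=1, width=10).
      image = np.zeros((3, 1, 10))

      once = to_channels_last(image)   # correctly becomes (1, 10, 3)
      twice = to_channels_last(once)   # height-1 axis is mistaken for channels

      print(once.shape)   # (1, 10, 3)
      print(twice.shape)  # (10, 3, 1) -- layout is now wrong
      ```

      The first conversion is correct; the second mistakes the height-1 axis for the channel axis and transposes again, which is why the redundant call before to_pil_image() breaks the later normalize() step.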
    • 
      Fix AdamWeightDecay for TF 2.11 (#20735) · 4f1788b3
      Matt authored
      * Fix AdamWeightDecay for TF
      
      * Fix AdamWeightDecay for TF
      
      * make fixup
    • 
      Change a logic in pipeline test regarding TF (#20710) · a12c5cbc
      Yih-Dar authored
      
      
      * Fix the pipeline test regarding TF
      
      * Fix the pipeline test regarding TF
      
      * update comment
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
    • 
      Add `keep_in_fp32_modules` support (#20683) · 1af4bee8
      Younes Belkada authored
      
      
      * add `keep_in_fp32_modules` support
      
      * pass it as class attribute
      
      * few modifs
      
      - make tests `slow`
      - fix logic
      
      * better logic
      
      * fix failing test
      
      * `bfloat16` support
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fix
      
      * simplify tests
      
      * simplify tests
      
      * fix test
      
      * modify message
      
      * more checks
      
      * fix failing tests
      
      * add more conditions
      
      - add `is_accelerate_available`
      - fix pipeline tests that failed
      
      * add suggestions
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fix failing `bnb` test
      
      * add last safety checker
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
  2. 12 Dec, 2022 18 commits
  3. 09 Dec, 2022 7 commits
  4. 08 Dec, 2022 11 commits