1. 15 Nov, 2023 2 commits
  2. 14 Nov, 2023 16 commits
  3. 13 Nov, 2023 17 commits
  4. 10 Nov, 2023 5 commits
    • Make `examples_torch_job` faster (#27437) · 7ee995fd
      Yih-Dar authored

      fix

      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
    • Normalize floating point cast (#27249) · ed115b34
      amyeroberts authored
      * Normalize image - cast input images to float32.
      
      This is done if the input image isn't of a floating type. Issues can occur when do_rescale=False is set in an image processor: the image passed to the normalize call is then of type uint8, because of the type casting through the PIL image library that happens in resize. As the mean and std values are cast to match the image dtype, NaNs and infs can appear in the normalized image, since the floating-point values used to divide the image are truncated to 0.
      
      The mean and std values are cast because they previously defaulted to float32; if the input image was of type float16, normalization would then upcast the image to float32 as well.
      
      * Add tests
      
      * Remove float32 cast
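
      A minimal sketch of the failure mode and the fix, assuming a simplified
      normalize helper (the actual image-processor code differs):

        import numpy as np

        def normalize(image, mean, std):
            # The fix: cast non-float images to float32 before normalizing,
            # so mean/std are not truncated to 0 by an integer dtype.
            if not np.issubdtype(image.dtype, np.floating):
                image = image.astype(np.float32)
            dtype = image.dtype  # mean/std are cast to the (now floating) image dtype
            return (image - np.asarray(mean, dtype=dtype)) / np.asarray(std, dtype=dtype)

        # Without the cast, std=0.5 would truncate to 0 in uint8 and divide by zero.
        img = np.full((2, 2), 128, dtype=np.uint8)
        print(normalize(img, mean=0.5, std=0.5))  # finite float32 values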
    • Add Phi-1 and Phi-1_5 (#26170) · e1c3ac25
      Susnato Dhar authored
      * only dir not even init
      
      * init
      
      * tokenizer removed and reference of codegen added
      
      * modeling file updated a lot; remaining: app_rotary_emb
      
      * conversion script done
      
      * conversion script fixed, a lot of factoring done and most tests pass
      
      * added token_clf and extractive_QA_head
      
      * integration tests pass
      
      * flash attn tests pass!
      
      * config done
      
      * more docs in modeling file
      
      * some style fix
      
      * style and others
      
      * doc test error fix
      
      * more doc fix
      
      * some attention fixes
      
      * most fixes
      
      * style and other fixes
      
      * docs fix and config
      
      * doc fix
      
      * some comments
      
      * conversion script updated
      
      * conversion script updated
      
      * Revert "conversion script updated"
      
      This reverts commit e92378c54084ec0747041b113083d1746ecb6c7f.
      
      * final comments
      
      * add Phi to language_modeling.md
      
      * edit phi.md file
      
      * rebase and fix
      
      * removed phi-1.5 example
      
      * changed model_type from 'phi'->'mixformer-sequential'
      
      * small change
      
      * small change
      
      * revert small change
      
      * changed mixformer-sequential->phi
      
      * small change
      
      * added phi-1.5 example instead of phi-1
      
      * doc test might pass now
      
      * rebase and small change
      
      * added the dropout layer
      
      * more fixes
      
      * modified .md file
      
      * very very small doc change
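
      A hedged usage sketch for the model added here; the checkpoint name is an
      assumption, not taken from this PR:

        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Checkpoint name is an assumption; check the Hub for actual Phi checkpoints.
        model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
        tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

        inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=30)
        print(tokenizer.decode(outputs[0]))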
    • At most 2 GPUs for CI (#27435) · 00dc8562
      Yih-Dar authored

      At most 2 GPUs

      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
    • [`AttentionMaskConverter`] Fix-mask-inf (#27114) · 68afca3e
      Arthur authored
      * fix?
      
      * actual fix
      
      * fixups
      
      * add dataclass to the attention mask converter
      
      * refine testing suite
      
      * make sure there are no overflows
      
      * update the test
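
      The overflow concern is the usual one: filling masked attention scores with
      -inf (or a value below the dtype's minimum) can produce NaNs after softmax in
      fp16/bf16. A minimal sketch of the standard remedy, clamping the fill value
      to torch.finfo(dtype).min (illustrative only, not the converter's actual code):

        import torch

        def masked_fill_min(scores, mask):
            # Use the dtype's minimum instead of -inf so low-precision softmax stays finite.
            min_value = torch.finfo(scores.dtype).min
            return scores.masked_fill(mask, min_value)

        scores = torch.randn(1, 4, 4, dtype=torch.float16)
        mask = torch.tensor([[False, False, True, True]]).expand(4, 4)[None]
        probs = masked_fill_min(scores, mask).softmax(dim=-1)
        print(torch.isnan(probs).any())  # tensor(False)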