1. 13 May, 2024 1 commit
    • CI: update to ROCm 6.0.2 and test MI300 (#30266) · 37bba2a3
      fxmarty authored
      
      
      * update to ROCm 6.0.2 and test MI300
      
      * add callers for mi300
      
      * update dockerfile
      
      * fix trainer tests
      
      * remove apex
      
      * style
      
      * Update tests/trainer/test_trainer_seq2seq.py
      
      * Update tests/trainer/test_trainer_seq2seq.py
      
      * Update tests/trainer/test_trainer_seq2seq.py
      
      * Update tests/trainer/test_trainer_seq2seq.py
      
      * update to torch 2.3
      
      * add workflow dispatch target
      
      * we may need branches: mi300-ci after all
      
      * nit
      
      * fix docker build
      
      * nit
      
      * add check runner
      
      * remove docker-gpu
      
      * fix issues
      
      * fix
      
      ---------
      Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
  2. 07 May, 2024 1 commit
  3. 06 May, 2024 1 commit
    • [`CI update`] Try to use dockers and no cache (#29202) · 307f632b
      Arthur authored
      
      
      * change cis
      
      * nits
      
      * update
      
      * minor updates
      
      * [push-ci-image]
      
      * nit [push-ci-image]
      
      * nitsssss
      
      * [build-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * both
      
      * [push-ci-image]
      
      * this?
      
      * [push-ci-image]
      
      * pypi-kenlm needs g++
      
      * [push-ci-image]
      
      * nit
      
      * more nits [push-ci-image]
      
      * nits [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * add vision
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * add new dummy file but will need to update them [push-ci-image]
      
      * [push-ci-image]
      
      * show package size as well
      
      * [push-ci-image]
      
      * potentially ignore failures
      
      * workflow updates
      
      * nits [push-ci-image]
      
      * [push-ci-image]
      
      * fix consistency
      
      * clean nvidia triton
      
      * also show big packages [push-ci-image]
      
      * nit
      
      * update
      
      * another one
      
      * line escape?
      
      * add accelerate [push-ci-image]
      
      * updates [push-ci-image]
      
      * nits to run tests, no push-ci
      
      * try to parse skip reason to make sure nothing is skipped that should not be skipped
      
      * nit?
      
      * always show skipped reasons
      
      * nits
      
      * better parsing of the test outputs
      
      * action="store_true",
      
      * failure on failed
      
      * show matched
      
      * debug
      
      * update short summary with skipped, failed and errors
      
      * nits
      
      * nits
      
      * cool updates
      
      * remove docbuilder
      
      * fix
      
      * always run checks
      
      * oups
      
      * nits
      
      * don't error out on library printing
      
      * non-zero exit codes
      
      * no warning
      
      * nit
      
      * WAT?
      
      * format nit
      
      * [push-ci-image]
      
      * fail if fail is needed
      
      * [push-ci-image]
      
      * sound file for torch light?
      
      * [push-ci-image]
      
      * order is important [push-ci-image]
      
      * [push-ci-image] reduce even further
      
      * [push-ci-image]
      
      * use pytest rich !
      
      * yes [push-ci-image]
      
      * oupsy
      
      * bring back the full traceback, but pytest rich should help
      
      * nit
      
      * [push-ci-image]
      
      * re run
      
      * nit
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * empty push to trigger
      
      * [push-ci-image]
      
      * nit? [push-ci-image]
      
      * empty
      
      * try to install timm with no deps
      
      * [push-ci-image]
      
      * oups [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image] ?
      
      * [push-ci-image] open ssh client for git checkout fast
      
      * empty for torch light
      
      * updates [push-ci-image]
      
      * nit
      
      * @v4 for checkout
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * fix fetch tests with parallelism
      
      * [push-ci-image]
      
      * more parallelism
      
      * nit
      
      * more nits
      
      * empty to re-trigger
      
      * empty to re-trigger
      
      * split by timing
      
      * did not work with previous commit
      
      * junit.xml
      
      * no path?
      
      * mmm this?
      
      * junitxml format
      
      * split by timing
      
      * nit
      
      * fix junit family
      
      * now we can test if the xunit1 is compatible!
      
      * this?
      
      * fully list tests
      
      * update
      
      * update
      
      * oups
      
      * finally
      
      * use classname
      
      * remove working directory to make sure the path does not interfere
      
      * okay now junit should have the correct path
      
      * name split?
      
      * sort by classname is what make most sense
      
      * some testing
      
      * name
      
      * oups
      
      * test something fun
      
      * autodetect
      
      * 18?
      
      * nit
      
      * file size?
      
      * uip
      
      * 4 is best
      
      * update to see versions
      
      * better print
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * please install the correct keras version
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * uv is fucking me up
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * nits
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * install issues and pins
      
      * tapas as well
      
      * nits
      
      * more parallelism
      
      * short tb
      
      * soundfile
      
      * soundfile
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * oups
      
      * [push-ci-image]
      
      * fix some things
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * use torch-light for hub
      
      * small git lfs for hub job
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * fix tf tapas
      
      * [push-ci-image]
      
      * nits
      
      * [push-ci-image]
      
      * don't update the test
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * no use them
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * update tf proba
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * woops
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * test with built dockers
      
      * [push-ci-image]
      
      * skip annoying tests
      
      * revert fix copy
      
      * update test values
      
      * update
      
      * last skip and fixup
      
      * nit
      
      * ALL GOOOD
      
      * quality
      
      * Update tests/models/layoutlmv2/test_image_processing_layoutlmv2.py
      
      * Update docker/quality.dockerfile
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * Update src/transformers/models/tapas/modeling_tf_tapas.py
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * use torch-speed
      
      * updates
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      * fuck ken-lm [push-ci-image]
      
      * [push-ci-image]
      
      * [push-ci-image]
      
      ---------
      Co-authored-by: Lysandre Debut <hi@lysand.re>
  4. 02 May, 2024 1 commit
    • Add HQQ quantization support (#29637) · 59952994
      mobicham authored
      
      
      * update HQQ transformers integration
      
      * push import_utils.py
      
      * add force_hooks check in modeling_utils.py
      
      * fix | with Optional
      
      * force bias as param
      
      * check bias is Tensor
      
      * force forward for multi-gpu
      
      * review fixes pass
      
      * remove torch grad()
      
      * if any key in linear_tags fix
      
      * add cpu/disk check
      
      * isinstance return
      
      * add multigpu test + refactor tests
      
      * clean hqq_utils imports in hqq.py
      
      * clean hqq_utils imports in quantizer_hqq.py
      
      * delete hqq_utils.py
      
      * Delete src/transformers/utils/hqq_utils.py
      
      * ruff init
      
      * remove torch.float16 from __init__ in test
      
      * refactor test
      
      * isinstance -> type in quantizer_hqq.py
      
      * cpu/disk device_map check in quantizer_hqq.py
      
      * remove type(module) nn.linear check in quantizer_hqq.py
      
      * add BaseQuantizeConfig import inside HqqConfig init
      
      * remove hqq import in hqq.py
      
      * remove accelerate import from test_hqq.py
      
      * quant config.py doc update
      
      * add hqqconfig to main_classes doc
      
      * make style
      
      * __init__ fix
      
      * ruff __init__
      
      * skip_modules list
      
      * hqqconfig format fix
      
      * hqqconfig doc fix
      
      * hqqconfig doc fix
      
      * hqqconfig doc fix
      
      * hqqconfig doc fix
      
      * hqqconfig doc fix
      
      * hqqconfig doc fix
      
      * hqqconfig doc fix
      
      * hqqconfig doc fix
      
      * hqqconfig doc fix
      
      * test_hqq.py remove mistral comment
      
      * remove self.using_multi_gpu is False
      
      * torch_dtype default val set and logger.info
      
      * hqq.py isinstance fix
      
      * remove torch=None
      
      * torch_device test_hqq
      
      * rename test_hqq
      
      * MODEL_ID in test_hqq
      
      * quantizer_hqq setattr fix
      
      * quantizer_hqq typo fix
      
      * imports quantizer_hqq.py
      
      * isinstance quantizer_hqq
      
      * hqq_layer.bias reformat quantizer_hqq
      
      * Step 2 as comment in quantizer_hqq
      
      * prepare_for_hqq_linear() comment
      
      * keep_in_fp32_modules fix
      
      * HqqHfQuantizer reformat
      
      * quantization.md hqqconfig
      
      * quantization.md model example reformat
      
      * quantization.md # space
      
      * quantization.md space   })
      
      * quantization.md space   })
      
      * quantization_config fix doc
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * axis value check in quantization_config
      
      * format
      
      * dynamic config explanation
      
      * quant config method in quantization.md
      
      * remove shard-level progress
      
      * .cuda fix modeling_utils
      
      * test_hqq fixes
      
      * make fix-copies
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
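      The commit above adds `HqqConfig` and routes it through the `quantization_config` argument of `from_pretrained`. A minimal usage sketch, following the transformers quantization docs (the model id is illustrative, and the separate `hqq` package must be installed):

      ```python
      from transformers import AutoModelForCausalLM, HqqConfig

      # 4-bit HQQ quantization with per-group scaling over groups of 64 weights
      quant_config = HqqConfig(nbits=4, group_size=64)

      # Linear layers are quantized on the fly while the checkpoint loads
      model = AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-2-7b-hf",  # illustrative model id
          torch_dtype="auto",
          device_map="cuda",
          quantization_config=quant_config,
      )
      ```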
  5. 23 Apr, 2024 1 commit
  6. 22 Apr, 2024 1 commit
  7. 16 Apr, 2024 1 commit
  8. 10 Apr, 2024 1 commit
  9. 09 Apr, 2024 1 commit
    • Fix quantization tests (#29914) · 58a939c6
      Marc Sun authored
      * revert back to torch 2.1.1
      
      * run test
      
      * switch to torch 2.2.1
      
      * update dockerfile
      
      * fix awq tests
      
      * fix test
      
      * run quanto tests
      
      * update tests
      
      * split quantization tests
      
      * fix
      
      * fix again
      
      * final fix
      
      * fix report artifact
      
      * build docker again
      
      * Revert "build docker again"
      
      This reverts commit 399a5f9d9308da071d79034f238c719de0f3532e.
      
      * debug
      
      * revert
      
      * style
      
      * new notification system
      
      * testing notification
      
      * rebuild docker
      
      * fix_prev_ci_results
      
      * typo
      
      * remove warning
      
      * fix typo
      
      * fix artifact name
      
      * debug
      
      * issue fixed
      
      * debug again
      
      * fix
      
      * fix time
      
      * test notif with failing test
      
      * typo
      
      * issues again
      
      * final fix ?
      
      * run all quantization tests again
      
      * remove name to clear space
      
      * revert modification done on workflow
      
      * fix
      
      * build docker
      
      * build only quant docker
      
      * fix quantization ci
      
      * fix
      
      * fix report
      
      * better quantization_matrix
      
      * add print
      
      * revert to the basic one
  10. 26 Mar, 2024 1 commit
  11. 21 Mar, 2024 1 commit
  12. 15 Mar, 2024 1 commit
  13. 05 Mar, 2024 1 commit
  14. 28 Feb, 2024 1 commit
    • [CI] Quantization workflow (#29046) · f54d82ca
      Marc Sun authored
      
      
      * [CI] Quantization workflow
      
      * build dockerfile
      
      * fix dockerfile
      
      * update self-scheduled.yml
      
      * test build dockerfile on push
      
      * fix torch install
      
      * update to python 3.10
      
      * update aqlm version
      
      * uncomment build dockerfile
      
      * tests if the scheduler works
      
      * fix docker
      
      * do not trigger on push again
      
      * add additional runs
      
      * test again
      
      * all good
      
      * style
      
      * Update .github/workflows/self-scheduled.yml
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      
      * test build dockerfile with torch 2.2.0
      
      * fix extra
      
      * clean
      
      * revert changes
      
      * Revert "revert changes"
      
      This reverts commit 4cb52b8822da9d1786a821a33e867e4fcc00d8fd.
      
      * revert correct change
      
      ---------
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
  15. 27 Feb, 2024 1 commit
  16. 23 Feb, 2024 1 commit
  17. 14 Feb, 2024 1 commit
  18. 11 Jan, 2024 2 commits
  19. 10 Jan, 2024 1 commit
  20. 09 Jan, 2024 1 commit
  21. 25 Dec, 2023 1 commit
  22. 20 Dec, 2023 1 commit
  23. 11 Dec, 2023 1 commit
    • Add deepspeed test to amd scheduled CI (#27633) · 39acfe84
      Ella Charlaix authored
      
      
      * add deepspeed scheduled test for amd
      
      * fix image
      
      * add dockerfile
      
      * add comment
      
      * enable tests
      
      * trigger
      
      * remove trigger for this branch
      
      * trigger
      
      * change runner env to trigger the docker build image test
      
      * use new docker image
      
      * remove test suffix from docker image tag
      
      * replace test docker image with original image
      
      * push new image
      
      * Trigger
      
      * add back amd tests
      
      * fix typo
      
      * add amd tests back
      
      * fix
      
      * comment until docker image build scheduled test fix
      
      * remove deprecated deepspeed build option
      
      * upgrade torch
      
      * update docker & make tests pass
      
      * Update docker/transformers-pytorch-deepspeed-amd-gpu/Dockerfile
      
      * fix
      
      * tmp disable test
      
      * precompile deepspeed to avoid timeout during tests
      
      * fix comment
      
      * trigger deepspeed tests with new image
      
      * comment tests
      
      * trigger
      
      * add sklearn dependency to fix slow tests
      
      * enable back other tests
      
      * final update
      
      ---------
      Co-authored-by: Felix Marty <felix@hf.co>
      Co-authored-by: Félix Marty <9808326+fxmarty@users.noreply.github.com>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
  24. 05 Dec, 2023 2 commits
  25. 21 Nov, 2023 1 commit
  26. 13 Nov, 2023 1 commit
  27. 07 Nov, 2023 1 commit
  28. 06 Nov, 2023 1 commit
  29. 01 Nov, 2023 1 commit
  30. 11 Oct, 2023 1 commit
  31. 05 Oct, 2023 2 commits
  32. 20 Sep, 2023 1 commit
    • Integrate AMD GPU in CI/CD environment (#26007) · 2d71307d
      Funtowicz Morgan authored
      
      
      * Add a Dockerfile for PyTorch + ROCm based on official AMD released artifact
      
      * Add a new artifact single-amdgpu testing on main
      
      * Attempt to test the workflow without merging.
      
      * Changed BERT to check if things are triggered
      
      * Meet the dependencies graph on workflow
      
      * Revert BERT changes
      
      * Add check_runners_amdgpu to correctly mount and check availability
      
      * Rename setup to setup_gpu for CUDA and add setup_amdgpu for AMD
      
      * Fix all the needs.setup -> needs.setup_[gpu|amdgpu] dependencies
      
      * Fix setup dependency graph to use check_runner_amdgpu
      
      * Let's do the runner status check only on AMDGPU target
      
      * Update the Dockerfile.amd to put ourselves in / rather than /var/lib
      
      * Restore the whole setup for CUDA too.
      
      * Let's redisable them
      
      * Change BERT to trigger tests
      
      * Restore BERT
      
      * Add torchaudio with rocm 5.6 to AMD Dockerfile (#26050)
      
      fix dockerfile
      Co-authored-by: Felix Marty <felix@hf.co>
      
      * Place AMD GPU tests in a separate workflow (correct branch) (#26105)
      
      AMDGPU CI lives in an other workflow
      
      * Fix invalid job name is dependencies.
      
      * Remove tests multi-amdgpu for now.
      
      * Use single-amdgpu
      
      * Use --net=host for now.
      
      * Remote host networking.
      
      * Removed duplicated check_runners_amdgpu step
      
      * Let's tag machine-types with mi210 for now.
      
      * Machine type should be only mi210
      
      * Remove unnecessary push.branches item
      
      * Apply review suggestions moving from `x-amdgpu` to `x-gpu` introducing `amd-gpu` and `miXXX` labels.
      
      * Remove amdgpu from step names.
      
      * finalize
      
      * delete
      
      ---------
      Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>
      Co-authored-by: Felix Marty <felix@hf.co>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
  33. 24 Aug, 2023 1 commit
  34. 18 Aug, 2023 1 commit
  35. 17 Aug, 2023 1 commit
  36. 10 Aug, 2023 1 commit
  37. 07 Aug, 2023 1 commit