"docs/source/ko/tasks/monocular_depth_estimation.md" did not exist on "1fe1e3caa44617047f149bcc0c0b566343b714a7"
  1. 01 May, 2023 1 commit
  2. 28 Apr, 2023 1 commit
  3. 26 Apr, 2023 1 commit
  4. 20 Apr, 2023 1 commit
  5. 19 Apr, 2023 1 commit
  6. 18 Apr, 2023 2 commits
  7. 17 Apr, 2023 1 commit
  8. 13 Apr, 2023 1 commit
  9. 12 Apr, 2023 1 commit
  10. 04 Apr, 2023 1 commit
  11. 24 Mar, 2023 1 commit
  12. 20 Mar, 2023 2 commits
  13. 14 Mar, 2023 1 commit
  14. 13 Mar, 2023 1 commit
  15. 09 Mar, 2023 2 commits
  16. 22 Feb, 2023 2 commits
  17. 20 Feb, 2023 1 commit
    • Enable PyTorch/XLA Fully Sharded Data Parallel (FSDP) (#21406) · 7735e040
      AlexWertheim authored
      
      
      * Reinserted import statement accidentally removed during rebasing.
      
      * Added auto_wrap functionality, restructured XLA FSDP logic to more closely match PyTorch FSDP logic.
      
      * Fixed flag descriptions; changed several instances of fsdp_ to xla_fsdp_; pass in auto_wrap_policy and auto_wrapper_callable directly to avoid lambda saving.
      
      * Moved XLA FSDP logic to be adjacent to Fairscale FSDP logic in trainer.
      
      * Formatted changes in accordance with HF style requirements.
      
      * Added back in warning which was accidentally removed.
      
      * Merged XLA FSDP training arguments into `fsdp_config`
      - Added `xla` boolean flag to `fsdp_config` to specify XLA FSDP wrapping
      - Merged XLA FSDP wrapping logic into FSDP wrapping logic within trainer
        class
      
      * Cleaned up errors, moved argument to fsdp_config
      
      - Set `xla` and `xla_fsdp_grad_ckpt` flags by default in fsdp_config
      - Added missing colons following conditionals
      - Moved `fsdp_transformer_layer_cls_to_wrap` to `fsdp_config`
      - Modified `fsdp_transformer_layer_cls_to_wrap` to be list of strings,
        not just one string
      - Changed Fairscale FSDP logic to allow for set of layer classes to wrap
      - Removed unnecessary checks for `xla_fsdp`
      
      * Corrected small errors, improved layer class flag
      
      - Correctly set default values for `xla` and `xla_fsdp_grad_ckpt`
        arguments
      - Made `fsdp_transformer_layer_cls_to_wrap` a list of strings instead of
        a single string
      - Added processing to ensure that `fsdp_transformer_layer_cls_to_wrap`
        works as expected if passed as a single string
      - Updated PyTorch FSDP logic to accept a list of layers to wrap, as done
        with XLA FSDP
      - Replaced instances of `getattr()` with `.get()` for dictionary
        retrievals with default values, including when setting
        `fsdp_min_num_params`
      - Corrected `self.fsdp is not None` to `len(self.fsdp) > 0`
      - Removed extraneous `xla_fsdp` argument descriptions from outside
        `fsdp_config`
      
      * Changed xla-fsdp-settings to be a dictionary
      
      - Modified xla-fsdp-settings to be entered directly as a dictionary
        instead of being loaded from a JSON file
      - Made small style corrections
      
      * Reverted unintentional local_rank TPU check
      
      * Do not block XLA FSDP if local rank is -1
      
      * Rebased and applied automatic formatting
      
      - Rebased
      - Applied automatic formatting changes via `make style`
      
      * Applied automatic formatting with latest version of black
      
      * Replaced  expression with
      
      * Reran black examples tests src utils
        ruff examples tests src utils --fix
        make autogenerate_code
        make[1]: Entering directory '/usr/local/google/home/awertheim/HF-FSDP-PR/transformers'
        make[1]: Leaving directory '/usr/local/google/home/awertheim/HF-FSDP-PR/transformers'
        after additional formatting changes
      
      * Additional automatic formatting changes
      
      * Remove unnecessary whitespace characters from src/transformers/training_args.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
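      
      Note: the sketch below is only an illustration of how the `fsdp_config` flags
      named in this commit message might be combined when building `TrainingArguments`.
      The key names (`xla`, `xla_fsdp_grad_ckpt`, `xla_fsdp_settings`,
      `fsdp_transformer_layer_cls_to_wrap`) come from the message above; the layer
      class name, the `fsdp` strategy string, and the output directory are placeholder
      choices, and the exact accepted keys and defaults should be checked against the
      installed version of transformers.
      
          # Illustrative sketch only; key names are taken from the commit message
          # above, concrete values are placeholders.
          from transformers import TrainingArguments
          
          fsdp_config = {
              "xla": True,                  # enable XLA FSDP wrapping
              "xla_fsdp_grad_ckpt": True,   # gradient checkpointing through XLA FSDP
              "xla_fsdp_settings": {},      # passed directly as a dict, not a JSON file
              # list of strings, as described above; "GPT2Block" is just an example
              "fsdp_transformer_layer_cls_to_wrap": ["GPT2Block"],
          }
          
          training_args = TrainingArguments(
              output_dir="xla_fsdp_out",    # placeholder path
              fsdp="full_shard",
              fsdp_config=fsdp_config,
          )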
  18. 07 Feb, 2023 1 commit
  19. 06 Feb, 2023 1 commit
    • Update quality tooling for formatting (#21480) · 6f79d264
      Sylvain Gugger authored
      * Result of black 23.1
      
      * Update target to Python 3.7
      
      * Switch flake8 to ruff
      
      * Configure isort
      
      * Configure isort
      
      * Apply isort with line limit
      
      * Put the right black version
      
      * adapt black in check copies
      
      * Fix copies
  20. 31 Jan, 2023 1 commit
  21. 24 Jan, 2023 1 commit
  22. 18 Jan, 2023 1 commit
    • Add AWS Neuron torchrun support (#20806) · c59d71b2
      jeffhataws authored
      * Add XLA torchrun support
      
      * Clarify that DDP doesn't yet work with the torch.distributed XLA backend
      
      * Enable DDP with torchrun and XLA (now available in PT-XLA 1.13)
      
      * Add check for AWS Neuron availability and AWS Neuron specific compiler flag
      
      * Change the new test's name to TestTrainerDistributedNeuronCore
      
      * Remove "assert" and replace raised exception
      
      * Remove compiler flag as it is optional. If needed, will be another PR.
      
      * Use TORCHELASTIC_RUN_ID to determine whether torchrun is used
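      
      Note: a minimal sketch of the detection idea in the last bullet. torchrun
      (torch.distributed.elastic) exports TORCHELASTIC_RUN_ID to every worker it
      launches, so checking for that variable is a simple way to tell whether the
      process was started via torchrun; the helper name below is made up for
      illustration and is not the exact code added by this PR.
      
          import os
          
          def launched_with_torchrun() -> bool:
              # torchrun sets TORCHELASTIC_RUN_ID for each spawned worker,
              # so its presence signals a torchrun/torchelastic launch.
              return "TORCHELASTIC_RUN_ID" in os.environ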
  23. 29 Dec, 2022 1 commit
  24. 14 Dec, 2022 1 commit
  25. 08 Dec, 2022 2 commits
  26. 30 Nov, 2022 2 commits
  27. 28 Nov, 2022 2 commits
  28. 18 Nov, 2022 1 commit
    • Add AnyPrecisionAdamW optimizer (#18961) · 84c9cc6d
      atturaioe authored
      * Add AnyPrecisionAdamW optimizer
      
      * Add optim_args argument to TrainingArgs
      
      * Add tests for AnyPrecisionOptimizer
      
      * Change AnyPrecisionAdam default params to float32
      
      * Move default_anyprecision_kwargs in trainer test
      
      * Rename AnyPrecisionAdamW
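      
      Note: a hypothetical usage sketch based on this commit message, in which the
      optimizer is selected through `TrainingArguments` and extra optimizer settings
      are passed as a comma-separated string via `optim_args`. The option names inside
      `optim_args` follow torchdistx's AnyPrecisionAdamW and are illustrative only;
      the exact accepted keys, and the torchdistx dependency, should be verified
      against the installed transformers version.
      
          from transformers import TrainingArguments
          
          args = TrainingArguments(
              output_dir="anyprecision_out",   # placeholder path
              optim="adamw_anyprecision",      # selects the AnyPrecisionAdamW optimizer
              optim_args="use_kahan_summation=True,momentum_dtype=bfloat16",
          )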
  29. 15 Nov, 2022 1 commit
  30. 14 Oct, 2022 1 commit
  31. 29 Sep, 2022 1 commit
  32. 22 Sep, 2022 1 commit
  33. 21 Sep, 2022 1 commit