1. 28 Oct, 2021 1 commit
  2. 04 Oct, 2021 1 commit
    • Add ufmt (usort + black) as code formatter (#4384) · 5f0edb97
      Philip Meier authored
      
      
      * add ufmt as code formatter
      
      * cleanup
      
      * quote ufmt requirement
      
      * split imports into more groups
      
      * regenerate circleci config
      
      * fix CI
      
      * clarify local testing utils section
      
      * use ufmt pre-commit hook
      
      * split relative imports into local category
      
      * Revert "split relative imports into local category"
      
      This reverts commit f2e224cde2008c56c9347c1f69746d39065cdd51.
      
      * pin black and usort dependencies
      
      * fix local test utils detection
      
      * fix ufmt rev
      
      * add reference utils to local category
      
      * fix usort config
      
      * remove custom categories sorting
      
      * Run pre-commit without fixing flake8
      
      * fix a double import introduced by the merge
      Co-authored-by: Nicolas Hug <nicolashug@fb.com>
      5f0edb97
  3. 27 Aug, 2021 1 commit
  4. 20 Jul, 2021 1 commit
  5. 27 Apr, 2021 1 commit
  6. 24 Mar, 2021 1 commit
  7. 26 Jan, 2021 1 commit
  8. 07 Jan, 2021 1 commit
  9. 01 Dec, 2020 1 commit
    • concatenate small tensors into big ones to reduce the use of shared f… (#1795) · 9fc6522d
      Francisco Massa authored
      * concatenate small tensors into big ones to reduce the use of shared file descriptors (#1694)
      
      Summary:
      Pull Request resolved: https://github.com/pytorch/vision/pull/1694
      
      
      
      - The PT dataloader forks worker processes to speed up fetching dataset examples. The recommended multiprocessing context is `forkserver` rather than `fork`.
      
      - The main process and the worker processes share the dataset instance, which avoids duplicating the dataset and saves memory. During this step, `ForkPickler(..).dumps(...)` is called to serialize the objects, recursively including the objects inside the dataset instance. A `VideoClips` instance internally uses O(N) `torch.Tensor`s to store per-video information, such as pts and possible clips, where N is the number of videos.
      
      - During dumping, each `torch.Tensor` uses one file descriptor (FD). The OS default FD limit, which can be queried with `ulimit -n`, is 65K, and the number of tensors in `VideoClips` often exceeds that limit.
      
      - To resolve this issue, we concatenate the small tensors into a few big ones in the `__getstate__()` method, which is called during pickling, so only O(1) tensors are required (a minimal sketch follows below).
      
      - Once this diff lands, we can abandon D19173248.
      
      In D19173397, in ClassyVision, we change the multiprocessing context from `fork` to `forkserver` and can finally run the PT dataloader without hanging issues.
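      
      A minimal sketch of the concatenation idea described above, assuming a simplified class that holds a single list of per-video pts tensors (the real `VideoClips` keeps several such lists); `ClipIndex`, `per_video_pts`, and `_pts_lengths` are hypothetical names used only for illustration:
      
      ```
      import torch
      
      class ClipIndex:
          def __init__(self, per_video_pts):
              # per_video_pts: list of N 1-D tensors, one per video
              self.per_video_pts = per_video_pts
      
          def __getstate__(self):
              # Pack the O(N) small tensors into one flat tensor plus a list of
              # lengths, so pickling only shares O(1) tensors (and hence FDs).
              state = self.__dict__.copy()
              lengths = [t.numel() for t in self.per_video_pts]
              state["per_video_pts"] = torch.cat(self.per_video_pts) if lengths else torch.empty(0)
              state["_pts_lengths"] = lengths
              return state
      
          def __setstate__(self, state):
              # Split the flat tensor back into the original per-video tensors.
              lengths = state.pop("_pts_lengths")
              state["per_video_pts"] = list(torch.split(state["per_video_pts"], lengths))
              self.__dict__.update(state)
      ```
      
      Pickling such an instance then transfers a fixed number of tensors regardless of N, which is what keeps the dataloader workers under the FD limit.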
      
      Reviewed By: fmassa
      
      Differential Revision: D19179991
      
      fbshipit-source-id: c8716775c7c154aa33d93b25d112d2a59ea688a9
      
      * Try to fix Windows
      
      * Try fix Windows v2
      
      * Disable tests on Windows
      
      * Add back necessary part
      
      * Try fix OSX (and maybe Windows)
      
      * Fix
      
      * Try enabling Windows
      Co-authored-by: Zhicheng Yan <zyan3@fb.com>
      9fc6522d
  10. 07 May, 2020 1 commit
    • Update ucf101.py (#2186) · 14af9de6
      Guillem Orellana Trullols authored
      Currently the dataset does not work properly because of this line of code: `indices = [i for i in range(len(video_list)) if video_list[i][len(self.root) + 1:] in selected_files]`.
      Slicing with `len(self.root) + 1` only makes sense if there is no trailing `/` on root:
      
      ```
      >>> root = 'data/ucf-101/videos'
      >>> video_path = 'data/ucf-101/videos/activity/video.avi'
      >>> video_path[len(root):]
      '/activity/video.avi'
      >>> video_path[len(root) + 1:]
      'activity/video.avi'
      ```
      
      Appending the root path to the selected files as well is a simple solution and makes the dataset work both with and without a trailing slash, as sketched below.
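      
      A hedged sketch of that fix (the exact patch may differ), reusing the `root`, `video_list`, and `selected_files` names from the snippet above:
      
      ```
      import os
      
      root = 'data/ucf-101/videos/'   # works with or without the trailing slash
      video_list = ['data/ucf-101/videos/activity/video.avi']
      selected_files = {'activity/video.avi'}
      
      # Prepend root to the annotation-relative paths instead of slicing the
      # root prefix off the absolute paths; os.path.join normalizes the slash.
      selected_files = {os.path.join(root, f) for f in selected_files}
      indices = [i for i in range(len(video_list)) if video_list[i] in selected_files]
      print(indices)  # [0]
      ```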
      14af9de6
  11. 07 Oct, 2019 1 commit
  12. 03 Oct, 2019 1 commit
  13. 20 Sep, 2019 1 commit
  14. 28 Aug, 2019 1 commit
  15. 01 Aug, 2019 1 commit
  16. 24 Jul, 2019 1 commit