- 17 Jun, 2025 3 commits
  - Thorsten Kurth authored
  - Thorsten Kurth authored
  - Thorsten Kurth authored
- 13 Jun, 2025 7 commits
  - Thorsten Kurth authored: fixing attention perf test attempt 1
  - Thorsten Kurth authored
  - Thorsten Kurth authored: Optimized CUDA kernels for S2 Attention (forward and backward)
  - Thorsten Kurth authored
  - Thorsten Kurth authored: Merge branch 'mr/bwd-channel-permute-experiments' of https://github.com/rietmann-nv/torch-harmonics into mr/bwd-channel-permute-experiments
  - Max Rietmann authored
  - Thorsten Kurth authored
- 11 Jun, 2025 3 commits
  - Max Rietmann authored
  - Max Rietmann authored: Also made the forward kernel use the modified memory layout while keeping the standard shape
  - Max Rietmann authored: Also match the memory layout of the gradient output to that of the input
- 06 Jun, 2025 1 commit
  - Max Rietmann authored: Detect the memory layout of (B, C, H, W) tensors (the stride of C should be 1; if not, fix it). This ensures that the backward kernel is fast.
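A minimal PyTorch sketch of the layout check that commit describes; the helper name and the channels-last fallback are illustrative assumptions, not the repository's actual code:

```python
import torch

def ensure_channel_stride_one(x: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: make sure a (B, C, H, W) tensor has stride 1 along C.

    A channel stride of 1 means the channel dimension is contiguous in memory
    (channels-last layout), which is what the fast backward kernel expects.
    """
    assert x.dim() == 4, "expected a (B, C, H, W) tensor"
    if x.stride(1) != 1:
        # Fall back to PyTorch's channels-last memory format; the logical shape
        # stays (B, C, H, W), only the underlying strides change.
        x = x.contiguous(memory_format=torch.channels_last)
    return x
```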
- 04 Jun, 2025 1 commit
  - Max Rietmann authored: Putting qy in shared memory is a little faster. Changing the internal memory layout means we can leave the code in the standard shape and only change the layout external to the kernel.
- 02 Jun, 2025 1 commit
  - Max Rietmann authored: Introduce new CUDA kernels, `s2_attention_bwd_dkvq_kernel_mbT` and `s2_attention_kernel_mbT`, for more efficient computation of backward gradients and forward attention, respectively. These changes optimize memory access patterns and employ coalesced operations by leveraging tensor transpositions.
    The forward kernel was written by Mauro Bisson; the backward kernel was written by Andrea Paris (aparis@ethz.ch) and Max Rietmann.
    The parallelization strategy computes one output per warp, with the threads of a warp computing the dot product in parallel. Because the inputs are transposed so that the channel dimension is last, the dot product's memory access pattern is perfectly coalesced, which leads to excellent performance in both the forward and backward kernels.
    Co-authored-by: Mauro Bisson <maurob@nvidia.com>
    Co-authored-by: Max Rietmann <mrietmann@nvidia.com>
    Co-authored-by: Andrea Paris <aparis@ethz.ch>
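The kernels themselves are CUDA, but the work decomposition described in that commit message can be sketched in plain PyTorch; the tensor names and shapes below are illustrative assumptions, not the library's internals:

```python
import torch

# Illustrative shapes: an S2 grid flattened to `npoints` points with `channels` channels.
npoints, channels = 1024, 64

# Channel dimension last, as produced by the transposition the commit describes.
q = torch.randn(npoints, channels)
k = torch.randn(npoints, channels)

# Each (query point, key point) pair is one scalar output: a dot product over the
# channel dimension. In the CUDA kernels one warp computes one such output, its
# threads striding over `channels` in parallel; with channels last (stride 1),
# consecutive threads read consecutive addresses, so the loads are coalesced.
scores = torch.einsum("ic,jc->ij", q, k)
```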
- 26 May, 2025 1 commit
  - Thorsten Kurth authored
- 24 May, 2025 7 commits
  - Boris Bonev authored
  - Boris Bonev authored: Fixing a bug in the quadrature weights for full attention. Adding better unit tests for attention. Cleanup in the CUDA code.
  - Boris Bonev authored
  - Boris Bonev authored
  - Boris Bonev authored
  - Boris Bonev authored
  - Boris Bonev authored
- 08 May, 2025 1 commit
  - Thorsten Kurth authored:
    * setting the imaginary parts of the DC (zero-frequency) and Nyquist frequency components to zero in the IRSHT variants
- 29 Apr, 2025 2 commits
  - Boris Bonev authored: This reverts commit 82881276.
  - Thorsten Kurth authored:
    * setting the imaginary parts of the DC (zero-frequency) and Nyquist frequency components to zero in the IRSHT variants
    * small fix
    * making the einsum result contiguous
    * adding the zero frequency to the distributed SHT
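A minimal PyTorch sketch of the kind of fix described in the bullets above; the coefficient layout (modes ordered along the last axis, Nyquist mode stored last) is an assumption for illustration, not the library's actual internal layout:

```python
import torch

# Assumed layout: complex SHT coefficients of shape (..., nlat_modes, nmodes),
# with the longitudinal order m = 0, 1, ..., nmodes - 1 along the last axis.
coeffs = torch.randn(3, 33, 33, dtype=torch.complex64)

# For a real-valued signal the m = 0 (DC / zero-frequency) coefficients must be
# purely real, and if the Nyquist mode m = nlon / 2 is stored it must be purely
# real as well. Zero out any imaginary residue before the inverse transform.
coeffs.imag[..., 0] = 0.0        # DC / zero-frequency column
nyquist_stored = True            # illustrative flag: whether m = nlon/2 is kept
if nyquist_stored:
    coeffs.imag[..., -1] = 0.0   # Nyquist column (assumed to be stored last)
```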
- 26 Feb, 2025 1 commit
  - Thorsten Kurth authored:
    * small hotfix for the Lobatto grid precomputation routine
    * adding the Lobatto grid to the tests
- 21 Feb, 2025 1 commit
  - Thorsten Kurth authored:
    * adding caching
    * replacing many numpy calls with torch calls
    * bumping up version number to 0.7.6
- 21 Jan, 2025 1 commit
  - Boris Bonev authored:
    * Improved the computation of the Morlet filter basis and switched to a Hann window.
    * Addresses #064 and some cleanup.
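As a point of reference for the window change mentioned above, a minimal PyTorch sketch of a Hann window applied to a 1-D filter profile; the profile is a generic stand-in, not the library's Morlet basis:

```python
import math
import torch

n = 65                                    # samples across the filter support
t = torch.linspace(-1.0, 1.0, n)

# Generic oscillatory profile standing in for a Morlet-like filter.
profile = torch.cos(4.0 * math.pi * t)

# Hann window: 0.5 * (1 - cos(2*pi*k/(n-1))), tapering smoothly to zero at both
# ends, which suppresses ringing from the hard filter cutoff.
window = torch.hann_window(n, periodic=False)

filt = profile * window
```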
- 17 Jan, 2025 1 commit
  - Mike McCann authored: Without putting `signal` on `device`, we get `RuntimeError: Expected all tensors to be on the same device` when `sht` is called.
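A minimal sketch of the fix that commit describes, assuming the usual torch-harmonics `RealSHT` usage; the grid choice and sizes are illustrative:

```python
import torch
import torch_harmonics as th

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

nlat, nlon = 90, 180
sht = th.RealSHT(nlat, nlon, grid="equiangular").to(device)

# The transform's precomputed weights live on `device`, so the input must be
# moved there as well; otherwise the call raises
# "RuntimeError: Expected all tensors to be on the same device".
signal = torch.randn(nlat, nlon).to(device)
coeffs = sht(signal)
```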
- 14 Jan, 2025 9 commits
  - Boris Bonev authored
  - Boris Bonev authored
  - Boris Bonev authored
  - Boris Bonev authored: switched psi tensor computation to double precision and implemented a fudge factor for theta_cutoff to avoid aliasing issues with the grid width
  - Thorsten Kurth authored
  - Boris Bonev authored
  - Thorsten Kurth authored
  - Boris Bonev authored
  - Boris Bonev authored