Prevent users from attempting to pin non-contiguous PyTorch tensors or views that encompass only part of a tensor. (#3992)
* Disable pinning non-contiguous memory
* Prevent views from being converted for write
* Fix linting
* Add unit tests
* Improve error message for users
* Switch to pytest function
* Exclude MXNet and TensorFlow from in-place pinning
* Add skip
* Restrict to pytorch backend
* Use backend to retrieve device
* Fix capitalization in decorator
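The changes above boil down to two checks before pinning memory in place: the tensor must be contiguous, and it must not be a view covering only part of another tensor's buffer. A minimal sketch of that logic, using NumPy purely for illustration (the PR itself targets the PyTorch backend, where the analogous test would use `tensor.is_contiguous()`); the helper name `is_safe_to_pin` is hypothetical and not part of the actual patch:

```python
import numpy as np

def is_safe_to_pin(arr: np.ndarray) -> bool:
    """Hypothetical helper: an array is only safe to pin in place if it
    is contiguous and does not view just a slice of another buffer."""
    # Check 1: memory layout must be contiguous.
    contiguous = arr.flags['C_CONTIGUOUS']
    # Check 2: if the array is a view (arr.base is not None), it must
    # still cover the entire underlying buffer, not just part of it.
    covers_full_buffer = arr.base is None or arr.nbytes == arr.base.nbytes
    return contiguous and covers_full_buffer

a = np.arange(12).reshape(3, 4)
print(is_safe_to_pin(a))          # contiguous, covers its whole buffer -> True
print(is_safe_to_pin(a[:, ::2]))  # strided slice, non-contiguous -> False
print(is_safe_to_pin(a[:2]))      # view over part of the buffer -> False
```

Rejecting the last two cases up front, with a clear error message, is what the "Improve error message for users" commit addresses: pinning a partial view would otherwise pin memory the user did not intend to touch.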
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>