    backend: Consistently use int (vs. int64) for tensor shapes · 0e38297f
    Jesse Gross authored
    Currently there is a mixture of int and int64 used when dealing with
    tensor dimensions and shapes, which causes unnecessary conversions;
    they should all be the same type.
    
    In general, most interfaces (such as PyTorch) use int64 for
    generality, but most implementations (such as CUDA) use int32 for
    performance. There isn't much benefit to us in being more flexible
    than the implementations we are likely to run on.
    
    In addition, as a practical matter, a model containing a tensor with
    a single dimension that exceeds the 32-bit range is unlikely to run
    on a 32-bit machine anyway.
cache.go 1.43 KB
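
As a rough illustration of the convention (not the actual Ollama backend interface), here is a minimal Go sketch in which every shape-related method takes and returns plain int, so callers never cast between int and int64; the Tensor interface and numElements helper are hypothetical names used only for this example.

    package backend

    // Tensor is a hypothetical interface sketch: every method that reports
    // or accepts a dimension uses int, so callers never convert between
    // int and int64 when computing sizes, strides, or loop bounds.
    type Tensor interface {
        // Dim returns the size of dimension i.
        Dim(i int) int
        // Shape returns all dimension sizes.
        Shape() []int
        // Stride returns the element stride of dimension i.
        Stride(i int) int
    }

    // numElements shows why a single type helps: the product over Shape()
    // needs no casts between integer widths.
    func numElements(t Tensor) int {
        n := 1
        for _, d := range t.Shape() {
            n *= d
        }
        return n
    }

On 64-bit platforms Go's int is already 64 bits wide, so no range is lost there; on a 32-bit platform a dimension beyond the int range could not be addressed in memory anyway, which is the practical point the message makes.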