"tests/vscode:/vscode.git/clone" did not exist on "abe2204ddd1d5f60cd3303ead4df22deb5303a5b"
Commit 371e2657 authored by Daniel Povey's avatar Daniel Povey
Browse files

Start changing some names

parent 9c56e510
LICENSE
@@ -4,7 +4,7 @@ excluding any dependencies.
 BSD 3-Clause License
-Copyright (c) 2021, Anton Obukhov
+Copyright (c) 2021, Xiaomi Corporation (written by: Daniel Povey)
 All rights reserved.
 Redistribution and use in source and binary forms, with or without

MANIFEST.in
 include requirements.txt
 include pyproject.toml
 include LICENSE*
-recursive-include torch_discounted_cumsum *
+recursive-include torch_mutual_information *
 recursive-include doc/img *
 recursive-include tests *
 global-exclude *.pyc
\ No newline at end of file

setup.py
@@ -9,48 +9,31 @@ with open('requirements.txt') as f:
 long_description = """
-This package implements an efficient parallel algorithm for the computation of discounted cumulative sums
-with differentiable bindings to PyTorch. The `cumsum` operation is frequently seen in data science
-domains concerned with time series, including Reinforcement Learning (RL).
+This package implements an efficient parallel algorithm for the computation of
+mutual information between sequences with differentiable bindings to PyTorch.
-The traditional sequential algorithm performs the computation of the output elements in a loop. For an input of size
-`N`, it requires `O(N)` operations and takes `O(N)` time steps to complete.
-The proposed parallel algorithm requires a total of `O(N log N)` operations, but takes only `O(log N)` time, which is a
-considerable trade-off in many applications involving large inputs.
-Features of the parallel algorithm:
-- Speed logarithmic in the input size
-- Better numerical precision than sequential algorithms
 Features of the package:
 - CPU: sequential algorithm in C++
 - GPU: parallel algorithm in CUDA
 - Gradients computation wrt input
-- Both left and right directions of summation supported
-- PyTorch bindings
+- PyTorch bindings
-Find more details and the most up-to-date information on the project webpage:
-https://www.github.com/toshas/torch-discounted-cumsum
+[TODO]
 """
 def configure_extensions():
     out = [
         CppExtension(
-            'torch_learned_nonlin_cpu',
+            'torch_mutual_information_cpu',
             [
-                os.path.join('torch_learned_nonlin', 'learned_nonlin_cpu.cpp'),
+                os.path.join('torch_mutual_information', 'mutual_information_cpu.cpp'),
             ],
         )
     ]
     try:
         out.append(
             CUDAExtension(
-                'torch_learned_nonlin_cuda',
+                'torch_mutual_information_cuda',
                 [
-                    os.path.join('torch_learned_nonlin', 'learned_nonlin_cuda.cpp'),
-                    os.path.join('torch_learned_nonlin', 'learned_nonlin_cuda_kernel.cu'),
+                    os.path.join('torch_mutual_information', 'mutual_information_cuda.cpp'),
+                    os.path.join('torch_mutual_information', 'mutual_information_cuda_kernel.cu'),
                 ],
             )
         )
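
The `try:` around `CUDAExtension` follows the common pattern of making the CUDA build optional, so the package still installs on CPU-only machines. The matching `except` clause falls outside this hunk; a plausible completion of the renamed function, where the fallback behavior is an assumption rather than the file's actual code:

```python
import os
from torch.utils.cpp_extension import CppExtension, CUDAExtension

def configure_extensions():
    # C++ extension: always built.
    out = [
        CppExtension(
            'torch_mutual_information_cpu',
            [os.path.join('torch_mutual_information', 'mutual_information_cpu.cpp')],
        )
    ]
    # CUDA extension: attempted, skipped on failure (assumed fallback; the
    # real except clause is not shown in this diff).
    try:
        out.append(
            CUDAExtension(
                'torch_mutual_information_cuda',
                [
                    os.path.join('torch_mutual_information', 'mutual_information_cuda.cpp'),
                    os.path.join('torch_mutual_information', 'mutual_information_cuda_kernel.cu'),
                ],
            )
        )
    except Exception as e:
        print(f'Skipping CUDA extension: {e}')
    return out
```
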
@@ -60,9 +43,9 @@ def configure_extensions():
 setup(
-    name='torch_learned_nonlin',
+    name='torch_mutual_information',
     version='1.0.2',
-    description='Fast discounted cumulative sums in PyTorch',
+    description='Mutual information between sequences of vectors',
     long_description=long_description,
     long_description_content_type='text/markdown',
     install_requires=requirements,
@@ -70,13 +53,11 @@ setup(
     packages=find_packages(),
     author='Dan Povey',
     license='BSD',
-    url='https://www.github.com/toshas/torch-discounted-cumsum',
     ext_modules=configure_extensions(),
     cmdclass={
         'build_ext': BuildExtension
     },
     keywords=[
-        'pytorch', 'discounted', 'cumsum', 'cumulative', 'sum', 'scan', 'differentiable',
-        'reinforcement', 'learning', 'rewards', 'time', 'series'
+        'pytorch', 'sequence', 'mutual', 'information'
     ],
 )
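
With the renamed targets in place, a from-source install (e.g. `pip install -e .`) compiles both extensions through `BuildExtension`. A quick smoke test, assuming nothing about the extensions' exported functions since they are not part of this diff:

```python
import torch  # import torch first so the extension can link against its libraries
import torch_mutual_information_cpu

# The module's public functions are defined in mutual_information_cpu.cpp, which
# this diff does not show; dir() only confirms the build produced a loadable module.
print(dir(torch_mutual_information_cpu))
```
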