Commit 8f8fbb9f authored by Hang Zhang's avatar Hang Zhang

v1.0.1

parent aa9af7fd
......@@ -3,3 +3,4 @@
*.pyc
build/
data/
docs/html/
......@@ -13,6 +13,7 @@ year = {2017}
```
## [Documentation](http://hangzh.com/PyTorch-Encoding/)
Please visit the [**Docs**](http://hangzh.com/PyTorch-Encoding/) for detailed instructions on installation and usage.
(If you would like to reproduce the texture recognition benchmark in the paper, please visit our original [Torch implementation](https://github.com/zhanghang1989/Deep-Encoding).)
- Please visit the [**Docs**](http://hangzh.com/PyTorch-Encoding/) for detailed instructions on installation and usage.
- [**Link**](http://hangzh.com/PyTorch-Encoding/experiments/texture.html) to the experiments and pre-trained models.
body {
font-family: "Lato","proxima-nova","Helvetica Neue",Arial,sans-serif;
}
/* Default header fonts are ugly */
h1, h2, .rst-content .toctree-wrapper p.caption, h3, h4, h5, h6, legend, p.caption {
font-family: "Lato","proxima-nova","Helvetica Neue",Arial,sans-serif;
}
/* Use white for docs background */
.wy-side-nav-search {
background-color: #a0e2ff;
}
.wy-nav-content-wrap, .wy-menu li.current > a {
background-color: #fff;
}
@media screen and (min-width: 1400px) {
.wy-nav-content-wrap {
background-color: rgba(0, 0, 0, 0.0470588);
}
.wy-nav-content {
background-color: #fff;
}
}
/* Fixes for mobile */
.wy-nav-top {
background-color: #fff;
background-repeat: no-repeat;
background-position: center;
padding: 0;
margin: 0.4045em 0.809em;
color: #333;
}
.wy-nav-top > a {
display: none;
}
@media screen and (max-width: 768px) {
.wy-side-nav-search>a img.logo {
height: 60px;
}
}
/* This is needed to ensure that logo above search scales properly */
.wy-side-nav-search a {
display: block;
}
/* This ensures that multiple constructors will remain in separate lines. */
.rst-content dl:not(.docutils) dt {
display: table;
}
/* Use our blue for literals */
.rst-content tt.literal, .rst-content tt.literal, .rst-content code.literal {
color: #4080bf;
}
.rst-content tt.xref, a .rst-content tt, .rst-content tt.xref,
.rst-content code.xref, a .rst-content tt, a .rst-content code {
color: #404040;
}
/* Change link colors (except for the menu) */
a {
color: #4080bf;
}
a:hover {
color: #4080bf;
}
a:visited {
color: #306293;
}
.wy-menu a {
color: #b3b3b3;
}
.wy-menu a:hover {
color: #b3b3b3;
}
/* Default footer text is quite big */
footer {
font-size: 80%;
}
footer .rst-footer-buttons {
font-size: 125%; /* revert footer settings - 1/80% = 125% */
}
footer p {
font-size: 100%;
}
/* For hidden headers that appear in TOC tree */
/* see http://stackoverflow.com/a/32363545/3343043
*/
.rst-content .hidden-section {
display: none;
}
nav .hidden-section {
display: inherit;
}
.wy-side-nav-search>div.version {
color: #000;
}
......@@ -20,6 +20,7 @@
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import encoding
import sphinx_rtd_theme
......@@ -47,7 +48,7 @@ extensions = [
napoleon_use_ivar = True
googleanalytics_id = 'UA-90545585-1'
googleanalytics_id = 'UA-54746507-1'
googleanalytics_enabled = True
# Add any paths that contain templates here, relative to this directory.
......@@ -56,8 +57,8 @@ templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
source_suffix = ['.rst', '.md']
#source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
......@@ -72,7 +73,7 @@ author = 'Hang Zhang'
# built documents.
#
# The short X.Y version.
version = 'master (0.0.1)'
version = 'master (' + encoding.__version__ + ')'
# The full version, including alpha/beta/rc tags.
# TODO: verify this works as expected
release = 'master'
......@@ -124,6 +125,7 @@ html_static_path = ['_static']
html_context = {
'css_files': [
'https://fonts.googleapis.com/css?family=Lato',
'_static/css/encoding.css'
],
}
#'_static/css/hangzh.css'
......
.. role:: hidden
:class: hidden-section
Dilated Networks
================
We provide correctly dilated pre-trained ResNet and DenseNet models for semantic segmentation.
For ResNet, we replace the stride-2 3x3 convolution at the beginning of certain stages with a dilated convolution and update the dilation of the subsequent conv layers accordingly.
For DenseNet, we provide DilatedAvgPool2d, which handles the dilation of the transition layers, and then update the dilation of the subsequent conv layers accordingly.
All provided models have been verified.
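As a side note on why the stride-to-dilation replacement preserves resolution, the spatial output size of a convolution follows the standard formula. A pure-Python sketch (independent of the actual ``encoding.dilated`` code; the function name is illustrative):

```python
def conv_out_size(n, kernel=3, stride=1, padding=1, dilation=1):
    """Spatial output size of a 2D convolution along one dimension."""
    effective_kernel = dilation * (kernel - 1) + 1  # span of the dilated kernel
    return (n + 2 * padding - effective_kernel) // stride + 1

# A stride-2 conv halves a 64-pixel feature map:
print(conv_out_size(64, kernel=3, stride=2, padding=1))  # 32
# Stride 1 with dilation 2 (and padding 2) keeps the resolution:
print(conv_out_size(64, kernel=3, stride=1, padding=2, dilation=2))  # 64
```

This is why the dilated variants keep a larger feature map for dense prediction while reusing ImageNet-pretrained weights.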
.. automodule:: encoding.dilated
.. currentmodule:: encoding.dilated
ResNet
------
:hidden:`ResNet`
~~~~~~~~~~~~~~~~
.. autoclass:: ResNet
:members:
:hidden:`resnet18`
~~~~~~~~~~~~~~~~~~
.. autofunction:: resnet18
:hidden:`resnet34`
~~~~~~~~~~~~~~~~~~
.. autofunction:: resnet34
:hidden:`resnet50`
~~~~~~~~~~~~~~~~~~
.. autofunction:: resnet50
:hidden:`resnet101`
~~~~~~~~~~~~~~~~~~~
.. autofunction:: resnet101
:hidden:`resnet152`
~~~~~~~~~~~~~~~~~~~
.. autofunction:: resnet152
DenseNet
--------
:hidden:`DenseNet`
~~~~~~~~~~~~~~~~~~
.. autoclass:: DenseNet
:members:
:hidden:`densenet161`
~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: densenet161
:hidden:`densenet121`
~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: densenet121
:hidden:`densenet169`
~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: densenet169
:hidden:`densenet201`
~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: densenet201
.. role:: hidden
:class: hidden-section
Encoding Layer
==============
.. automodule:: encoding
My NN Layers
============
Modules
-------
.. currentmodule:: encoding
.. currentmodule:: encoding.nn
:hidden:`Encoding`
~~~~~~~~~~~~~~~~~~
......@@ -18,6 +15,24 @@ Modules
.. autoclass:: Encoding
:members:
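The Encoding Layer aggregates residuals between input descriptors and a learned codebook with soft-assignment weights, :math:`e_k = \sum_i a_{ik}(x_i - c_k)`. A minimal pure-Python sketch of that forward computation (illustrative only; the actual layer is a learnable CUDA module):

```python
import math

def encode(X, C, s):
    """Soft-aggregate residuals of descriptors X onto codewords C.

    X: list of D-dim descriptors, C: list of K codewords,
    s: list of K smoothing factors. Returns the K aggregated residuals e_k.
    """
    K, D = len(C), len(C[0])
    E = [[0.0] * D for _ in range(K)]
    for x in X:
        # squared distance from x to each codeword
        d2 = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in C]
        # soft-assignment weights a_ik = softmax_k(-s_k * ||x - c_k||^2)
        logits = [-s[k] * d2[k] for k in range(K)]
        m = max(logits)
        w = [math.exp(l - m) for l in logits]
        z = sum(w)
        a = [wi / z for wi in w]
        for k in range(K):
            for j in range(D):
                E[k][j] += a[k] * (x[j] - C[k][j])
    return E
```

When every descriptor sits exactly on a codeword, all aggregated residuals are (numerically) zero, which is a quick sanity check on the formula.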
:hidden:`Inspiration`
~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: Inspiration
:members:
:hidden:`DilatedAvgPool2d`
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: DilatedAvgPool2d
:members:
:hidden:`GramMatrix`
~~~~~~~~~~~~~~~~~~~~
.. autoclass:: GramMatrix
:members:
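GramMatrix computes the channel co-activation statistics used as the style representation in style transfer. A pure-Python sketch of the computation (the ``1/(C*N)`` normalization here is a common choice and an assumption, not necessarily what the class uses):

```python
def gram_matrix(F):
    """Gram matrix of flattened features F: C rows, each of length N = H*W.

    G[i][j] = <F_i, F_j> / (C * N) captures which channels co-activate,
    discarding spatial layout.
    """
    C, N = len(F), len(F[0])
    return [[sum(a * b for a, b in zip(F[i], F[j])) / (C * N)
             for j in range(C)] for i in range(C)]

G = gram_matrix([[1.0, 2.0], [3.0, 4.0]])  # symmetric 2x2 matrix
```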
:hidden:`Aggregate`
~~~~~~~~~~~~~~~~~~~
......@@ -27,23 +42,22 @@ Modules
Functions
---------
.. currentmodule:: encoding.functions
:hidden:`aggregate`
~~~~~~~~~~~~~~~~~~~
.. autoclass:: aggregate
:members:
.. autofunction:: aggregate
:hidden:`scaledL2`
~~~~~~~~~~~~~~~~~~~
.. autoclass:: scaledL2
:members:
.. autofunction:: scaledL2
:hidden:`residual`
~~~~~~~~~~~~~~~~~~~
.. autoclass:: residual
:members:
.. autofunction:: residual
:hidden:`assign`
......
MSG-Net Style Transfer Example
==============================
.. image:: https://raw.githubusercontent.com/zhanghang1989/MSG-Net/master/images/figure1.jpg
:width: 55%
:align: left
We provide a PyTorch implementation of `MSG-Net`_ and `Neural Style`_ in the `GitHub repo <https://github.com/zhanghang1989/PyTorch-Style-Transfer>`_.
We also provide `Torch <https://github.com/zhanghang1989/MSG-Net/>`_ and
`MXNet <https://github.com/zhanghang1989/MXNet-Gluon-Style-Transfer>`_ implementations.
Table of Contents
-----------------
- Real-time Style Transfer using `MSG-Net`_
* `Stylize Images using Pre-trained Model`_
* `Train Your Own MSG-Net Model`_
- `Neural Style`_
MSG-Net
-------
.. note::
Hang Zhang and Kristin Dana. "Multi-style Generative Network for Real-time Transfer."::
@article{zhang2017multistyle,
title={Multi-style Generative Network for Real-time Transfer},
author={Zhang, Hang and Dana, Kristin},
journal={arXiv preprint arXiv:1703.06953},
year={2017}
}
Stylize Images Using Pre-trained Model
--------------------------------------
- Clone the repo and download the pre-trained model::
git clone git@github.com:zhanghang1989/PyTorch-Style-Transfer.git
cd PyTorch-Style-Transfer/experiments
bash models/download_model.sh
- Camera Demo::
python camera_demo.py demo --model models/9styles.model
.. image:: https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/myimage.gif
- Test the model::
python main.py eval --content-image images/content/venice-boat.jpg --style-image images/9styles/candy.jpg --model models/9styles.model --content-size 1024
If you don't have a GPU, simply set ``--cuda=0``. For a different style, set ``--style-image path/to/style``.
If you would like to stylize your own photo, set ``--content-image path/to/your/photo``. More options:
* ``--content-image``: path to the content image you want to stylize.
* ``--style-image``: path to the style image (typically one seen during training).
* ``--model``: path to the pre-trained model to be used for stylizing the image.
* ``--output-image``: path for saving the output image.
* ``--content-size``: the content image size to test on.
* ``--cuda``: set it to 1 for running on GPU, 0 for CPU.
.. raw:: html
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/1.jpg" width="260px" /> <img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/2.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/3.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/4.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/5.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/6.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/7.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/8.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/9.jpg" width="260px" />
Train Your Own MSG-Net Model
----------------------------
- Download the dataset::
bash dataset/download_dataset.sh
- Train the model::
python main.py train --epochs 4
If you would like to customize styles, set ``--style-folder path/to/your/styles``. More options:
* ``--style-folder``: path to the folder of style images.
* ``--vgg-model-dir``: path to folder where the vgg model will be downloaded.
* ``--save-model-dir``: path to folder where trained model will be saved.
* ``--cuda``: set it to 1 for running on GPU, 0 for CPU.
Neural Style
-------------
`Image Style Transfer Using Convolutional Neural Networks <http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf>`_ by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge::
python main.py optim --content-image images/content/venice-boat.jpg --style-image images/9styles/candy.jpg
* ``--content-image``: path to content image.
* ``--style-image``: path to style image.
* ``--output-image``: path for saving the output image.
* ``--content-size``: the content image size to test on.
* ``--style-size``: the style image size to test on.
* ``--cuda``: set it to 1 for running on GPU, 0 for CPU.
.. raw:: html
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/g1.jpg" width="260px" /> <img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/g2.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/g3.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/g4.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/g5.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/g6.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/g7.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/g8.jpg" width="260px" />
<img src ="https://raw.githubusercontent.com/zhanghang1989/PyTorch-Style-Transfer/master/images/g9.jpg" width="260px" />
Deep TEN: Deep Texture Encoding Network Example
===============================================
.. image:: http://hangzh.com/figure/cvpr17.svg
:width: 100%
:align: left
In this section, we show an example of training/testing Encoding-Net for texture recognition on the MINC-2500 dataset. Compared to the original Torch implementation, we use a *different learning rate* for the pre-trained base network and the encoding layer (10x), disable color jittering after reducing the learning rate, and adopt a much *smaller training image size* (224 instead of 352).
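The 10x learning-rate split above can be sketched with plain parameter groups (names here are hypothetical; a real setup would pass such groups to the optimizer):

```python
# Hypothetical sketch: separate parameter groups so the encoding head
# trains with a 10x larger learning rate than the pre-trained backbone.
base_lr = 0.01
param_groups = [
    {"name": "pretrained_backbone", "lr": base_lr},
    {"name": "encoding_head", "lr": base_lr * 10},
]

def decay_lr(groups, factor=0.1):
    """Decay every group's lr by the same factor, preserving the 10x ratio."""
    for g in groups:
        g["lr"] *= factor
    return groups

decay_lr(param_groups)  # backbone: 0.001, head: 0.01
```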
.. note::
**Make Sure** to `Install PyTorch Encoding <../notes/compile.html>`_ First.
Test Pre-trained Model
----------------------
- Clone the GitHub repo (you likely already did this during installation)::
git clone git@github.com:zhanghang1989/PyTorch-Encoding.git
- Download the `MINC-2500 <http://opensurfaces.cs.cornell.edu/publications/minc/>`_ dataset to the ``$HOME/data/minc`` folder. Download the pre-trained model (training `curve`_ as below, pre-trained on the train-1 split using a single training size of 224, with an error rate of :math:`19.98\%` using a single crop on the test-1 set)::
cd PyTorch-Encoding/experiments
bash model/download_models.sh
.. _curve:
.. image:: ../_static/img/deep_ten_curve.svg
:width: 70%
- Test pre-trained model on MINC-2500::
python main.py --dataset minc --model encodingnet --resume model/minc.pth.tar --eval
# Terminal output:
# [======================================== 23/23 ===================================>...] Step: 104ms | Tot: 3s256ms | Loss: 0.719 | Err: 19.983% (1149/5750)
Train Your Own Model
--------------------
- Example training command::
python main.py --dataset minc --model encodingnet --batch-size 64 --lr 0.01 --epochs 60
- Training options::
-h, --help show this help message and exit
--dataset DATASET training dataset (default: cifar10)
--model MODEL network model type (default: densenet)
--widen N widen factor of the network (default: 4)
--batch-size N batch size for training (default: 128)
--test-batch-size N batch size for testing (default: 1000)
--epochs N number of epochs to train (default: 300)
--start_epoch N the epoch number to start (default: 0)
--lr LR learning rate (default: 0.1)
--momentum M SGD momentum (default: 0.9)
--weight-decay M SGD weight decay (default: 1e-4)
--no-cuda disables CUDA training
--plot matplotlib
--seed S random seed (default: 1)
--resume RESUME put the path to resuming file if needed
--checkname set the checkpoint name
--eval evaluating
.. todo::
Provide example code for extracting features.
Extending the Software
----------------------
This code includes an integrated pipeline and some visualization tools (progress bar, real-time training curve plots). It is easy to use and extend for your own model or dataset:
- Add your own dataloader ``mydataset.py`` to the ``dataset/`` folder
- Add your own model ``mymodel.py`` to the ``model/`` folder
- Run the program::
python main.py --dataset mydataset --model mymodel
Citation
--------
.. note::
* Hang Zhang, Jia Xue, and Kristin Dana. "Deep TEN: Texture Encoding Network." *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017*::
@InProceedings{Zhang_2017_CVPR,
author = {Zhang, Hang and Xue, Jia and Dana, Kristin},
title = {Deep TEN: Texture Encoding Network},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}
.. role:: hidden
:class: hidden-section
Other Functions
===============
.. automodule:: encoding.functions
.. currentmodule:: encoding.functions
:hidden:`dilatedavgpool2d`
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: dilatedavgpool2d
.. Encoding documentation master file, created by
sphinx-quickstart on Fri Dec 23 13:31:47 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
.. Encoding documentation master file
:github_url: https://github.com/zhanghang1989/PyTorch-Encoding
Encoding documentation
Encoding Documentation
======================
PyTorch-Encoding is an optimized PyTorch package using GPU, including Encoding Layer, Multi-GPU Synchronized Batch Normalization.
Created by `Hang Zhang <http://hangzh.com/>`_
PyTorch-Encoding is an optimized PyTorch package with a CUDA backend, including the Encoding Layer, Multi-GPU Synchronized Batch Normalization, and useful utility functions. Example systems are also provided in the `experiments section <experiments/texture.html>`_. We hope this software will accelerate your research; please cite our `papers <notes/compile.html>`_.
.. toctree::
:glob:
......@@ -18,13 +17,23 @@ PyTorch-Encoding is an optimized PyTorch package using GPU, including Encoding L
notes/*
.. toctree::
:maxdepth: 3
:maxdepth: 1
:caption: Package Reference
encoding
syncbn
parallel
dilated
nn
functions
utils
.. toctree::
:glob:
:maxdepth: 1
:caption: Experiment Systems
experiments/*
Indices and tables
==================
......
.. role:: hidden
:class: hidden-section
Other NN Layers
===============
.. automodule:: encoding.nn
Customized Layers
-----------------
:hidden:`Normalize`
~~~~~~~~~~~~~~~~~~~
.. autoclass:: Normalize
:members:
:hidden:`View`
~~~~~~~~~~~~~~
.. autoclass:: View
:members:
Standard Layers
---------------
Standard layers as in PyTorch, but running in :class:`encoding.parallel.SelfDataParallel` mode. Use them together with SyncBN.
:hidden:`Conv1d`
~~~~~~~~~~~~~~~~
.. autoclass:: Conv1d
:members:
:hidden:`Conv2d`
~~~~~~~~~~~~~~~~
.. autoclass:: Conv2d
:members:
:hidden:`ConvTranspose2d`
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: ConvTranspose2d
:members:
:hidden:`ReLU`
~~~~~~~~~~~~~~
.. autoclass:: ReLU
:members:
:hidden:`Sigmoid`
~~~~~~~~~~~~~~~~~
.. autoclass:: Sigmoid
:members:
:hidden:`MaxPool2d`
~~~~~~~~~~~~~~~~~~~
.. autoclass:: MaxPool2d
:members:
:hidden:`AvgPool2d`
~~~~~~~~~~~~~~~~~~~
.. autoclass:: AvgPool2d
:members:
:hidden:`AdaptiveAvgPool2d`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: AdaptiveAvgPool2d
:members:
:hidden:`Dropout2d`
~~~~~~~~~~~~~~~~~~~
.. autoclass:: Dropout2d
:members:
:hidden:`Linear`
~~~~~~~~~~~~~~~~
.. autoclass:: Linear
:members:
......@@ -16,12 +16,16 @@ Install PyTorch-Encoding
python setup.py install
* On MAC OSX::
* On Mac OSX::
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
- Reference:
Hang Zhang, Jia Xue, and Kristin Dana. "Deep TEN: Texture Encoding Network." *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017*::
.. note::
If using the code in your research, please cite our paper.
* Hang Zhang, Jia Xue, and Kristin Dana. "Deep TEN: Texture Encoding Network." *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017*::
@InProceedings{Zhang_2017_CVPR,
author = {Zhang, Hang and Xue, Jia and Dana, Kristin},
......
Implementing Synchronized Multi-GPU Batch Normalization
=======================================================
We will release the implementation details of Multi-GPU Batch Normalization in a later version.
Why Synchronize?
----------------
- Standard Implementation
How to Synchronize?
-------------------
- Forward and Backward Pass
- Synchronized DataParallel
- Cross GPU Autograd
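The forward synchronization can be done by reducing each device's element count, sum, and sum of squares, from which the global statistics follow. A pure-Python sketch of that reduction (illustrative only; no actual GPUs or CUDA kernels involved):

```python
def sync_mean_var(per_gpu_batches):
    """Global mean/variance from per-device (count, sum, sum-of-squares) stats."""
    stats = [(len(b), sum(b), sum(x * x for x in b)) for b in per_gpu_batches]
    n = sum(c for c, _, _ in stats)
    mean = sum(s for _, s, _ in stats) / n
    var = sum(q for _, _, q in stats) / n - mean ** 2  # E[x^2] - E[x]^2
    return mean, var

# Statistics match those of the concatenated batch [1, 2, 3, 4]:
m, v = sync_mean_var([[1.0, 2.0], [3.0, 4.0]])
```

Only three scalars per device cross the GPU boundary, which is why synchronization is cheap relative to the feature maps themselves.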
Comparing Performance
---------------------
......@@ -7,10 +7,8 @@ Data Parallel
The current PyTorch DataParallel does not support multi-GPU loss calculation, which makes GPU memory usage very inefficient. We address this issue here with CriterionDataParallel.
A DataParallel compatible with SyncBN will be released later.
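The memory saving comes from evaluating the criterion on each replica's own outputs and reducing only the scalar losses. A toy pure-Python sketch of the idea (illustrative names; no actual GPU scatter/gather here):

```python
def criterion_parallel(criterion, outputs_per_device, targets_per_device):
    """Compute the loss on each replica, then average the scalars.

    Only per-device scalar losses are reduced, so the full output
    tensors never need to be gathered onto a single device.
    """
    losses = [criterion(o, t)
              for o, t in zip(outputs_per_device, targets_per_device)]
    return sum(losses) / len(losses)

# Toy per-device MSE criterion:
mse = lambda o, t: sum((a - b) ** 2 for a, b in zip(o, t)) / len(o)
loss = criterion_parallel(mse, [[1.0, 2.0], [3.0]], [[1.0, 4.0], [0.0]])
```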
Modules
-------
.. currentmodule:: encoding
.. automodule:: encoding.parallel
.. currentmodule:: encoding.parallel
:hidden:`ModelDataParallel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
......@@ -24,3 +22,21 @@ Modules
.. autoclass:: CriterionDataParallel
:members:
:hidden:`SelfDataParallel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: SelfDataParallel
:members:
:hidden:`AllReduce`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: AllReduce
:members:
:hidden:`Broadcast`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: Broadcast
:members:
......@@ -8,26 +8,46 @@ The current BN is implemented unsynchronized across the GPUs, which is a big
Synchronizing batch normalization across multiple GPUs is not easy to implement within the current DataParallel framework. We address this difficulty by making each layer 'self-parallel', that is, accepting inputs from multiple GPUs. Therefore, we can handle each layer separately when synchronizing it across GPUs.
We will release the whole SyncBN module and a compatible DataParallel later.
.. currentmodule:: encoding.nn
Modules
-------
:hidden:`BatchNorm1d`
~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: BatchNorm1d
:members:
:hidden:`BatchNorm2d`
~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: BatchNorm2d
:members:
.. currentmodule:: encoding
Functions
---------
.. currentmodule:: encoding.functions
:hidden:`batchnormtrain`
~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: batchnormtrain
:members:
.. autofunction:: batchnormtrain
:hidden:`batchnormeval`
~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: batchnormeval
:members:
.. autofunction:: batchnormeval
:hidden:`sum_square`
~~~~~~~~~~~~~~~~~~~~
.. autoclass:: sum_square
:members:
.. autofunction:: sum_square
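In training mode the batch is normalized with its own statistics, :math:`y = \gamma (x - \mu)/\sqrt{\sigma^2 + \epsilon} + \beta`. A pure-Python sketch of that formula (the real ``batchnormtrain`` operates on CUDA tensors and its exact signature may differ; this only shows the math):

```python
import math

def batchnorm_train(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a 1-D batch with its own mean/variance (training mode)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in x]

# The normalized batch has (approximately) zero mean and unit variance:
y = batchnorm_train([1.0, 2.0, 3.0])
```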
.. role:: hidden
:class: hidden-section
My PyTorch Utils
================
Useful utility functions.
.. automodule:: encoding.utils
.. currentmodule:: encoding.utils
:hidden:`CosLR_Scheduler`
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: CosLR_Scheduler
:members:
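Assuming ``CosLR_Scheduler`` follows the standard cosine annealing schedule (an assumption based on the name), the learning rate at epoch ``t`` out of ``T`` total epochs is:

```python
import math

def cosine_lr(base_lr, epoch, total_epochs):
    """Cosine-annealed learning rate: base_lr * (1 + cos(pi * t/T)) / 2."""
    return base_lr * (1 + math.cos(math.pi * epoch / total_epochs)) / 2

start = cosine_lr(0.1, 0, 60)   # full lr at epoch 0
mid = cosine_lr(0.1, 30, 60)    # half lr midway
end = cosine_lr(0.1, 60, 60)    # ~0 at the end
```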
:hidden:`get_optimizer`
~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: get_optimizer
:hidden:`save_checkpoint`
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: save_checkpoint
:hidden:`progress_bar`
~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: progress_bar
......@@ -75,6 +75,8 @@ IF(ENCODING_SO_VERSION)
SOVERSION ${ENCODING_SO_VERSION})
ENDIF(ENCODING_SO_VERSION)
FILE(GLOB src-header kernel/generic/*.h)
INSTALL(TARGETS ENCODING LIBRARY DESTINATION ${ENCODING_INSTALL_LIB_SUBDIR})
INSTALL(FILES kernel/thc_encoding.h DESTINATION "${ENCODING_INSTALL_INCLUDE_SUBDIR}/ENCODING")
INSTALL(FILES kernel/generic/encoding_kernel.h DESTINATION "${ENCODING_INSTALL_INCLUDE_SUBDIR}/ENCODING/generic")
INSTALL(FILES ${src-header} DESTINATION "${ENCODING_INSTALL_INCLUDE_SUBDIR}/ENCODING/generic")
......@@ -8,8 +8,10 @@
## LICENSE file in the root directory of this source tree
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
from .functions import *
from .modules import *
from .syncbn import sum_square, batchnormtrain, batchnormeval
from .parallel import ModelDataParallel, CriterionDataParallel
__version__ = '1.0.1'
import encoding.nn
import encoding.functions
import encoding.dilated
import encoding.parallel