OpenDAS / Warpctc · Commits

Commit 01ec9b4b, authored Jun 16, 2023 by lishen

Rename the readmes: the open-source readme.md becomes readme_origin, and the DCU readme_HIP.md becomes readme.md.

parent 8393e73f
Showing 2 changed files with 115 additions and 50 deletions:

- README.md (+49, −50)
- README_origin.md (+66, −0)
README.md (view file @ 01ec9b4b)

# DLIB
## Environment Setup

Before building on the DCU, prepare the build environment; see [environment prepare](environment_prepare.md).
## Installing from Source

### Preparing the Build Environment (using dtk-23.04 as an example)

- Pull the code:

```shell
git clone -b develop http://developer.hpccube.com/codes/aicomponent/warpctc.git
```

- From the DCU Toolkit section of the [developer community](https://developer.hpccube.com/tool/#sdk), download DTK-23.04, extract it under /opt/, and create a symlink:

```shell
cd /opt && ln -s dtk-23.04 dtk
```

- Export the environment variables and install the required dependency libraries:

```shell
source /opt/dtk/env.sh
```
### Building and Installing

#### Building the Python API

- Install with python:

```shell
cd pytorch_binding
python setup.py install
```

- Build a whl package with python:

```shell
cd pytorch_binding
python setup.py bdist_wheel
```
### Testing

- Verify the correctness of the warpctc loss (CPU/GPU consistency):

```shell
cd pytorch_binding/tests
python3 test_gpu.py
```

- Verify the GPU speedup of the warpctc loss:

```shell
cd pytorch_binding/tests
python3 test_gpu_speed.py
```
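The consistency check above compares the loss computed on CPU against the same batch evaluated on the DCU. A minimal sketch of such a comparison in plain Python (the helper name and tolerance values are illustrative, not taken from test_gpu.py):

```python
def losses_match(cpu_loss, gpu_loss, rtol=1e-4, atol=1e-6):
    """Return True when the two loss values agree within tolerance."""
    return abs(cpu_loss - gpu_loss) <= atol + rtol * abs(gpu_loss)

# Identical batches evaluated on CPU and DCU should agree up to
# small floating-point drift.
print(losses_match(2.462858, 2.462859))  # → True
print(losses_match(2.462858, 2.700000))  # → False
```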
README_origin.md (new file, mode 100644, view file @ 01ec9b4b)
# PyTorch bindings for Warp-ctc

[![Build Status]()](https://travis-ci.org/SeanNaren/warp-ctc)

This is an extension onto the original repo found [here](https://github.com/baidu-research/warp-ctc).
## Installation

Install [PyTorch](https://github.com/pytorch/pytorch#installation) v0.4.

`WARP_CTC_PATH` should be set to the location of a built WarpCTC
(i.e. `libwarpctc.so`). This defaults to `../build`, so from within a
new warp-ctc clone you could build WarpCTC like this:

```bash
git clone https://github.com/SeanNaren/warp-ctc.git
cd warp-ctc
mkdir build; cd build
cmake ..
make
```
Now install the bindings:

```bash
cd pytorch_binding
python setup.py install
```

If you try the above and get a dlopen error on OSX with anaconda3 (as recommended by pytorch):

```bash
cd ../pytorch_binding
python setup.py install
cd ../build
cp libwarpctc.dylib /Users/$WHOAMI/anaconda3/lib
```

This will resolve the "library not loaded" error. This can easily be modified to work with other python installs if needed.
An example using the bindings:

```python
import torch
from warpctc_pytorch import CTCLoss

ctc_loss = CTCLoss()
# expected shape of seqLength x batchSize x alphabet_size
probs = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1],
                            [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1).contiguous()
labels = torch.IntTensor([1, 2])
label_sizes = torch.IntTensor([2])
probs_sizes = torch.IntTensor([2])
probs.requires_grad_(True)  # tells autograd to compute gradients for probs
cost = ctc_loss(probs, labels, probs_sizes, label_sizes)
cost.backward()
```
## Documentation

```
CTCLoss(size_average=False, length_average=False)
# size_average (bool): normalize the loss by the batch size (default: False)
# length_average (bool): normalize the loss by the total number of frames in the batch. If True, supersedes size_average (default: False)

forward(acts, labels, act_lens, label_lens)
# acts: Tensor of (seqLength x batch x outputDim) containing output activations from network (before softmax)
# labels: 1 dimensional Tensor containing all the targets of the batch in one large sequence
# act_lens: Tensor of size (batch) containing size of each output sequence from the network
# label_lens: Tensor of (batch) containing label length of each example
```
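For intuition about what `forward` computes, here is a minimal, dependency-free sketch of the standard CTC forward (alpha) recursion in log space. This is not warp-ctc's implementation, just the same recurrence for a single sequence; applied to the two-frame example above, it reproduces the expected loss of about 2.4629:

```python
import math

NEG_INF = float("-inf")

def logsumexp(*xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    if m == NEG_INF:
        return NEG_INF
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_softmax(row):
    lse = logsumexp(*row)
    return [x - lse for x in row]

def ctc_loss_ref(acts, labels, blank=0):
    """Negative log-likelihood of `labels` given raw per-frame activations.

    acts: T x alphabet_size activations (before softmax) for one sequence.
    labels: target label indices, blanks excluded.
    """
    log_probs = [log_softmax(frame) for frame in acts]
    # Extended label sequence with blanks interleaved: b, l1, b, l2, ..., b
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S = len(ext)
    # alpha[s]: log-prob of all alignments that end at ext[s] after frame t
    alpha = [NEG_INF] * S
    alpha[0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]
    for t in range(1, len(log_probs)):
        prev = alpha
        alpha = [NEG_INF] * S
        for s in range(S):
            acc = prev[s]                       # stay on the same symbol
            if s >= 1:
                acc = logsumexp(acc, prev[s - 1])   # advance by one
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                acc = logsumexp(acc, prev[s - 2])   # skip a blank
            alpha[s] = acc + log_probs[t][ext[s]]
    # Valid alignments end on the last label or the trailing blank.
    tail = alpha[S - 1] if S == 1 else logsumexp(alpha[S - 1], alpha[S - 2])
    return -tail

acts = [[0.1, 0.6, 0.1, 0.1, 0.1],
        [0.1, 0.1, 0.6, 0.1, 0.1]]
print(round(ctc_loss_ref(acts, [1, 2]), 4))  # → 2.4629
```

With only two frames and two labels there is exactly one valid alignment ("1" then "2"), so the loss reduces to the product of the two softmax probabilities, matching what `CTCLoss()(probs, labels, probs_sizes, label_sizes)` returns for this batch.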