OpenDAS / Torchaudio

Commit ffeba11a authored Sep 02, 2024 by mayp777

UPDATE

parent 29deb085

Changes: 449
Showing 20 changed files with 0 additions and 6063 deletions (+0 -6063)

.circleci/build_docs/commit_docs.sh                                +0 -35
.circleci/build_docs/install_wheels.sh                             +0 -15
.circleci/config.yml                                               +0 -4096
.circleci/config.yml.in                                            +0 -924
.circleci/regenerate.py                                            +0 -278
.circleci/smoke_test/docker/Dockerfile                             +0 -36
.circleci/smoke_test/docker/build_and_push.sh                      +0 -8
.circleci/unittest/linux/README.md                                 +0 -6
.circleci/unittest/linux/docker/.dockerignore                      +0 -2
.circleci/unittest/linux/docker/.gitignore                         +0 -2
.circleci/unittest/linux/docker/Dockerfile                         +0 -56
.circleci/unittest/linux/docker/build_and_push.sh                  +0 -26
.circleci/unittest/linux/docker/scripts/copy_kaldi_executables.sh  +0 -58
.circleci/unittest/linux/scripts/install.sh                        +0 -81
.circleci/unittest/linux/scripts/run_clang_format.py               +0 -310
.circleci/unittest/linux/scripts/run_style_checks.sh               +0 -49
.circleci/unittest/linux/scripts/run_test.sh                       +0 -22
.circleci/unittest/linux/scripts/setup_env.sh                      +0 -39
.circleci/unittest/windows/README.md                               +0 -4
.circleci/unittest/windows/scripts/environment.yml                 +0 -16
.circleci/build_docs/commit_docs.sh  deleted  100755 → 0

#!/usr/bin/env bash
set -ex

if [ "$2" == "" ]; then
    echo call as "$0" "<src>" "<target branch>"
    echo where src is the root of the built documentation git checkout and
    echo branch should be "main" or "1.7" or so
    exit 1
fi

src=$1
target=$2

echo "committing docs from ${src} to ${target}"

pushd "${src}"
git checkout gh-pages
mkdir -p ./"${target}"
rm -rf ./"${target}"/*
cp -r "${src}/docs/build/html/"* ./"$target"
if [ "${target}" == "main" ]; then
    mkdir -p ./_static
    rm -rf ./_static/*
    cp -r "${src}/docs/build/html/_static/"* ./_static
    git add --all ./_static || true
fi
git add --all ./"${target}" || true
git config user.email "soumith+bot@pytorch.org"
git config user.name "pytorchbot"
# If there aren't changes, don't make a commit; push is no-op
git commit -m "auto-generating sphinx docs" || true
git remote add https https://github.com/pytorch/audio.git
git push -u https gh-pages
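
For reference, a hypothetical invocation consistent with the usage message above; the checkout path is an illustrative assumption, not taken from this commit:

    # sketch only: commit docs built under ~/workspace/audio/docs/build/html
    # into the "main" directory of the gh-pages branch
    bash .circleci/build_docs/commit_docs.sh ~/workspace/audio main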

.circleci/build_docs/install_wheels.sh  deleted  100755 → 0

#!/usr/bin/env bash
set -ex

if [[ -z "$PYTORCH_VERSION" ]]; then
    # Nightly build
    pip install --progress-bar off --pre torch -f "https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html"
else
    # Release branch
    pip install --progress-bar off "torch==${PYTORCH_VERSION}+cpu" \
        -f https://download.pytorch.org/whl/torch_stable.html \
        -f "https://download.pytorch.org/whl/${UPLOAD_CHANNEL}/torch_${UPLOAD_CHANNEL}.html"
fi
pip install --progress-bar off --no-deps ~/workspace/torchaudio*
pip install --progress-bar off -r docs/requirements.txt -r docs/requirements-tutorials.txt
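
A minimal sketch of how the two branches of this script would be exercised; the version and channel values are illustrative assumptions, not values taken from this commit:

    # nightly path: PYTORCH_VERSION unset, installs the pre-release CPU wheel
    bash .circleci/build_docs/install_wheels.sh

    # release path (assumed example values): pins torch==<version>+cpu from the given channel
    PYTORCH_VERSION=1.12.1 UPLOAD_CHANNEL=test bash .circleci/build_docs/install_wheels.sh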

.circleci/config.yml  deleted  100644 → 0

This diff is collapsed.

.circleci/config.yml.in  deleted  100644 → 0

This diff is collapsed.

.circleci/regenerate.py  deleted  100755 → 0

#!/usr/bin/env python3

"""
This script should use a very simple, functional programming style.
Avoid Jinja macros in favor of native Python functions.

Don't go overboard on code generation; use Python only to generate
content that can't be easily declared statically using CircleCI's YAML API.

Data declarations (e.g. the nested loops for defining the configuration matrix)
should be at the top of the file for easy updating.

See this comment for design rationale:
https://github.com/pytorch/vision/pull/1321#issuecomment-531033978
"""

import os.path

import jinja2
import yaml
from jinja2 import select_autoescape

PYTHON_VERSIONS = ["3.7", "3.8", "3.9", "3.10"]
CU_VERSIONS_DICT = {
    "linux": ["cpu", "cu116", "cu117", "rocm5.1.1", "rocm5.2"],
    "windows": ["cpu", "cu116", "cu117"],
    "macos": ["cpu"],
}

DOC_VERSION = ("linux", "3.8")


def build_workflows(prefix="", upload=False, filter_branch=None, indentation=6):
    w = []
    w += build_download_job(filter_branch)
    for os_type in ["linux", "macos", "windows"]:
        w += build_ffmpeg_job(os_type, filter_branch)
    for btype in ["wheel", "conda"]:
        for os_type in ["linux", "macos", "windows"]:
            for python_version in PYTHON_VERSIONS:
                for cu_version in CU_VERSIONS_DICT[os_type]:
                    fb = filter_branch
                    if cu_version.startswith("rocm") and btype == "conda":
                        continue
                    if not fb and (
                        os_type == "linux" and btype == "wheel" and python_version == "3.8" and cu_version == "cpu"
                    ):
                        # the fields must match the build_docs "requires" dependency
                        fb = "/.*/"
                    w += build_workflow_pair(btype, os_type, python_version, cu_version, fb, prefix, upload)

    if not filter_branch:
        # Build on every pull request, but upload only on nightly and tags
        w += build_doc_job("/.*/")
        w += upload_doc_job("nightly")
        w += docstring_parameters_sync_job(None)

    return indent(indentation, w)


def build_download_job(filter_branch):
    job = {
        "name": "download_third_parties",
    }

    if filter_branch:
        job["filters"] = gen_filter_branch_tree(filter_branch)
    return [{"download_third_parties": job}]


def build_ffmpeg_job(os_type, filter_branch):
    job = {
        "name": f"build_ffmpeg_{os_type}",
        "requires": ["download_third_parties"],
    }

    if filter_branch:
        job["filters"] = gen_filter_branch_tree(filter_branch)

    job["python_version"] = "foo"
    return [{f"build_ffmpeg_{os_type}": job}]


def build_workflow_pair(btype, os_type, python_version, cu_version, filter_branch, prefix="", upload=False):
    w = []
    base_workflow_name = f"{prefix}binary_{os_type}_{btype}_py{python_version}_{cu_version}"
    w.append(generate_base_workflow(base_workflow_name, python_version, cu_version, filter_branch, os_type, btype))

    if upload:
        w.append(generate_upload_workflow(base_workflow_name, filter_branch, os_type, btype, cu_version))
        if os_type != "macos":
            pydistro = "pip" if btype == "wheel" else "conda"
            w.append(
                generate_smoketest_workflow(pydistro, base_workflow_name, filter_branch, python_version, cu_version, os_type)
            )

    return w


def build_doc_job(filter_branch):
    job = {
        "name": "build_docs",
        "python_version": "3.8",
        "cuda_version": "cu116",
        "requires": [
            "binary_linux_conda_py3.8_cu116",
        ],
    }

    if filter_branch:
        job["filters"] = gen_filter_branch_tree(filter_branch)
    return [{"build_docs": job}]


def upload_doc_job(filter_branch):
    job = {
        "name": "upload_docs",
        "context": "org-member",
        "python_version": "3.8",
        "requires": [
            "build_docs",
        ],
    }

    if filter_branch:
        job["filters"] = gen_filter_branch_tree(filter_branch)
    return [{"upload_docs": job}]


def docstring_parameters_sync_job(filter_branch):
    job = {
        "name": "docstring_parameters_sync",
        "python_version": "3.8",
        "requires": [
            "binary_linux_wheel_py3.8_cpu",
        ],
    }

    if filter_branch:
        job["filters"] = gen_filter_branch_tree(filter_branch)
    return [{"docstring_parameters_sync": job}]


def generate_base_workflow(base_workflow_name, python_version, cu_version, filter_branch, os_type, btype):
    d = {
        "name": base_workflow_name,
        "python_version": python_version,
        "cuda_version": cu_version,
        "requires": [f"build_ffmpeg_{os_type}"],
    }

    if btype == "conda":
        d["conda_docker_image"] = f'pytorch/conda-builder:{cu_version.replace("cu1", "cuda1")}'
    elif cu_version.startswith("cu"):
        d["wheel_docker_image"] = f'pytorch/manylinux-{cu_version.replace("cu1", "cuda1")}'
    elif cu_version.startswith("rocm"):
        d["wheel_docker_image"] = f"pytorch/manylinux-rocm:{cu_version[len('rocm'):]}"

    if filter_branch:
        d["filters"] = gen_filter_branch_tree(filter_branch)

    return {f"binary_{os_type}_{btype}": d}


def gen_filter_branch_tree(*branches):
    return {
        "branches": {
            "only": list(branches),
        },
        "tags": {
            # Using a raw string here to avoid having to escape
            # anything
            "only": r"/v[0-9]+(\.[0-9]+)*-rc[0-9]+/"
        },
    }


def generate_upload_workflow(base_workflow_name, filter_branch, os_type, btype, cu_version):
    d = {
        "name": "{base_workflow_name}_upload".format(base_workflow_name=base_workflow_name),
        "context": "org-member",
        "requires": [base_workflow_name],
    }

    if btype == "wheel":
        d["subfolder"] = "" if os_type == "macos" else cu_version + "/"

    if filter_branch:
        d["filters"] = gen_filter_branch_tree(filter_branch)

    return {"binary_{btype}_upload".format(btype=btype): d}


def generate_smoketest_workflow(pydistro, base_workflow_name, filter_branch, python_version, cu_version, os_type):
    smoke_suffix = f"smoke_test_{pydistro}".format(pydistro=pydistro)
    d = {
        "name": f"{base_workflow_name}_{smoke_suffix}",
        "requires": [base_workflow_name],
        "python_version": python_version,
        "cuda_version": cu_version,
    }

    if filter_branch:
        d["filters"] = gen_filter_branch_tree(filter_branch)

    smoke_name = f"smoke_test_{os_type}_{pydistro}"
    if pydistro == "conda" and (os_type == "linux" or os_type == "windows") and cu_version != "cpu":
        smoke_name += "_gpu"
    return {smoke_name: d}


def indent(indentation, data_list):
    return ("\n" + " " * indentation).join(yaml.dump(data_list).splitlines())


def unittest_python_versions(os):
    return {
        "windows": PYTHON_VERSIONS[:1],
        "macos": PYTHON_VERSIONS[:1],
        "linux": PYTHON_VERSIONS,
    }.get(os)


def unittest_workflows(indentation=6):
    jobs = []
    jobs += build_download_job(None)
    for os_type in ["linux", "windows", "macos"]:
        for device_type in ["cpu", "gpu"]:
            if os_type == "macos" and device_type == "gpu":
                continue
            for i, python_version in enumerate(unittest_python_versions(os_type)):
                job = {
                    "name": f"unittest_{os_type}_{device_type}_py{python_version}",
                    "python_version": python_version,
                    "cuda_version": "cpu" if device_type == "cpu" else "cu116",
                    "requires": ["download_third_parties"],
                }

                jobs.append({f"unittest_{os_type}_{device_type}": job})

                if i == 0 and os_type == "linux" and device_type == "cpu":
                    jobs.append(
                        {
                            "stylecheck": {
                                "name": f"stylecheck_py{python_version}",
                                "python_version": python_version,
                                "cuda_version": "cpu",
                            }
                        }
                    )
    return indent(indentation, jobs)


if __name__ == "__main__":
    d = os.path.dirname(__file__)
    env = jinja2.Environment(
        loader=jinja2.FileSystemLoader(d),
        lstrip_blocks=True,
        autoescape=select_autoescape(enabled_extensions=("html", "xml")),
    )

    with open(os.path.join(d, "config.yml"), "w") as f:
        f.write(
            env.get_template("config.yml.in").render(
                build_workflows=build_workflows,
                unittest_workflows=unittest_workflows,
            )
        )
        f.write("\n")
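
Judging by the __main__ block, the script renders .circleci/config.yml from the config.yml.in Jinja template next to it; a minimal sketch of running it locally (the working directory is an assumption):

    # regenerate the CircleCI config and inspect what changed
    python .circleci/regenerate.py
    git diff .circleci/config.yml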

.circleci/smoke_test/docker/Dockerfile  deleted  100644 → 0

# this Dockerfile is for torchaudio smoke test, it will be created periodically via CI system
# if you need to do it locally, follow below steps once you have Docker installed
# assuming you're within the directory where this Dockerfile located
# to test the build use : docker build . -t torchaudio/smoketest
# to upload the Dockerfile use build_and_push.sh script

FROM ubuntu:latest

RUN apt-get -qq update && apt-get -qq -y install curl bzip2 sox libsox-dev libsox-fmt-all \
    && curl -sSL https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -o /tmp/miniconda.sh \
    && bash /tmp/miniconda.sh -bfp /usr/local \
    && rm -rf /tmp/miniconda.sh \
    && conda install -c conda-forge gcc \
    && conda install -y python=3 \
    && conda update conda \
    && apt-get -qq -y remove curl bzip2 \
    && apt-get -qq -y autoremove \
    && apt-get autoclean \
    && rm -rf /var/lib/apt/lists/* /var/log/dpkg.log \
    && conda clean --all --yes

ENV PATH /opt/conda/bin:$PATH

RUN conda create -y --name python3.7 python=3.7
RUN conda create -y --name python3.8 python=3.8
RUN conda create -y --name python3.9 python=3.9
RUN conda create -y --name python3.10 python=3.10

SHELL [ "/bin/bash", "-c" ]

RUN echo "source /usr/local/etc/profile.d/conda.sh" >> ~/.bashrc
RUN source /usr/local/etc/profile.d/conda.sh && conda activate python3.7 && conda install -y -c conda-forge sox && conda install -y numpy
RUN source /usr/local/etc/profile.d/conda.sh && conda activate python3.8 && conda install -y -c conda-forge sox && conda install -y numpy
RUN source /usr/local/etc/profile.d/conda.sh && conda activate python3.9 && conda install -y -c conda-forge sox && conda install -y numpy
RUN source /usr/local/etc/profile.d/conda.sh && conda activate python3.10 && conda install -y -c conda-forge sox && conda install -y numpy

CMD [ "/bin/bash"]

.circleci/smoke_test/docker/build_and_push.sh  deleted  100755 → 0

#!/usr/bin/env bash
set -euo pipefail

datestr="$(date "+%Y%m%d")"
image="pytorch/torchaudio_unittest_base:smoke_test-${datestr}"

docker build -t "${image}" .
docker push "${image}"

.circleci/unittest/linux/README.md  deleted  100644 → 0

This directory contains;

- docker
  Docker image definition and scripts to build and update Docker image for unittest.
- scripts
  Scripts used by CircleCI to run unit tests.

.circleci/unittest/linux/docker/.dockerignore  deleted  100644 → 0

*
!scripts

.circleci/unittest/linux/docker/.gitignore  deleted  100644 → 0

scripts/build_third_parties.sh
Dockerfile.tmp

.circleci/unittest/linux/docker/Dockerfile  deleted  100644 → 0

FROM ubuntu:18.04 as builder

RUN apt update -q

################################################################################
# Build Kaldi
################################################################################
RUN apt install -q -y \
        autoconf \
        automake \
        bzip2 \
        g++ \
        gfortran \
        git \
        libatlas-base-dev \
        libtool \
        make \
        python2.7 \
        python3 \
        sox \
        subversion \
        unzip \
        wget \
        zlib1g-dev

# KALDI uses MKL as a default math library, but we are going to copy featbin binaries and dependent
# shared libraries to the final image, so we use ATLAS, which is easy to reinstall in the final image.
RUN git clone --depth 1 https://github.com/kaldi-asr/kaldi.git /opt/kaldi && \
    cd /opt/kaldi/tools && \
    make -j $(nproc) && \
    cd /opt/kaldi/src && \
    ./configure --shared --mathlib=ATLAS --use-cuda=no && \
    make featbin -j $(nproc)

# Copy featbins and dependent libraries
ADD ./scripts /scripts
RUN bash /scripts/copy_kaldi_executables.sh /opt/kaldi /kaldi

################################################################################
# Build the final image
################################################################################
FROM BASE_IMAGE

RUN apt update && apt install -y \
        g++ \
        gfortran \
        git \
        libatlas3-base \
        libsndfile1 \
        wget \
        curl \
        make \
        file \
        pkg-config \
        && rm -rf /var/lib/apt/lists/*

COPY --from=builder /kaldi /kaldi
ENV PATH="${PATH}:/kaldi/bin" LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/kaldi/lib"

.circleci/unittest/linux/docker/build_and_push.sh  deleted  100755 → 0

#!/usr/bin/env bash
set -euo pipefail

if [ $# -ne 1 ]; then
    printf "Usage %s <CUDA_VERSION>\n\n" "$0"
    exit 1
fi

datestr="$(date "+%Y%m%d")"
if [ "$1" = "cpu" ]; then
    base_image="ubuntu:18.04"
    image="pytorch/torchaudio_unittest_base:manylinux-${datestr}"
else
    base_image="nvidia/cuda:$1-devel-ubuntu18.04"
    docker pull "${base_image}"
    image="pytorch/torchaudio_unittest_base:manylinux-cuda$1-${datestr}"
fi

cd "$(dirname "${BASH_SOURCE[0]}")"

# docker build also accepts reading from STDIN
# but in that case, no context (other files) can be passed, so we write out Dockerfile
sed "s|BASE_IMAGE|${base_image}|g" Dockerfile > Dockerfile.tmp
docker build -t "${image}" -f Dockerfile.tmp .
docker push "${image}"

.circleci/unittest/linux/docker/scripts/copy_kaldi_executables.sh  deleted  100755 → 0

#!/usr/bin/env bash

list_executables() {
    # List up executables in the given directory
    find "$1" -type f -executable
}

list_kaldi_libraries() {
    # List up shared libraries used by executables found in the given directory ($1)
    # that reside in Kaldi directory ($2)
    while read file; do ldd "${file}" | grep -o "${2}.* "; done < <(list_executables "$1") | sort -u
}

set -euo pipefail

kaldi_root="$(realpath "$1")"
target_dir="$(realpath "$2")"
bin_dir="${target_dir}/bin"
lib_dir="${target_dir}/lib"

mkdir -p "${bin_dir}" "${lib_dir}"

# 1. Copy featbins
printf "Copying executables to %s\n" "${bin_dir}"
while read file; do
    printf " %s\n" "${file}"
    cp "${file}" "${bin_dir}"
done < <(list_executables "${kaldi_root}/src/featbin")

# 2. Copy dependent libraries from Kaldi
printf "Copying libraries to %s\n" "${lib_dir}"
while read file; do
    printf " %s\n" "$file"
    # If it is not symlink, just copy to the target directory
    if [ ! -L "${file}" ]; then
        cp "${file}" "${lib_dir}"
        continue
    fi
    # If it is symlink,
    # 1. Copy the actual library to the target directory.
    library="$(realpath "${file}")"
    cp "${library}" "${lib_dir}"
    # 2. then if the name of the symlink is different from the actual library name,
    #    create the symlink in the target directory.
    lib_name="$(basename "${library}")"
    link_name="$(basename "${file}")"
    if [ "${lib_name}" != "${link_name}" ]; then
        printf " Linking %s -> %s\n" "${lib_name}" "${link_name}"
        (
        cd "${lib_dir}"
        ln -sf "${lib_name}" "${link_name}"
        )
    fi
done < <(list_kaldi_libraries "${bin_dir}" "${kaldi_root}")
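
This mirrors how the Dockerfile above calls the script; a standalone sketch of the same invocation:

    # copy Kaldi featbin executables and their Kaldi shared-library dependencies
    # from a Kaldi build tree into /kaldi/bin and /kaldi/lib
    bash copy_kaldi_executables.sh /opt/kaldi /kaldi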

.circleci/unittest/linux/scripts/install.sh  deleted  100755 → 0

#!/usr/bin/env bash

unset PYTORCH_VERSION
# For unittest, nightly PyTorch is used as the following section,
# so no need to set PYTORCH_VERSION.
# In fact, keeping PYTORCH_VERSION forces us to hardcode PyTorch version in config.

set -e

root_dir="$(git rev-parse --show-toplevel)"
conda_dir="${root_dir}/conda"
env_dir="${root_dir}/env"

cd "${root_dir}"

case "$(uname -s)" in
    Darwin*)
        os=MacOSX;;
    *)
        os=Linux
esac

# 0. Activate conda env
eval "$("${conda_dir}/bin/conda" shell.bash hook)"
conda activate "${env_dir}"

# 1. Install PyTorch
if [ -z "${CUDA_VERSION:-}" ] ; then
    if [ "${os}" == MacOSX ] ; then
        cudatoolkit=''
    else
        cudatoolkit="cpuonly"
    fi
    version="cpu"
else
    version="$(python -c "print('.'.join(\"${CUDA_VERSION}\".split('.')[:2]))")"
    export CUDATOOLKIT_CHANNEL="nvidia"
    cudatoolkit="pytorch-cuda=${version}"
fi

printf "Installing PyTorch with %s\n" "${cudatoolkit}"
(
    if [ "${os}" == MacOSX ] ; then
        # TODO: this can be removed as soon as linking issue could be resolved
        # see https://github.com/pytorch/pytorch/issues/62424 from details
        MKL_CONSTRAINT='mkl==2021.2.0'
        pytorch_build=pytorch
    else
        MKL_CONSTRAINT=''
        pytorch_build="pytorch[build="*${version}*"]"
    fi
    set -x

    if [[ -z "$cudatoolkit" ]]; then
        conda install ${CONDA_CHANNEL_FLAGS:-} -y -c "pytorch-${UPLOAD_CHANNEL}" $MKL_CONSTRAINT "pytorch-${UPLOAD_CHANNEL}::${pytorch_build}"
    else
        conda install pytorch ${cudatoolkit} ${CONDA_CHANNEL_FLAGS:-} -y -c "pytorch-${UPLOAD_CHANNEL}" -c nvidia $MKL_CONSTRAINT
    fi
)

# 2. Install torchaudio
printf "* Installing torchaudio\n"
python setup.py install

# 3. Install Test tools
printf "* Installing test tools\n"
NUMBA_DEV_CHANNEL=""
if [[ "$(python --version)" = *3.9* || "$(python --version)" = *3.10* ]]; then
    # Numba isn't available for Python 3.9 and 3.10 except on the numba dev channel and building from source fails
    # See https://github.com/librosa/librosa/issues/1270#issuecomment-759065048
    NUMBA_DEV_CHANNEL="-c numba/label/dev"
fi
# Note: installing librosa via pip fail because it will try to compile numba.
(
    set -x
    conda install -y -c conda-forge ${NUMBA_DEV_CHANNEL} 'librosa>=0.8.0' parameterized 'requests>=2.20'
    pip install kaldi-io SoundFile coverage pytest pytest-cov 'scipy==1.7.3' transformers expecttest unidecode inflect Pillow sentencepiece pytorch-lightning 'protobuf<4.21.0' demucs tinytag
)
# Install fairseq
git clone https://github.com/pytorch/fairseq
cd fairseq
git checkout e47a4c8
pip install .

.circleci/unittest/linux/scripts/run_clang_format.py  deleted  100755 → 0

This diff is collapsed.

.circleci/unittest/linux/scripts/run_style_checks.sh  deleted  100755 → 0

#!/usr/bin/env bash
set -eux

root_dir="$(git rev-parse --show-toplevel)"
conda_dir="${root_dir}/conda"
env_dir="${root_dir}/env"
this_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" > /dev/null 2>&1 && pwd)"

eval "$("${conda_dir}/bin/conda" shell.bash hook)"
conda activate "${env_dir}"

# 1. Install tools
conda install -y flake8==3.9.2
printf "Installed flake8: "
flake8 --version

clangformat_path="${root_dir}/clang-format"
curl https://oss-clang-format.s3.us-east-2.amazonaws.com/linux64/clang-format-linux64 -o "${clangformat_path}"
chmod +x "${clangformat_path}"
printf "Installed clang-fortmat"
"${clangformat_path}" --version

# 2. Run style checks
# We want to run all the style checks even if one of them fail.
set +e

exit_status=0

printf "\x1b[34mRunning flake8: \x1b[0m\n"
flake8 torchaudio test tools/setup_helpers docs/source/conf.py examples
status=$?
exit_status="$((exit_status+status))"
if [ "${status}" -ne 0 ]; then
    printf "\x1b[31mflake8 failed. Check the format of Python files.\x1b[0m\n"
fi

printf "\x1b[34mRunning clang-format: \x1b[0m\n"
"${this_dir}"/run_clang_format.py \
    -r torchaudio/csrc third_party/kaldi/src \
    --clang-format-executable "${clangformat_path}" \
  && git diff --exit-code
status=$?
exit_status="$((exit_status+status))"
if [ "${status}" -ne 0 ]; then
    printf "\x1b[31mC++ files are not formatted. Please use clang-format to format CPP files.\x1b[0m\n"
fi

exit $exit_status

.circleci/unittest/linux/scripts/run_test.sh  deleted  100755 → 0

#!/usr/bin/env bash
set -e

eval "$(./conda/bin/conda shell.bash hook)"
conda activate ./env

python -m torch.utils.collect_env
env | grep TORCHAUDIO || true

export PATH="${PWD}/third_party/install/bin/:${PATH}"

declare -a args=(
    '-v'
    '--cov=torchaudio'
    "--junitxml=${PWD}/test-results/junit.xml"
    '--durations' '20'
)

cd test
pytest "${args[@]}" torchaudio_unittest
coverage html

.circleci/unittest/linux/scripts/setup_env.sh  deleted  100755 → 0

#!/usr/bin/env bash

# This script is for setting up environment in which unit test is ran.
# To speed up the CI time, the resulting environment is cached.
#
# Do not install PyTorch and torchaudio here, otherwise they also get cached.

set -ex

root_dir="$(git rev-parse --show-toplevel)"
conda_dir="${root_dir}/conda"
env_dir="${root_dir}/env"

cd "${root_dir}"

case "$(uname -s)" in
    Darwin*)
        os=MacOSX;;
    *)
        os=Linux
esac

# 1. Install conda at ./conda
if [ ! -d "${conda_dir}" ]; then
    printf "* Installing conda\n"
    curl --silent -L -o miniconda.sh "http://repo.continuum.io/miniconda/Miniconda3-latest-${os}-x86_64.sh"
    bash ./miniconda.sh -b -f -p "${conda_dir}"
fi
eval "$("${conda_dir}/bin/conda" shell.bash hook)"

# 2. Create test environment at ./env
if [ ! -d "${env_dir}" ]; then
    printf "* Creating a test environment with PYTHON_VERSION=%s\n" "${PYTHON_VERSION}\n"
    conda create --prefix "${env_dir}" -y python="${PYTHON_VERSION}"
fi
conda activate "${env_dir}"

# 3. Install minimal build tools
pip --quiet install cmake ninja
conda install --quiet -y 'ffmpeg>=4.1' pkg-config
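
Taken together with install.sh and run_test.sh above, these scripts suggest the following local sequence; the ordering and the example environment values are assumptions, not documented in this commit:

    # sketch of running the Linux unittest scripts locally (assumed order and values)
    export PYTHON_VERSION=3.8
    export UPLOAD_CHANNEL=nightly
    bash .circleci/unittest/linux/scripts/setup_env.sh   # conda + ./env + build tools
    bash .circleci/unittest/linux/scripts/install.sh     # nightly PyTorch, torchaudio, test deps
    bash .circleci/unittest/linux/scripts/run_test.sh    # pytest with coverage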

.circleci/unittest/windows/README.md  deleted  100644 → 0

This directory contains;

- scripts
  Scripts used by CircleCI to run unit tests.

.circleci/unittest/windows/scripts/environment.yml  deleted  100644 → 0

channels:
  - defaults
dependencies:
  - flake8
  - pytest
  - pytest-cov
  - codecov
  - scipy >= 1.4.1
  - pip
  - pip:
      - kaldi-io
      - PySoundFile
      - future
      - parameterized
      - dataclasses
      - expecttest