chenpangpang / gpu-base-image-build

Commit ec5e8d89, authored Nov 05, 2024 by chenpangpang

Merge branch 'paddle' into 'dev'

Paddle. See merge request !5

Parents: 1440eb08, 338ca45c

Showing 8 changed files with 223 additions and 13 deletions (+223 -13)
README.md                                  +20   -2
attach/paddle.json                         +128  -0
auto_build.py                              +48   -2
build_space/Dockerfile.jupyterlab_ubuntu   +22   -4
build_space/extension.sh                   +2    -2
script/1_base_test.sh                      +1    -1
script/2_text_test.sh                      +1    -1
script/3_image_test.sh                     +1    -1
README.md

@@ -52,13 +52,31 @@
- Argument 3: base image
- TENSORFLOW_VERSION: TensorFlow version
- CONDA_URL: URL of the conda installer
- paddlepaddle
```bash
cd build_space && \
./build_ubuntu.sh jupyterlab \
jupyterlab-paddle:2.6-py3.11-cuda12.0-ubuntu22.04-devel \
nvidia/cuda:12.0.0-cudnn8-devel-ubuntu22.04 \
PADDLEPADDLE_VERSION="2.6.0.post120" \
PADDLENLP_VERSION="2.7.2" \
CONDA_URL="https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-py311_24.7.1-0-Linux-x86_64.sh" \
PADDLE_URL="https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"
```
- Argument 1: ide, no change needed
- Argument 2: output image name
- Argument 3: base image
- PADDLEPADDLE_VERSION: paddlepaddle version
- PADDLENLP_VERSION:
- CONDA_URL: URL of the conda installer
- PADDLE_URL: paddlepaddle install source; leave empty to download from the default source (Tsinghua mirror)

### Related links
- pytorch images (**choose the devel image**): https://hub.docker.com/r/pytorch/pytorch/tags
- nvidia images (**choose the devel image**): https://hub.docker.com/r/nvidia/cuda/tags
- torch / torchvision / torchaudio / CUDA version mapping: https://pytorch.org/get-started/previous-versions/
- conda installers: https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/
- paddlepaddle dependency reference: attach/paddle.json

## Image verification
1. Version check: run `sh script/1_base_test.sh $IMAGE_NAME`; output:
...
attach/paddle.json
0 → 100644 (new file)

[
  {"paddle_excel_version": "3.0-beta", "cuda_version": "cuda12.3", "paddle_version": "3.0.0b2", "paddlenlp_version": "3.0.0b0", "paddle_url": "https://www.paddlepaddle.org.cn/packages/stable/cu123/"},
  {"paddle_excel_version": "3.0-beta", "cuda_version": "cuda11.8", "paddle_version": "3.0.0b2", "paddlenlp_version": "3.0.0b0", "paddle_url": "https://www.paddlepaddle.org.cn/packages/stable/cu118/"},
  {"paddle_excel_version": "2.6", "cuda_version": "cuda12.0", "paddle_version": "2.6.0.post120", "paddlenlp_version": "2.7.2", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.6", "cuda_version": "cuda11.8", "paddle_version": "2.6.0", "paddlenlp_version": "2.8.1", "paddle_url": null},
  {"paddle_excel_version": "2.5", "cuda_version": "cuda12.0", "paddle_version": "2.5.2.post120", "paddlenlp_version": "2.5.1", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.5", "cuda_version": "cuda11.7", "paddle_version": "2.5.2.post117", "paddlenlp_version": "2.5.1", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.4", "cuda_version": "cuda11.7", "paddle_version": "2.4.2.post117", "paddlenlp_version": "2.4.1", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.4", "cuda_version": "cuda11.6", "paddle_version": "2.4.2.post116", "paddlenlp_version": "2.5.1", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.3", "cuda_version": "cuda11.6", "paddle_version": "2.3.2.post116", "paddlenlp_version": "2.5.1", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.3", "cuda_version": "cuda11.2", "paddle_version": "2.3.2.post112", "paddlenlp_version": "2.4.1", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.2", "cuda_version": "cuda11.2", "paddle_version": "2.2.2.post112", "paddlenlp_version": "2.2.0", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.2", "cuda_version": "cuda11.1", "paddle_version": "2.2.2.post110", "paddlenlp_version": "2.2.0", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.1", "cuda_version": "cuda11.2", "paddle_version": "2.1.3.post112", "paddlenlp_version": "2.1.1", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.1", "cuda_version": "cuda11.0", "paddle_version": "2.1.3.post110", "paddlenlp_version": "2.1.1", "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "2.0", "cuda_version": "cuda10.1", "paddle_version": "2.0.2.post101", "paddlenlp_version": "2.0.0", "paddle_url": "https://www.paddlepaddle.org.cn/whl/mkl/stable.html"},
  {"paddle_excel_version": "2.0", "cuda_version": "cuda10.0", "paddle_version": "2.0.2.post100", "paddlenlp_version": "2.0.0", "paddle_url": "https://www.paddlepaddle.org.cn/whl/mkl/stable.html"},
  {"paddle_excel_version": "1.8", "cuda_version": "cuda10.1", "paddle_version": "1.8.5.post107", "paddlenlp_version": null, "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"},
  {"paddle_excel_version": "1.8", "cuda_version": "cuda10.0", "paddle_version": "1.8.5.post107", "paddlenlp_version": null, "paddle_url": "https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html"}
]
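For reference, a minimal sketch (not part of the commit) of how this table can be queried; auto_build.py below does essentially the same thing in its get_paddle_info() helper. The lookup function and its name here are illustrative only.

```python
# Hypothetical standalone lookup against attach/paddle.json:
# find the pip install info for a (paddle_excel_version, cuda_version) pair.
import json

with open("attach/paddle.json", encoding="utf-8") as fh:
    table = json.load(fh)

def lookup(excel_version: str, cuda_version: str):
    # Return the first matching row, or None if the pair is not in the table.
    for row in table:
        if row["paddle_excel_version"] == excel_version and row["cuda_version"] == cuda_version:
            return row
    return None

print(lookup("2.6", "cuda12.0")["paddle_version"])  # 2.6.0.post120
```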
auto_build.py

@@ -7,7 +7,8 @@ import time
from concurrent.futures import ThreadPoolExecutor, wait, ALL_COMPLETED
import argparse
import logging
import json
from packaging.version import Version

class MyLogger:
    def __init__(self, logger_name, log_file, console_handler=True, level=logging.INFO):
...
@@ -92,12 +93,29 @@ def package_and_transfer(image_name, tar_file, image_result_dir, logger):
    logger.info(f"==== 镜像 {image_name} 传输完毕 ====")

# Read the paddle install info from the JSON file
def get_paddle_info(paddlepaddle_version, cuda_version):
    # Load the JSON data
    with open("attach/paddle.json", "r", encoding="utf-8") as file:
        version_data = json.load(file)
    for item in version_data:
        if item["paddle_excel_version"] == paddlepaddle_version and item["cuda_version"] == cuda_version:
            return {
                "paddle_version": item["paddle_version"],
                "paddlenlp_version": item["paddlenlp_version"],
                "paddle_url": item["paddle_url"]
            }
    return None

def run():
    # Read the Excel file
    df = pd.read_excel(args.input_file)
    os.makedirs(args.log_dir, exist_ok=True)
    paddle_version = None
    paddlenlp_version = None
    paddle_url = None
    # Create the thread pool
    with ThreadPoolExecutor() as executor:
        # Iterate over each row and build the images automatically
...
@@ -107,6 +125,7 @@ def run():
        framework_version = row['框架版本']  # Use the framework version directly as framework_VERSION
        other_dependencies = row['其他依赖包']
        conda_url = row['conda url']  # Get the conda URL
        cuda_version = row['Runtime版本'].strip().lower()  # Get the CUDA version
        # Log file
        if os.path.exists(os.path.join(args.log_dir, image_name)):
...
@@ -137,6 +156,17 @@ def run():
        if torchaudio_version is None:
            torchaudio_version = "未找到版本号"

        # Handle the more involved download and dependency cases
        if isinstance(base_image, str):
            if "paddle" in image_name:
                paddle_info = get_paddle_info(str(framework_version), str(cuda_version))
                if paddle_info:
                    paddle_version = paddle_info["paddle_version"]
                    paddlenlp_version = paddle_info["paddlenlp_version"]
                    paddle_url = paddle_info["paddle_url"]
                else:
                    print("未找到指定的 PaddlePaddle 和 CUDA 版本信息")

        # Build logic based on PyTorch or NVIDIA images
        if isinstance(base_image, str):
            if "pytorch" in image_name:
...
@@ -166,6 +196,17 @@ def run():
                CONDA_URL="{conda_url}" \
                2>&1 | tee ../{args.log_dir}/{image_name}/build.log
                """
            elif "paddle" in image_name:
                build_command = f"""
                cd build_space && \
                ./build_ubuntu.sh jupyterlab {image_name} {base_image} \
                PADDLEPADDLE_VERSION="{paddle_version}" \
                PADDLENLP_VERSION="{paddlenlp_version}" \
                CONDA_URL="{conda_url}" \
                PADDLE_URL="{paddle_url}" \
                2>&1 | tee ../{args.log_dir}/{image_name}/build.log
                """
        # Print the build command (for debugging)
        logger.info(build_command)
...
@@ -190,6 +231,11 @@ def run():
            if "pytorch" in image_name:
                test_commands.append(
                    f"mv gpu-base-image-test/pytorch/stable-diffusion-v1-4/output.png {image_result_dir}")
            elif "paddle" in image_name:
                # Compare versions with packaging's Version
                if Version(paddle_version) >= Version("2.4"):
                    test_commands.append(
                        f"mv gpu-base-image-test/paddle/output.png {image_result_dir}")
        # Run the test commands
        for test_command in test_commands:
...
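A small illustration (not part of the commit) of the `Version(paddle_version) >= Version("2.4")` gate above, which decides whether the paddle test produces an output.png to collect; the Dockerfile below expresses similar thresholds in shell with `sort -V`. The version strings are taken from attach/paddle.json.

```python
# Illustration only: how packaging.version compares paddle_version strings
# from attach/paddle.json against the "2.4" threshold used in auto_build.py.
from packaging.version import Version

for v in ("2.2.2.post112", "2.4.2.post117", "2.6.0.post120"):
    print(v, Version(v) >= Version("2.4"))
# 2.2.2.post112 False
# 2.4.2.post117 True
# 2.6.0.post120 True
```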
build_space/Dockerfile.jupyterlab_ubuntu

@@ -104,8 +104,15 @@ RUN if [ $TENSORFLOW_VERSION == "2.16.1" ]; then \
# ----- paddlepaddle install -----
RUN if [ -n "$PADDLEPADDLE_VERSION" ] && [ -n "$PADDLE_URL" ]; then \
        if [ "$(echo -e "$PADDLEPADDLE_VERSION\n3.0" | sort -V | head -n1)" = "3.0" ]; then \
            # Case: PADDLEPADDLE_VERSION >= 3.0
            pip install paddlepaddle-gpu==$PADDLEPADDLE_VERSION -f $PADDLE_URL -i $PADDLE_URL && \
            rm -r /root/.cache/pip; \
        else \
            # Case: PADDLEPADDLE_VERSION < 3.0
            pip install paddlepaddle-gpu==$PADDLEPADDLE_VERSION -f $PADDLE_URL && \
            rm -r /root/.cache/pip; \
        fi; \
    fi

RUN if [ -n "$PADDLEPADDLE_VERSION" ] && [ -z "$PADDLE_URL" ]; then \
...
@@ -114,10 +121,21 @@ RUN if [ -n "$PADDLEPADDLE_VERSION" ] && [ -z "$PADDLE_URL" ]; then \
    fi

RUN if [ -n "$PADDLENLP_VERSION" ] ; then \
        if [ -n "$PADDLEPADDLE_VERSION" ] && \
           [ "$(echo -e "$PADDLEPADDLE_VERSION\n2.3" | sort -V | head -n1)" = "2.3" ]; then \
            # Case: PADDLEPADDLE_VERSION >= 2.3
            pip install paddlenlp==$PADDLENLP_VERSION ppdiffusers huggingface_hub==0.25.0 --no-cache-dir -i https://pypi.tuna.tsinghua.edu.cn/simple && \
            rm -r /root/.cache/pip; \
        else \
            # Case: PADDLEPADDLE_VERSION < 2.3
            pip install paddlenlp==$PADDLENLP_VERSION numpy==1.19.5 protobuf==3.20.3 --no-cache-dir -i https://pypi.tuna.tsinghua.edu.cn/simple && \
            # Replace np.object with object for versions below 2.3
            sed -i 's/np\.object/object/g' /opt/conda/lib/python3.*/site-packages/paddle/**/*.py && \
            rm -r /root/.cache/pip; \
        fi; \
    fi

COPY ./python-requirements.txt /tmp/
RUN pip install --no-cache-dir -r /tmp/python-requirements.txt
...
build_space/extension.sh

@@ -56,7 +56,7 @@ cp -a $WORKSPACE/static/index.html ${jupyter_file_path}/static/index.html
cp -a $WORKSPACE/static/scnet-loading.gif ${jupyter_file_path}/static/scnet-loading.gif

pip3 uninstall -r $WORKSPACE/requirements.txt -y
pip3 install --no-index --find-links=$WORKSPACE/ -r $WORKSPACE/requirements.txt
...
script/1_base_test.sh

@@ -51,7 +51,7 @@ elif [[ "$1" == *"tensorflow"* ]]; then
        os.system('nvcc -V | tail -n 2')
        ";
    fi
elif [[ "$1" == *"paddle"* ]]; then
    TARGET_DIR=gpu-base-image-test/paddle
    docker run --rm --platform=linux/amd64 --gpus all -v ./$TARGET_DIR:/workspace --workdir /workspace $1 python base_test.py
else
...
script/2_text_test.sh

@@ -21,7 +21,7 @@ if [[ "$1" == *"tensorflow"* ]]; then
    else
        docker run --rm --platform=linux/amd64 --gpus all -v ./$TARGET_DIR:/workspace --workdir /workspace/tensorflow/bert $1 python infer.py;
    fi;
fi
if [[ "$1" == *"paddle"* ]]; then
    TARGET_DIR=gpu-base-image-test/paddle
    docker run --rm --platform=linux/amd64 --gpus all -v ./$TARGET_DIR:/workspace --workdir /workspace $1 python text.py;
fi
script/3_image_test.sh

@@ -22,7 +22,7 @@ if [[ "$1" == *"tensorflow"* ]]; then
        docker run --rm --platform=linux/amd64 --gpus all -v ./$TARGET_DIR:/workspace --workdir /workspace/tensorflow/mnist $1 python train.py;
    fi;
fi
if [[ "$1" == *"paddle"* ]]; then
    TARGET_DIR=gpu-base-image-test/paddle
    docker run --rm --platform=linux/amd64 --gpus all -v ./$TARGET_DIR:/workspace --workdir /workspace $1 python image.py;
fi