wangsen / MinerU · Commits · e0f591ec (Unverified)

Authored Feb 19, 2025 by Xiaomeng Zhao; committed by GitHub on Feb 19, 2025

Merge pull request #1711 from myhloli/dev

docs(windows): add numpy version limit for CUDA installation

Parents: a6870016, 77374343
Showing 3 changed files with 4 additions and 3 deletions:

- docker/ascend_npu/Dockerfile (+2, -1)
- docs/README_Windows_CUDA_Acceleration_en_US.md (+1, -1)
- docs/README_Windows_CUDA_Acceleration_zh_CN.md (+1, -1)
docker/ascend_npu/Dockerfile

```diff
@@ -36,7 +36,8 @@ RUN /bin/bash -c "source /opt/mineru_venv/bin/activate && \
     wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/ascend_npu/requirements.txt -O requirements.txt && \
     pip3 install -r requirements.txt --extra-index-url https://wheels.myhloli.com -i https://mirrors.aliyun.com/pypi/simple && \
     wget https://gitee.com/ascend/pytorch/releases/download/v6.0.rc2-pytorch2.3.1/torch_npu-2.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl && \
-    pip install torch_npu-2.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl"
+    pip3 install torch_npu-2.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl && \
+    pip3 install https://gcore.jsdelivr.net/gh/myhloli/wheels@main/assets/whl/paddle-custom-npu/paddle_custom_npu-0.0.0-cp310-cp310-linux_aarch64.whl"
 
 # Copy the configuration file template and install magic-pdf latest
 RUN /bin/bash -c "wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/magic-pdf.template.json && \
```
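The wheel filenames pinned in the RUN step above encode compatibility tags that must match the container's interpreter and architecture (`cp310` for CPython 3.10, `aarch64` for the Ascend platform). As an illustration, a small hypothetical helper (not part of MinerU) that splits a wheel filename into its PEP 427 components:

```python
def wheel_tags(filename: str) -> dict:
    """Split a wheel filename into its PEP 427 components:
    {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl

    The platform tag may contain dots and underscores but no hyphens,
    so splitting on the last four hyphens recovers each field.
    (Optional build tags are ignored in this sketch.)
    """
    stem = filename[: -len(".whl")]
    name, version, python_tag, abi_tag, platform_tag = stem.rsplit("-", 4)
    return {
        "name": name,
        "version": version,
        "python": python_tag,
        "abi": abi_tag,
        "platform": platform_tag,
    }

tags = wheel_tags(
    "torch_npu-2.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl"
)
print(tags["python"], tags["platform"])
# → cp310 manylinux_2_17_aarch64.manylinux2014_aarch64
```

If the base image's Python were anything other than 3.10, pip would reject both pinned wheels as incompatible, which is why the tags are worth checking before bumping versions in this Dockerfile.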
docs/README_Windows_CUDA_Acceleration_en_US.md

````diff
@@ -65,7 +65,7 @@ If your graphics card has at least 8GB of VRAM, follow these steps to test CUDA-
 1. **Overwrite the installation of torch and torchvision** supporting CUDA.
    ```
-   pip install --force-reinstall torch==2.3.1 torchvision==0.18.1 --index-url https://download.pytorch.org/whl/cu118
+   pip install --force-reinstall torch==2.3.1 torchvision==0.18.1 "numpy<2.0.0" --index-url https://download.pytorch.org/whl/cu118
    ```
 2. **Modify the value of `"device-mode"`** in the `magic-pdf.json` configuration file located in your user directory.
````
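The `"numpy<2.0.0"` pin this commit adds exists because torch 2.3.1 wheels were built against the NumPy 1.x ABI and fail to import alongside NumPy 2.x. A minimal sketch (the helper name is hypothetical, not part of MinerU or pip) of the version boundary the pin enforces:

```python
def satisfies_numpy_pin(version: str) -> bool:
    """Return True if `version` satisfies the 'numpy<2.0.0' constraint
    from the install command above.

    torch 2.3.1 is compiled against the NumPy 1.x ABI, so any NumPy 2.x
    release fails the check. Comparing the numeric major version avoids
    the pitfalls of plain string comparison.
    """
    major = int(version.split(".", 1)[0])
    return major < 2

print(satisfies_numpy_pin("1.26.4"))  # → True
print(satisfies_numpy_pin("2.0.1"))   # → False
```

In practice pip enforces this automatically once `"numpy<2.0.0"` appears in the command; the function only illustrates which installed versions pass.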
docs/README_Windows_CUDA_Acceleration_zh_CN.md

````diff
@@ -66,7 +66,7 @@ pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com -i h
 **1. Overwrite-install torch and torchvision with CUDA support**
 ```bash
-pip install --force-reinstall torch==2.3.1 torchvision==0.18.1 --index-url https://download.pytorch.org/whl/cu118
+pip install --force-reinstall torch==2.3.1 torchvision==0.18.1 "numpy<2.0.0" --index-url https://download.pytorch.org/whl/cu118
 ```
 **2. Modify the value of "device-mode" in the magic-pdf.json configuration file in your user directory**
````