Commit 323a2615 authored by dengjb

Update README.md

parent cb0dff28
@@ -71,15 +71,20 @@ Synth90k (synthetic text dataset - contains 9 million images generated from a set of 90k common English
The directory structure of the dataset is as follows:
Before training, the dataset must be converted to the LMDB format expected by the training scripts. The steps are:
1. Copy the create_dataset.py file into mnt/ramdisk/max/ under the extracted dataset path.
2. Edit the dataset_output path in the script, then run: `python create_dataset.py`
3. The converted dataset is produced with the structure below (a minimal sketch of the conversion follows the listing).
```
├── IIIT5K_lmdb
│   ├── data.mdb
│   ├── error_image_log.txt
│   └── lock.mdb
-└── MJ_LMDB
+├── Synth90k/train
+│   ├── data.mdb
+│   └── lock.mdb
+└── Synth90k/val
+    ├── data.mdb
+    └── lock.mdb
```
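The conversion packs each training image and its text label into an LMDB database (the data.mdb / lock.mdb pairs above). Below is a minimal sketch of what a conversion script along the lines of create_dataset.py does; the `write_lmdb` helper and the `image-%09d` / `label-%09d` / `num-samples` key names are assumptions borrowed from the common crnn.pytorch convention, so check create_dataset.py for the layout actually used here.

```python
# Illustrative sketch only -- see create_dataset.py for the real conversion logic.
import os
import lmdb

def write_lmdb(image_label_pairs, dataset_output):
    """Pack (image_path, label) pairs into an LMDB database at dataset_output."""
    os.makedirs(dataset_output, exist_ok=True)
    env = lmdb.open(dataset_output, map_size=1 << 40)  # large virtual map size
    count = 0
    with env.begin(write=True) as txn:
        for image_path, label in image_label_pairs:
            with open(image_path, 'rb') as f:
                image_bytes = f.read()
            if not image_bytes:          # skip unreadable/empty images
                continue                 # (presumably what error_image_log.txt records)
            count += 1
            txn.put(b'image-%09d' % count, image_bytes)      # raw image bytes
            txn.put(b'label-%09d' % count, label.encode())   # ground-truth text
        txn.put(b'num-samples', str(count).encode())
    env.close()

# Example call with a hypothetical pair parsed from the dataset's annotation files:
# write_lmdb([('./2697/6/466_MONIKER_49537.jpg', 'MONIKER')], './Synth90k/train')
```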
@@ -90,7 +95,7 @@ Synth90k (synthetic text dataset - contains 9 million images generated from a set of 90k common English
```
export HIP_VISIBLE_DEVICES=0
export USE_MIOPEN_BATCHNORM=1
-python3 train.py --adadelta --trainRoot ../Datasets/Synth90k/MJ_LMDB --valRoot ../Datasets/Synth90k/IIIT5K_lmdb --cuda --ngpu 1 --batchSize 64 --workers 8
+python3 train.py --adadelta --trainRoot ../Datasets/Synth90k/train --valRoot ../Datasets/Synth90k/val --cuda --ngpu 1 --batchSize 64 --workers 8
```
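Before starting a long run, an optional sanity check (not part of this repository) can confirm that PyTorch sees the GPU selected by HIP_VISIBLE_DEVICES and that the converted LMDB folders are readable; the `num-samples` key is assumed from the conversion sketch above.

```python
# Optional pre-flight check; not part of the repository.
import lmdb
import torch

print('GPUs visible to PyTorch:', torch.cuda.device_count())

for path in ['../Datasets/Synth90k/train', '../Datasets/Synth90k/val']:
    env = lmdb.open(path, readonly=True, lock=False)
    with env.begin() as txn:
        num = txn.get(b'num-samples')   # key name assumed, see the sketch above
        print(path, '->', int(num.decode()) if num else 'num-samples key not found')
    env.close()
```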
### Single-node multi-GPU
@@ -100,7 +105,7 @@ python3 train.py --adadelta --trainRoot ../Datasets/Synth90k/MJ_LMDB --valRoot .
export HSA_FORCE_FINE_GRAIN_PCIE=1
export USE_MIOPEN_BATCHNORM=1
export HIP_VISIBLE_DEVICES=0,1,2,3
-python -m torch.distributed.launch --nproc_per_node=4 train_ddp.py --adadelta --trainRoot ../Datasets/Synth90k/MJ_LMDB --valRoot ../Datasets/Synth90k/IIIT5K_lmdb --cuda --ngpu 4 --batchSize 64 --workers 8
+python -m torch.distributed.launch --nproc_per_node=4 train_ddp.py --adadelta --trainRoot ../Datasets/Synth90k --valRoot ../Datasets/Synth90k --cuda --ngpu 4 --batchSize 64 --workers 8
```
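For reference, `torch.distributed.launch` starts one Python process per GPU (four here) and passes each a `--local_rank` argument; a DDP training script such as train_ddp.py is expected to initialize itself roughly as in the self-contained sketch below, which uses a dummy model and dataset rather than the repo's CRNN and LMDB loaders.

```python
# Self-contained DDP skeleton with a dummy model/dataset; train_ddp.py's real
# implementation (CRNN model, LMDB dataset, CTC loss) will differ in detail.
import argparse
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)    # injected by the launcher
args, _ = parser.parse_known_args()

dist.init_process_group(backend='nccl')                     # one process per GPU
torch.cuda.set_device(args.local_rank)

model = nn.Linear(32, 10).cuda(args.local_rank)             # dummy stand-in model
model = DDP(model, device_ids=[args.local_rank])            # gradients sync across ranks

dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 10, (1024,)))
sampler = DistributedSampler(dataset)                       # shards samples across ranks
loader = DataLoader(dataset, batch_size=64, sampler=sampler, num_workers=8)

optimizer = torch.optim.Adadelta(model.parameters())
criterion = nn.CrossEntropyLoss()
for epoch in range(2):
    sampler.set_epoch(epoch)                                 # reshuffle per epoch
    for x, y in loader:
        x, y = x.cuda(args.local_rank), y.cuda(args.local_rank)
        optimizer.zero_grad()
        criterion(model(x), y).backward()                    # all-reduce happens here
        optimizer.step()
```

Launched with `--nproc_per_node=4`, each of the four processes handles a quarter of every epoch's samples.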
## Inference