# FastReID Model Deployment

The `gen_wts.py` script converts a fastreid model to a [.wts format](https://github.com/wang-xinyu/tensorrtx/blob/master/tutorials/getting_started.md#the-wts-content-format) file, which can then be used directly by [FastRT](https://github.com/JDAI-CV/fast-reid/blob/master/projects/FastRT).
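
For reference, a `.wts` file is plain text: the first line holds the number of weight tensors, and each following line stores one tensor as `name element-count hex(float32) ...`. Below is a minimal sketch of how such a file can be written from a PyTorch `state_dict` (illustrative only; the actual `gen_wts.py` additionally handles the fastreid config and checkpoint loading):

```python
import struct


def write_wts(state_dict, wts_path):
    """Write a state_dict in the plain-text .wts layout used by tensorrtx (sketch)."""
    with open(wts_path, "w") as f:
        # first line: number of weight tensors
        f.write(f"{len(state_dict)}\n")
        for name, tensor in state_dict.items():
            values = tensor.reshape(-1).cpu().numpy()
            # one line per tensor: name, element count, then each float32 as big-endian hex
            f.write(f"{name} {len(values)}")
            for v in values:
                f.write(" " + struct.pack(">f", float(v)).hex())
            f.write("\n")
```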

### Convert Environment

* Same as fastreid.
    
### How to Generate

This is a general example of converting a fastreid model to a TensorRT model. We use `FastRT` to build the model with the NVIDIA TensorRT APIs.

In this part you need to convert the PyTorch model to a `.wts` file using `gen_wts.py`, following the instructions below.

1. Run the command below to generate the `.wts` file from the PyTorch model.

   The arguments follow the same style you already use in fastreid.
    ```bash
    python projects/FastRT/tools/gen_wts.py --config-file='config/you/use/in/fastreid/xxx.yml' \
    --verify --show_model --wts_path='outputs/trt_model_file/xxx.wts' \
    MODEL.WEIGHTS '/path/to/checkpoint_file/model_best.pth' MODEL.DEVICE "cuda:0"
    ```

    Then you can check the generated weights file `outputs/trt_model_file/xxx.wts` (see the sanity-check sketch after these steps).

2. Copy the `outputs/trt_model_file/xxx.wts` file to [FastRT](https://github.com/JDAI-CV/fast-reid/blob/master/projects/FastRT).

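Before copying, you can peek at the generated file as a quick sanity check. This is just an illustrative sketch, assuming the standard tensorrtx `.wts` layout described earlier and the example output path used above:

```python
import struct


def inspect_wts(wts_path, max_rows=5):
    """Print the first few tensor names and element counts from a .wts file (sketch)."""
    with open(wts_path) as f:
        count = int(f.readline())
        print(f"{count} weight tensors")
        for _ in range(min(count, max_rows)):
            parts = f.readline().split()
            name, num = parts[0], int(parts[1])
            # decode the first value to confirm it parses as a float32
            first = struct.unpack(">f", bytes.fromhex(parts[2]))[0] if num > 0 else None
            print(f"{name}: {num} values, first={first}")


inspect_wts("outputs/trt_model_file/xxx.wts")
```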

### More conversion examples

+ Ex1. `sbs_R50-ibn`
    - [x] resnet50, ibn, non-local, gempoolp
    ```bash
    python projects/FastRT/tools/gen_wts.py --config-file='configs/DukeMTMC/sbs_R50-ibn.yml' \
    --verify --show_model --wts_path='outputs/trt_model_file/sbs_R50-ibn.wts' \
    MODEL.WEIGHTS '/path/to/checkpoint_file/model_best.pth' MODEL.DEVICE "cuda:0"
    ```
    
+ Ex2. `sbs_R50`
    - [x] resnet50, gempoolp   
    ```bash
    python projects/FastRT/tools/gen_wts.py --config-file='configs/DukeMTMC/sbs_R50.yml' \
    --verify --show_model --wts_path='outputs/trt_model_file/sbs_R50.wts' \
    MODEL.WEIGHTS '/path/to/checkpoint_file/model_best.pth' MODEL.DEVICE "cuda:0"
    ``` 
    
+ Ex3. `sbs_r34_distill`
    - [x] standalone-trained distill-r34 (note: distill-resnet differs slightly from resnet34), gempoolp
    ```bash
    python projects/FastRT/tools/gen_wts.py --config-file='projects/FastDistill/configs/sbs_r34.yml' \
    --verify --show_model --wts_path='outputs/to/trt_model_file/sbs_r34_distill.wts' \
    MODEL.WEIGHTS '/path/to/checkpoint_file/model_best.pth' MODEL.DEVICE "cuda:0"
    ```

+ Ex4. `kd-r34-r101_ibn`
    - [x] teacher model (r101_ibn), student model (distill-r34); the student model is the one to deploy, gempoolp
    ```bash
    python projects/FastRT/tools/gen_wts.py --config-file='projects/FastDistill/configs/kd-sbs_r101ibn-sbs_r34.yml' \
    --verify --show_model --wts_path='outputs/to/trt_model_file/kd_r34_distill.wts' \
    MODEL.WEIGHTS '/path/to/checkpoint_file/model_best.pth' MODEL.DEVICE "cuda:0"
    ```

## Acknowledgements

Thanks to [tensorrtx](https://github.com/wang-xinyu/tensorrtx) for demonstrating the usage of the TensorRT network definition APIs.