## 33rd place solution code for Kaggle [TGS Salt Identification Challenge](https://www.kaggle.com/c/tgs-salt-identification-challenge)

This example shows how to enable AutoML for competition code by running it on NNI without any code changes.
To run this code on NNI, first make sure it runs standalone, then configure config.yml and run:
```
nnictl create --config config.yml
```
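A minimal config.yml for a local NNI (v1-style) experiment might look like the sketch below. Every field value here is an illustrative assumption — in particular the search space file name, tuner choice, trial count, and GPU settings — not the competition's actual configuration:

```yaml
# Illustrative NNI config sketch -- adjust all values to your setup.
authorName: default
experimentName: tgs_salt
trialConcurrency: 1
maxExecDuration: 24h
maxTrialNum: 10
trainingServicePlatform: local
searchSpacePath: search_space.json   # assumed file name
useAnnotation: false
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
trial:
  command: python3 train.py --ifolds 0 --epochs 100 --model_name UNetResNetV4
  codeDir: .
  gpuNum: 1
```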

The code can still run standalone and is provided for reference; reproducing the competition result requires at least one week of effort.

[Solution summary](https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/69593)

Preparation:

Download the competition data, then run preprocess.py to prepare the training data.

Stage 1:

Train folds 0-3 for 100 epochs; for each fold, train 3 models (shown here for fold 0):
```
python3 train.py --ifolds 0 --epochs 100 --model_name UNetResNetV4
python3 train.py --ifolds 0 --epochs 100 --model_name UNetResNetV5 --layers 50
python3 train.py --ifolds 0 --epochs 100 --model_name UNetResNetV6
```

Stage 2:

Fine-tune the stage 1 models for 300 epochs with a cosine annealing learning rate scheduler:

```
python3 train.py --ifolds 0 --epochs 300 --lrs cosine --lr 0.001 --min_lr 0.0001 --model_name UNetResNetV4
```
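The `--lrs cosine` schedule above sweeps the learning rate from `--lr` down to `--min_lr` over the course of training. Here is a minimal sketch of the standard cosine annealing formula — illustrative only, not the repository's exact implementation:

```python
import math

def cosine_annealing_lr(epoch, total_epochs, max_lr=0.001, min_lr=0.0001):
    """Cosine annealing: decays from max_lr at epoch 0 to min_lr at total_epochs."""
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * epoch / total_epochs))
```

With the flags used above (`--lr 0.001 --min_lr 0.0001 --epochs 300`), the rate starts at 0.001 and decays smoothly to 0.0001 by the final epoch.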

Stage 3:

Fine-tune the stage 2 models with the depths channel:

```
python3 train.py --ifolds 0 --epochs 300 --lrs cosine --lr 0.001 --min_lr 0.0001 --model_name UNetResNetV4 --depths
```

Stage 4:

Make predictions with each model, then ensemble the results to generate pseudo labels.
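One common way to build such pseudo labels is to average the per-model probability masks and threshold the result. The sketch below illustrates that idea with NumPy; the function name and the 0.5 threshold are assumptions for illustration, not the repository's code:

```python
import numpy as np

def ensemble_pseudo_labels(prob_masks, threshold=0.5):
    """Average per-model probability masks and threshold into a binary pseudo label.

    prob_masks: list of float arrays in [0, 1], one per model, all the same shape.
    """
    mean_mask = np.mean(np.stack(prob_masks), axis=0)
    return (mean_mask > threshold).astype(np.uint8)

# Example: three models voting on a 2x2 mask.
masks = [np.array([[0.9, 0.2], [0.6, 0.1]]),
         np.array([[0.8, 0.4], [0.7, 0.2]]),
         np.array([[0.7, 0.3], [0.2, 0.0]])]
label = ensemble_pseudo_labels(masks)
```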

Stage 5:

Fine-tune the stage 3 models with pseudo labels:

```
python3 train.py --ifolds 0 --epochs 300 --lrs cosine --lr 0.001 --min_lr 0.0001 --model_name UNetResNetV4 --depths --pseudo
```

Stage 6:

Ensemble all stage 3 and stage 5 models.