# Source Separation Example

This directory contains reference implementations for source separation. For the details of each model, please check out the following:

- [Conv-TasNet](./conv_tasnet/README.md)

## Usage

### Overview

To train a model, you can use [`lightning_train.py`](./lightning_train.py). The script is invoked as `lightning_train.py [parameters]`:

```
python lightning_train.py \
    [--data-dir DATA_DIR] \
    [--num-gpu NUM_GPU] \
    [--num-workers NUM_WORKERS] \
    ...

# For the details of the parameter values, run:
python lightning_train.py --help
```

This script runs the training in the PyTorch Lightning framework with the Distributed Data Parallel (DDP) backend.
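
For example, a single-node run on two GPUs might look like the sketch below. The dataset path and parameter values are only illustrative placeholders; check `--help` for the authoritative list of options.

```
python lightning_train.py \
    --data-dir /path/to/Libri2Mix/wav8k/min \
    --num-gpu 2 \
    --num-workers 8
```
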
### SLURM

<details><summary>Example scripts for running the training on a SLURM cluster</summary>

- **launch_job.sh**

```bash
#!/bin/bash

#SBATCH --job-name=source_separation
#SBATCH --output=/checkpoint/%u/jobs/%x/%j.out
#SBATCH --error=/checkpoint/%u/jobs/%x/%j.err
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=16G
#SBATCH --gpus-per-node=2

# (Uncomment the next line to print the job environment for debugging)
#srun env
srun wrapper.sh "$@"
```

- **wrapper.sh**

```bash
#!/bin/bash
num_speakers=2

# Directory in which this script resides
this_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

# Output directory for checkpoints/logs and location of the LibriMix dataset
exp_dir="/checkpoint/${USER}/exp/"
dataset_dir="/dataset/Libri${num_speakers}mix/wav8k/min"

mkdir -p "${exp_dir}"

# The total batch size of 16 is divided across the DDP processes launched by srun
python -u \
  "${this_dir}/lightning_train.py" \
  --num-speakers "${num_speakers}" \
  --sample-rate 8000 \
  --data-dir "${dataset_dir}" \
  --exp-dir "${exp_dir}" \
  --batch-size $((16 / SLURM_NTASKS))
```
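
With both scripts saved in the same directory (and `wrapper.sh` made executable), the job can be submitted with `sbatch`:

```bash
sbatch launch_job.sh
```

Since `launch_job.sh` requests two tasks and two GPUs on a single node, `srun` launches two DDP processes, and `wrapper.sh` divides the total batch size of 16 by `SLURM_NTASKS`, so each process uses a batch size of 8.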

</details>