Unverified Commit 32e57007 authored by Bruno Korbar, committed by GitHub

[docs] minor README changes for VideoReference PR (#2957)



* removing the tab?

* initial commit

* Addressing Victor's comments
Co-authored-by: vfdev <vfdev.5@gmail.com>
parent 3298a96d
# Video Classification
We present a simple training script that can be used to replicate the results of [ResNet-based video models](https://research.fb.com/wp-content/uploads/2018/04/a-closer-look-at-spatiotemporal-convolutions-for-action-recognition.pdf). All models are trained on the [Kinetics400 dataset](https://deepmind.com/research/open-source/kinetics), a benchmark dataset for human-action recognition. The accuracy is reported on the traditional validation split.
## Data preparation
If you have already downloaded the [Kinetics400 dataset](https://deepmind.com/research/open-source/kinetics),
please proceed directly to the next section.
To download videos, one can use https://github.com/Showmax/kinetics-downloader. Please note that the dataset can take upwards of 400GB of disk space, depending on the quality setting used during download.
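If you want to sanity-check the download before training, one quick way is to count classes and videos per split. The sketch below assumes the layout consumed by `torchvision.datasets.Kinetics400`, i.e. `<root>/<split>/<class_name>/*.avi`; if your downloader produces a different layout, adjust the paths accordingly.

```python
# Minimal sanity check of a downloaded Kinetics-400 tree.
# Assumed layout (not guaranteed by every downloader):
#   /data/kinectics400/train/<class_name>/*.avi
#   /data/kinectics400/val/<class_name>/*.avi
import os

def summarize_split(split_dir):
    """Count class folders and .avi files under one split directory."""
    classes = sorted(
        d for d in os.listdir(split_dir)
        if os.path.isdir(os.path.join(split_dir, d))
    )
    n_videos = sum(
        len([f for f in os.listdir(os.path.join(split_dir, c)) if f.endswith(".avi")])
        for c in classes
    )
    return len(classes), n_videos

if __name__ == "__main__":
    for split in ("train", "val"):
        split_dir = os.path.join("/data/kinectics400", split)  # paths used later in this README
        n_classes, n_videos = summarize_split(split_dir)
        print(f"{split}: {n_classes} classes, {n_videos} .avi files")
```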
## Training
We assume the training and validation AVI videos are stored at `/data/kinectics400/train` and
`/data/kinectics400/val`. For training we suggest starting with the hyperparameters reported in the [paper](https://research.fb.com/wp-content/uploads/2018/04/a-closer-look-at-spatiotemporal-convolutions-for-action-recognition.pdf) in order to match the performance of those models. The clip sampling strategy is a particularly important training parameter, and we suggest using random temporal jittering, that is, sampling multiple training clips from each video with random start times at every epoch. This functionality is built into our training script, and the optimal hyperparameters are set by default.
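To make the clip sampling idea concrete, here is a minimal sketch of random temporal jittering: each time a video is visited, the clip starts at a random frame, so successive epochs see different clips from the same video. This is an illustration only, not the sampler used by `train.py` (as noted above, that functionality is already built into the script).

```python
# Random temporal jittering, sketched: pick a random clip start per visit.
import torch

def sample_random_clip(video, clip_len=16):
    """video: tensor of shape (T, H, W, C); returns a clip of clip_len consecutive frames."""
    t = video.shape[0]
    if t < clip_len:
        raise ValueError(f"video has only {t} frames, need at least {clip_len}")
    start = torch.randint(0, t - clip_len + 1, (1,)).item()  # new random start on every call
    return video[start:start + clip_len]

# Example: a dummy 300-frame video; two epochs would see two different clips.
video = torch.zeros(300, 112, 112, 3)
clip_epoch_1 = sample_random_clip(video)
clip_epoch_2 = sample_random_clip(video)
```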
### Multiple GPUs
Run the training on a single node with 8 GPUs:
```bash
python -m torch.distributed.launch --nproc_per_node=8 --use_env train.py --data-path=/data/kinectics400 --train-dir=train --val-dir=val --batch-size=16 --cache-dataset --sync-bn --apex
```
**Note:** all our models were trained on 8 nodes with 8 V100 GPUs each, for a total of 64 GPUs. Expected training time for 64 GPUs is about 24 hours, depending on the storage solution.
**Note 2:** hyperparameters for exact replication of our training can be found [here](https://github.com/pytorch/vision/blob/master/torchvision/models/video/README.md). Some hyperparameters such as learning rate are scaled linearly in proportion to the number of GPUs.
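As an illustration of the linear scaling mentioned in Note 2, the snippet below shows how a learning rate tuned for one GPU count could be rescaled for another. The base values are placeholders, not the actual hyperparameters from the linked README.

```python
# Linear learning-rate scaling: the per-GPU batch size stays fixed, so the
# effective batch size grows with the number of GPUs and the learning rate
# is scaled in proportion.
def scale_lr(base_lr, base_num_gpus, num_gpus):
    """Scale the learning rate linearly with the number of GPUs."""
    return base_lr * num_gpus / base_num_gpus

# e.g. a run tuned for 64 GPUs, replicated on 8 GPUs (placeholder values):
lr_64 = 0.01                   # hypothetical base learning rate for 64 GPUs
lr_8 = scale_lr(lr_64, 64, 8)  # -> 0.00125
```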
### Single GPU
```bash
python train.py --data-path=/data/kinectics400 --train-dir=train --val-dir=val --batch-size=8 --cache-dataset
```