FLAVR is a fast, flow-free frame interpolation method capable of single-shot multi-frame prediction. It uses a customized encoder-decoder architecture with spatio-temporal convolutions and channel gating to capture and interpolate complex motion trajectories between frames, generating realistic high-frame-rate videos. This repository contains the original source code.
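As an illustration of the two ingredients named above, here is a minimal, hypothetical sketch of a gated spatio-temporal convolution block in PyTorch. The class name `GatedConv3d` and its structure are ours for exposition only and do not reproduce the exact blocks in this codebase.

```python
import torch
import torch.nn as nn

class GatedConv3d(nn.Module):
    """Illustrative spatio-temporal convolution with channel gating.

    A simplified sketch of the idea described above, not the exact
    block used in the FLAVR codebase.
    """

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # 3D convolution mixes information across time and space.
        self.conv = nn.Conv3d(in_channels, out_channels,
                              kernel_size=3, padding=1)
        # Per-channel gate computed from globally pooled features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(out_channels, out_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, time, height, width)
        feat = self.conv(x)
        return feat * self.gate(feat)  # channel-wise gating

# Example: 4 input frames, 3 color channels, 64x64 resolution.
frames = torch.randn(1, 3, 4, 64, 64)
out = GatedConv3d(3, 32)(frames)
print(out.shape)  # torch.Size([1, 32, 4, 64, 64])
```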
## Training and Inference
### Environment Setup
Python dependencies:
* Python==3.7.11
* numpy==1.19.2
* PyTorch==1.10.0a0+git2040069.dtk2210
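To sanity-check the installed versions against this list, a quick snippet (plain Python, nothing project-specific):

```python
import numpy
import torch

# Print the installed versions to compare against the list above.
print("numpy :", numpy.__version__)
print("torch :", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```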
### Training
For training your own model on the Vimeo-90K septuplet dataset, use the following command. You can download the dataset from [this link](http://toflow.csail.mit.edu/). The results reported in the paper were trained using 8 GPUs.
python main.py --batch_size 32 \
--test_batch_size 32 \
--dataset vimeo90K_septuplet \
--loss 1*L1 \
--max_epoch 200 \
--lr 0.0002 \
--data_root <dataset_path> \
--n_outputs 1
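The `--loss` flag takes a `weight*name` specification (here `1*L1`). Below is a hedged sketch of how such a string can be parsed into weighted loss terms; the `'+'`-separated combination syntax is an assumption borrowed from CAIN-style codebases, so check this repository's loss module for the authoritative behavior.

```python
import torch.nn as nn

def parse_loss_spec(spec):
    """Parse a loss string such as '1*L1' or (assumed) '1*L1+0.1*MSE'.

    The '+'-separated combination syntax is an assumption based on
    CAIN's convention; the real parser lives in this repository.
    """
    losses = {"L1": nn.L1Loss, "MSE": nn.MSELoss}
    terms = []
    for term in spec.split("+"):
        weight, name = term.split("*")
        terms.append((float(weight), losses[name]()))
    return terms

def total_loss(terms, pred, target):
    # Weighted sum over all configured loss terms.
    return sum(w * fn(pred, target) for w, fn in terms)

# Example: the '1*L1' spec from the training command above.
terms = parse_loss_spec("1*L1")
```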
Training on the GoPro dataset is similar; change `n_outputs` to 7 for 8x interpolation.

## Inference Times
FLAVR delivers a better trade-off between speed and accuracy compared to prior frame interpolation methods.

| Method | FPS on 512x512 Image |
| ------------- |:-------------:|
| FLAVR | 3.10 |
| SuperSloMo | 3.33 |
| QVI | 1.02 |
| DAIN | 0.77 |
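Numbers like these can be reproduced with a simple timing loop. A minimal sketch, assuming a CUDA device and using `model` as a placeholder for any loaded interpolation network:

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, device="cuda", n_runs=50):
    """Average frames-per-second on a 512x512 input.

    `model` is a placeholder for a loaded interpolation network that
    maps a stack of input frames to interpolated frames.
    """
    model = model.to(device).eval()
    # FLAVR-style input: a batch of 4 RGB frames at 512x512.
    x = torch.randn(1, 3, 4, 512, 512, device=device)
    for _ in range(5):               # warm-up iterations
        model(x)
    torch.cuda.synchronize()         # assumes a CUDA device
    start = time.time()
    for _ in range(n_runs):
        model(x)
    torch.cuda.synchronize()
    return n_runs / (time.time() - start)
```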
## Evaluation on Middlebury
The interpolated images will be saved to the folder `Middleburry` in a format that can be readily uploaded to the [leaderboard](https://vision.middlebury.edu/flow/eval/results/results-i2.php).
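Before uploading, you can quickly inspect the produced layout; a small helper (the exact filenames expected per sequence are defined by the leaderboard's submission instructions):

```python
import os

# Inspect the interpolated outputs per sequence before uploading.
root = "Middleburry"  # output folder produced by the test run
for seq in sorted(os.listdir(root)):
    files = sorted(os.listdir(os.path.join(root, seq)))
    print(f"{seq}: {files}")
```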
## SloMo-Filter on custom video
You can use our trained models to apply the slomo filter to your own video (requires OpenCV 4.2.0). To convert a 30FPS video to a 240FPS video, use our [pretrained model](https://drive.google.com/drive/folders/1Gd2l69j7UC1Zua7StbUNcomAAhmE-xFb?usp=sharing) for 8x interpolation. For converting a 30FPS video to a 60FPS video, use a 2x model with `factor` 2.
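Conceptually, the filter reads frames with OpenCV, feeds a sliding 4-frame window to the network, and writes the original plus the predicted in-between frames at the higher frame rate. A minimal sketch, assuming a hypothetical `model` that returns `factor - 1` intermediate frames per window; the repository's own script handles this end to end:

```python
import cv2
import torch

@torch.no_grad()
def slomo(video_in, video_out, model, factor=8, device="cuda"):
    """Sketch of a slow-motion filter using a 4-frame window.

    `model` is a placeholder: given 4 neighbouring frames it is
    assumed to return `factor - 1` frames between the middle two.
    """
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(video_out, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps * factor, (w, h))
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        if len(frames) == 4:
            # Convert the window to a (1, 3, 4, H, W) float tensor.
            x = torch.stack([torch.from_numpy(f).permute(2, 0, 1)
                             for f in frames], dim=1)
            x = x.unsqueeze(0).float().div(255).to(device)
            out.write(frames[1])                 # original frame
            for mid in model(x):                 # factor - 1 in-betweens
                img = (mid.squeeze(0).permute(1, 2, 0)
                          .clamp(0, 1).mul(255).byte().cpu().numpy())
                out.write(img)
            frames.pop(0)                        # slide the window
        ok, frame = cap.read()
    cap.release()
    out.release()
```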
## Baseline Models
We also trained models for several previous methods under our setting and provide checkpoints for all of them. Complete benchmarking scripts will also be released soon.
* SuperSloMo is implemented using code repository from [here](https://github.com/avinashpaliwal/Super-SloMo). Other baselines are implemented using the official codebases.
* The numbers presented here for the baselines are slightly better than those reported in the paper.
## Google Colab
A Colab notebook to try 2x slow-motion filtering on custom videos is available in the *notebooks* directory of this repo.
## Model for Motion-Magnification
Unfortunately, we cannot provide the trained models for motion-magnification at this time. We are working towards making a model available soon.
## Acknowledgement
The code is heavily borrowed from Facebook's official [PyTorch video repository](https://github.com/facebookresearch/VMZ) and [CAIN](https://github.com/myungsub/CAIN).
## Cite
If this code helps in your work, please consider citing us.
```text
@article{kalluri2021flavr,
  title={FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation},
  author={Kalluri, Tarun and Pathak, Deepak and Chandraker, Manmohan and Tran, Du},
  journal={arXiv preprint arXiv:2012.08512},
  year={2021}
}
```