With a simple yet insightful framework, RTQ (Refine, Temporal model, and Query), our model demonstrates outstanding performance even without video-language pre-training.
- Our systematic analysis reveals that existing methods cover only restricted aspects of video-language understanding, and that these aspects are complementary.
- We propose the RTQ framework to jointly model information redundancy, temporal dependency, and scene complexity in video-language understanding (see the illustrative sketch after this list).
- We demonstrate that, even without pre-training on video-language data, our method achieves performance superior or comparable to state-of-the-art pre-training methods.
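
The acronym maps onto a simple pipeline over frame features and a text query: refine away redundant visual information, model temporal dependency across frames, then query the result with the text. The sketch below is purely illustrative; the module choices, names, and shapes are our assumptions, not the paper's actual implementation.

```python
# Illustrative composition of the three RTQ stages (not the official code).
import torch
import torch.nn as nn

class RTQSketch(nn.Module):
    def __init__(self, dim=768, num_heads=12):
        super().__init__()
        self.refine = nn.Linear(dim, dim)                                             # Refine: placeholder for redundancy reduction
        self.temporal = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)  # Temporal model: cross-frame dependency
        self.query = nn.MultiheadAttention(dim, num_heads, batch_first=True)          # Query: text attends to video tokens

    def forward(self, frame_feats, text_feats):
        # frame_feats: (batch, num_frames, dim); text_feats: (batch, num_tokens, dim)
        v = self.refine(frame_feats)
        v = self.temporal(v)
        out, _ = self.query(text_feats, v, v)
        return out

# Toy usage with random features.
model = RTQSketch()
video = torch.randn(2, 8, 768)   # 2 clips, 8 frames each
text = torch.randn(2, 16, 768)   # 2 text queries, 16 tokens each
print(model(video, text).shape)  # torch.Size([2, 16, 768])
```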
- Python >= 3.8
- PyTorch >= 1.10.0
- CUDA Version >= 10.2
- Install required packages:
```bash
pip install -r requirements.txt
```
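
A quick way to confirm the environment matches these requirements (this snippet is ours, not part of the repository):

```python
# Sanity-check the Python, PyTorch, and CUDA setup against the requirements above.
import sys
import torch

assert sys.version_info >= (3, 8), "Python >= 3.8 is required"
major, minor = (int(x) for x in torch.__version__.split(".")[:2])
assert (major, minor) >= (1, 10), "PyTorch >= 1.10.0 is required"
print("CUDA available:", torch.cuda.is_available())
print("CUDA runtime version:", torch.version.cuda)
```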
Follow the instructions in [REPO_HOME]/tools/data to download all datasets, and put them in the [REPO_HOME]/data directory. You can use softlinks as well.
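
If the datasets already live elsewhere on disk, a softlink avoids copying them into the repository. A minimal sketch; the two paths below are placeholders for your own locations and [REPO_HOME]:

```python
# Link an existing dataset directory into [REPO_HOME]/data instead of copying it.
from pathlib import Path

repo_home = Path("/path/to/RTQ")        # placeholder for [REPO_HOME]
dataset_src = Path("/datasets/msrvtt")  # placeholder for where the dataset was downloaded

link = repo_home / "data" / "msrvtt"
link.parent.mkdir(parents=True, exist_ok=True)
if not link.exists():
    link.symlink_to(dataset_src, target_is_directory=True)
print(link, "->", dataset_src)
```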
Download the BLIP model:

```bash
mkdir [REPO_HOME]/modelzoo
wget https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth -P [REPO_HOME]/modelzoo/BLIP
```
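
After the download finishes, loading the checkpoint once is a quick integrity check. This snippet is ours; the path is relative to [REPO_HOME], and the top-level key layout is an assumption about how the BLIP weights are packaged.

```python
# Quick integrity check of the downloaded BLIP checkpoint.
import torch

ckpt_path = "modelzoo/BLIP/model_base_capfilt_large.pth"  # relative to [REPO_HOME]
state = torch.load(ckpt_path, map_location="cpu")
if isinstance(state, dict):
    print("top-level keys:", list(state.keys()))
```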
The final file structure is:
- RTQ
  - assets
  - configs
  - data
    - msrvtt
      - txt_db
      - vis_db
    - nextqa
    - ......
  - lavis
  - modelzoo
    - BLIP
      - model_base_capfilt_large.pth
  - ......
See code examples.
If you find our paper and code useful in your research, please consider giving us a star ⭐ and a citation 📖.
```bibtex
@inproceedings{wang2023rtq,
  author    = {Xiao Wang and
               Yaoyu Li and
               Tian Gan and
               Zheng Zhang and
               Jingjing Lv and
               Liqiang Nie},
  title     = {{RTQ:} Rethinking Video-language Understanding Based on Image-text Model},
  booktitle = {Proceedings of the {ACM} International Conference on Multimedia, 2023},
  pages     = {557--566},
  publisher = {{ACM}},
  year      = {2023},
}
```