TY - GEN
T1 - Scene Consistency Representation Learning for Video Scene Segmentation
AU - Wu, Haoqian
AU - Chen, Keyu
AU - Luo, Yanan
AU - Qiao, Ruizhi
AU - Ren, Bo
AU - Liu, Haozhe
AU - Xie, Weicheng
AU - Shen, Linlin
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - A long-term video, such as a movie or TV show, is composed of various scenes, each of which represents a series of shots sharing the same semantic story. Spotting the correct scene boundary in a long-term video is a challenging task, since a model must understand the storyline of the video to figure out where a scene starts and ends. To this end, we propose an effective Self-Supervised Learning (SSL) framework to learn better shot representations from unlabeled long-term videos. More specifically, we present an SSL scheme to achieve scene consistency, while exploring considerable data augmentation and shuffling methods to boost the model's generalizability. Instead of explicitly learning scene boundary features as in previous methods, we introduce a vanilla temporal model with less inductive bias to verify the quality of the shot features. Our method achieves state-of-the-art performance on the task of Video Scene Segmentation. Additionally, we suggest a fairer and more reasonable benchmark to evaluate the performance of Video Scene Segmentation methods. The code is made available at https://github.com/TencentYoutuResearch/SceneSegmentation-SCRL.
AB - A long-term video, such as a movie or TV show, is composed of various scenes, each of which represents a series of shots sharing the same semantic story. Spotting the correct scene boundary in a long-term video is a challenging task, since a model must understand the storyline of the video to figure out where a scene starts and ends. To this end, we propose an effective Self-Supervised Learning (SSL) framework to learn better shot representations from unlabeled long-term videos. More specifically, we present an SSL scheme to achieve scene consistency, while exploring considerable data augmentation and shuffling methods to boost the model's generalizability. Instead of explicitly learning scene boundary features as in previous methods, we introduce a vanilla temporal model with less inductive bias to verify the quality of the shot features. Our method achieves state-of-the-art performance on the task of Video Scene Segmentation. Additionally, we suggest a fairer and more reasonable benchmark to evaluate the performance of Video Scene Segmentation methods. The code is made available at https://github.com/TencentYoutuResearch/SceneSegmentation-SCRL.
KW - Efficient learning and inferences
KW - Representation learning
KW - Scene analysis and understanding
KW - Self-, semi-, meta- & unsupervised learning
KW - Video analysis and understanding
UR - http://www.scopus.com/inward/record.url?scp=85143505675&partnerID=8YFLogxK
U2 - 10.1109/CVPR52688.2022.01363
DO - 10.1109/CVPR52688.2022.01363
M3 - Conference contribution
AN - SCOPUS:85143505675
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 14001
EP - 14010
BT - Proceedings - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022
PB - IEEE Computer Society
T2 - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022
Y2 - 19 June 2022 through 24 June 2022
ER -