-
VEATIC: Video-based Emotion and Affect Tracking in Context Dataset
-
Zhihang Ren, Jefferson Ortega, Yifan Wang, Zhimin Chen, Yunhui Guo, Stella X. Yu, and David Whitney
-
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, Hawaii, 4-8 January 2024
-
Paper | Poster | Code | arXiv
-
Abstract
-
Human affect recognition has long been a significant topic in psychophysics and computer vision. However, currently published datasets have many limitations. For example, most datasets contain frames with information about facial expressions only. Because of these limitations, it is hard both to understand the mechanisms of human affect recognition and for computer vision models trained on these datasets to generalize well to common cases. In this work, we introduce a new large dataset, the Video-based Emotion and Affect Tracking in Context dataset (VEATIC), that overcomes the limitations of previous datasets. VEATIC contains 124 video clips from Hollywood movies, documentaries, and home videos, with continuous valence and arousal ratings of each frame collected via real-time annotation. Along with the dataset, we propose a new computer vision task: inferring the affect of a selected character from both context and character information in each video frame. We also propose a simple model to benchmark this new task, and we compare the performance of a model pretrained on our dataset with models pretrained on other, similar datasets. Experiments show that our model pretrained on VEATIC achieves competitive results, indicating the generalizability of VEATIC. Our dataset is available at https://veatic.github.io.
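To make the proposed task concrete, here is a minimal sketch of the kind of setup the abstract describes: per-frame valence/arousal regression from both a character crop and the full frame as context. This is an illustrative assumption, not the authors' benchmark model; the two-stream architecture, the normalization of ratings to [-1, 1], and all names (TwoStreamAffectModel, character_enc, context_enc) are hypothetical.

```python
import torch
import torch.nn as nn

class TwoStreamAffectModel(nn.Module):
    """Illustrative two-stream regressor: one small CNN encodes the
    selected character's crop, another encodes the full frame (context);
    the fused features regress per-frame (valence, arousal)."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()

        def make_encoder() -> nn.Sequential:
            # Tiny CNN encoder; a real benchmark would use a deeper backbone.
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )

        self.character_enc = make_encoder()  # crop of the selected character
        self.context_enc = make_encoder()    # the full video frame
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 2),  # -> (valence, arousal)
            nn.Tanh(),               # assumes ratings normalized to [-1, 1]
        )

    def forward(self, character: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # Concatenate character and context features, then regress affect.
        fused = torch.cat(
            [self.character_enc(character), self.context_enc(context)], dim=1
        )
        return self.head(fused)

# Example: a batch of 4 RGB frames and matching character crops.
model = TwoStreamAffectModel()
va = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))
print(va.shape)  # torch.Size([4, 2])
```

The split into separate character and context streams mirrors the task definition above (affect inferred from both character and context information); how the two sources are actually fused in the paper's benchmark model may differ.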
-
Keywords
-
emotion, visual context, affect recognition