We introduce a novel method for movie genre classification that capitalizes on a diverse set of readily accessible pretrained models. These models extract high-level features related to visual scenery, objects, characters, text, speech, music, and audio effects. To intelligently fuse these pretrained features, we train small classifier models with low time and memory requirements. Using a transformer model, our approach processes all video and audio frames of a movie trailer without any temporal pooling, efficiently exploiting the correspondences between all elements, in contrast to the small, fixed number of frames typically used by traditional methods. Unlike current approaches, our method fuses features originating from different tasks and modalities, with different dimensionalities, different temporal lengths, and complex dependencies. Our method outperforms state-of-the-art movie genre classification models in terms of precision, recall, and mean average precision (mAP). To foster future research, we make the pretrained features for the entire MovieNet dataset, along with our genre classification code and the trained models, publicly available.
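The fusion idea described above can be sketched as follows: per-modality features with different dimensionalities and temporal lengths are each linearly projected into a shared embedding space, tagged with a modality embedding, and concatenated along the time axis so that every frame participates in self-attention, with pooling deferred until after cross-modal interaction. This is a minimal NumPy illustration, not the authors' implementation; the modality names, feature dimensions, shared width `D_MODEL`, and genre count are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
D_MODEL = 64  # hypothetical shared embedding width

# Hypothetical pretrained features: different dims AND different lengths.
features = {
    "scene":  rng.standard_normal((120, 512)),  # 120 video frames, 512-d
    "speech": rng.standard_normal((300, 768)),  # 300 audio frames, 768-d
    "music":  rng.standard_normal((80, 128)),   # 80 audio frames, 128-d
}

# One linear projection per modality maps each stream into the shared space.
projections = {
    name: rng.standard_normal((feat.shape[1], D_MODEL)) / np.sqrt(feat.shape[1])
    for name, feat in features.items()
}
# A per-modality embedding lets the transformer tell the streams apart.
modality_emb = {name: rng.standard_normal(D_MODEL) for name in features}

# Concatenate along time: every frame is kept, no temporal pooling.
tokens = np.concatenate(
    [feat @ projections[name] + modality_emb[name]
     for name, feat in features.items()],
    axis=0,
)  # shape (120 + 300 + 80, D_MODEL)

def self_attention(x: np.ndarray) -> np.ndarray:
    """Minimal single-head self-attention: every token attends to every token,
    so video and audio frames can exchange information directly."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

attended = self_attention(tokens)
pooled = attended.mean(axis=0)  # pool only AFTER cross-modal attention
# Multi-label genre logits (21 genres is a placeholder count).
genre_logits = pooled @ rng.standard_normal((D_MODEL, 21))
```

A real model would stack several such attention layers with feed-forward blocks and train the projections jointly with a multi-label loss; the sketch only shows why features of unequal length and width can coexist in one sequence.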