Visual artifacts are often introduced into streamed video content due to the prevailing conditions during content production and delivery. Since these artifacts can degrade the quality of the user's experience, it is important to detect them automatically and accurately in order to enable effective quality measurement and enhancement. Existing detection methods often focus on a single type of artifact and/or determine the presence of an artifact by thresholding objective quality indices. Such approaches have been reported to offer inconsistent prediction performance and are also impractical for real-world applications, where multiple artifacts co-exist and interact. In this paper, we propose a Multiple Visual Artifact Detector, MVAD, for video streaming which, for the first time, is able to detect multiple artifacts using a single framework that does not rely on video quality assessment models. Our approach employs a new Artifact-aware Dynamic Feature Extractor (ADFE) to obtain artifact-relevant spatial features within each frame for multiple artifact types. The extracted features are further processed by a Recurrent Memory Vision Transformer (RMViT) module, which captures both short-term and long-term temporal information within the input video. The proposed network architecture is optimized in an end-to-end manner on a new, large and diverse training database, generated by simulating the video streaming pipeline and applying Adversarial Data Augmentation. This model has been evaluated on two video artifact databases, Maxwell and BVI-Artifact, and achieves consistent and improved prediction results for ten target visual artifacts when compared to seven existing single and multiple artifact detectors. The source code and training database will be available at https://chenfeng-bristol.github.io/MVAD/.
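The two-stage design described above (per-frame artifact-aware features, then segment-wise recurrent-memory temporal aggregation feeding a multi-label head) can be illustrated with a minimal toy sketch. All names, shapes, and the exponential memory blend below are illustrative assumptions, not the authors' ADFE/RMViT implementation:

```python
import numpy as np


class RecurrentMemorySketch:
    """Toy stand-in for segment-wise recurrent memory aggregation
    (loosely inspired by the RMViT idea): the video is split into
    short segments; each update mixes the segment's features with a
    persistent memory vector so long-term context is carried forward.
    The retention factor `alpha` is an assumed hyperparameter."""

    def __init__(self, dim, alpha=0.5):
        self.alpha = alpha
        self.memory = np.zeros(dim)

    def update(self, segment_feats):
        # Short-term context: pool features within the current segment.
        short_term = segment_feats.mean(axis=0)
        # Long-term context: blend the pooled features into the memory.
        self.memory = self.alpha * self.memory + (1 - self.alpha) * short_term
        return self.memory


def detect_artifacts(frame_feats, head_weights, segment_len=8):
    """Roll per-frame features through the recurrent memory, then score
    each artifact type with a linear head and a sigmoid (multi-label)."""
    memory = RecurrentMemorySketch(frame_feats.shape[1])
    final_state = memory.memory
    for start in range(0, len(frame_feats), segment_len):
        final_state = memory.update(frame_feats[start:start + segment_len])
    logits = head_weights @ final_state            # one logit per artifact
    return 1.0 / (1.0 + np.exp(-logits))           # per-artifact probabilities


# Stand-in for features a frame-level extractor would produce:
rng = np.random.default_rng(0)
feats = rng.standard_normal((32, 16))              # 32 frames, 16-dim features
head = rng.standard_normal((10, 16))               # 10 target artifact types
probs = detect_artifacts(feats, head)
print(probs.shape)                                 # (10,)
```

In a real detector the random features would come from a learned spatial extractor and the memory module would be a transformer with trainable memory tokens; the sketch only shows how per-frame features, a recurrent state, and a ten-way multi-label output fit together.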