Audio-visual quality assessment (AVQA) research has been hindered by the limitations of existing datasets: they are typically small in scale, offer insufficient diversity in content and quality, and are annotated only with overall scores. These shortcomings provide limited support for model development and multimodal perception research. We propose a practical approach to AVQA dataset construction. First, we design a crowdsourced subjective experiment framework for AVQA that breaks the constraints of in-lab settings and achieves reliable annotation across varied environments. Second, we employ a systematic data preparation strategy to ensure broad coverage of both quality levels and semantic scenarios. Third, we extend the dataset with additional annotations, enabling research on multimodal perception mechanisms and their relation to content. Finally, we validate this approach through YT-NTU-AVQ, the largest and most diverse AVQA dataset to date, consisting of 1,620 user-generated audio and video (A/V) sequences. The dataset and platform code are available at https://github.com/renyu12/YT-NTU-AVQ.