Video question answering (QA) is a core task in video understanding. Assessing the quality of the video QA and video caption data used to train video large language models (VideoLLMs) is an essential challenge. Although various methods have been proposed for assessing video caption quality, dedicated evaluation methods for video QA are still lacking. To address this gap, we introduce EVQAScore, a reference-free method that leverages keyword extraction to assess both video caption and video QA data quality. In addition, we incorporate frame sampling and rescaling techniques to improve the efficiency and robustness of the evaluation, which enables our score to handle extremely long videos. Our approach achieves state-of-the-art (SOTA) performance on the VATEX-EVAL benchmark for video caption evaluation (32.8 Kendall correlation and 42.3 Spearman correlation, 4.7 and 5.9 points higher than the previous method PAC-S++). Furthermore, by using EVQAScore for data selection, we achieve SOTA results with only 12.5\% of the original data volume, outperforming the previous SOTA method PAC-S applied to 100\% of the data.
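The two ingredients named above, keyword extraction from QA text and uniform frame sampling, can be illustrated with a minimal sketch. This is not the paper's actual implementation: the stopword-based keyword extractor, the `evqa_style_score` helper, and the set-overlap scoring are simplified stand-ins for the vision-language matching EVQAScore performs; only the overall shape (sample frames, extract keywords, score reference-free) follows the description.

```python
import re

# Tiny stopword list for illustration; a real system would use a proper one.
STOPWORDS = {"a", "an", "the", "is", "are", "what", "of", "in", "on", "and", "to"}

def extract_keywords(text):
    """Naive keyword extraction: lowercase word tokens minus stopwords."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

def sample_frames(num_frames, num_samples):
    """Uniformly sample frame indices so extremely long videos stay cheap to score."""
    if num_frames <= num_samples:
        return list(range(num_frames))
    step = num_frames / num_samples
    return [int(i * step) for i in range(num_samples)]

def evqa_style_score(question, answer, frame_keyword_sets):
    """Toy reference-free score: the fraction of QA keywords that are
    grounded in any sampled frame (a stand-in for the paper's
    vision-language similarity between keywords and frames)."""
    qa_keywords = set(extract_keywords(question + " " + answer))
    if not qa_keywords:
        return 0.0
    visible = set().union(*frame_keyword_sets) if frame_keyword_sets else set()
    return len(qa_keywords & visible) / len(qa_keywords)

# Usage: keywords {dog, doing, running}; frames ground {dog, running} -> 2/3.
score = evqa_style_score(
    "What is the dog doing?",
    "The dog is running.",
    [{"dog", "grass"}, {"dog", "running"}],
)
print(sample_frames(1000, 4))  # -> [0, 250, 500, 750]
print(round(score, 2))         # -> 0.67
```

Uniform sampling keeps the number of scored frames constant regardless of video length, which is what makes the evaluation tractable for very long videos.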