Video question answering (QA) is a core task in video understanding. Evaluating the quality of video QA and video caption data for training video large language models (VideoLLMs) is an essential challenge. Although various methods have been proposed for assessing video caption quality, dedicated evaluation methods for video QA are still lacking. To address this gap, we introduce EVQAScore, a reference-free method that leverages keyword extraction to assess the quality of both video caption and video QA data. In addition, we incorporate frame sampling and rescaling techniques to improve the efficiency and robustness of our evaluation, which enables our score to handle extremely long videos. Our approach achieves state-of-the-art (SOTA) performance on the VATEX-EVAL benchmark for video caption evaluation (32.8 Kendall correlation and 42.3 Spearman correlation, 4.7 and 5.9 points higher than the previous method PAC-S++). Furthermore, by using EVQAScore for data selection, we achieve SOTA results with only 12.5\% of the original data volume, outperforming the previous SOTA method PAC-S trained on 100\% of the data.