Automated audio captioning is a multimodal task that aims to convert audio content into natural language. Audio captioning systems are typically assessed with quantitative metrics applied to text data. Previous studies have borrowed metrics from machine translation and image captioning to evaluate the quality of generated audio captions. Drawing inspiration from auditory cognitive neuroscience research, we introduce a novel metric -- Audio Captioning Evaluation on Semantics of Sound (ACES). ACES takes into account how human listeners parse semantic information from sounds, providing a comprehensive evaluation perspective for automated audio captioning systems. ACES combines semantic similarity with semantic entity labeling, and it outperforms comparable automated audio captioning metrics on the Clotho-Eval FENSE benchmark in two evaluation categories.
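The combination of semantic similarity and semantic entity labeling can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the bag-of-words cosine stands in for a learned sentence embedding, the keyword set `SOUND_ENTITIES` stands in for a trained entity labeler, and the weight `alpha` is arbitrary -- none of these are the actual ACES components.

```python
# Hypothetical ACES-style score: a weighted mix of a semantic-similarity
# term and a sound-entity-overlap term. All components are toy stand-ins,
# not the models used by ACES.
import math
from collections import Counter

# Assumed toy label set; ACES uses a trained semantic entity labeler instead.
SOUND_ENTITIES = {"dog", "rain", "car", "birds", "people", "engine"}

def cosine_bow(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts (embedding stand-in)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def entity_f1(a: str, b: str) -> float:
    """F1 overlap of the sound entities mentioned in each caption."""
    ea = {w for w in a.lower().split() if w in SOUND_ENTITIES}
    eb = {w for w in b.lower().split() if w in SOUND_ENTITIES}
    if not ea and not eb:
        return 1.0  # neither caption mentions a known entity
    inter = len(ea & eb)
    if inter == 0:
        return 0.0
    p, r = inter / len(eb), inter / len(ea)
    return 2 * p * r / (p + r)

def aces_like_score(reference: str, candidate: str, alpha: float = 0.5) -> float:
    """Blend similarity and entity labeling; alpha is an assumed weight."""
    return (alpha * cosine_bow(reference, candidate)
            + (1 - alpha) * entity_f1(reference, candidate))
```

For example, `aces_like_score("a dog barks in the rain", "rain falls while a dog barks")` rewards the candidate both for lexical overlap and for mentioning the same sound sources (`dog`, `rain`), whereas a caption about an unrelated entity scores low on both terms.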