Beyond conventional paradigms of translating speech and text, there has recently been interest in the automated transcreation of images to facilitate the localization of visual content across different cultures. Attempts to define this as a formal Machine Learning (ML) problem have been impeded by the lack of automatic evaluation mechanisms, with previous work relying solely on human evaluation. In this paper, we seek to close this gap by proposing a suite of automatic evaluation metrics inspired by machine translation (MT) metrics, categorized into: a) Object-based, b) Embedding-based, and c) VLM-based. Drawing on theories from translation studies and real-world transcreation practices, we identify three critical dimensions of image transcreation: cultural relevance, semantic equivalence, and visual similarity, and design our metrics to evaluate systems along these axes. Our results show that proprietary VLMs best identify cultural relevance and semantic equivalence, while vision-encoder representations are adept at measuring visual similarity. Meta-evaluation across 7 countries shows that our metrics agree strongly with human ratings, with average segment-level correlations ranging from 0.55 to 0.87. Finally, through a discussion of the merits and demerits of each metric, we offer a robust framework for automated image transcreation evaluation, grounded in both theoretical foundations and practical application. Our code can be found here: https://github.com/simran-khanuja/automatic-eval-transcreation
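To illustrate the embedding-based family of metrics, a minimal sketch of visual similarity is shown below: cosine similarity between vision-encoder embeddings of the source image and its transcreated counterpart. The embedding vectors here are hypothetical placeholders, not outputs of the paper's actual encoder; in practice these would come from a pretrained vision encoder.

```python
from math import sqrt


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Hypothetical vision-encoder embeddings (illustrative values only):
# one for the source image, one for its transcreated version.
source_emb = [0.2, 0.8, 0.1, 0.5]
target_emb = [0.25, 0.75, 0.05, 0.55]

# A score near 1.0 indicates high visual similarity between the two images.
visual_similarity = cosine_similarity(source_emb, target_emb)
```

In a real pipeline, the placeholder vectors would be replaced by embeddings from a vision encoder, and the resulting scores could then be correlated with human ratings at the segment level for meta-evaluation.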