Current state-of-the-art metrics such as BLEU, CIDEr, VQA score, SigLIP-2, and CLIPScore often fail to capture semantic or structural accuracy, especially in domain-specific or context-dependent scenarios. To address these limitations, this paper proposes a Physics-Constrained Multimodal Data Evaluation (PCMDE) metric that combines large language models with reasoning, knowledge-based mapping, and vision-language models. The architecture comprises three main stages: (1) extraction of spatial and semantic multimodal features through object detection and VLMs; (2) Confidence-Weighted Component Fusion for adaptive component-level validation; and (3) physics-guided reasoning with large language models to enforce structural and relational constraints (e.g., alignment, position, consistency).
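The Confidence-Weighted Component Fusion stage (2) can be sketched as a confidence-weighted average of per-component validation scores. This is a minimal illustration only: the function name, the fallback behavior, and the exact weighting scheme are assumptions, not the paper's formulation.

```python
def confidence_weighted_fusion(scores, confidences):
    """Fuse per-component validation scores (hypothetical sketch).

    Each component score (e.g. for alignment, position, consistency)
    is weighted by the confidence of its extraction, such as the
    object detector's confidence for that component.
    """
    if not scores or len(scores) != len(confidences):
        raise ValueError("scores and confidences must be non-empty and equal length")
    total = sum(confidences)
    if total == 0:
        # No confident components: fall back to an unweighted mean
        # (an assumption; the paper may handle this case differently).
        return sum(scores) / len(scores)
    return sum(s * c for s, c in zip(scores, confidences)) / total

# Example: three components with their detector confidences.
fused = confidence_weighted_fusion([0.9, 0.6, 0.8], [0.95, 0.40, 0.75])
```

In this sketch, low-confidence components contribute less to the fused score, which is one plausible reading of "adaptive component-level validation."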