E-commerce product understanding inherently demands strong multimodal comprehension of text, images, and structured attributes. General-purpose Vision-Language Models (VLMs) enable generalizable multimodal latent modelling, yet no established strategy exists for adapting them to the attribute-centric, multi-image, and noisy nature of e-commerce data without sacrificing general performance. In this work, we show, through a large-scale experimental study, that targeted adaptation of general VLMs can substantially improve e-commerce performance while preserving broad multimodal capabilities. Furthermore, we propose an extensive novel evaluation suite covering deep product understanding, strict instruction following, and dynamic attribute extraction.