Existing datasets for attribute value extraction (AVE) predominantly focus on explicit attribute values while neglecting implicit ones, lack product images, are often not publicly available, and lack in-depth human inspection across diverse domains. To address these limitations, we present ImplicitAVE, the first publicly available multimodal dataset for implicit attribute value extraction. ImplicitAVE, sourced from the MAVE dataset, is carefully curated and expanded to cover implicit AVE and multimodality, resulting in a refined dataset of 68k training and 1.6k testing samples across five domains. We also explore the application of multimodal large language models (MLLMs) to implicit AVE, establishing a comprehensive benchmark for MLLMs on the ImplicitAVE dataset. Six recent MLLMs with eleven variants are evaluated across diverse settings, revealing that implicit value extraction remains a challenging task for MLLMs. The contributions of this work include the development and release of ImplicitAVE, and the exploration and benchmarking of various MLLMs for implicit AVE, providing valuable insights and potential future research directions. Dataset and code are available at https://github.com/HenryPengZou/ImplicitAVE