E-commerce platforms require structured product data in the form of attribute-value pairs to offer features such as faceted product search or attribute-based product comparison. However, vendors often provide unstructured product descriptions, necessitating the extraction of attribute-value pairs from these texts. BERT-based extraction methods require large amounts of task-specific training data and struggle with unseen attribute values. This paper explores using large language models (LLMs) as a more training-data efficient and robust alternative. We propose prompt templates for zero-shot and few-shot scenarios, comparing textual and JSON-based target schema representations. Our experiments show that GPT-4 achieves the highest average F1-score of 85% using detailed attribute descriptions and demonstrations. Llama-3-70B performs nearly as well, offering a competitive open-source alternative. GPT-4 surpasses the best PLM baseline by 5% in F1-score. Fine-tuning GPT-3.5 increases the performance to the level of GPT-4 but reduces the model's ability to generalize to unseen attribute values.