Structured product data, in the form of attribute-value pairs, is essential for e-commerce platforms to support features such as faceted product search and attribute-based product comparison. However, vendors often provide unstructured product descriptions, making attribute value extraction necessary to ensure data consistency and usability. Large language models (LLMs) have demonstrated their potential for product attribute value extraction in few-shot scenarios. Recent research has shown that self-refinement techniques can improve the performance of LLMs on tasks such as code generation and text-to-SQL translation. For other tasks, applying these techniques has increased costs due to the processing of additional tokens without yielding any performance improvement. This paper investigates applying two self-refinement techniques, error-based prompt rewriting and self-correction, to the product attribute value extraction task. The self-refinement techniques are evaluated across zero-shot, few-shot in-context learning, and fine-tuning scenarios using GPT-4o. The experiments show that both self-refinement techniques have only a marginal impact on the model's performance across the different scenarios, while significantly increasing processing costs. For scenarios with training data, fine-tuning yields the highest performance, and the upfront cost of fine-tuning is amortized as the number of product descriptions increases.