Structured product data, in the form of attribute-value pairs, is essential for e-commerce platforms to support features such as faceted product search and attribute-based product comparison. However, vendors often provide unstructured product descriptions, making attribute value extraction necessary to ensure data consistency and usability. Large language models (LLMs) have demonstrated their potential for product attribute value extraction in few-shot scenarios. Recent research has shown that self-refinement techniques can improve the performance of LLMs on tasks such as code generation and text-to-SQL translation. For other tasks, applying these techniques has increased costs due to the processing of additional tokens without yielding any improvement in performance. This paper investigates applying two self-refinement techniques, error-based prompt rewriting and self-correction, to the product attribute value extraction task. The self-refinement techniques are evaluated across zero-shot, few-shot in-context learning, and fine-tuning scenarios using GPT-4o. The experiments show that both self-refinement techniques fail to significantly improve extraction performance while substantially increasing processing costs. For scenarios with development data, fine-tuning yields the highest performance, and the ramp-up costs of fine-tuning are amortized as the number of product descriptions increases.