This research explores the application of large language models (LLMs) to generate synthetic datasets for Product Desirability Toolkit (PDT) testing, a key component in evaluating user sentiment and product experience. Using gpt-4o-mini, a cost-effective alternative to larger commercial LLMs, three methods (Word+Review, Review+Word, and Supply-Word) were each used to synthesize 1000 product reviews. The generated datasets were assessed for sentiment alignment, textual diversity, and data generation cost. Results demonstrated high sentiment alignment across all methods, with Pearson correlations ranging from 0.93 to 0.97. Supply-Word exhibited the highest diversity and coverage of PDT terms, albeit at increased generation cost. Despite a minor bias toward positive sentiment, LLM-generated synthetic data offers significant advantages in situations with limited test data, including scalability, cost savings, and flexibility in dataset production.
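The sentiment-alignment metric reported above can be illustrated with a short sketch. The helper below computes the Pearson correlation between the sentiment intended for each PDT word and the sentiment measured on the corresponding synthetic review; the function name and the toy score vectors are hypothetical, not taken from the paper's evaluation pipeline.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: target sentiment polarity assigned to PDT words
# (+1 positive, -1 negative, fractions for milder terms) versus sentiment
# scores measured on the synthetic reviews generated for those words.
target = [1.0, -1.0, 0.5, -0.5, 1.0, -1.0]
measured = [0.9, -0.8, 0.6, -0.4, 0.95, -0.7]

print(round(pearson_r(target, measured), 3))
```

A correlation near 1.0 on such paired scores is what the reported 0.93 to 0.97 range reflects: the sentiment of the generated reviews tracks the sentiment of the PDT words that prompted them.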