The latest generative large language models (LLMs) have found application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models. However, more research is needed to assess how different prompts, seed data selection strategies, filtering methods, or model settings affect the quality of paraphrased data (and downstream models). In this study, we investigate three text diversity incentive methods well established in crowdsourcing: taboo words, hints based on previous outlier solutions, and chaining of previous outlier solutions. Using these incentive methods as part of instructions to LLMs augmenting text datasets, we measure their effects on the generated texts' lexical diversity and on downstream model performance. We compare the effects across 5 different LLMs, 6 datasets, and 2 downstream models. We show that diversity is most increased by taboo words, but downstream model performance is highest with hints.
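To make the three incentive methods concrete, the sketch below shows one way the paraphrasing prompts might be composed; it is not the authors' implementation, and all names (build_prompt, taboo_words, outlier_example) are illustrative assumptions.

```python
def build_prompt(seed_text, method, taboo_words=None, outlier_example=None):
    """Compose a paraphrasing instruction with an optional diversity incentive.

    A minimal sketch assuming:
    - "taboo": forbid reusing frequent words from earlier paraphrases,
    - "hints": show a previous outlier (highly distinct) paraphrase as inspiration,
    - "chaining": paraphrase the previous outlier solution instead of the original seed.
    """
    base = f"Paraphrase the following text while preserving its meaning:\n{seed_text}\n"
    if method == "taboo" and taboo_words:
        base += "Do not use any of these words: " + ", ".join(taboo_words) + "\n"
    elif method == "hints" and outlier_example:
        base += (
            "For inspiration, here is one earlier paraphrase that differs "
            f"strongly from the others:\n{outlier_example}\n"
        )
    elif method == "chaining" and outlier_example:
        base = f"Paraphrase the following text while preserving its meaning:\n{outlier_example}\n"
    return base

# Example usage (illustrative only):
prompt = build_prompt(
    "The battery drains very quickly on this phone.",
    method="taboo",
    taboo_words=["battery", "drains", "quickly"],
)
```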