Recently, several studies have highlighted the potential of Large Language Models (LLMs) as effective generators of supervised training data, offering advantages such as improved inference efficiency and reduced data-collection costs. However, these studies have focused predominantly on English tasks. In this paper, we address a fundamental research question: can LLMs serve as proficient training data generators for tasks in other languages? Specifically, we leverage LLMs to synthesize supervised training data under few-shot and zero-shot learning scenarios across six diverse Japanese downstream tasks. We then use the synthesized data to train compact models (e.g., BERT). We term this methodology JAPAGEN. Our experiments show that JAPAGEN achieves robust performance on classification tasks that require formal text inputs, yielding competitive results compared to conventional LLM prompting strategies.
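To make the pipeline concrete, below is a minimal sketch of a JAPAGEN-style workflow: an LLM is prompted (here, zero-shot) to synthesize labeled Japanese text, and a compact Japanese BERT is fine-tuned on the result. The prompt wording, the binary sentiment label set, and the model identifiers (gpt-4o-mini, cl-tohoku/bert-base-japanese-v3) are illustrative assumptions, not the paper's exact configuration.

```python
"""Minimal sketch of a JAPAGEN-style pipeline (assumptions noted inline):
1) prompt an LLM to synthesize labeled Japanese training data (zero-shot);
2) fine-tune a compact model (Japanese BERT) on the synthesized data."""
import torch
from openai import OpenAI
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["positive", "negative"]  # hypothetical binary sentiment task


def synthesize(n_per_label: int = 100) -> list[tuple[str, int]]:
    """Ask an LLM for labeled Japanese examples, one per line."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set; model choice is an assumption
    data = []
    for label_id, label in enumerate(LABELS):
        prompt = (
            f"日本語の商品レビューを{n_per_label}件、1行に1件ずつ生成してください。"
            f"感情は「{label}」としてください。"
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        data += [(line.strip(), label_id) for line in reply.splitlines() if line.strip()]
    return data


def train(data: list[tuple[str, int]]):
    """Fine-tune a compact Japanese BERT classifier on the synthesized pairs."""
    # cl-tohoku tokenizers additionally need: pip install fugashi unidic-lite
    name = "cl-tohoku/bert-base-japanese-v3"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=len(LABELS))
    opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
    loader = DataLoader(data, batch_size=16, shuffle=True, collate_fn=lambda b: b)
    model.train()
    for _ in range(3):  # small fixed epoch count for the sketch
        for batch in loader:
            texts, labels = zip(*batch)
            enc = tok(list(texts), padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
            loss = model(**enc, labels=torch.tensor(labels)).loss
            loss.backward()
            opt.step()
            opt.zero_grad()
    return model


if __name__ == "__main__":
    model = train(synthesize())
```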