In this paper, we introduce Kun, a novel approach for creating high-quality instruction-tuning datasets for large language models (LLMs) without relying on manual annotations. By adapting a self-training algorithm based on instruction back-translation and answer polishing, Kun leverages unlabelled data from diverse sources such as Wudao, Wanjuan, and SkyPile to generate a substantial dataset of over a million Chinese instructional data points. This approach departs significantly from traditional methods by using a self-curation process to refine and select the most effective instruction-output pairs. Our experiments with the 6B-parameter Yi model across various benchmarks demonstrate Kun's robustness and scalability. The core contributions of our method are its algorithmic advances, which improve data retention and clarity, and its data generation approach, which substantially reduces reliance on costly and time-consuming manual annotation. This methodology offers a scalable and efficient solution for improving the instruction-following capabilities of LLMs, with significant implications for their application across diverse fields. The code and dataset can be found at https://github.com/Zheng0428/COIG-Kun.
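To make the described pipeline concrete, the sketch below outlines the three stages mentioned above: inferring an instruction from an unlabelled passage (instruction back-translation), polishing the passage into an answer, and self-curation by scoring and filtering the resulting pairs. The function names (`back_translate_instruction`, `polish_answer`, `score_pair`) and the curation threshold are illustrative assumptions for exposition, not the released implementation or API.

```python
# Minimal sketch of a Kun-style data construction loop (assumed names, not the official code).
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class InstructionPair:
    instruction: str
    output: str
    score: float = 0.0


def back_translate_instruction(passage: str) -> str:
    """Ask a seed-tuned LLM which instruction the passage could serve as a response to."""
    raise NotImplementedError  # backed by an LLM call in practice


def polish_answer(passage: str, instruction: str) -> str:
    """Refine the raw passage into a clean, direct answer to the inferred instruction."""
    raise NotImplementedError  # backed by an LLM call in practice


def score_pair(pair: InstructionPair) -> float:
    """Self-curation: have the model rate how well the output follows the instruction."""
    raise NotImplementedError  # backed by an LLM call in practice


def build_dataset(corpus: Iterable[str], threshold: float = 4.0) -> List[InstructionPair]:
    """Turn unlabelled passages into curated instruction-output pairs."""
    curated: List[InstructionPair] = []
    for passage in corpus:
        instruction = back_translate_instruction(passage)
        output = polish_answer(passage, instruction)
        pair = InstructionPair(instruction, output)
        pair.score = score_pair(pair)
        if pair.score >= threshold:  # keep only pairs the model judges high quality
            curated.append(pair)
    return curated
```

In practice each stub would be implemented as a prompted call to the base model, and only pairs clearing the quality threshold would be retained for instruction tuning.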