The integration of Large Language Models (LLMs) like GPT-4 into traditional Natural Language Processing (NLP) tasks has opened new avenues for enhancing model performance while reducing reliance on extensive human annotation. This paper presents a novel approach that leverages Chain of Thought (CoT) prompting to distill knowledge from GPT-4 and applies it to improve the efficiency and effectiveness of a smaller model, BERT, on Named Entity Recognition (NER) tasks. Our method involves a two-phase training process: first pre-training on GPT-4-annotated data, then refining the model on a combination of distilled and original human-annotated data. The results demonstrate that our mixed-training strategy significantly outperforms models trained solely on human annotations, achieving superior F1-scores and offering a cost-effective solution for resource-limited or closed-network settings. The study also discusses the challenges encountered, such as variability in LLM output and a tendency to hallucinate, and proposes future work on improved prompt design and annotation selection. Our findings indicate a promising synergy between LLM insights and traditional NLP techniques, paving the way for more accessible and robust NLP applications.
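The two-phase training process described above can be sketched in a minimal, self-contained form. This is an illustrative data-pipeline skeleton, not the paper's implementation: the names `NerExample`, `build_mixed_set`, `two_phase_schedule`, and the `distilled_fraction` parameter are assumptions introduced here, and actual BERT fine-tuning is elided.

```python
# Illustrative sketch of the two-phase training data pipeline described
# in the abstract. All names and the mixing ratio are hypothetical; the
# paper's actual implementation and hyperparameters may differ.
import random
from dataclasses import dataclass
from typing import Iterator, List, Tuple


@dataclass
class NerExample:
    tokens: List[str]
    labels: List[str]  # BIO tags, e.g. ["B-PER", "O"]
    source: str        # "gpt4" (distilled annotation) or "human"


def build_mixed_set(distilled: List[NerExample],
                    human: List[NerExample],
                    distilled_fraction: float = 0.5,
                    seed: int = 0) -> List[NerExample]:
    """Phase 2 data: a subset of distilled examples mixed with all human ones."""
    rng = random.Random(seed)
    k = int(len(distilled) * distilled_fraction)
    mixed = rng.sample(distilled, k) + list(human)
    rng.shuffle(mixed)
    return mixed


def two_phase_schedule(distilled: List[NerExample],
                       human: List[NerExample]
                       ) -> Iterator[Tuple[str, List[NerExample]]]:
    """Yield (phase_name, dataset) pairs for the two training phases."""
    # Phase 1: pre-train on GPT-4-annotated (distilled) data only.
    yield "pretrain", list(distilled)
    # Phase 2: refine on distilled + human-annotated data combined.
    yield "finetune", build_mixed_set(distilled, human)
```

A trainer would consume each `(phase, dataset)` pair in order, running the usual token-classification fine-tuning loop over `dataset` in each phase; the mixing step is what distinguishes this strategy from training on human annotations alone.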