With the rapid development of artificial intelligence, and in particular the increasingly widespread deployment of question answering systems, high-quality question generation has become a key component supporting these systems. This article focuses on knowledge-based question generation, which aims to enable computers to simulate the human questioning process based on an understanding of specific texts or knowledge bases. To address the hallucination and knowledge-gap issues that large language models exhibit on knowledge-intensive tasks, this paper proposes an enhanced question generation method that incorporates contrastive learning. The method uses multiple models to jointly mine domain knowledge and applies contrastive learning to guide the model toward reducing noise and hallucinations during generation. Experimental results show that prompts containing contrastive examples considerably improve question generation performance; in particular, using contrastive instructions and contrastive examples together yields the highest-quality and most accurate generated questions. These results demonstrate that the proposed method, which combines contrastive context with chain-of-thought prompting, effectively improves both the quality and the practicality of question generation.
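The prompt design described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function name, the example data, and the exact wording of the contrastive instruction and chain-of-thought cue are all assumptions introduced here for clarity.

```python
def build_contrastive_prompt(passage, good_examples, bad_examples,
                             use_instruction=True, use_cot=True):
    """Assemble a question-generation prompt that pairs positive (faithful)
    question examples with negative (hallucinated) ones, optionally adding
    a contrastive instruction and a chain-of-thought cue.

    All strings here are hypothetical placeholders, not the paper's prompts.
    """
    parts = []
    if use_instruction:
        # Contrastive instruction: tell the model what to imitate and avoid.
        parts.append(
            "Generate a question grounded strictly in the passage below. "
            "Imitate the GOOD examples and avoid the errors shown in the "
            "BAD examples."
        )
    # Interleave positive and negative examples as contrastive pairs.
    for good, bad in zip(good_examples, bad_examples):
        parts.append(f"GOOD: {good}")
        parts.append(f"BAD (hallucinated): {bad}")
    parts.append(f"Passage: {passage}")
    if use_cot:
        # Chain-of-thought cue preceding the final generation slot.
        parts.append("Let's reason step by step before writing the question.")
    parts.append("Question:")
    return "\n".join(parts)
```

A caller would pass a passage plus a few curated good/bad question pairs and send the resulting string to the language model; ablations like those reported in the abstract correspond to toggling `use_instruction` and `use_cot`.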