Question-Options Generation (QOG) is the task of generating a set of question-options pairs given a context. This task has various applications, including fine-tuning large models, information retrieval, and automated multiple-choice question generation for education. In this paper, we develop QOG models using three different methods based on fine-tuning sequence-to-sequence language models (LMs). Experiments demonstrate that the end-to-end QOG model is computationally efficient and stable during both training and inference, outperforming the other methods. Furthermore, our analysis indicates that our QOG models are competitive with the large language model Llama 3-8B on the QOG task.
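An end-to-end QOG model decodes all question-options pairs for a context as a single output sequence, which must then be parsed back into structured pairs. A minimal sketch of such serialization and parsing, assuming a hypothetical flat text format with `[SEP]` and `|` delimiters (the exact scheme used in the paper may differ):

```python
def serialize_qog(pairs):
    """Flatten (question, options) pairs into one target string for
    seq2seq training. The delimiters here are illustrative assumptions,
    not the paper's exact format."""
    return " [SEP] ".join(
        f"question: {q} options: {' | '.join(opts)}" for q, opts in pairs
    )

def parse_qog(text):
    """Recover (question, options) pairs from a decoded model output."""
    pairs = []
    for chunk in text.split(" [SEP] "):
        if "options:" not in chunk:
            continue  # skip malformed chunks the model may emit
        q_part, o_part = chunk.split("options:", 1)
        question = q_part.replace("question:", "", 1).strip()
        options = [o.strip() for o in o_part.split("|")]
        pairs.append((question, options))
    return pairs

example = [("What is the capital of France?",
            ["Paris", "London", "Rome", "Berlin"])]
assert parse_qog(serialize_qog(example)) == example
```

Emitting every pair in one pass is what makes the end-to-end variant cheap at inference time: one encoder forward pass and one decoding run per context, rather than one per question.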