Large language models (LLMs) have shown remarkable performance across a wide range of tasks, yet long-context reading remains a challenge. This study explores the effectiveness of leveraging high-quality academic peer review data to fine-tune LLMs and enhance their long-context capabilities. We compare Direct Preference Optimization (DPO) with Supervised Fine-Tuning (SFT), demonstrating DPO's superiority and data efficiency. Our experiments show that the fine-tuned model achieves a 4.04-point improvement over the phi-3 baseline and a 2.6\% increase on the Qasper benchmark using only 2,000 samples. Despite limitations in data scale and processing cost, this study underscores the potential of DPO and high-quality data for advancing LLM performance. Additionally, the zero-shot benchmark results indicate that aggregated high-quality human reviews are overwhelmingly preferred over LLM-generated responses, even those produced by the most capable models such as GPT-4o. This suggests that high-quality human reviews are exceptionally rich in information, reasoning, and long-context retrieval, capabilities that even the most advanced models have not fully captured. These findings highlight the value of leveraging human reviews to further advance the field.
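For context, DPO optimizes the policy directly on preference pairs rather than through a separately learned reward model. The standard objective (Rafailov et al., 2023) is sketched below as background; the pairing of preferred and dispreferred responses with human and model-generated reviews is our reading of the setup described above, not a verbatim specification of this study's configuration:
\begin{equation*}
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right],
\end{equation*}
where $x$ is the long-context input (e.g., the paper under review), $y_w$ and $y_l$ are the preferred and dispreferred responses (here, presumably the aggregated human review and an LLM-generated review), $\pi_{\mathrm{ref}}$ is the frozen reference model, and $\beta$ controls the strength of regularization toward the reference policy.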