Recent advances in large language models, particularly through the Chain-of-Thought (CoT) approach, have yielded significant improvements in solving complex problems. However, existing models either tend to sacrifice detailed reasoning for brevity to match user preferences, or require extensive and expensive training data to learn complex reasoning, limiting their potential on hard tasks. To bridge this gap, following the idea of test-time scaling, we propose a simple method that encourages models to adopt a more patient reasoning style without introducing new knowledge or skills. Using a preference optimization approach, we generate detailed reasoning processes as positive examples and brief answers as negative examples, thereby training the model to favor thoroughness in its responses. Our results demonstrate a performance increase of up to 6.7% on GSM8k after training on only a lightweight dataset.
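The pair construction described above can be sketched as follows. This is a minimal illustration with an assumed data layout, not the paper's actual pipeline: each training example pairs a detailed chain-of-thought response (marked as chosen) with a terse direct answer (marked as rejected), the format expected by common DPO-style preference trainers.

```python
def build_preference_pairs(examples):
    """Build preference pairs favoring patient, detailed reasoning.

    examples: list of (question, detailed_cot, short_answer) triples,
    where detailed_cot is a step-by-step solution and short_answer is
    a brief reply that skips the reasoning.
    """
    pairs = []
    for question, detailed_cot, short_answer in examples:
        pairs.append({
            "prompt": question,
            "chosen": detailed_cot,    # positive: thorough reasoning
            "rejected": short_answer,  # negative: answer without reasoning
        })
    return pairs


# Hypothetical GSM8k-style example (illustrative, not from the paper's data).
examples = [(
    "Natalia sold clips to 48 friends in April, and half as many in May. "
    "How many clips did she sell altogether?",
    "In April she sold 48 clips. In May she sold half as many, "
    "48 / 2 = 24. Altogether she sold 48 + 24 = 72 clips. The answer is 72.",
    "72",
)]
pairs = build_preference_pairs(examples)
print(pairs[0]["chosen"])
```

The resulting list of prompt/chosen/rejected dictionaries can be fed to an off-the-shelf preference optimization trainer; only the pair construction, not the optimizer, is shown here.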