Large Language Models (LLMs) have shown remarkable capabilities across a wide range of tasks, but their deployment in high-stakes domains requires consistent performance across multiple interaction rounds. This paper introduces a comprehensive framework for evaluating and improving LLM response consistency, making three key contributions. First, we propose a novel Position-Weighted Consistency (PWC) score that captures both the importance of early-stage stability and the presence of recovery patterns in multi-turn interactions. Second, we present a carefully curated benchmark dataset spanning diverse domains and difficulty levels, specifically designed to evaluate LLM consistency under a variety of challenging follow-up scenarios. Third, we introduce Confidence-Aware Response Generation (CARG), a framework that incorporates model confidence signals into the generation process to stabilize responses. Empirical results demonstrate that CARG significantly improves response stability without sacrificing accuracy, underscoring its potential for reliable LLM deployment in critical applications.
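The abstract does not specify the PWC formula, so the sketch below is only an illustrative reading of the stated design goal: earlier rounds are weighted more heavily than later ones, so early flips hurt the score more while later recovery can still partially raise it. The function name, the per-round agreement encoding, and the geometric weighting scheme are all assumptions, not the paper's definition.

```python
import numpy as np

def pwc_score(consistency, decay=0.8):
    """Illustrative Position-Weighted Consistency sketch (assumed form).

    `consistency` is a sequence of per-round agreement indicators:
    1.0 if the model's answer in that round matches its initial answer,
    0.0 otherwise. Earlier rounds receive larger weights, so early-stage
    instability is penalized more heavily than late-stage instability;
    the paper's actual weighting may differ.
    """
    c = np.asarray(consistency, dtype=float)
    t = np.arange(len(c))
    w = decay ** t  # geometrically decaying position weights (assumption)
    return float(np.sum(w * c) / np.sum(w))

# Example: a flip in round 3 followed by recovery is penalized less
# than a flip in round 1 would be, matching the stated intuition.
print(pwc_score([1.0, 1.0, 0.0, 1.0, 1.0]))
```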