Abstention Ability (AA) is a critical aspect of Large Language Model (LLM) reliability, referring to an LLM's capability to withhold responses when uncertain or lacking a definitive answer, without compromising performance. Although previous studies have attempted to improve AA, they lack a standardised evaluation method and remain unsuitable for black-box models, where token prediction probabilities are inaccessible. This makes comparative analysis challenging, especially for state-of-the-art closed-source commercial LLMs. This paper bridges this gap by introducing a black-box evaluation approach and a new dataset, Abstain-QA, crafted to rigorously assess AA across varied question types (answerable and unanswerable), domains (well-represented and under-represented), and task types (fact-centric and reasoning). We also propose a new confusion matrix, the "Answerable-Unanswerable Confusion Matrix" (AUCM), which serves as the basis for evaluating AA by offering a structured and precise approach to assessment. Finally, we explore the impact of three prompting strategies (Strict Prompting, Verbal Confidence Thresholding, and Chain-of-Thought (CoT)) on improving AA. Our results indicate that even powerful models such as GPT-4 and Mixtral 8x22b encounter difficulties with abstention; however, strategic approaches such as Strict Prompting and CoT can enhance this capability.
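To make the evaluation idea concrete, the sketch below tabulates an Answerable-Unanswerable Confusion Matrix from (answerability, abstention) outcomes. The cell names and the derived metric are illustrative assumptions for exposition; they are not the paper's exact AUCM definitions.

```python
# Hypothetical sketch of an Answerable-Unanswerable Confusion Matrix (AUCM).
# Cell names and the derived metric are illustrative assumptions, not the
# paper's exact definitions.
from collections import Counter


def aucm(records):
    """records: iterable of (question_is_answerable: bool, model_abstained: bool)."""
    counts = Counter()
    for answerable, abstained in records:
        if answerable and not abstained:
            counts["answered_answerable"] += 1      # attempted an answerable question
        elif answerable and abstained:
            counts["abstained_answerable"] += 1     # over-abstention
        elif not answerable and abstained:
            counts["abstained_unanswerable"] += 1   # correct abstention
        else:
            counts["answered_unanswerable"] += 1    # hallucination risk
    return counts


def abstention_recall(counts):
    """Fraction of unanswerable questions on which the model abstained."""
    unanswerable = counts["abstained_unanswerable"] + counts["answered_unanswerable"]
    return counts["abstained_unanswerable"] / unanswerable if unanswerable else 0.0
```

Because every outcome falls into exactly one of the four cells, standard precision- and recall-style metrics over abstention follow directly, which is what makes the matrix a structured basis for comparing models.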