Chain-of-thought (CoT) prompting has significantly enhanced the capability of large language models (LLMs) by structuring their reasoning processes. However, existing methods face critical limitations: handcrafted demonstrations require extensive human expertise, while trigger phrases are prone to inaccuracies. In this paper, we propose the Zero-shot Uncertainty-based Selection (ZEUS) method, a novel approach that improves CoT prompting by utilizing uncertainty estimates to select effective demonstrations without needing access to model parameters. Unlike traditional methods, ZEUS offers high sensitivity in distinguishing between helpful and ineffective questions, ensuring more precise and reliable selection. Our extensive evaluation shows that ZEUS consistently outperforms existing CoT strategies across four challenging reasoning benchmarks, demonstrating its robustness and scalability.