Despite the promising few-shot ability of large language models (LLMs), the standard paradigm of in-context learning (ICL) suffers from two drawbacks: susceptibility to the selected demonstrations and the intricacy of generating those demonstrations. In this paper, we raise the fundamental question of whether human-generated demonstrations are necessary for ICL. To answer this question, we propose the self-contemplation prompting strategy (SEC), a paradigm free from human-crafted demonstrations. The key insight of SEC is that, instead of relying on hand-crafted examples as demonstrations in ICL, SEC asks LLMs to first create demonstrations on their own, based on which the final output is generated. SEC is a flexible framework that can be adapted to both vanilla ICL and chain-of-thought (CoT) prompting, but with greater ease, since the manual generation of both examples and rationales can be dispensed with. Extensive experiments on arithmetic reasoning, commonsense reasoning, multi-task language understanding, and code generation benchmarks show that SEC, which requires no hand-crafted demonstrations, significantly outperforms the zero-shot learning strategy and achieves results comparable to ICL with hand-crafted demonstrations. This demonstrates that, for many tasks, contemporary LLMs possess a sufficient level of competence to rely exclusively on their own capacity for decision making, removing the need for external training data. Code is available at https://github.com/ruili33/SEC.
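To make the two-step procedure concrete, below is a minimal sketch of the SEC flow in Python. It assumes a generic `complete(prompt) -> str` wrapper around an LLM completion API (a hypothetical helper, not part of the SEC release), and the prompt wording is illustrative rather than the paper's exact templates.

```python
def complete(prompt: str) -> str:
    """Hypothetical wrapper around an LLM completion endpoint."""
    raise NotImplementedError("plug in your LLM client here")


def sec_answer(question: str, n_demos: int = 3) -> str:
    # Step 1: ask the model to invent its own demonstrations for the task,
    # instead of supplying hand-crafted (question, answer) pairs.
    demo_prompt = (
        f"Generate {n_demos} example question-answer pairs for problems "
        f"similar to the following question:\n{question}"
    )
    demonstrations = complete(demo_prompt)

    # Step 2: prepend the self-generated demonstrations and query the model,
    # exactly as hand-crafted demonstrations would be used in standard ICL.
    final_prompt = f"{demonstrations}\n\nQuestion: {question}\nAnswer:"
    return complete(final_prompt)
```

Under this reading, the CoT variant only changes the step-1 instruction to request rationales alongside the answers; the rest of the pipeline is unchanged.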