Large Language Models (LLMs) show impressive inductive reasoning capabilities, enabling them to generate hypotheses that generalize effectively to new instances when guided by in-context demonstrations. In real-world applications, however, LLMs' hypothesis generation is not determined solely by these demonstrations but is significantly shaped by task-specific model priors. Despite their critical influence, the distinct contributions of model priors versus demonstrations to hypothesis generation remain underexplored. This study bridges this gap by systematically evaluating three inductive reasoning strategies across five real-world tasks with three LLMs. Our empirical findings reveal that hypothesis generation is primarily driven by the model's inherent priors: removing demonstrations results in minimal loss of hypothesis quality and downstream utility. Further analysis shows that this result holds consistently across label formats with different label configurations, and that priors are hard to override, even under flipped labeling. These insights advance our understanding of the dynamics of hypothesis generation in LLMs and highlight the potential for better utilizing model priors in real-world inductive reasoning tasks.