Human cognition excels at transcending sensory input and forming latent representations that structure our understanding of the world. Although Large Language Models (LLMs) can produce chain-of-thought reasoning, they lack a principled framework for capturing latent structures and modeling uncertainty, especially in compositional reasoning tasks. We propose Verbalized Probabilistic Graphical Modeling (vPGM), a Bayesian prompting framework that guides LLMs to simulate key principles of Probabilistic Graphical Models (PGMs) in natural language. Unlike many traditional probabilistic methods, which require substantial domain expertise or specialized training, vPGM bypasses expert-driven model design, making it well suited to scenarios with limited prior assumptions or scarce data. We evaluate our model on several compositional reasoning tasks, both closed-ended and open-ended. Our results indicate that the model effectively improves confidence calibration and text generation quality.