Contemporary AI applications leverage large language models (LLMs) to harness their knowledge and reasoning abilities for natural language processing tasks. This approach shares similarities with the concept of oracle Turing machines (OTMs). To capture the broader potential of these computations, including those not yet realized, we propose an extension of OTMs: the LLM-oracle machine (LLM-OM), which employs a cluster of LLMs as its oracle. Each LLM acts as a black box, capable of answering queries within its expertise, albeit with a delay. We introduce four variants of the LLM-OM: basic, augmented, fault-avoidance, and $\epsilon$-fault. The first two are commonly observed in existing AI applications. The latter two are specifically designed to address the challenges of LLM hallucinations, biases, and inconsistencies, aiming to ensure reliable outcomes.
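The basic LLM-OM described above can be illustrated with a minimal sketch: each oracle is a black box that answers queries within its expertise after a delay, and the machine routes a query to the oracle covering its domain. The class names, the domain-keyed routing, and the toy answer functions are all illustrative assumptions, not part of the paper's formal definition.

```python
import time
from typing import Callable, Dict


class LLMOracle:
    """A black-box oracle: answers queries in its expertise, with a delay."""

    def __init__(self, expertise: str, answer_fn: Callable[[str], str],
                 delay_s: float = 0.0):
        self.expertise = expertise
        self._answer_fn = answer_fn   # stand-in for a real LLM backend
        self._delay_s = delay_s

    def query(self, question: str) -> str:
        time.sleep(self._delay_s)     # oracle calls are not instantaneous
        return self._answer_fn(question)


class LLMOracleMachine:
    """Basic LLM-OM sketch: dispatch each query to the matching oracle."""

    def __init__(self, oracles: Dict[str, LLMOracle]):
        self._oracles = oracles

    def ask(self, domain: str, question: str) -> str:
        if domain not in self._oracles:
            raise KeyError(f"no oracle for domain {domain!r}")
        return self._oracles[domain].query(question)


# Toy cluster with two hypothetical expert oracles.
cluster = LLMOracleMachine({
    "math": LLMOracle("math", lambda q: "4" if q == "2+2?" else "unknown"),
    "geo":  LLMOracle("geo",  lambda q: "Paris" if "France" in q else "unknown"),
})
print(cluster.ask("math", "2+2?"))               # -> 4
print(cluster.ask("geo", "Capital of France?"))  # -> Paris
```

The fault-avoidance and $\epsilon$-fault variants would extend this loop, e.g. by cross-checking answers from multiple oracles before accepting one; that logic is not shown here.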