Cognitive systems generally require a human to translate a problem definition into a specification that the system can use to solve the problem or perform the task. In this paper, we show that large language models (LLMs) can map a problem class, defined in natural language, into a semi-formal specification that an existing reasoning and learning system can then use to solve instances of that problem class. We present the design of an LLM-enabled cognitive task analyst: a system, implemented with LLM agents, that produces problem-space definitions for tasks specified in natural language. The LLM prompts are derived from the definition of problem spaces in the AI literature and from general problem-solving strategies (Polya's How to Solve It). A cognitive system can then use the resulting problem-space specification, applying domain-general problem-solving strategies ("weak methods" such as search), to solve multiple instances of problems from the problem class. While preliminary, this result suggests the potential to accelerate cognitive-systems research by disintermediating problem formulation while retaining core capabilities of cognitive systems, such as robust inference and online learning.
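To make the idea concrete, the following is a minimal sketch of the kind of semi-formal problem-space specification described above, consumed by a domain-general weak method (here, breadth-first search). The water-jug problem class, the state encoding, and the operator names are our illustrative assumptions, not the paper's actual specification format or implementation.

```python
from collections import deque

def solve(initial, operators, is_goal):
    """Domain-general weak method: breadth-first search over a
    declaratively specified problem space (initial state, operators,
    goal test). Returns a plan (list of operator names) or None."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for name, apply_op in operators:
            nxt = apply_op(state)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Hypothetical problem-space spec for one problem instance:
# two jugs of capacity 4 and 3; goal: exactly 2 units in jug A.
CAP_A, CAP_B = 4, 3
operators = [
    ("fill-A",   lambda s: (CAP_A, s[1])),
    ("fill-B",   lambda s: (s[0], CAP_B)),
    ("empty-A",  lambda s: (0, s[1])),
    ("empty-B",  lambda s: (s[0], 0)),
    ("pour-A-B", lambda s: (s[0] - min(s[0], CAP_B - s[1]),
                            s[1] + min(s[0], CAP_B - s[1]))),
    ("pour-B-A", lambda s: (s[0] + min(s[1], CAP_A - s[0]),
                            s[1] - min(s[1], CAP_A - s[0]))),
]
plan = solve((0, 0), operators, lambda s: s[0] == 2)
```

In this framing, the task analyst's job is to generate the declarative part (states, operators, goal test) from a natural-language description, while the solver itself remains generic across problem classes.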