Recent advances in Large Language Models (LLMs) have enabled the development of increasingly complex agentic and multi-agent systems capable of planning, tool use and task decomposition. However, empirical evidence shows that many of these systems suffer from fundamental reliability issues, including hallucinated actions, unexecutable plans and brittle coordination. Crucially, these failures do not stem from limitations of the underlying models themselves, but from the absence of explicit architectural structure linking goals, capabilities and execution. This paper presents a declarative, model-independent architectural layer for grounded agentic workflows that addresses this gap. The proposed layer, referred to as DALIA (Declarative Agentic Layer for Intelligent Agents), formalises executable capabilities, exposes tasks through a declarative discovery protocol, maintains a federated directory of agents and their execution resources, and constructs deterministic task graphs grounded exclusively in declared operations. By enforcing a clear separation between discovery, planning and execution, the architecture constrains agent behaviour to a verifiable operational space, reducing reliance on speculative reasoning and free-form coordination. We present the architecture and design principles of the proposed layer and illustrate its operation through a representative task-oriented scenario, demonstrating how declarative grounding enables reproducible and verifiable agentic workflows across heterogeneous environments.
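The core idea of the abstract — that plans should be grounded exclusively in declared operations — can be sketched in a few lines. The following is a minimal, hypothetical illustration, not DALIA's actual protocol: the class names (`Capability`, `Directory`), the `declare`/`discover` methods, and the `plan` function are all assumptions introduced here for exposition. It shows how a planner that consults a directory of declared capabilities can refuse to construct a task graph containing any undeclared operation, which is the mechanism that constrains agent behaviour to a verifiable operational space.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Capability:
    """A declared, executable operation exposed by an agent (hypothetical schema)."""
    name: str
    inputs: tuple
    outputs: tuple


class Directory:
    """A (non-federated, single-process) stand-in for the federated agent directory."""

    def __init__(self):
        self._caps = {}

    def declare(self, agent: str, cap: Capability) -> None:
        # Registration is the only way an operation enters the planner's world.
        self._caps[cap.name] = (agent, cap)

    def discover(self, name: str):
        # Declarative discovery: lookup by declared name, no free-form matching.
        return self._caps.get(name)


def plan(directory: Directory, steps: list) -> list:
    """Build a deterministic task graph using only declared operations.

    Any step not present in the directory aborts planning, rather than
    letting the agent hallucinate an unexecutable action.
    """
    graph = []
    for step in steps:
        entry = directory.discover(step)
        if entry is None:
            raise ValueError(f"operation '{step}' is not declared; refusing to plan")
        agent, cap = entry
        graph.append((agent, cap.name))
    return graph
```

Because `plan` is a pure lookup over declared capabilities, the same goal and the same directory always yield the same task graph, which is what makes the resulting workflows reproducible and verifiable.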