Hierarchical organization is fundamental to biological systems and human societies, yet artificial intelligence systems often rely on monolithic architectures that limit adaptability and scalability. Current hierarchical reinforcement learning (HRL) approaches typically restrict hierarchies to two levels or require centralized training, which limits their practical applicability. We introduce the TAME Agent Framework (TAG), a framework for constructing fully decentralized hierarchical multi-agent systems. TAG enables hierarchies of arbitrary depth through a novel LevelEnv concept, which abstracts each hierarchy level as the environment for the agents above it. This approach standardizes information flow between levels while preserving loose coupling, allowing seamless integration of diverse agent types. We demonstrate the effectiveness of TAG by implementing hierarchical architectures that combine different RL agents across multiple levels, achieving improved performance over classical multi-agent RL baselines on standard benchmarks. Our results show that decentralized hierarchical organization enhances both learning speed and final performance, positioning TAG as a promising direction for scalable multi-agent systems.
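The LevelEnv idea can be illustrated with a minimal sketch. All names below (`LevelEnv`, `ToyEnv`, `ToyAgent`) are hypothetical and not taken from the TAG codebase; the sketch only shows the core abstraction: a level bundles its agents with the environment below it and itself exposes an environment-style `reset`/`step` interface, so levels stack to arbitrary depth.

```python
class ToyEnv:
    """Trivial base environment: state counts steps; episode ends at t >= 3."""
    def reset(self):
        self.t = 0
        return self.t
    def step(self, actions):
        self.t += 1
        return self.t, float(sum(actions)), self.t >= 3

class ToyAgent:
    """Lower-level agent whose (placeholder) policy conditions on the
    goals sent down from the level above and its local observation."""
    def act(self, goals, obs):
        return sum(goals) + obs

class LevelEnv:
    """Wraps (agents at one level + the environment below them) so that
    the whole bundle looks like an environment to the level above."""
    def __init__(self, agents, env):
        self.agents = agents  # agents acting at this level
        self.env = env        # ToyEnv or another LevelEnv below
    def reset(self):
        self.obs = self.env.reset()
        return self.obs
    def step(self, goals):
        # Goals from above are passed down; this level's agents act on
        # them, and their joint action is a "goal" for the level below.
        actions = [a.act(goals, self.obs) for a in self.agents]
        self.obs, reward, done = self.env.step(actions)
        return self.obs, reward, done

# Because LevelEnv exposes reset/step itself, levels nest to any depth:
level1 = LevelEnv([ToyAgent(), ToyAgent()], ToyEnv())  # bottom level
level2 = LevelEnv([ToyAgent()], level1)                # sees level1 as its env
```

Because every level speaks the same environment interface, agents at any level need no knowledge of how many levels sit below them, which is what keeps the coupling between levels loose.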