Agentic workflows that use autonomous AI agents powered by Large Language Models (LLMs) and Model Context Protocol (MCP) servers are rapidly gaining adoption. This rise introduces challenges in scalable cloud deployment and state management. Traditional hosting on Virtual Machines (VMs) is resource-intensive and lacks elasticity. Functions-as-a-Service (FaaS) platforms offer modularity, autoscaling, and cost efficiency, but are inherently stateless. In this paper, we present FAME, a FaaS-based architecture for orchestrating MCP-enabled agentic workflows. FAME decomposes agentic patterns such as ReAct into composable agents (Planner, Actor, and Evaluator), each implemented as a FaaS function built with LangGraph and orchestrated as a FaaS workflow. This enables modular composition via AWS Step Functions and avoids the function timeouts seen in monolithic agentic workflows. To preserve context across user requests in a conversation, FAME automates agent memory persistence and injection using DynamoDB. It also streamlines MCP server deployment through AWS Lambda wrappers, caches tool outputs in S3, and proposes function fusion strategies. We evaluate FAME on two representative applications, research paper summarization and log analytics, under diverse memory and caching configurations. Results show up to a 13x latency reduction, 88% fewer input tokens, and 66% cost savings, along with improved workflow completion rates. These results demonstrate the viability of serverless platforms for hosting complex, multi-agent AI workflows at scale.
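The Planner/Actor/Evaluator decomposition described above can be sketched as a minimal ReAct-style loop. This is a hypothetical illustration, not the authors' implementation: each role would run as a separate FaaS function in FAME, while here they are plain Python callables; a dict stands in for the DynamoDB-backed memory and another for the S3 tool-output cache. All names (`planner`, `actor`, `evaluator`, `run_workflow`) are illustrative.

```python
# Sketch of a Planner/Actor/Evaluator loop under the assumptions above.
# In FAME each role is a FaaS function and Step Functions plays the role
# of the orchestrator loop below.

def planner(state):
    # Decide the next step from the running task description.
    state["plan"] = f"summarize:{state['task']}"
    return state

def actor(state, cache):
    # Execute the planned step; reuse a cached tool output when available
    # (standing in for FAME's S3 tool-output cache).
    plan = state["plan"]
    if plan in cache:
        state["observation"] = cache[plan]
    else:
        cache[plan] = state["observation"] = f"result-for-{plan}"
    return state

def evaluator(state):
    # Judge whether the observation completes the task.
    state["done"] = state["observation"].startswith("result-for-")
    return state

def run_workflow(task, memory, cache, max_steps=3):
    # Inject persisted memory at the start of the request and persist
    # it again at the end (DynamoDB's role in FAME).
    state = {"task": task, **memory}
    for _ in range(max_steps):
        state = evaluator(actor(planner(state), cache))
        if state["done"]:
            break
    memory.update(state)
    return state

memory, cache = {}, {}
final = run_workflow("paper-123", memory, cache)
print(final["done"])  # True once the evaluator accepts the observation
```

Splitting the loop body across separate functions is what lets each role scale and time out independently, which is the property the abstract attributes to the FaaS decomposition.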