The proliferation of autonomous AI agents within enterprise environments introduces a critical security challenge: managing access control for emergent, novel tasks for which no predefined policies exist. This paper presents a security framework that extends the Task-Based Access Control (TBAC) model by using a Large Language Model (LLM) as an autonomous, risk-aware judge. The judge makes access control decisions based not only on an agent's stated intent but also on the inherent \textbf{risk associated with target resources} and the LLM's own \textbf{model uncertainty} in its reasoning. When an agent proposes a novel task, the LLM judge synthesizes a just-in-time policy while also computing a composite risk score for the task and an uncertainty estimate for its own reasoning. High-risk or high-uncertainty requests trigger more stringent controls, such as requiring human approval. This dual consideration of external risk and internal confidence allows the framework to enforce a more robust and adaptive version of the principle of least privilege, paving the way for safer and more trustworthy autonomous systems.
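To make the gating logic concrete, the following minimal Python sketch illustrates one way the dual risk/uncertainty check could be wired up. It is a sketch under stated assumptions, not the paper's implementation: the names (\texttt{JudgeOutput}, \texttt{gate}, \texttt{RISK\_THRESHOLD}, \texttt{UNCERTAINTY\_THRESHOLD}) and the concrete threshold values are hypothetical.

\begin{verbatim}
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"        # grant the just-in-time policy as synthesized
    ESCALATE = "escalate"  # require human approval before granting

# Hypothetical thresholds; the paper does not specify concrete values.
RISK_THRESHOLD = 0.7
UNCERTAINTY_THRESHOLD = 0.5

@dataclass
class JudgeOutput:
    policy: dict         # just-in-time policy synthesized by the LLM judge
    risk_score: float    # composite risk of the proposed task, in [0, 1]
    uncertainty: float   # judge's uncertainty in its own reasoning, in [0, 1]

def gate(output: JudgeOutput) -> Decision:
    """Apply the dual check described above: high risk OR high
    uncertainty triggers stricter controls (human approval)."""
    if (output.risk_score >= RISK_THRESHOLD
            or output.uncertainty >= UNCERTAINTY_THRESHOLD):
        return Decision.ESCALATE
    return Decision.ALLOW
\end{verbatim}

Under this scheme, a request is granted automatically only when both the composite risk score and the judge's uncertainty fall below their thresholds; otherwise it escalates to a human approver, mirroring the stricter controls described in the abstract.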