Over the years, access control systems have become increasingly complex, often causing a disconnect between what is envisaged by stakeholders in decision-making positions and the actual permissions granted, as evidenced by access logs. For instance, Attribute-Based Access Control (ABAC), a flexible yet complex model typically configured by system security officers, can be made understandable to others only when presented at a high level in natural language. Although several algorithms have been proposed in the literature for the automatic extraction of ABAC rules from access logs, no attempt has yet been made to bridge the semantic gap between machine-enforceable formal logic and human-centric policy intent. Our work addresses this problem by developing a framework that generates human-understandable natural language access control policies from logs. We investigate to what extent the power of Large Language Models (LLMs) can be harnessed to achieve both accuracy and scalability in this process. We have instantiated the framework, named LANTERN (LLM-based ABAC Natural Translation and Explanation for Rule Navigation), as a publicly accessible web-based application for reproducibility of our results.