Large language models (LLMs) are promising tools for supporting security management tasks, such as incident response planning. However, their unreliability and tendency to hallucinate remain significant challenges. In this paper, we address these challenges by introducing a principled framework for using an LLM as decision support in security management. Our framework integrates the LLM into an iterative loop where it generates candidate actions that are checked for consistency with system constraints and lookahead predictions. When consistency is low, we abstain from the generated actions and instead collect external feedback, e.g., by evaluating actions in a digital twin. This feedback is then used to refine the candidate actions through in-context learning (ICL). We prove that this design allows us to control the hallucination risk by tuning the consistency threshold. Moreover, we establish a bound on the regret of ICL under certain assumptions. To evaluate our framework, we apply it to an incident response use case where the goal is to generate a response and recovery plan based on system logs. Experiments on four public datasets show that our framework reduces recovery times by up to 30% compared to frontier LLMs.
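To make the iterative loop concrete, the following is a minimal Python sketch of the generate-check-abstain-refine procedure described above. It is an illustration under stated assumptions, not the paper's implementation: the names `plan_with_abstention`, `consistency`, `evaluate_in_twin`, `format_prompt`, and the parameter `threshold` are hypothetical, and the consistency score is assumed to lie in [0, 1].

```python
def plan_with_abstention(llm, consistency, evaluate_in_twin, logs,
                         threshold=0.8, max_iters=5):
    """Hypothetical sketch of the iterative loop (all names are assumptions).

    llm(prompt)            -> candidate action plan (string)
    consistency(plan)      -> score in [0, 1] combining checks against
                              system constraints and lookahead predictions
    evaluate_in_twin(plan) -> external feedback, e.g., from a digital twin
    threshold              -> consistency threshold that, per the paper's
                              claim, tunes the hallucination risk
    """
    examples = []  # (plan, feedback) pairs accumulated for ICL refinement
    for _ in range(max_iters):
        prompt = format_prompt(logs, examples)
        plan = llm(prompt)
        if consistency(plan) >= threshold:
            return plan  # consistent enough: accept the candidate plan
        # Abstain from the candidate: collect external feedback and
        # feed it back to the LLM as in-context examples.
        feedback = evaluate_in_twin(plan)
        examples.append((plan, feedback))
    return None  # abstain entirely if no candidate passes the threshold


def format_prompt(logs, examples):
    """Embed system logs plus prior (plan, feedback) pairs as ICL shots."""
    shots = "\n".join(f"Plan: {p}\nFeedback: {f}" for p, f in examples)
    return (f"System logs:\n{logs}\n\n"
            f"Previous attempts and feedback:\n{shots}\n\n"
            f"Propose a response and recovery plan:")
```

Raising `threshold` makes the loop abstain more often and rely more on external feedback, which is the lever the consistency-threshold result refers to; the sketch leaves the scoring and twin-evaluation functions abstract since the abstract does not specify them.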