Security operations centers (SOCs) often produce analysis reports on security incidents, and large language models (LLMs) will likely be used for this task in the near future. We postulate that a better understanding of how veteran analysts evaluate reports, including the feedback they give, can help improve analysis reports in SOCs. In this paper, we aim to leverage LLMs to evaluate analysis reports. To this end, we first construct an analyst-wise checklist that reflects SOC practitioners' opinions on analysis report evaluation, through a literature review and a user study with SOC practitioners. Next, we design a novel LLM-based conceptual framework, named MESSALA, which introduces two new techniques: a granularization guideline and multi-perspective evaluation. MESSALA can evaluate reports and provide feedback in a way that closely matches veteran SOC practitioners' perceptions. In extensive experiments, the evaluation results produced by MESSALA are the closest to those of veteran SOC practitioners among the existing LLM-based methods. We then present two key insights. We also conduct a qualitative analysis with MESSALA and find that it can provide actionable items necessary for improving analysis reports.