Large Language Models (LLMs) have been applied to automate cyber security activities and processes, including cyber investigation and digital forensics. However, using such models for cyber investigation and digital forensics must address accountability and security considerations. Accountability requires that models provide explainable reasoning and outcomes; this information can be extracted through explicit prompt requests. On the security side, it is also crucial to protect the privacy and confidentiality of the involved data during processing. One approach to this concern is to process the data locally using a local instance of the model. Due to limitations of locally available resources, namely memory and GPU capacity, a Smaller Large Language Model (SLM) will typically be used. These SLMs have significantly fewer parameters than LLMs. However, such size reductions come with notable performance degradation, especially when the model is tasked with providing reasoning explanations. In this paper, we aim to mitigate this performance degradation by integrating cognitive strategies that humans use for problem-solving, an approach we term cognitive enhancement through prompts. Our experiments showed significant performance gains for SLMs when such enhancements were applied. We believe our exploratory study paves the way for further investigation into the use of cognitive enhancement to optimize SLMs for cyber security applications.