Large language models (LLMs) can be used to analyze cyber threat intelligence (CTI) data from cybercrime forums, which contain extensive information and key discussions about emerging cyber threats. However, to date, the accuracy and efficiency of LLMs for such critical tasks have yet to be thoroughly evaluated. Hence, this study assesses the performance of an LLM system built on the OpenAI GPT-3.5-turbo model [8] at extracting CTI information. To do so, a random sample of more than 700 daily conversations from three cybercrime forums (XSS, Exploit_in, and RAMP) was extracted, and the LLM system was instructed, using only simple natural-language prompts, to summarize the conversations and predict 10 key CTI variables, such as whether a large organization and/or critical infrastructure is being targeted. Two coders then reviewed each conversation and assessed whether the information extracted by the LLM was accurate. The LLM system performed well, with an average accuracy of 96.23%, an average precision of 90%, and an average recall of 88.2%. The evaluation also uncovered several ways to improve the system, such as helping the LLM distinguish between stories and past events, and choosing verb tenses in prompts carefully. Overall, the results of this study highlight the relevance of using LLMs for cyber threat intelligence.
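As a minimal sketch of the evaluation procedure described above, the snippet below computes accuracy, precision, and recall for a single binary CTI variable by comparing LLM predictions against human coder labels. All data here are illustrative placeholders, not the study's annotations, and the variable names are assumptions for the example.

```python
# Illustrative evaluation of one binary CTI variable (e.g., "targets critical
# infrastructure": 1 = yes, 0 = no). Labels are hypothetical, not study data.
llm_predictions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
coder_labels    = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

# Tally the confusion-matrix cells by pairing predictions with coder labels.
tp = sum(p == 1 and c == 1 for p, c in zip(llm_predictions, coder_labels))
tn = sum(p == 0 and c == 0 for p, c in zip(llm_predictions, coder_labels))
fp = sum(p == 1 and c == 0 for p, c in zip(llm_predictions, coder_labels))
fn = sum(p == 0 and c == 1 for p, c in zip(llm_predictions, coder_labels))

accuracy  = (tp + tn) / len(coder_labels)  # share of all calls that match
precision = tp / (tp + fp)                 # share of positive calls that are right
recall    = tp / (tp + fn)                 # share of true positives that are found
print(f"accuracy={accuracy:.2%} precision={precision:.2%} recall={recall:.2%}")
```

In the study, such per-variable scores would be averaged across the 10 CTI variables to obtain the reported averages; the averaging scheme here is an assumption for illustration.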