Anomaly edge detection in dynamic graphs aims to identify edges that deviate significantly from normal patterns and has applications in domains such as cybersecurity, financial transactions, and AIOps. As time evolves, new types of anomaly edges keep emerging, and only a few labeled samples are available for each type. Existing methods are either designed to detect randomly inserted edges or require sufficient labeled data for model training, which limits their applicability in real-world settings. In this paper, we study this problem by leveraging the rich knowledge encoded in large language models (LLMs) and propose a method named AnomalyLLM. To align dynamic graphs with LLMs, AnomalyLLM pre-trains a dynamics-aware encoder to generate edge representations and reprograms the edges using prototypes of word embeddings. On top of the encoder, we design an in-context learning framework that integrates the information of a few labeled samples to achieve few-shot anomaly detection. Experiments on four datasets show that AnomalyLLM not only significantly improves few-shot anomaly detection performance, but also achieves superior results on new anomaly types without any update of model parameters.
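The reprogramming step mentioned above can be sketched as a cross-attention from an edge representation onto a small set of word-embedding prototypes, producing a token-like vector in the LLM's embedding space. The dimensions, the single query projection, and the random stand-in prototypes below are illustrative assumptions for this sketch, not the actual AnomalyLLM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d_edge for the dynamic-graph encoder output,
# d_llm for the LLM's word-embedding space, K prototype vectors.
d_edge, d_llm, K = 64, 128, 16

# Prototypes: in the paper these would be distilled from the LLM's word
# embeddings; here they are random stand-ins.
prototypes = rng.standard_normal((K, d_llm))

# Hypothetical learned projection mapping the edge representation
# into the same space as the prototypes.
W_q = rng.standard_normal((d_edge, d_llm))

def reprogram(edge_repr: np.ndarray) -> np.ndarray:
    """Attend from an edge representation onto word-embedding prototypes,
    returning a convex combination of prototypes the frozen LLM can consume."""
    q = edge_repr @ W_q                        # query in LLM space, (d_llm,)
    scores = prototypes @ q / np.sqrt(d_llm)   # scaled dot-product scores, (K,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention weights
    return weights @ prototypes                # reprogrammed vector, (d_llm,)

edge_repr = rng.standard_normal(d_edge)
token = reprogram(edge_repr)
print(token.shape)  # (128,)
```

Because the output lies in the span of the prototypes, the frozen LLM only ever sees inputs close to its native word-embedding manifold, which is the point of reprogramming rather than fine-tuning.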