Anomaly detection in computational workflows is critical for ensuring system reliability and security. However, traditional rule-based methods struggle to detect novel anomalies. This paper leverages large language models (LLMs) for workflow anomaly detection, exploiting their ability to learn complex data patterns. Two approaches are investigated: 1) supervised fine-tuning (SFT), in which pre-trained LLMs are fine-tuned on labeled data for sentence classification to identify anomalies, and 2) in-context learning (ICL), in which prompts containing task descriptions and examples guide LLMs toward few-shot anomaly detection without any fine-tuning. The paper evaluates the performance, efficiency, and generalization of SFT models, explores zero-shot and few-shot ICL prompts, and examines interpretability enhancement via chain-of-thought prompting. Experiments across multiple workflow datasets demonstrate the potential of LLMs for effective anomaly detection in complex executions.
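To make the SFT approach concrete, the sketch below fine-tunes a generic pre-trained encoder as a binary sentence classifier over serialized workflow-job records. This is a minimal sketch under stated assumptions: the model name, the feature-to-sentence serialization, and the toy dataset are illustrative, not the paper's exact configuration.

```python
# Minimal SFT sketch: fine-tune a pre-trained model to classify a
# serialized workflow-job record as normal (0) or anomalous (1).
# Assumptions: distilbert-base-uncased as the backbone; job features
# flattened into sentence-like strings; a two-example toy dataset.
import torch
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # any encoder with a classification head
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

# Each workflow job is flattened into a sentence-like feature string.
texts = ["job=align_sort runtime=182s cpu=0.91 io_wait=0.02 status=ok",
         "job=align_sort runtime=9710s cpu=0.03 io_wait=0.88 status=ok"]
labels = [0, 1]  # 0 = normal, 1 = anomalous

enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class WorkflowDataset(torch.utils.data.Dataset):
    """Wraps pre-tokenized records and labels for the Trainer."""
    def __init__(self, enc, labels):
        self.enc, self.labels = enc, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft_out",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=WorkflowDataset(enc, labels),
)
trainer.train()
```

At inference time, the same serialization is applied to an unseen job record and the argmax over the two logits yields the normal/anomalous label.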
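The ICL approach, by contrast, needs no training loop: a prompt combines a task description, a few labeled example records, and (optionally) an instruction to reason step by step in the chain-of-thought style. The sketch below is an assumed prompt format, not the paper's exact template; the `chat` parameter stands in for any chat-completion client (hosted API or local model).

```python
# Minimal few-shot ICL sketch: classify a workflow record via prompting
# alone, with a chain-of-thought instruction for interpretability.
# The record fields and two in-context examples are illustrative assumptions.
PROMPT_TEMPLATE = """You are an expert in computational-workflow monitoring.
Decide whether the final job record is NORMAL or ANOMALOUS.
Reason step by step before giving the answer.

Example 1:
Record: job=genome_merge runtime=204s cpu=0.88 io_wait=0.03
Answer: NORMAL

Example 2:
Record: job=genome_merge runtime=8840s cpu=0.02 io_wait=0.91
Answer: ANOMALOUS

Record: {record}
Answer:"""

def classify(chat, record: str) -> str:
    """Fill the few-shot prompt with a new record and query any chat LLM.

    `chat` is a caller-supplied callable: prompt string -> completion string.
    """
    return chat(PROMPT_TEMPLATE.format(record=record))
```

Dropping the two in-context examples from the template yields the zero-shot variant, and the model's intermediate reasoning text provides the interpretability signal discussed above.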