Incident management is essential to maintaining the reliability and availability of cloud computing services. Cloud vendors typically disclose incident reports to the public, summarizing failures and the recovery process to help minimize their impact. However, such reports are often lengthy and unstructured, making them difficult to understand, analyze, and use for long-term dependability improvements. The emergence of LLMs offers new opportunities to address this challenge, but how to do so is currently understudied. In this paper, we explore the use of cutting-edge LLMs to extract key information from unstructured cloud incident reports. First, we collect more than 3,000 incident reports from 3 leading cloud service providers (AWS, AZURE, and GCP) and manually annotate the collected samples. Then, we design and compare 6 prompt strategies to extract and classify different types of information. We consider 6~LLMs, including 3 lightweight and 3 state-of-the-art (SotA) models, and evaluate model accuracy, latency, and token cost across datasets, models, prompts, and extracted fields. Our study uncovers the following key findings: (1) LLMs achieve high metadata-extraction accuracy, $75\%\text{--}95\%$ depending on the dataset. (2) Few-shot prompting generally improves accuracy for metadata fields except for classification, and achieves better (lower) latency due to shorter outputs, but requires $1.5\text{--}2\times$ more input tokens. (3) Lightweight models (e.g., Gemini~2.0, GPT~3.5) offer favorable trade-offs among accuracy, cost, and latency; SotA models yield higher accuracy at significantly greater cost and latency. Our study provides tools, methodologies, and insights for leveraging LLMs to accurately and efficiently extract incident-report information. The FAIR data and code are publicly available at https://github.com/atlarge-research/llm-cloud-incident-extraction.