Answering end user security questions is challenging. While large language models (LLMs) like GPT, LLAMA, and Gemini are far from error-free, they have shown promise in answering a variety of questions outside of security. We studied LLM performance in the area of end user security by qualitatively evaluating three popular LLMs on 900 systematically collected end user security questions. While LLMs demonstrate broad generalist ``knowledge'' of end user security information, we found patterns of errors and limitations across LLMs, including stale and inaccurate answers and indirect or unresponsive communication styles, all of which impact the quality of information users receive. Based on these patterns, we suggest directions for model improvement and recommend strategies for users to adopt when interacting with LLMs to seek assistance with security.