Stakeholders often struggle to accurately express their requirements due to articulation barriers arising from limited domain knowledge or cognitive constraints. This can cause misalignment between expressed and intended requirements, complicating elicitation and validation. Traditional elicitation techniques, such as interviews and follow-up sessions, are time-consuming and risk distorting stakeholders' original intent across iterations. Large Language Models (LLMs) can infer user intentions from context, suggesting potential for assisting stakeholders in expressing their needs. This raises two questions: (i) how effectively can LLMs support requirement expression, and (ii) does such support benefit stakeholders with limited domain expertise? We conducted a study with 26 participants who produced 130 requirement statements. Each participant first expressed requirements unaided, then evaluated LLM-generated revisions tailored to their context. Participants rated the LLM revisions significantly higher than their original statements across all dimensions: alignment with intent, readability, reasoning, and unambiguity. Qualitative feedback further showed that LLM revisions often surfaced tacit details stakeholders considered important and helped them better understand their own requirements. We present and evaluate a stakeholder-centered approach that leverages LLMs as articulation aids in requirements elicitation and validation. Our results show that LLM-assisted reformulation improves the perceived completeness, clarity, and alignment of requirements. By keeping stakeholders in the validation loop, this approach promotes responsible and trustworthy use of AI in Requirements Engineering.