LLM-based and agent-based synthetic personas are increasingly used in design and product decision-making, yet prior work shows that prompt-based personas often produce persuasive but unverifiable responses that obscure their evidentiary basis. We present PersonaCite, an agentic system that reframes AI personas as evidence-bounded research instruments through retrieval-augmented interaction. Unlike prior approaches that rely on prompt-based roleplaying, PersonaCite retrieves actual voice-of-customer artifacts at each conversation turn, constrains responses to the retrieved evidence, explicitly abstains when evidence is missing, and provides response-level source attribution. Through semi-structured interviews and a deployment study with 14 industry experts, we report preliminary findings on perceived benefits, validity concerns, and design tensions, and propose Persona Provenance Cards as a documentation pattern for responsible AI persona use in human-centered design workflows.
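The evidence-bounded turn described above (retrieve, constrain, abstain, attribute) can be sketched minimally as follows. This is an illustrative sketch only, not the paper's actual implementation: the `Evidence` type, `answer_turn` function, and keyword-overlap retrieval are all hypothetical stand-ins; a real system would use semantic retrieval and an LLM prompted with only the retrieved artifacts.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source_id: str  # e.g. an interview transcript or survey response ID
    excerpt: str    # the retrieved voice-of-customer snippet

# Canned abstention message, returned when no evidence supports the query.
ABSTAIN = "I don't have evidence about that in my source data."

def answer_turn(query: str, corpus: list[Evidence]) -> dict:
    """Answer one conversation turn, constrained to retrieved evidence."""
    # Toy keyword-overlap retrieval; a real system would use dense
    # or semantic search over the voice-of-customer corpus.
    hits = [e for e in corpus
            if any(w in e.excerpt.lower() for w in query.lower().split())]
    if not hits:
        # Explicit abstention: no supporting evidence was retrieved.
        return {"response": ABSTAIN, "citations": []}
    # Here the response simply quotes the evidence; in a full system an
    # LLM would paraphrase it, still bounded to `hits` only.
    return {
        "response": " ".join(e.excerpt for e in hits),
        "citations": [e.source_id for e in hits],  # response-level attribution
    }
```

The key design choice the sketch illustrates is that abstention and attribution are structural outputs of the turn, not behaviors requested in a prompt.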