Large Language Models (LLMs) are increasingly developed for use in complex professional domains, yet little is known about how teams design and evaluate these systems in practice. This paper examines the challenges and trade-offs of LLM development through a 12-week ethnographic study of a team building a pedagogical chatbot. We observed design and evaluation activities and interviewed both developers and domain experts. Our analysis revealed four key practices: creating workarounds for data collection, turning to augmentation when expert input was limited, co-developing evaluation criteria with experts, and adopting hybrid expert-developer-LLM evaluation strategies. These practices show how the team made strategic decisions under constraints and demonstrate the central role of domain expertise in shaping the system. Challenges included expert motivation and trust, difficulties in structuring participatory design, and questions about the ownership and integration of expert knowledge. We propose design opportunities for future LLM development workflows that emphasize AI literacy, transparent consent, and frameworks that recognize evolving expert roles.