Design feedback helps practitioners improve their artifacts while also fostering reflection and design reasoning. Large Language Models (LLMs) such as ChatGPT can support design work, but they often provide generic, one-off suggestions that limit reflective engagement. We investigate how to guide LLMs to act as design mentors by applying the Cognitive Apprenticeship Model, which emphasizes demonstrating reasoning through six methods: modeling, coaching, scaffolding, articulation, reflection, and exploration. We operationalize these instructional methods through structured prompting and evaluate them in a within-subjects study with data visualization practitioners. Participants interacted with both a baseline LLM and an instructional LLM designed with cognitive apprenticeship prompts. Surveys, interviews, and conversational log analyses compared experiences across the two conditions. Our findings show that cognitively informed prompts elicit deeper design reasoning and more reflective feedback exchanges, though some participants preferred the baseline depending on task type or experience level. We distill design considerations for AI-assisted feedback systems that foster reflective practice.