Creative coding requires continuous translation between evolving concepts and computational artifacts, making reflection essential yet difficult to sustain. Creators often struggle to manage ambiguous intentions, emergent outputs, and complex code, which limits the depth of their exploration. This work examines how large language models (LLMs) can scaffold reflection not as isolated prompts but as a system-level mechanism shaping creative regulation. From formative studies with eight expert creators, we derived reflection challenges and design principles that informed Reflexa, an integrated scaffold combining dialogic guidance, visualized version navigation, and iterative suggestion pathways. A within-subjects study with 18 participants provides an exploratory mechanism validation, showing that structured reflection patterns mediate the link between AI interaction and creative outcomes. These reflection trajectories enhanced perceived controllability, broadened exploration, and improved originality and aesthetic quality. Our findings advance HCI understanding of reflection in LLM-assisted creative practices and offer design strategies for building LLM-based creative tools that support richer human-AI co-creativity.