This paper introduces the Creative Intelligence Loop (CIL), a novel socio-technical framework for responsible human-AI co-creation. Rooted in the 'Workflow as Medium' paradigm, the CIL provides a disciplined structure for dynamic human-AI collaboration, guiding the strategic integration of diverse AI teammates as creative collaborators while the human remains the final arbiter of ethical alignment and creative integrity. The CIL was empirically demonstrated through the practice-led creation of two graphic novellas, investigating how AI can serve as an effective creative colleague within a subjective medium that lacks objective metrics. The process required navigating multifaceted challenges, including AI's 'jagged frontier' of capabilities, sycophancy, and attention-scarce feedback environments. These challenges prompted iterative refinement of teaming practices, yielding two emergent strategies: a multi-faceted critique system that integrates adversarial AI roles to counter sycophancy, and a practice of prioritizing 'feedback-ready' concrete artifacts to elicit essential human critique. The resulting graphic novellas analyze distinct socio-technical governance failures: 'The Steward' examines benevolent AI paternalism in smart cities, illustrating how algorithmic hubris can erode freedom; 'Fork the Vote' probes democratic legitimacy by comparing centralized AI opacity with emergent collusion in federated networks. This work contributes a self-improving framework for responsible human-AI co-creation and two graphic novellas designed to foster AI literacy and dialogue through accessible narrative analysis of AI's societal implications.