Text prompts are the most common medium for human-generative AI (GenAI) communication. Though convenient, they struggle to convey fine-grained and referential intent. One promising solution is to combine text prompts with precise GUI interactions, such as brushing and clicking. However, no formal model exists to characterize the synergistic design of prompts and interactions, hindering their comparison and innovation. To fill this gap, through an iterative and deductive process, we develop the Interaction-Augmented Instruction (IAI) model, a compact entity-relation graph formalizing how the combination of interactions and text prompts enhances human-GenAI communication. With the model, we distill twelve recurring and composable atomic interaction paradigms from prior tools, verifying the model's capability to support systematic design characterization and comparison. Case studies further demonstrate the model's utility in applying, refining, and extending these paradigms. These results illustrate the IAI model's descriptive, discriminative, and generative power for shaping future GenAI systems.