As generative AI becomes more prevalent, it is important to study how human users interact with such models. In this work, we investigate how people use text-to-image models to generate desired target images. To study this interaction, we created ArtWhisperer, an online game where users are given a target image and are tasked with iteratively finding a prompt that generates an image similar to the target. Through this game, we recorded over 50,000 human-AI interactions; each interaction corresponds to one text prompt created by a user and the corresponding generated image. The majority of these are repeated interactions where a user iterates to find the best prompt for their target image, making this a unique sequential dataset for studying human-AI collaboration. In an initial analysis of this dataset, we identify several characteristics of prompt interactions and user strategies. People submit diverse prompts and are able to discover a variety of text descriptions that generate similar images. Interestingly, prompt diversity does not decrease as users find better prompts. We further propose a new metric to quantify the steerability of AI using our dataset. We define steerability as the expected number of interactions required to adequately complete a task. We estimate this value by fitting a Markov chain for each target task and calculating the expected time to reach an adequate score in the Markov chain. We quantify and compare AI steerability across different types of target images and two different models, finding that images of cities and of the natural world are more steerable than artistic and fantasy images. These findings provide insights into human-AI interaction behavior, present a concrete method for assessing AI steerability, and demonstrate the general utility of the ArtWhisperer dataset.
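The steerability estimate described above can be sketched as a standard Markov-chain hitting-time computation: discretize scores into states, make states at or above the adequacy threshold absorbing, and solve the linear system \((I - Q)\,t = \mathbf{1}\) over the transient states. The sketch below is illustrative only; the score buckets, transition matrix, and threshold are invented for the example and are not the paper's fitted values.

```python
import numpy as np

def expected_hitting_time(P, adequate_states, start_state):
    """Expected number of steps to first reach any "adequate" state.

    P: (n, n) row-stochastic transition matrix over score states.
    adequate_states: indices of states counted as adequate (absorbing).
    start_state: index of the initial score state.
    """
    n = P.shape[0]
    adequate = set(adequate_states)
    transient = [s for s in range(n) if s not in adequate]
    if start_state not in transient:
        return 0.0  # already adequate at the start
    # Q is the transient-to-transient block of P; hitting times solve
    # (I - Q) t = 1 (fundamental-matrix identity for absorbing chains).
    Q = P[np.ix_(transient, transient)]
    ones = np.ones(len(transient))
    t = np.linalg.solve(np.eye(len(transient)) - Q, ones)
    return t[transient.index(start_state)]

# Toy example: four score buckets, state 3 counts as "adequate".
P = np.array([
    [0.5, 0.3, 0.15, 0.05],
    [0.2, 0.4, 0.30, 0.10],
    [0.1, 0.2, 0.40, 0.30],
    [0.0, 0.0, 0.00, 1.00],  # absorbing once adequate
])
steps = expected_hitting_time(P, adequate_states=[3], start_state=0)
print(round(steps, 2))  # → 7.67
```

In this framing, a more steerable task is simply one whose fitted chain reaches an adequate state in fewer expected steps, so steerability values for different target images and models are directly comparable.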