Prompt-based interfaces for Large Language Models (LLMs) have made prototyping and building AI-powered applications easier than ever before. However, identifying potential harms that may arise from AI applications remains a challenge, particularly during prompt-based prototyping. To address this, we present Farsight, a novel in situ interactive tool that helps people identify potential harms from the AI applications they are prototyping. Based on a user's prompt, Farsight highlights news articles about relevant AI incidents and allows users to explore and edit LLM-generated use cases, stakeholders, and harms. We report design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers. After using Farsight, AI prototypers in our user study are better able to independently identify potential harms associated with a prompt and find our tool more useful and usable than existing resources. Their qualitative feedback also highlights that Farsight encourages them to focus on end-users and think beyond immediate harms. We discuss these findings and reflect on their implications for designing AI prototyping experiences that meaningfully engage with AI harms. Farsight is publicly accessible at: https://PAIR-code.github.io/farsight.