Large Language Models have become an integral part of a new generation of intelligent and interactive writing assistants. Many are offered commercially with a chatbot-like UI, such as ChatGPT, and provide little information about their inner workings. This makes this new type of widespread system a potential target for deceptive design patterns. For example, such assistants might employ the hidden-costs pattern by providing guidance up to a certain point and then asking for a fee to see the rest. As another example, they might sneak unwanted content or edits into longer generated or revised text passages (e.g., to influence the expressed opinion). With these and other examples, we conceptually transfer several deceptive patterns from the literature to the new context of AI writing assistants. Our goal is to raise awareness and encourage future research into how the UI and interaction design of such systems can impact people and their writing.