Recent work has highlighted the risks of LLM-generated content across a wide range of harmful behaviors, including incorrect and harmful code. In this work, we extend this line of research by studying whether LLM-generated web designs contain dark patterns. We evaluated designs of ecommerce web components generated by four popular LLMs: Claude, GPT, Gemini, and Llama. We selected 13 commonly used ecommerce components (e.g., search, product reviews) and used them as prompts to generate a total of 312 components across all models. Over one-third of the generated components contain at least one dark pattern. The majority of dark pattern strategies involve hiding crucial information, limiting users' actions, or manipulating users into decisions through a manufactured sense of urgency. Dark patterns also appear more frequently in components tied to company interests. These findings highlight the need for interventions to prevent dark patterns during front-end code generation with LLMs and underscore the importance of expanding ethical design education to a broader audience.