Product designers often begin their design process with handcrafted personas. While personas are intended to ground design decisions in consumer preferences, they often fall short in practice: they remain abstract, are expensive to produce, and are difficult to translate into actionable design features. As a result, personas risk serving as static reference points rather than tools that actively shape design outcomes. To address these challenges, we built Personagram, an interactive system powered by multimodal large language models (MLLMs) that helps designers explore detailed census-based personas, extract product features inferred from persona attributes, and recombine those features for specific customer segments. In a study with 12 professional designers, we show that Personagram facilitates more actionable ideation workflows by structuring multimodal thinking from persona attributes to product design features, achieving higher persona engagement, perceived transparency, and satisfaction than a chat-based baseline. We discuss implications of integrating AI-generated personas into product design workflows.