This work investigates generative facial expression interfaces for intelligent agents from a meta-design perspective. We propose the Generative Personalized Facial Expression Interface (GPFEI) framework, which organizes rule-bounded spaces, character identity, and context--expression mapping to address the challenges of control, coherence, and alignment in run-time facial expression generation. To operationalize this framework, we developed GenFaceUI, a proof-of-concept tool that enables designers to create templates, apply semantic tags, define rules, and iteratively test outcomes. We evaluated the tool in a qualitative study with twelve designers. Participants reported gains in perceived controllability and consistency, while also expressing needs for structured visual mechanisms and lightweight explanations. These findings contribute a conceptual framework, a proof-of-concept tool, and empirical insights that highlight both opportunities and challenges for advancing generative facial expression interfaces within a broader meta-design paradigm.