The integration of Large Language Models (LLMs) into interactive systems opens new opportunities for adaptive user experiences, yet it also raises challenges regarding accessibility, explainability, and normative compliance. This paper presents an implemented model-driven architecture for generating personalised, multimodal, and accessibility-aligned user interfaces. The approach combines structured user profiles, declarative adaptation rules, and validated prompt templates to refine baseline accessible UI templates that conform to WCAG 2.2 and EN 301 549, tailoring them to cognitive and sensory support needs. LLMs dynamically transform language complexity, modality, and visual structure, producing outputs such as plain-language text, pictograms, and high-contrast layouts aligned with ISO 24495-1 and W3C COGA guidance. A healthcare use case demonstrates how the system generates accessible post-consultation medication instructions for a user profile that combines a cognitive disability with a hearing impairment. SysML v2 models provide explicit traceability between user needs, adaptation rules, and normative requirements, ensuring explainable and auditable transformations. Grounded in Human-Centered AI (HCAI) principles, the framework incorporates co-design processes and structured feedback mechanisms to guide iterative refinement and support trustworthy generative behaviour.
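To make the pipeline concrete, the minimal sketch below illustrates how a structured user profile and declarative adaptation rules could select transformations and fill a validated prompt template before the LLM call. All identifiers here (`UserProfile`, `AdaptationRule`, `build_prompt`, the rule set, and the template text) are hypothetical illustrations, not the paper's actual implementation; only the mapping of needs to normative sources (WCAG 2.2, EN 301 549, ISO 24495-1, W3C COGA) follows the abstract.

```python
# Hypothetical sketch: profile-driven, rule-based prompt refinement.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Structured profile capturing support needs (illustrative fields only)."""
    cognitive_support: bool = False
    hearing_impairment: bool = False

@dataclass
class AdaptationRule:
    """Declarative rule: if the condition holds, apply the named transformations."""
    name: str
    condition: callable
    transformations: list = field(default_factory=list)
    rationale: str = ""  # links the rule to a normative source for auditability

RULES = [
    AdaptationRule(
        name="simplify-language",
        condition=lambda p: p.cognitive_support,
        transformations=["plain_language", "pictograms"],
        rationale="ISO 24495-1 / W3C COGA",
    ),
    AdaptationRule(
        name="visual-first",
        condition=lambda p: p.hearing_impairment,
        transformations=["captioned_media", "high_contrast_layout"],
        rationale="WCAG 2.2 / EN 301 549",
    ),
]

PROMPT_TEMPLATE = (
    "Rewrite the following medication instructions for a user interface.\n"
    "Apply these transformations: {transformations}.\n"
    "Preserve all dosage information exactly.\n"
    "Instructions: {source_text}"
)

def build_prompt(profile: UserProfile, source_text: str) -> tuple[str, list[str]]:
    """Evaluate the rules against the profile and fill the prompt template.

    Returns the prompt and the names of fired rules: the audit trail that
    a SysML v2 model would link back to user needs and requirements.
    """
    fired = [r for r in RULES if r.condition(profile)]
    transformations = sorted({t for r in fired for t in r.transformations})
    prompt = PROMPT_TEMPLATE.format(
        transformations=", ".join(transformations) or "none",
        source_text=source_text,
    )
    return prompt, [r.name for r in fired]

if __name__ == "__main__":
    # Usage matching the healthcare use case in the abstract.
    profile = UserProfile(cognitive_support=True, hearing_impairment=True)
    prompt, trace = build_prompt(profile, "Take 2 tablets twice daily after meals.")
    print(prompt)
    print("Fired rules:", trace)  # explicit trace from needs to transformations
```

Keeping the rules declarative, with an explicit `rationale` field per rule, is one plausible way to realise the traceability the abstract claims: each generated interface can report exactly which rules fired and which normative requirements they serve.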