Digital Human Modelling (DHM) is increasingly shaped by advances in AI, wearable biosensing, and interactive digital environments, particularly in research addressing accessibility and inclusion. However, many AI-enabled DHM approaches remain tightly coupled to specific platforms, tasks, or interpretative pipelines, limiting reproducibility, scalability, and ethical reuse. This paper presents a platform-agnostic DHM framework designed to support AI-ready multimodal interaction research by explicitly separating sensing, interaction modelling, and inference readiness. The framework integrates the OpenBCI Galea headset as a unified multimodal sensing layer, providing concurrent EEG, EMG, EOG, PPG, and inertial data streams, alongside a reproducible, game-based interaction environment implemented using SuperTux. Rather than embedding AI models or behavioural inference, physiological signals are represented as structured, temporally aligned observables, enabling downstream AI methods to be applied under appropriate ethical approval. Interaction is modelled using computational task primitives and timestamped event markers, supporting consistent alignment across heterogeneous sensors and platforms. Technical verification via author self-instrumentation confirms data integrity, stream continuity, and synchronisation; no human-subjects evaluation or AI inference is reported. Scalability considerations are discussed with respect to data throughput, latency, and extension to additional sensors or interaction modalities. Illustrative use cases demonstrate how the framework can support AI-enabled DHM and HCI studies, including accessibility-oriented interaction design and adaptive systems research, without requiring architectural modifications. The proposed framework provides an emerging-technology-focused infrastructure for future ethics-approved, inclusive DHM research.