Recent methods for electroencephalography (EEG) spatial super-resolution, whether they directly predict missing signals from visible channels or adapt latent diffusion-based generative modeling to temporal data, often lack awareness of physiological spatial structure, which constrains spatial generation quality. To address this limitation, we introduce TopoDiff, a geometry- and relation-aware diffusion model for EEG spatial super-resolution. Inspired by how human experts interpret spatial EEG patterns, TopoDiff incorporates topology-aware image embeddings derived from EEG topographic representations, providing global geometric context for spatial generation, together with a dynamic channel-relation graph that encodes inter-electrode relationships and evolves with temporal dynamics. This design yields a spatially grounded super-resolution framework with consistent performance improvements. Across multiple EEG datasets spanning diverse applications, including SEED/SEED-IV for emotion recognition, PhysioNet motor imagery/movement (MI/MM), and TUSZ for seizure detection, our method achieves substantial gains in generation fidelity and notable improvements in downstream EEG task performance.
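To make the dynamic channel-relation graph concrete, here is a minimal illustrative sketch. It assumes, purely for illustration, that inter-electrode relationships are estimated as windowed Pearson correlations between channel signals; the paper's actual graph construction (and how the graph is consumed by the diffusion model) may differ.

```python
import numpy as np

def dynamic_channel_graph(eeg, window, step):
    """Illustrative sketch: build a sequence of channel-relation graphs
    from windowed EEG.

    eeg: (channels, samples) array.
    Returns a (num_windows, C, C) stack of adjacency matrices, where
    each entry is the absolute Pearson correlation between two channels
    within that time window (one graph per window, so the graph
    "evolves" with the temporal dynamics of the signal).
    """
    C, T = eeg.shape
    graphs = []
    for start in range(0, T - window + 1, step):
        seg = eeg[:, start:start + window]
        A = np.abs(np.corrcoef(seg))   # (C, C) correlation adjacency
        np.fill_diagonal(A, 0.0)       # drop self-loops
        graphs.append(A)
    return np.stack(graphs)

# Toy example: 8 channels, 512 samples of synthetic data.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 512))
G = dynamic_channel_graph(x, window=128, step=64)
print(G.shape)  # (7, 8, 8): one 8x8 graph per 128-sample window
```

In a real pipeline these per-window adjacency matrices would typically be fed to a graph neural network or attention module as relational conditioning; that step is specific to the model design and is not sketched here.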