Understanding neural responses to visual stimuli remains challenging due to the inherent complexity of brain representations and the modality gap between neural data and visual inputs. Existing methods, which mainly reduce neural decoding to generation tasks or simple correlation analyses, fail to capture the hierarchical and temporal nature of visual processing in the brain. To address these limitations, we present NeuroAlign, a novel framework for fine-grained fMRI-video alignment inspired by the hierarchical organization of the human visual system. Our framework implements a two-stage mechanism that mirrors biological visual pathways: global semantic understanding through Neural-Temporal Contrastive Learning (NTCL) and fine-grained pattern matching through enhanced vector quantization. NTCL explicitly models temporal dynamics through bidirectional prediction between modalities, while our DynaSyncMM-EMA approach enables dynamic multi-modal fusion with adaptive weighting. Experiments demonstrate that NeuroAlign significantly outperforms existing methods on cross-modal retrieval tasks, establishing a new paradigm for understanding visual cognitive mechanisms.
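As a concrete illustration of the global-alignment stage, the sketch below shows one plausible form of an NTCL-style objective: a symmetric InfoNCE contrastive loss over paired fMRI and video embeddings, combined with bidirectional next-step prediction between the two modalities. The abstract does not specify these details, so the loss form, the `fmri_to_video`/`video_to_fmri` prediction heads, and all hyperparameters here are assumptions rather than the paper's implementation.

```python
# Minimal sketch of an NTCL-style objective (assumed form, not the paper's code):
# symmetric InfoNCE alignment plus bidirectional next-step prediction.
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired embeddings of shape (B, D)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                   # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device) # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def bidirectional_prediction_loss(fmri_seq: torch.Tensor,
                                  video_seq: torch.Tensor,
                                  fmri_to_video: torch.nn.Module,
                                  video_to_fmri: torch.nn.Module) -> torch.Tensor:
    """Predict each modality's step t+1 from the other modality up to step t.

    Inputs are sequence embeddings of shape (B, T, D); the two heads are
    hypothetical modules mapping (B, T-1, D) -> (B, T-1, D).
    """
    pred_video = fmri_to_video(fmri_seq[:, :-1])
    pred_fmri = video_to_fmri(video_seq[:, :-1])
    return (F.mse_loss(pred_video, video_seq[:, 1:]) +
            F.mse_loss(pred_fmri, fmri_seq[:, 1:]))

# Example with hypothetical shapes: 8 paired clips, 5 time steps, 256-dim features.
fmri, video = torch.randn(8, 5, 256), torch.randn(8, 5, 256)
loss = info_nce(fmri.mean(1), video.mean(1)) + bidirectional_prediction_loss(
    fmri, video, torch.nn.Linear(256, 256), torch.nn.Linear(256, 256))
```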
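For the fine-grained stage, the following sketch pairs a standard VQ-VAE-style exponential-moving-average codebook with an input-dependent gate that adaptively weights the two modalities before quantization. This is only a generic reading of "enhanced vector quantization" and "dynamic multi-modal fusion with adaptive weighting"; the class and function names are hypothetical and do not reproduce the paper's DynaSyncMM-EMA.

```python
# Minimal sketch (assumed, generic form): EMA-updated codebook plus an
# adaptive fusion gate over the two modality streams.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EMAQuantizer(nn.Module):
    def __init__(self, num_codes: int = 512, dim: int = 256, decay: float = 0.99):
        super().__init__()
        self.decay = decay
        self.register_buffer("codebook", torch.randn(num_codes, dim))
        self.register_buffer("cluster_size", torch.zeros(num_codes))
        self.register_buffer("ema_embed", self.codebook.clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, D) fused features; assign each vector to its nearest code.
        dist = torch.cdist(x, self.codebook)                  # (N, K) distances
        idx = dist.argmin(dim=-1)
        onehot = F.one_hot(idx, self.codebook.size(0)).type_as(x)
        if self.training:
            with torch.no_grad():
                # Exponential-moving-average updates of code usage and code vectors.
                self.cluster_size.mul_(self.decay).add_(onehot.sum(0), alpha=1 - self.decay)
                self.ema_embed.mul_(self.decay).add_(onehot.t() @ x, alpha=1 - self.decay)
                self.codebook.copy_(self.ema_embed / (self.cluster_size.unsqueeze(1) + 1e-5))
        quantized = self.codebook[idx]
        # Straight-through estimator so gradients still reach the encoder.
        return x + (quantized - x).detach()

def adaptive_fuse(fmri: torch.Tensor, video: torch.Tensor, gate: nn.Module) -> torch.Tensor:
    """Weight the two modalities with a learned, input-dependent gate in [0, 1]."""
    w = torch.sigmoid(gate(torch.cat([fmri, video], dim=-1)))  # (N, 1) per-sample weight
    return w * fmri + (1 - w) * video

# Example with hypothetical shapes: fuse 256-dim features, then quantize.
gate = nn.Linear(512, 1)
vq = EMAQuantizer(num_codes=512, dim=256)
codes = vq(adaptive_fuse(torch.randn(32, 256), torch.randn(32, 256), gate))
```

EMA codebook updates are a common alternative to a codebook loss term because they keep code vectors close to the running mean of their assigned features without extra gradient terms; whether NeuroAlign's "enhanced" quantizer follows this recipe is an assumption here.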