Self-supervised fMRI foundation models have shown promising transfer performance, yet most rely on predefined region-level parcellations that discard fine-grained voxel information and introduce atlas-dependent biases. We propose Omni-fMRI, an atlas-free foundation model that operates directly on voxel-level signals. To enable scalable pretraining on 49,497 fMRI sessions across 9 datasets, Omni-fMRI introduces a dynamic patching mechanism that substantially reduces computational cost while preserving informative spatial structure. To support reproducibility and fair comparison, we establish a comprehensive benchmark suite spanning 11 datasets and a diverse set of resting-state and task-based fMRI evaluations. Experimental results demonstrate that Omni-fMRI consistently outperforms existing foundation models, providing a scalable and reproducible framework for atlas-free brain representation learning. Code and logs are publicly available.
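The abstract names a dynamic patching mechanism but does not describe how it works. As an illustrative sketch only, not the paper's actual algorithm, the following shows one plausible form such atlas-free patching could take: a variance-driven octree that keeps high-signal regions at fine spatial resolution while merging flat background into coarse patches. The function name `dynamic_patches` and the parameters `min_size` and `var_thresh` are hypothetical, introduced purely for illustration.

```python
# Hedged sketch: one plausible variance-driven dynamic-patching scheme.
# The real Omni-fMRI mechanism is unspecified in the abstract.
import numpy as np

def dynamic_patches(vol4d, min_size=4, var_thresh=1.0):
    """Recursively split a (X, Y, Z, T) fMRI volume into variable-size
    cubic patches: a block is split further if any voxel inside it has
    high temporal variance, otherwise it stays coarse. Returns a list of
    (bounds, mean_timeseries) pairs, one per patch token."""
    X, Y, Z, T = vol4d.shape
    voxel_var = vol4d.var(axis=-1)  # temporal variance per voxel
    patches = []

    def split(x0, x1, y0, y1, z0, z1):
        block_var = voxel_var[x0:x1, y0:y1, z0:z1]
        size = min(x1 - x0, y1 - y0, z1 - z0)
        # Split only if the block contains informative voxels AND is divisible.
        if size > min_size and block_var.max() > var_thresh:
            mx, my, mz = (x0 + x1) // 2, (y0 + y1) // 2, (z0 + z1) // 2
            for xa, xb in ((x0, mx), (mx, x1)):
                for ya, yb in ((y0, my), (my, y1)):
                    for za, zb in ((z0, mz), (mz, z1)):
                        split(xa, xb, ya, yb, za, zb)
        else:
            # Collapse the block into a single token (mean time series).
            token = vol4d[x0:x1, y0:y1, z0:z1].reshape(-1, T).mean(axis=0)
            patches.append(((x0, x1, y0, y1, z0, z1), token))

    split(0, X, 0, Y, 0, Z)
    return patches

# Toy example: 32^3 volume, 50 timepoints, with one "active" high-variance cube.
rng = np.random.default_rng(0)
vol = rng.normal(0, 0.5, size=(32, 32, 32, 50))
vol[8:16, 8:16, 8:16] += rng.normal(0, 3.0, size=(8, 8, 8, 50))
tokens = dynamic_patches(vol, min_size=4, var_thresh=1.0)
print(f"{len(tokens)} patch tokens vs {32**3} voxels")
```

On this toy volume the scheme produces a few dozen tokens instead of 32³ voxel-level inputs, illustrating the kind of cost reduction with preserved local structure that the abstract claims, though the actual mechanism used by Omni-fMRI may differ substantially.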