Data-hungry neuro-AI modelling requires ever larger neuroimaging datasets. CNeuroMod-THINGS meets this need by capturing neural representations of a broad set of semantic concepts, using well-characterized images, in a new densely sampled, large-scale fMRI dataset. Importantly, CNeuroMod-THINGS exploits synergies between two existing projects: the THINGS initiative (THINGS) and the Courtois Project on Neural Modelling (CNeuroMod). THINGS has developed a common set of thoroughly annotated images that broadly sample natural and man-made objects, and these images are being used to acquire a growing collection of large-scale, multimodal neural responses. Meanwhile, CNeuroMod is acquiring hundreds of hours of fMRI data from a core set of participants performing controlled and naturalistic tasks, including visual tasks such as movie watching and videogame playing. For CNeuroMod-THINGS, four CNeuroMod participants each completed 33–36 sessions of a continuous recognition paradigm using approximately 4,000 images from the THINGS stimulus set, spanning 720 categories. We report behavioural and neuroimaging metrics that demonstrate the quality of the data. By bridging these large existing resources, CNeuroMod-THINGS expands our capacity to model broad slices of the human visual experience.