A central goal in many brain studies is the identification of the brain regions that are activated during an observation window, which may correspond to a motor task, a stimulus, or simply a resting state. While functional MRI is currently the most commonly employed modality for such tasks, methods based on the electromagnetic activity of the brain are valuable alternatives because of their excellent temporal resolution and because the measured signals are directly related to brain activation rather than to a secondary effect such as the hemodynamic response. In this work we focus on the MEG modality, investigating the performance of a recently proposed Bayesian dictionary learning (BDL) algorithm for brain region identification. The partitioning of the source space into the 148 regions of interest (ROIs) of the Destrieux atlas parcellation provides a natural definition of the subdictionaries required by the BDL algorithm. We design a simulation protocol in which a small, randomly selected patch in each ROI is activated, the corresponding MEG signal is computed, and the inverse problem of active brain region identification is solved with the BDL algorithm. The BDL algorithm consists of two phases: the first comprises dictionary compression and a Bayesian analysis of the compression error, while the second performs dictionary coding with a deflated dictionary built from the output of the first phase; both steps rely on Bayesian sparsity-promoting computations. To assess performance, we give a probabilistic interpretation of the confusion matrix and consider different impurity measures for a multi-class classifier.
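The probabilistic reading of the confusion matrix and the impurity measures mentioned above can be made concrete with a short sketch. Assuming row i of the confusion matrix holds the counts of predicted labels for samples whose true class is i, normalizing each row yields an empirical conditional distribution P(predicted = j | true = i), to which standard multi-class impurity measures such as the Gini index and Shannon entropy can be applied. The toy counts and function names below are illustrative, not taken from the paper.

```python
import numpy as np

def row_normalize(C):
    """Interpret row i of the confusion matrix C as the empirical
    conditional distribution P(predicted = j | true class = i)."""
    return C / C.sum(axis=1, keepdims=True)

def gini_impurity(p):
    """Gini impurity of a discrete distribution p: 1 - sum_j p_j^2.
    Zero iff all mass sits on a single class (perfect classification)."""
    return 1.0 - np.sum(p ** 2)

def entropy_impurity(p):
    """Shannon entropy (in nats) of a discrete distribution p,
    with the convention 0 * log 0 = 0."""
    q = p[p > 0]
    return -np.sum(q * np.log(q))

# Hypothetical 3-class confusion matrix (rows = true class, columns = predicted).
C = np.array([[8, 1, 1],
              [2, 7, 1],
              [0, 1, 9]], dtype=float)

P = row_normalize(C)
per_class_gini = np.array([gini_impurity(P[i]) for i in range(P.shape[0])])
per_class_entropy = np.array([entropy_impurity(P[i]) for i in range(P.shape[0])])
```

A class whose row distribution concentrates on the diagonal entry has impurity near zero, so averaging the per-class impurities summarizes how cleanly the classifier separates the regions.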