The mapping from sound to neural activity that underlies hearing is highly non-linear. The first few stages of this mapping in the cochlea have been modelled successfully, with biophysical models built by hand and, more recently, with DNN models trained on datasets simulated by biophysical models. Modelling the auditory brain has been a challenge because central auditory processing is too complex for models to be built by hand, and datasets for training DNN models directly have not been available. Recent work has taken advantage of large-scale, high-resolution neural recordings from the auditory midbrain to build a successful DNN model of normal hearing. But this model assumes that auditory processing is the same in all brains, and therefore it cannot capture the widely varying effects of hearing loss. We propose a novel variational-conditional model that learns to encode the space of hearing loss directly from recordings of neural activity in the auditory midbrain of healthy and noise-exposed animals. With hearing loss parametrised by only 6 free parameters per animal, our model accurately predicts 62% of the explainable variance in neural responses from normal-hearing animals and 68% for hearing-impaired animals, within a few percentage points of state-of-the-art animal-specific models. We demonstrate that the model can be used to simulate realistic activity from out-of-sample animals by fitting only the learned conditioning parameters with Bayesian optimisation, achieving a cross-entropy loss within 2% of the optimum in 15-30 iterations. Including more animals in the training data slightly improved performance on unseen animals. This model will enable the future development of parametrised hearing loss compensation models trained to directly restore normal neural coding in hearing-impaired brains, which can be fitted quickly for a new user by human-in-the-loop optimisation.
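To make the fitting procedure concrete, below is a minimal sketch (not the authors' code) of adapting the 6 learned conditioning parameters of a frozen conditional model to a new animal with Bayesian optimisation, as described above. The function `evaluate_loss`, the parameter bounds, and the skopt-based optimiser are all assumptions for illustration; in the real pipeline the objective would be the model's cross-entropy against the new animal's recorded midbrain responses.

```python
# Hedged sketch: fit only the 6 conditioning parameters for an unseen animal.
# `evaluate_loss` is a hypothetical stand-in for the frozen model's
# cross-entropy on held-out neural recordings; here it is a smooth toy
# objective so the example runs standalone.
import numpy as np
from skopt import gp_minimize  # Gaussian-process Bayesian optimisation

rng = np.random.default_rng(0)
true_theta = rng.uniform(-1.0, 1.0, size=6)  # unknown hearing-loss embedding

def evaluate_loss(theta):
    # Placeholder objective: distance to the (unknown) true embedding.
    return float(np.sum((np.asarray(theta) - true_theta) ** 2))

# One search dimension per conditioning parameter (bounds are assumptions).
space = [(-2.0, 2.0)] * 6

# The abstract reports near-optimal loss in 15-30 iterations; we use 30.
result = gp_minimize(evaluate_loss, space, n_calls=30, random_state=0)
print("fitted conditioning parameters:", np.round(result.x, 3))
print("loss at fitted optimum:", result.fun)
```

In the human-in-the-loop setting envisaged for hearing loss compensation, the same loop would replace `evaluate_loss` with listener feedback, which is what makes the low iteration count important.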