In this paper, we present a unified framework for various bio-inspired models to better understand their structural and functional differences. We show that liquid-capacitance-extended models lead to interpretable behavior even in dense, all-to-all recurrent neural network (RNN) policies. We further demonstrate that incorporating chemical synapses improves interpretability and that combining chemical synapses with synaptic activation yields the most accurate and interpretable RNN models. To assess the accuracy and interpretability of these RNN policies, we consider the challenging lane-keeping control task and evaluate performance across multiple metrics, including turn-weighted validation loss, neural activity during driving, absolute correlation between neural activity and road trajectory, saliency maps of the networks' attention, and the robustness of their saliency maps measured by the structural similarity index.