Going beyond 'dendritic democracy', we introduce a 'democracy of local processors', termed Cooperator. Here we compare the capabilities of Cooperator and Transformer when used in permutation-invariant neural networks for reinforcement learning (RL); Transformers underpin machine-learning systems such as ChatGPT. Transformers are based on the long-standing conception of integrate-and-fire 'point' neurons, whereas Cooperator is inspired by recent neurobiological breakthroughs suggesting that the cellular foundations of mental life depend on context-sensitive pyramidal neurons in the neocortex, which have two functionally distinct points of integration. We show that, when used for RL, an algorithm based on Cooperator learns far more quickly than one based on Transformer, even with the same number of parameters.
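The contrast between the two neuron models can be sketched as follows. This is a toy illustration only, not the paper's actual transfer function: a 'point' neuron takes one weighted sum of all its inputs, while a hypothetical two-point unit keeps feedforward (receptive-field) and contextual inputs at separate integration sites, with context acting multiplicatively to amplify or attenuate the feedforward drive.

```python
import numpy as np

def point_neuron(inputs, weights):
    # Classic integrate-and-fire 'point' neuron: a single weighted sum
    # of all inputs passed through a nonlinearity.
    return np.tanh(inputs @ weights)

def two_point_neuron(ff_inputs, ff_weights, ctx_inputs, ctx_weights):
    # Toy two-point unit (illustrative): the receptive-field drive R
    # determines *what* the unit signals; the contextual drive C only
    # modulates *how strongly*, so context alone cannot create output.
    r = ff_inputs @ ff_weights    # basal/feedforward integration site
    c = ctx_inputs @ ctx_weights  # apical/contextual integration site
    return np.tanh(r * (1.0 + np.tanh(c)))  # multiplicative modulation

rng = np.random.default_rng(0)
x, ctx = rng.normal(size=8), rng.normal(size=8)
w_ff, w_ctx = rng.normal(size=8), rng.normal(size=8)

print(point_neuron(x, w_ff))
print(two_point_neuron(x, w_ff, ctx, w_ctx))
# With zero feedforward drive, the two-point unit stays silent
# regardless of context:
print(two_point_neuron(np.zeros(8), w_ff, ctx, w_ctx))
```

The defining property of the two-point form, under this simplified modulation scheme, is that context gates rather than drives: with no receptive-field input the output is zero no matter how strong the contextual signal is, which a single weighted sum cannot express.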