Going beyond 'dendritic democracy', we introduce a 'democracy of local processors', termed Cooperator. Here we compare its capabilities, when used in permutation-invariant neural networks for reinforcement learning (RL), with those of machine learning algorithms based on Transformers, such as ChatGPT. Transformers are based on the long-standing conception of integrate-and-fire 'point' neurons, whereas Cooperator is inspired by recent neurobiological breakthroughs suggesting that the cellular foundations of mental life depend on context-sensitive pyramidal neurons in the neocortex, which have two functionally distinct points of integration. We show that, when used for RL, an algorithm based on Cooperator learns far quicker than one based on Transformers, even though both have the same number of parameters.
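To make the contrast concrete, the following is a minimal, illustrative sketch, not the paper's actual implementation, of a conventional integrate-and-fire 'point' unit versus a hypothetical two-point, context-sensitive unit in which a separate contextual input modulates the feedforward (receptive-field) drive. The multiplicative sigmoid-gating form and all function names here are assumptions chosen for illustration only.

```python
import numpy as np

def point_unit(x, w, b):
    """Conventional 'point' neuron: all inputs are summed at a single site."""
    return np.tanh(w @ x + b)

def two_point_unit(x_ff, w_ff, x_ctx, w_ctx, b):
    """Hypothetical two-point unit (illustrative assumption): a distinct
    contextual site gates how strongly the feedforward drive is transmitted."""
    r = w_ff @ x_ff + b               # feedforward (receptive-field) drive
    c = w_ctx @ x_ctx                 # contextual drive from a separate site
    gate = 1.0 / (1.0 + np.exp(-c))   # sigmoid gate in [0, 1]
    return np.tanh(r) * gate          # context amplifies or suppresses output

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w = rng.standard_normal(4)
print(point_unit(x, w, 0.0))
print(two_point_unit(x, w, x, -w, 0.0))
```

The key structural difference is that the point unit collapses all inputs into one sum, whereas the two-point unit keeps the contextual pathway separate and lets it multiplicatively modulate, rather than add to, the feedforward signal.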