The field of artificial intelligence faces significant challenges in achieving both biological plausibility and computational efficiency, particularly in visual learning tasks. Current artificial neural networks, such as convolutional neural networks, rely on techniques like backpropagation and weight sharing that do not align with the brain's natural information-processing methods. To address these issues, we propose the Memory Network, a biologically inspired model that avoids backpropagation and convolutions and learns in a single pass. This approach enables rapid, efficient learning that mimics the brain's ability to adapt quickly from minimal exposure to data. Our experiments demonstrate that the Memory Network achieves efficient and biologically plausible learning, with strong performance on simpler datasets such as MNIST. However, further refinement is needed for the model to handle more complex datasets such as CIFAR-10, highlighting the need for new algorithms and techniques that closely align with biological processes while maintaining computational efficiency.