Deep learning has revolutionized modern society but faces growing energy and latency constraints. Deep physical neural networks (PNNs) are interconnected computing systems that directly exploit analog dynamics for energy-efficient, ultrafast AI execution. Realizing this potential, however, requires universal training methods tailored to physical intricacies. Here, we present the Physical Information Bottleneck (PIB), a general and efficient framework that integrates information theory and local learning, enabling deep PNNs to learn under arbitrary physical dynamics. By allocating matrix-based information bottlenecks to each unit, we demonstrate supervised, unsupervised, and reinforcement learning across electronic memristive chips and optical computing platforms. PIB also adapts to severe hardware faults and allows for parallel training via geographically distributed resources. Bypassing auxiliary digital models and contrastive measurements, PIB recasts PNN training as an intrinsic, scalable information-theoretic process compatible with diverse physical substrates.
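For context on the principle the framework's name invokes: the classical information bottleneck (Tishby, Pereira, and Bialek) seeks a representation T of an input X that stays predictive of a target Y while discarding as much of X as possible, by minimizing the Lagrangian

\min_{p(t \mid x)} \; I(X; T) \;-\; \beta\, I(T; Y),

where I(\cdot\,;\cdot) denotes mutual information and \beta > 0 sets the compression-prediction trade-off. The matrix-based, per-unit bottlenecks that PIB allocates are a hardware-oriented variant of this idea; their exact form is defined in the paper itself, so the expression above should be read only as the standard objective the framework builds on, not as PIB's training rule.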