Most contemporary neural learning systems rely on epoch-based optimization and repeated access to historical data, implicitly assuming reversible computation. In contrast, real-world environments often present information as irreversible streams, where inputs cannot be replayed or revisited. Under such conditions, conventional architectures degrade into reactive filters lacking long-horizon coherence. This paper introduces Stream Neural Networks (StNN), an execution paradigm designed for irreversible input streams. StNN operates through a stream-native execution algorithm, the Stream Network Algorithm (SNA), whose fundamental unit is the stream neuron. Each stream neuron maintains a persistent temporal state that evolves continuously across inputs. We formally establish three structural guarantees: (1) stateless mappings collapse under irreversibility and cannot encode temporal dependencies; (2) persistent state dynamics remain bounded under mild activation constraints; and (3) the state transition operator is contractive for λ < 1, ensuring stable long-horizon execution. Empirical phase-space analysis and continuous tracking experiments validate these theoretical results. The execution principles introduced in this work define a minimal substrate for neural computation under irreversible streaming constraints.
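The boundedness and contraction claims above can be illustrated with a minimal numerical sketch. The update rule below is an assumption for illustration, not the paper's exact SNA operator: a hypothetical stream neuron with persistent state `h` updated as `h ← λ·h + (1−λ)·tanh(w·x)` over a single irreversible pass of the input stream. For λ < 1 this map is contractive in `h`, so two trajectories fed the same stream converge regardless of their initial states, and the bounded activation keeps the state bounded.

```python
import numpy as np

def stream_neuron(stream, lam=0.9, w=0.5, h0=0.0):
    """Hypothetical stream neuron (illustrative, not the paper's SNA).

    Processes the stream in one irreversible pass: each input is seen
    once and never revisited. With lam < 1 and a bounded activation,
    |h| stays within max(|h0|, 1) for all t.
    """
    h = h0
    for x in stream:
        h = lam * h + (1 - lam) * np.tanh(w * x)
    return h

rng = np.random.default_rng(0)
stream = rng.normal(size=200)

# Two trajectories with very different initial states, same stream.
h_a = stream_neuron(stream, h0=+1.0)
h_b = stream_neuron(stream, h0=-1.0)

# Contraction: the gap shrinks by a factor lam per step, so after 200
# inputs |h_a - h_b| <= 0.9**200 * 2, which is negligible.
print(abs(h_a - h_b))
```

Because the `tanh` terms are identical across the two runs, the state difference contracts by exactly λ at each step; this is the single-neuron analogue of the long-horizon stability guarantee stated in the abstract.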