Recent deep learning models increasingly rely on depth without structural guarantees on the validity of intermediate representations, rendering early stopping and adaptive computation ill-posed. We address this limitation by formulating a structural requirement on state-space models, namely scale-consistent latent dynamics across iterative refinement, and derive the Fractal of Stationary Transformations (FROST), which enforces a self-similar representation manifold through a fractal inductive bias. Under this geometry, intermediate states correspond to different resolutions of a shared representation, and we provide a geometric analysis establishing contraction and stable convergence across iterations. As a consequence of this scale-consistent structure, halting naturally admits a ranking-based formulation driven by intrinsic feature quality rather than extrinsic objectives. Controlled experiments on ImageNet-100 empirically verify the predicted scale-consistent behavior, showing that adaptive efficiency emerges from the aligned latent geometry.
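To make the contraction claim concrete, the following is a minimal formalization of the property such a geometric analysis would establish; the refinement map F, contraction factor \gamma, and fixed point x^* are illustrative notation, not the paper's stated theorem. If the iterative refinement x_{t+1} = F(x_t) satisfies

\[
\|F(x) - F(y)\| \le \gamma \,\|x - y\|, \qquad 0 \le \gamma < 1,
\]

then by the Banach fixed-point theorem the iterates converge to a unique fixed point x^* at a geometric rate,

\[
\|x_t - x^*\| \le \gamma^{t}\,\|x_0 - x^*\|,
\]

which is the sense in which intermediate states remain stable, progressively refined views of a shared representation at every depth.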
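As a sketch of how halting by intrinsic feature quality could be realized, the Python below iterates a refinement map and stops once the newest state no longer outranks its predecessor by a margin; refine_step, quality_score, frost_halt, and the tolerance are hypothetical stand-ins for illustration, not FROST's actual criterion.

import torch

def frost_halt(x0, refine_step, quality_score, max_iters=12, tol=1e-2):
    """Refine a latent state iteratively; halt by intrinsic quality ranking.

    refine_step   -- hypothetical refinement map x_{t+1} = F(x_t)
    quality_score -- hypothetical intrinsic quality functional (higher is better)
    Halts when the new state fails to outrank the previous one by at least
    tol; no extrinsic loss, label, or auxiliary halting head is consulted.
    """
    x, best = x0, quality_score(x0)
    for t in range(max_iters):
        x_next = refine_step(x)
        score = quality_score(x_next)
        if score - best <= tol:   # no longer ranks meaningfully higher: stop
            return x_next, t + 1
        x, best = x_next, score
    return x, max_iters

# Toy usage with stand-in components (illustration only): a contractive
# map toward the origin, with negative norm as a proxy quality score.
step = lambda x: 0.5 * x                        # contraction with factor 0.5
score = lambda x: -torch.linalg.norm(x).item()  # rises as the state stabilizes
state, iters_used = frost_halt(torch.randn(16), step, score)
print(f"halted after {iters_used} iterations")

Because the stand-in map is contractive, quality gains shrink geometrically and the rule halts well before max_iters, mirroring how adaptive efficiency would fall out of the latent geometry rather than a trained halting objective.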