While modern neural architectures typically generalize via smooth interpolation, they lack the inductive biases required to uncover the algebraic structures essential for systematic generalization. We present the first theoretical analysis of HyperCube, a differentiable tensor factorization architecture designed to bridge this gap. This work establishes an intrinsic geometric property of the HyperCube formulation: the architecture mediates a fundamental equivalence between geometric alignment and algebraic structure. Independent of the global optimization landscape, we show that the geometric alignment condition imposes rigid algebraic constraints, proving that the feasible collinear manifold is non-empty if and only if the target operation is isotopic to a group. Within this manifold, we characterize the objective as a rank-maximizing potential that unconditionally drives the factors toward full-rank, unitary representations. Finally, we propose the Collinearity Dominance mechanism to link these structural results to the global landscape. Supported by empirical scaling laws, we establish that global minima are achieved exclusively by unitary regular representations of group isotopes. These results formalize the HyperCube objective as a differentiable proxy for associativity, demonstrating how rigid geometric constraints enable the discovery of latent algebraic symmetry.
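To make the central claim concrete, the following is a minimal, hypothetical sketch (the paper's exact architecture and normalization may differ): a binary operation table is one-hot encoded as a tensor `T[i,j,k]`, and a three-factor model `M[i,j,k] = sum_s A[i,s] B[j,s] C[k,s]` is evaluated at factors built from the group's characters, i.e. its unitary (regular) representation. For the group Z/2Z, these character-based factors reconstruct the operation tensor exactly, illustrating how a unitary representation attains zero reconstruction loss. The names `T`, `A`, `B`, `C`, and `F` are illustrative, not the paper's notation.

```python
import numpy as np

# One-hot Cayley tensor of Z/2Z addition: T[i, j, k] = 1 iff (i + j) % 2 == k.
n = 2
T = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        T[i, j, (i + j) % n] = 1.0

# Character (Fourier) matrix of Z/2Z: chi_s(i) = (-1)^(s * i).
# Its rows/columns are the group's one-dimensional unitary representations.
F = np.array([[1.0, 1.0],
              [1.0, -1.0]])

A = F          # factor for the left operand
B = F          # factor for the right operand
C = F / n      # factor for the result, carrying the 1/n normalization

# Three-factor reconstruction M[i, j, k] = sum_s A[i, s] * B[j, s] * C[k, s].
# M[i, j, k] = (1 + (-1)^(i + j + k)) / 2, which equals T exactly.
M = np.einsum('is,js,ks->ijk', A, B, C)
loss = np.sum((T - M) ** 2)
print(loss)  # 0.0
```

Any non-unitary choice of factors generically leaves a nonzero residual here, which is the intuition behind reading the objective as a rank-maximizing potential favoring unitary representations.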