Model reduction is essential for real-time simulation of deformable objects. Linear techniques such as PCA provide structured and predictable behavior, but their limited expressiveness restricts accuracy under large or nonlinear deformations. Nonlinear model reduction with neural networks offers richer representations and higher compression; however, without structural constraints, the learned mapping from latent coordinates to displacements often generalizes poorly beyond the training distribution. We present an odd difference-of-convex (DC) neural formulation that bridges linear and nonlinear model reduction. Our goal is to obtain a latent space that behaves reliably under unseen load magnitudes and directions. To improve extrapolation in load magnitude, we introduce convexity into the decoder to suppress oscillatory responses. Yet convexity alone cannot capture the odd symmetry exhibited by many symmetric systems, which is crucial for generalizing to reversed force directions. We therefore adopt a DC formulation that preserves the stabilizing effect of convexity while explicitly enforcing odd symmetry. Practically, we realize this structure with an input-convex neural network (ICNN) augmented by symmetry constraints. Across challenging deformation scenarios with varying magnitudes and reversed load directions, our method demonstrates stronger generalization than unconstrained nonlinear reductions while maintaining compact latent spaces and real-time performance. Our DC formulation extends to both mesh-based and neural-field reductions, demonstrating applicability across multiple classes of neural nonlinear model reduction.
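The construction named in the abstract can be illustrated with a minimal sketch: take a convex function g realized by an input-convex network (non-negative hidden-combination weights, convex non-decreasing activations) and form the odd decoder f(z) = g(z) - g(-z). This is a difference of two convex functions and satisfies f(-z) = -f(z) by construction. All class names, layer sizes, and weight layouts below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # Convex, non-decreasing activation; preserves convexity under
    # non-negative combinations of convex arguments.
    return np.logaddexp(0.0, x)

class ICNN:
    """Toy input-convex map g: R^d -> R^m (hypothetical layout).

    Each output component is convex in z: softplus(affine(z)) is convex,
    a non-negative combination of convex functions is convex, and adding
    an affine skip term keeps it convex.
    """
    def __init__(self, d, hidden, m):
        self.Wz0 = rng.normal(size=(hidden, d)) * 0.5   # affine input weights (any sign)
        self.Wz1 = rng.normal(size=(m, d)) * 0.5        # affine skip weights (any sign)
        self.Wh1 = np.abs(rng.normal(size=(m, hidden))) # hidden weights kept non-negative

    def __call__(self, z):
        h = softplus(self.Wz0 @ z)          # convex in z, componentwise
        return self.Wh1 @ h + self.Wz1 @ z  # still convex, componentwise

def odd_dc_decoder(g, z):
    # Difference of convex functions; odd by construction:
    # f(-z) = g(-z) - g(z) = -f(z).
    return g(z) - g(-z)
```

The odd symmetry means a reversed latent code maps to the reversed displacement, which is the property the abstract ties to generalization under reversed load directions; plain convexity cannot provide it, since a nonconstant convex function is never odd.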