The eigenfunctions of the Laplace operator are essential in mathematical physics, engineering, and geometry processing. Typically, these eigenfunctions are computed by discretizing the domain and performing eigendecomposition, tying the results to a specific mesh. However, this approach is unsuitable for continuously parameterized shapes. We propose a novel representation for eigenfunctions in continuously parameterized shape spaces, where eigenfunctions are spatial fields with continuous dependence on shape parameters, defined by minimal Dirichlet energy, unit norm, and mutual orthogonality. We implement this representation with multilayer perceptrons trained as neural fields, mapping shape parameters and domain positions to eigenfunction values. A unique challenge is enforcing mutual orthogonality with respect to causality, where the causal ordering varies across the shape space. Our training method therefore requires three interwoven concepts: (1) learning $n$ eigenfunctions concurrently by minimizing Dirichlet energy with unit-norm constraints; (2) filtering gradients during backpropagation to enforce causal orthogonality, preventing earlier eigenfunctions from being influenced by later ones; and (3) dynamically sorting the causal ordering based on eigenvalues to track eigenvalue-curve crossovers. We demonstrate our method on problems such as shape family analysis, predicting eigenfunctions for incomplete shapes, interactive shape manipulation, and computing higher-dimensional eigenfunctions, on all of which traditional methods fall short.
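To make the three training ideas concrete, the following is a minimal, illustrative sketch in PyTorch. It is not the authors' implementation: the `EigenField` network, the penalty weights `lam_norm` and `lam_orth`, and the Monte Carlo estimation of domain integrals by uniform sampling are assumptions introduced here purely for illustration.

```python
# Illustrative sketch (hypothetical code, not the paper's implementation) of:
# (1) concurrent Dirichlet-energy minimization with unit-norm penalties,
# (2) gradient filtering for causal orthogonality via detaching earlier functions,
# (3) dynamic re-sorting of the causal ordering by current eigenvalue estimates.
import torch
import torch.nn as nn

class EigenField(nn.Module):
    """MLP mapping (shape parameter, domain position) -> n eigenfunction values."""
    def __init__(self, shape_dim, pos_dim, n_eig, width=256, depth=5):
        super().__init__()
        layers, d = [], shape_dim + pos_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Softplus()]  # smooth activation so spatial gradients are well defined
            d = width
        layers += [nn.Linear(d, n_eig)]
        self.net = nn.Sequential(*layers)

    def forward(self, p, x):
        return self.net(torch.cat([p, x], dim=-1))  # (batch, n_eig)

def training_step(model, p, x, lam_norm=10.0, lam_orth=10.0):
    x = x.requires_grad_(True)
    f = model(p, x)                                   # (B, n)
    B, n = f.shape

    # (1) Dirichlet energy of each eigenfunction, estimated by Monte Carlo
    #     over uniformly sampled domain points x.
    energies = []
    for i in range(n):
        g, = torch.autograd.grad(f[:, i].sum(), x, create_graph=True)
        energies.append((g ** 2).sum(dim=-1).mean())
    energy = torch.stack(energies)                    # (n,) rough eigenvalue estimates

    # (3) Dynamically sort the causal ordering by current eigenvalue estimates,
    #     so the ordering can follow eigenvalue-curve crossovers across shapes.
    order = torch.argsort(energy)
    f_sorted = f[:, order]
    energy_sorted = energy[order]

    # Unit-norm penalty for every eigenfunction.
    norm_pen = ((f_sorted ** 2).mean(dim=0) - 1.0) ** 2

    # (2) Causal orthogonality: each function is penalized for overlap with the
    #     *detached* earlier ones, so this penalty sends gradients only through
    #     the later eigenfunction, never back into the earlier ones.
    orth_pen = f.new_zeros(())
    for i in range(1, n):
        earlier = f_sorted[:, :i].detach()
        overlap = (f_sorted[:, i:i+1] * earlier).mean(dim=0)
        orth_pen = orth_pen + (overlap ** 2).sum()

    loss = energy_sorted.sum() + lam_norm * norm_pen.sum() + lam_orth * orth_pen
    return loss, energy_sorted.detach()
```

In a full training loop one would repeatedly sample shape parameters `p` and domain points `x`, call `training_step`, and take an optimizer step; boundary conditions and the choice of sampling measure are omitted from this sketch.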