Conditional Neural Fields (CNFs) are increasingly being leveraged as continuous signal representations, by associating each data sample with a latent variable that conditions a shared backbone Neural Field (NeF) to reconstruct the sample. However, existing CNF architectures face limitations when using these latents in downstream tasks requiring fine-grained geometric reasoning, such as classification and segmentation. We posit that this results from a lack of explicit modelling of geometric information (e.g. locality in the signal or the orientation of a feature) in the latent space of CNFs. As such, we propose Equivariant Neural Fields (ENFs), a novel CNF architecture that uses geometry-informed cross-attention to condition the NeF on a geometric variable, a latent point cloud of features, enabling an equivariant decoding from latent to field. We show that this approach induces a steerability property by which both field and latent are grounded in geometry and amenable to transformation laws: if the field transforms, the latent representation transforms accordingly, and vice versa. Crucially, this equivariance relation ensures that the latent is capable of (1) representing geometric patterns faithfully, allowing for geometric reasoning in latent space, and (2) weight-sharing over similar local patterns, allowing for efficient learning over datasets of fields. We validate these properties on a range of tasks including classification, segmentation, forecasting and reconstruction, showing clear improvements over baselines with a geometry-free latent space.
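The steerability property described above can be sketched as a commutation condition between the group action on the field and the group action on the latent. The notation below is an assumption for illustration and is not taken verbatim from the abstract: we write the conditional field as $f_\theta(x; z)$ with latent point cloud $z = \{(p_i, c_i)\}_{i=1}^{N}$ of pose-context pairs, and a group element $g$ acting on poses only.

```latex
% Hedged sketch of the steerability condition; f_\theta, z = \{(p_i, c_i)\},
% and the action g \cdot z are assumed notation, not quoted from the paper.
\begin{equation}
  f_\theta\!\left(g^{-1} x \,;\, z\right)
    = f_\theta\!\left(x \,;\, g \cdot z\right),
  \qquad
  g \cdot z := \{(g\, p_i,\; c_i)\}_{i=1}^{N}.
\end{equation}
```

Read left to right: transforming the reconstructed field (left-hand side) is equivalent to transforming the latent point cloud's poses while leaving the context features untouched (right-hand side), which is what grounds the latent in geometry and makes "if the field transforms, the latent transforms accordingly" precise.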