This paper investigates the possibility of approximating multiple mathematical operations in latent space for expression derivation. To this end, we introduce different multi-operational representation paradigms, modelling mathematical operations as explicit geometric transformations. By leveraging a symbolic engine, we construct a large-scale dataset comprising 1.7M derivation steps stemming from 61K premises and 6 operators, analysing the properties of each paradigm when instantiated with state-of-the-art neural encoders. Specifically, we investigate how different encoding mechanisms can approximate expression manipulation in latent space, exploring the trade-off between learning different operators and specialising within single operations, as well as the ability to support multi-step derivations and out-of-distribution generalisation. Our empirical analysis reveals that the multi-operational paradigm is crucial for disentangling different operators, while discriminating the conclusions for a single operation is achievable in the original expression encoder. Moreover, we show that architectural choices can heavily affect the training dynamics, structural organisation, and generalisation of the latent space, resulting in significant variations across paradigms and classes of encoders.
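To make the geometric-transformation idea concrete, the following is a minimal sketch, not the paper's implementation: each operator is assumed to be a learned affine map acting on the premise embedding, trained so that the transformed vector lands near the embedding of the conclusion. The `MultiOperationalHead` class, the MSE objective, and the 256-dimensional latent space are illustrative assumptions.

```python
# A minimal sketch (assumptions, not the paper's method) of a
# multi-operational latent-space paradigm: one learned geometric
# transformation per operator, applied to the premise embedding.
import torch
import torch.nn as nn

class MultiOperationalHead(nn.Module):
    """One learned affine map per operator, acting on latent vectors."""

    def __init__(self, dim: int, num_operators: int):
        super().__init__()
        # One linear transformation per operator; translations or
        # rotations would be equally plausible geometric choices.
        self.ops = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_operators))

    def forward(self, premise_emb: torch.Tensor, op_id: int) -> torch.Tensor:
        return self.ops[op_id](premise_emb)

# Hypothetical training step: pull the transformed premise embedding
# towards the encoding of the conclusion produced by the symbolic engine.
dim, num_ops = 256, 6
head = MultiOperationalHead(dim, num_ops)
premise_emb = torch.randn(8, dim)     # stand-in for encoder(premise)
conclusion_emb = torch.randn(8, dim)  # stand-in for encoder(conclusion)
predicted = head(premise_emb, op_id=2)
loss = nn.functional.mse_loss(predicted, conclusion_emb)
loss.backward()
```

Sharing one encoder across all operator heads, as above, is what distinguishes the multi-operational paradigm from training a specialised model per operation.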
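The dataset-construction step can be illustrated in the same spirit. The sketch below assumes SymPy as the symbolic engine and an illustrative operator set (arithmetic operations plus differentiation and integration); the abstract does not name the engine or the six operators, so both are assumptions. Recursively applying operators to each conclusion yields the multi-step derivations mentioned above.

```python
# A minimal sketch, assuming SymPy as the symbolic engine, of generating
# (premise, operator, conclusion) derivation steps. The operator set is
# an illustrative assumption, not necessarily the paper's six operators.
import sympy as sp

x, y = sp.symbols("x y")

OPERATORS = {
    "add": lambda e: e + y,
    "sub": lambda e: e - y,
    "mul": lambda e: e * y,
    "div": lambda e: e / y,
    "diff": lambda e: sp.diff(e, x),
    "integrate": lambda e: sp.integrate(e, x),
}

def derive_steps(premise, depth=1):
    """Apply each operator to a premise, recursing on each conclusion to
    produce (premise, operator, conclusion) triples for multi-step chains."""
    steps = []
    if depth == 0:
        return steps
    for name, op in OPERATORS.items():
        conclusion = sp.simplify(op(premise))
        steps.append((premise, name, conclusion))
        steps.extend(derive_steps(conclusion, depth - 1))
    return steps

for premise, op_name, conclusion in derive_steps(sp.sin(x) * x, depth=1):
    print(f"{premise}  --[{op_name}]-->  {conclusion}")
```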