Vector representations have been pivotal in advancing natural language processing (NLP), with prior research on embedding mathematical expressions relying on mathematically equivalent formulations. While effective, these approaches are constrained by the size and diversity of their training data. In this work, we address these limitations by introducing E-Gen, a novel e-graph-based dataset generation scheme that synthesizes large and diverse mathematical expression datasets, surpassing prior methods in both size and operator variety. Leveraging this dataset, we train embedding models using two strategies: (1) generating mathematically equivalent expressions, and (2) contrastive learning to explicitly group equivalent expressions. We evaluate these embeddings on both in-distribution and out-of-distribution mathematical language processing tasks, comparing them against prior methods. Finally, we demonstrate that our embedding-based approach outperforms state-of-the-art large language models (LLMs) on several tasks, underscoring the necessity of optimizing embedding methods for the mathematical data modality. The source code and datasets are available at https://github.com/MLPgroup/E-Gen.