Current neural architectures lack a principled way to handle interchangeable tokens, i.e., symbols that are semantically equivalent yet distinguishable, such as bound variables. As a result, models trained on fixed vocabularies often struggle to generalize to unseen symbols, even when the underlying semantics remain unchanged. We propose a novel Transformer-based mechanism that is provably invariant to the renaming of interchangeable tokens. Our approach employs parallel embedding streams to isolate the contribution of each interchangeable token in the input, combined with an aggregated attention mechanism that enables structured information sharing across streams. Experimental results confirm the theoretical guarantees of our method and demonstrate substantial performance gains on open-vocabulary tasks that require generalization to novel symbols.
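To make the mechanism concrete, the following is a minimal sketch of the two ideas the abstract names: one embedding stream per interchangeable symbol, and a permutation-invariant aggregation that shares information across streams. It is not the paper's reference implementation; all names (ParallelStreams, aggregate, symbol_ids, and the self/other embedding scheme) are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch; illustrative, not the authors' code.
import torch
import torch.nn as nn

class ParallelStreams(nn.Module):
    def __init__(self, n_fixed_vocab: int, d_model: int):
        super().__init__()
        # Ordinary (fixed-vocabulary) tokens get per-token embeddings.
        self.fixed_emb = nn.Embedding(n_fixed_vocab, d_model)
        # Every interchangeable symbol is embedded with the SAME two vectors:
        # "this stream's symbol" vs. "some other interchangeable symbol",
        # so the embedding never depends on the symbol's name.
        self.self_emb = nn.Parameter(torch.randn(d_model))
        self.other_emb = nn.Parameter(torch.randn(d_model))

    def forward(self, tokens: torch.Tensor, symbol_ids: torch.Tensor):
        # tokens:     (seq,) indices into the fixed vocabulary; -1 where an
        #             interchangeable symbol occurs.
        # symbol_ids: (seq,) id of the interchangeable symbol at each slot;
        #             -1 at fixed-vocabulary positions.
        symbols = symbol_ids[symbol_ids >= 0].unique()   # distinct symbols
        base = self.fixed_emb(tokens.clamp(min=0))       # (seq, d_model)
        streams = []
        for s in symbols:  # one stream isolates one symbol's contribution
            x = base.clone()
            x[symbol_ids == s] = self.self_emb
            x[(symbol_ids >= 0) & (symbol_ids != s)] = self.other_emb
            streams.append(x)
        return torch.stack(streams)                      # (n_streams, seq, d_model)

def aggregate(streams: torch.Tensor) -> torch.Tensor:
    # Structured sharing across streams via a mean over the stream axis.
    # Renaming symbols only permutes that axis, which the mean ignores,
    # so the aggregated representation is invariant to renaming.
    return streams + streams.mean(dim=0, keepdim=True)
```

Because all streams share the same self/other embeddings, relabeling the interchangeable symbols merely permutes the stream axis, and any permutation-invariant aggregation (here a mean, standing in for the paper's aggregated attention) yields the same output, which is the invariance the abstract claims.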