We advance the recently proposed neuro-symbolic Differentiable Tree Machine, which learns tree operations by combining transformers with Tensor Product Representations. We analyze the architecture and propose two key improvements. First, we replace the series of distinct transformer layers, one per computation step, with a mixture of experts. The resulting Differentiable Tree Experts model has a constant number of parameters for any number of computation steps, whereas the parameter count of the original Differentiable Tree Machine grows linearly with the number of steps. Given this flexibility in the number of steps, we further propose a new termination algorithm that lets the model choose automatically how many steps to take. The resulting Terminating Differentiable Tree Experts model successfully learns to predict the number of steps without an oracle. It does so while preserving the model's learning capabilities, converging to the optimal number of steps.
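The two ideas above, a fixed pool of experts shared across all computation steps and a learned halting signal, can be illustrated with a minimal sketch. This is not the paper's implementation: the state dimension, the number of experts, the softmax router, and the sigmoid halting head are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical illustration: instead of a distinct transformer layer per step
# (parameters growing linearly with the step count), a fixed pool of experts
# is shared across all steps, so the parameter count stays constant.
d, n_experts = 8, 4
experts = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_experts)]
router_w = rng.standard_normal((d, n_experts)) / np.sqrt(d)  # gating network
halt_w = rng.standard_normal(d) / np.sqrt(d)                 # termination head

def step(h):
    """One computation step: route the state through the shared experts."""
    gates = softmax(h @ router_w)                 # soft expert weights
    return sum(g * (W @ h) for g, W in zip(gates, experts))

def run(h, max_steps=16, threshold=0.5):
    """Apply shared-expert steps until the halting head fires."""
    for t in range(1, max_steps + 1):
        h = step(h)
        p_halt = 1.0 / (1.0 + np.exp(-(halt_w @ h)))  # sigmoid halting prob.
        if p_halt > threshold:
            break                                     # model chose to stop
    return h, t

h, steps_taken = run(rng.standard_normal(d))
print(steps_taken)
```

Because the same experts are reused at every step, running the loop for more steps adds no parameters; only the termination head decides how many steps are actually taken.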