Learning from interpretation transition (LFIT) is a framework for learning rules from observed state transitions. LFIT has been implemented in purely symbolic algorithms, but these are unable to handle noise or to generalize to unobserved transitions. Rule-extraction-based neural network methods suffer from overfitting, while more general implementations that classify rules suffer from combinatorial explosion. In this paper, we introduce a technique that leverages the variable permutation invariance inherent in symbolic domains. Our technique ensures that the permutation and the naming of the variables do not affect the results. We demonstrate the effectiveness and scalability of this method through various experiments. Our code is publicly available at https://github.com/phuayj/delta-lfit-2