Linear temporal logic (LTL) is a specification language for finite sequences (called traces) widely used in program verification, motion planning in robotics, process mining, and many other areas. We consider the problem of learning LTL formulas for classifying traces; despite growing interest from the research community, existing solutions suffer from two limitations: they do not scale beyond small formulas, and they may exhaust computational resources without returning any result. We introduce a new algorithm addressing both issues: it constructs formulas an order of magnitude larger than previous methods, and it is anytime, meaning that in most cases it successfully outputs a formula, albeit possibly not of minimal size. We evaluate the performance of our algorithm using an open-source implementation on publicly available benchmarks.
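To make the classification problem concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of how an LTL formula, evaluated under finite-trace semantics, separates positive traces from negative ones. The tuple-based formula encoding and the `holds` function are assumptions introduced here for illustration only.

```python
# Illustrative sketch: evaluating a few LTL operators over finite traces.
# A trace is a list of sets of atomic propositions holding at each position.

def holds(formula, trace, i=0):
    """Check whether `formula` holds on `trace` at position i.
    Formulas are nested tuples: ("ap", p), ("not", f), ("and", f, g),
    ("X", f), ("F", f), ("G", f), ("U", f, g)."""
    op = formula[0]
    if op == "ap":  # atomic proposition
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "X":   # next (false at the last position on finite traces)
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":   # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":   # globally
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "U":   # until
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op!r}")

# The formula F(b) ("eventually b") classifies these two traces:
phi = ("F", ("ap", "b"))
positive = [{"a"}, {"a"}, {"b"}]
negative = [{"a"}, {"a"}, {"a"}]
print(holds(phi, positive), holds(phi, negative))  # True False
```

A learning algorithm in this setting searches for a small formula, such as `F(b)` above, that holds on every positive trace and on no negative trace.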