Complex Logical Query Answering (CLQA) over incomplete knowledge graphs is a challenging task. Recently, Query Embedding (QE) methods have been proposed to solve CLQA by performing multi-hop logical reasoning. However, most of them consider only the historical query context and ignore future information, so they fail to capture the complex dependencies among the elements of a query. In recent years, the Transformer architecture has shown a strong ability to model long-range dependencies between words, and its bidirectional attention mechanism can overcome this limitation of existing QE methods regarding query context. Still, as a sequence model, the Transformer is difficult to apply directly to complex logical queries whose computation graphs contain branches. To this end, we propose Pathformer, a neural one-point embedding method based on the tree-like computation graph, i.e., the query computation tree. Specifically, Pathformer decomposes the query computation tree into path query sequences at its branches and then uses the Transformer encoder to recursively encode these path query sequences to obtain the final query embedding. This allows Pathformer to fully utilize future context information and explicitly model the complex interactions between the various parts of a path query. Experimental results show that Pathformer outperforms existing competitive neural QE methods, and we find that Pathformer has the potential to be applied to non-one-point embedding spaces.
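
To make the decompose-then-recursively-encode idea concrete, below is a minimal PyTorch-style sketch, not the paper's implementation: the class name `PathformerSketch`, the toy token vocabulary, the `[CLS]`-style readout, and the mean-pooling used to merge branch embeddings are all illustrative assumptions standing in for the paper's learned intersection operator.

```python
# Minimal sketch of Pathformer-style recursive path encoding (illustrative,
# not the authors' code). A query computation tree is either a leaf anchor
# entity or a branch node whose sub-paths are encoded first; each resulting
# embedding then serves as the virtual start of its parent path sequence.
import torch
import torch.nn as nn


class PathformerSketch(nn.Module):
    def __init__(self, num_tokens: int, dim: int = 64, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(num_tokens, dim)        # entity/relation tokens
        self.cls = nn.Parameter(torch.randn(1, 1, dim))   # readout token (assumption)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def encode_path(self, start: torch.Tensor, relations: list[int]) -> torch.Tensor:
        # Encode one path query sequence [start, r1, ..., rk] with bidirectional
        # attention, so every position sees both past and future context.
        rel = self.embed(torch.tensor(relations))               # (k, dim)
        seq = torch.cat([self.cls.squeeze(0), start.unsqueeze(0), rel], dim=0)
        out = self.encoder(seq.unsqueeze(0))                    # (1, k + 2, dim)
        return out[0, 0]                                        # readout embedding

    def encode_tree(self, node: dict) -> torch.Tensor:
        # A node is a leaf {'entity': id} or a branch
        # {'children': [(subtree, [relation ids]), ...]}.
        if 'entity' in node:
            return self.embed(torch.tensor(node['entity']))
        parts = [self.encode_path(self.encode_tree(sub), rels)
                 for sub, rels in node['children']]
        # Mean-pooling is a placeholder for the paper's intersection operator.
        return torch.stack(parts).mean(dim=0)


# Toy 2-branch intersection query: two anchor entities, one relation each.
model = PathformerSketch(num_tokens=10)
tree = {'children': [({'entity': 0}, [5]), ({'entity': 1}, [6])]}
print(model.encode_tree(tree).shape)  # torch.Size([64])
```

The sketch shows only the control flow: branches bottom out at anchor entities, each path query sequence is encoded as a whole so future tokens inform earlier positions, and branch results are merged before re-entering the recursion; in the actual method the merge and the logical connectives would be learned modules rather than a fixed mean.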