One-class learning (OCL) comprises a set of techniques applied to real-world problems with a single class of interest. The usual OCL procedure is to learn a hypersphere that encloses instances of this class and, ideally, repels unseen instances from any other class. Since graph representation learning has succeeded in various fields, several OCL algorithms for graphs have been proposed. These methods may use a two-step strategy, first representing the graph and then classifying its nodes in a second step. End-to-end methods, in contrast, learn the node representations while classifying the nodes in a single learning process. We highlight three main gaps in the literature on OCL for graphs: (i) representations that are not customized for OCL; (ii) the lack of constraints on learning the hypersphere parameters; and (iii) the methods' lack of interpretability and visualization. We propose the One-cLass Graph Autoencoder (OLGA). OLGA is end-to-end and learns representations for the graph nodes while encapsulating the interest instances by combining two loss functions. We propose a new hypersphere loss function to encapsulate the interest instances, and OLGA combines this new hypersphere loss with the graph autoencoder reconstruction loss to improve model learning. OLGA achieved state-of-the-art results, outperforming six other methods, with a statistically significant difference from five of them. Moreover, OLGA learns low-dimensional representations while maintaining classification performance, making its representation learning and results interpretable.
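The combined objective described above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation: we assume a hypersphere term that penalizes interest-class embeddings falling outside a radius around a learned center, plus the standard graph-autoencoder reconstruction term on the adjacency matrix; the weighting parameter `alpha` is hypothetical.

```python
import numpy as np

def hypersphere_loss(z_interest, center, radius):
    # Penalize interest-class embeddings that fall outside the
    # hypersphere of the given radius around the learned center.
    dist2 = np.sum((z_interest - center) ** 2, axis=1)
    return np.mean(np.maximum(0.0, dist2 - radius ** 2))

def reconstruction_loss(adj, z):
    # Standard graph-autoencoder term: reconstruct the adjacency
    # matrix from inner products of node embeddings, sigmoid(Z Z^T),
    # scored with binary cross-entropy against the true edges.
    logits = z @ z.T
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-9
    return -np.mean(adj * np.log(probs + eps)
                    + (1 - adj) * np.log(1 - probs + eps))

def combined_loss(adj, z, interest_idx, center, radius, alpha=0.5):
    # Weighted sum of the two objectives; alpha is a hypothetical
    # trade-off hyperparameter, not taken from the paper.
    return (alpha * hypersphere_loss(z[interest_idx], center, radius)
            + (1 - alpha) * reconstruction_loss(adj, z))

# Toy example: random embeddings for a small graph, with nodes
# 0-2 labeled as the class of interest.
rng = np.random.default_rng(0)
z = rng.normal(size=(5, 2))
adj = (rng.random((5, 5)) < 0.3).astype(float)
loss = combined_loss(adj, z, [0, 1, 2], center=np.zeros(2), radius=1.0)
```

In an end-to-end model such as OLGA, a gradient of this combined loss would be backpropagated through the graph encoder, so the embeddings are shaped jointly by graph structure and the one-class objective.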