This paper makes two contributions to the field of text-based patent similarity. First, it compares the performance of different kinds of patent-specific pretrained embedding models, namely static word embeddings (such as word2vec and doc2vec models) and contextual word embeddings (such as transformer-based models), on the task of patent similarity calculation. Second, it compares the performance of Sentence Transformers (SBERT) architectures with different training phases on the patent similarity task. To assess the models' performance, we use information about patent interferences, a phenomenon in which two or more patent claims belonging to different patent applications are found by patent examiners to overlap. We therefore use these interference cases as a proxy for maximum similarity between two patents, treating them as ground truth to evaluate the performance of the different embedding models. Our results show, first, that Patent SBERT-adapt-ub, the domain-adapted pretrained Sentence Transformer architecture proposed in this research, outperforms the current state of the art in patent similarity. Second, they show that, in some cases, the performance of large static models remains comparable to that of contextual ones when trained on extensive data; we therefore believe that the performance advantage of contextual embeddings may stem not from the architecture itself but from the way the training phase is performed.
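The following is a minimal sketch, not the authors' code, of how patent similarity is typically computed with a Sentence Transformers model: both patent texts are encoded and compared via cosine similarity, and interference cases would be expected to score near the top of the scale. The checkpoint name and the two example texts are placeholders introduced purely for illustration, not the Patent SBERT-adapt-ub model or data from this study.

```python
# Minimal sketch of SBERT-based patent similarity (assumed setup, not the paper's pipeline).
from sentence_transformers import SentenceTransformer, util

# Hypothetical snippets standing in for two patent texts.
patent_a = "A method for encoding video streams using adaptive block partitioning."
patent_b = "An apparatus that encodes video by adaptively partitioning frames into blocks."

# Placeholder pretrained checkpoint; a domain-adapted patent model would be used in practice.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Encode both texts and score them with cosine similarity.
embeddings = model.encode([patent_a, patent_b], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Cosine similarity: {similarity:.3f}")
```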