The generic text preprocessing pipeline, comprising Tokenisation, Normalisation, Stop Words Removal, and Stemming/Lemmatisation, has been implemented in many systems for syntactic ontology matching (OM). However, the lack of standardisation in text preprocessing leads to divergent mapping results. In this paper, we investigate the effect of the text preprocessing pipeline on syntactic OM across 8 Ontology Alignment Evaluation Initiative (OAEI) tracks with 49 distinct alignments. We find that Phase 1 text preprocessing (Tokenisation and Normalisation) is currently more effective than Phase 2 text preprocessing (Stop Words Removal and Stemming/Lemmatisation). Phase 2 is less effective because it introduces unwanted false mappings; to repair this, we propose a novel context-based pipeline repair approach that employs an ad hoc check to identify the common words that cause false mappings. These words are stored in a reserved word set and consulted during text preprocessing. The experimental results show that our approach improves both matching correctness and overall matching performance. We also discuss integrating the classical text preprocessing pipeline with modern large language models (LLMs). We recommend injecting the text preprocessing pipeline into LLMs via function calling, which avoids the unstable true mappings produced by prompt-based LLM approaches, and using LLMs to repair the false mappings generated by the text preprocessing pipeline.
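The four-stage pipeline and the reserved-word repair idea can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the stop-word list, the naive suffix-stripping stand-in for a real stemmer, and the example reserved words (`left`, `right`) are all hypothetical choices made here for demonstration.

```python
import re

STOP_WORDS = {"of", "the", "a", "an", "and", "or", "in", "has"}
# Hypothetical reserved word set: common words that an ad hoc check
# found to cause false mappings, so they are exempted from Phase 2.
RESERVED_WORDS = {"left", "right"}

def tokenise(label):
    """Split an ontology label on camelCase, underscores, and spaces."""
    label = re.sub(r"([a-z])([A-Z])", r"\1 \2", label)
    return [t for t in re.split(r"[\s_]+", label.strip()) if t]

def normalise(tokens):
    """Lowercase all tokens (Phase 1 normalisation)."""
    return [t.lower() for t in tokens]

def remove_stop_words(tokens):
    """Drop stop words, but keep any word in the reserved set."""
    return [t for t in tokens if t in RESERVED_WORDS or t not in STOP_WORDS]

def stem(tokens):
    """Naive suffix stripping as a stand-in for a real stemmer;
    reserved words are passed through unchanged."""
    out = []
    for t in tokens:
        if t in RESERVED_WORDS:
            out.append(t)
        elif t.endswith("ies"):
            out.append(t[:-3] + "y")
        elif t.endswith("s") and not t.endswith("ss"):
            out.append(t[:-1])
        else:
            out.append(t)
    return out

def preprocess(label):
    """Phase 1 (tokenise, normalise) then Phase 2 (stop words, stemming)."""
    return stem(remove_stop_words(normalise(tokenise(label))))

print(preprocess("hasLeftKidneys"))  # → ['left', 'kidney']
```

Without the reserved set, "left" could be stripped or conflated by aggressive Phase 2 preprocessing, causing two distinct concepts to collapse into the same token sequence and match falsely; keeping it intact is the repair the abstract describes.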