We introduce SHADOW, a fine-tuned language model trained on an intermediate task using associative deductive reasoning, and measure its performance on a knowledge base construction task using Wikidata triple completion. We evaluate SHADOW on the LM-KBC 2024 challenge and show that it outperforms the baseline solution by 20% with an F1 score of 68.72%.