While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether analogical reasoning is itself a task that can be learned. In this paper, we test several ways to learn basic analogical reasoning, focusing specifically on analogies that are more typical of those used to evaluate analogical reasoning in humans than of those in commonly used NLP benchmarks. Our experiments show that models are able to learn analogical reasoning, even from a small amount of data. We additionally compare our models against a dataset with a human baseline, and find that after training, model performance approaches that of humans.