Inferring contextually-relevant and diverse commonsense to understand narratives remains challenging for knowledge models. In this work, we develop a series of knowledge models, DiffuCOMET, that leverage diffusion to learn to reconstruct the implicit semantic connections between narrative contexts and relevant commonsense knowledge. Across multiple diffusion steps, our method progressively refines a representation of commonsense facts that is anchored to a narrative, producing contextually-relevant and diverse commonsense inferences for an input context. To evaluate DiffuCOMET, we introduce new metrics for commonsense inference that more closely measure knowledge diversity and contextual relevance. Our results on two different benchmarks, ComFact and WebNLG+, show that knowledge generated by DiffuCOMET achieves a better trade-off between commonsense diversity, contextual relevance and alignment to known gold references, compared to baseline knowledge models.