Natural language generation, and method name prediction in particular, is difficult to evaluate on test data: a suitable metric must account for the many valid ways a single method can be named, with respect to both semantics and syntax. Measuring direct overlap between the predicted and reference (ground-truth) sequences cannot capture these subtleties, while existing embedding-based metrics either do not measure precision and recall or impose strict, unrealistic assumptions on both sequences. To address these issues, we propose a new metric that, on the one hand, is simple and lightweight, and, on the other hand, computes precision and recall without resorting to any such assumptions, while correlating well with human judgement.
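To make the idea of embedding-based precision and recall concrete, the following is a minimal sketch of one common soft-matching scheme (greedy cosine matching over token embeddings, in the style of BERTScore); it is an illustration of the general technique, not the paper's exact metric, and the function names are hypothetical.

```python
import numpy as np

def cosine_sim_matrix(pred_emb: np.ndarray, ref_emb: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities between predicted and reference token embeddings."""
    p = pred_emb / np.linalg.norm(pred_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    return p @ r.T

def embedding_precision_recall(pred_emb: np.ndarray, ref_emb: np.ndarray):
    """Soft precision/recall over token embeddings (a sketch, not the paper's metric).

    Precision: each predicted token is matched to its most similar reference token.
    Recall: each reference token is matched to its most similar predicted token.
    """
    sim = cosine_sim_matrix(pred_emb, ref_emb)
    precision = sim.max(axis=1).mean()  # best reference match per predicted token
    recall = sim.max(axis=0).mean()     # best predicted match per reference token
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

With identical predicted and reference embeddings, both precision and recall are 1; a prediction covering only part of the reference keeps precision high while recall drops, which is exactly the asymmetry that plain sequence overlap cannot express.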