Natural language generation, and method name prediction in particular, poses significant difficulties when proposed models are evaluated on test data: an adequate metric must account for the many ways a single method can be named, with respect to both semantics and syntax. Measuring direct overlap between the predicted and reference (ground-truth) sequences cannot capture these subtleties, and existing embedding-based metrics either do not measure precision and recall or impose strict, unrealistic assumptions on both sequences. To address these issues, we propose a new metric that, on the one hand, is simple and lightweight and, on the other hand, computes precision and recall without resorting to any such assumptions, while agreeing well with human judgment.
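The abstract does not spell out how embedding-based precision and recall are computed, so the following is only an illustrative sketch of one common formulation (greedy cosine matching between token embeddings, in the style of soft precision/recall metrics such as BERTScore). The function name `embedding_precision_recall` and the use of pre-computed embedding matrices are assumptions, not the paper's actual method.

```python
import numpy as np

def embedding_precision_recall(pred_emb, ref_emb):
    """Soft precision/recall between two token-embedding sets.

    pred_emb: (n, d) array, one row per predicted subtoken embedding.
    ref_emb:  (m, d) array, one row per reference subtoken embedding.
    Returns (precision, recall, f1).
    """
    # Normalize rows so the dot product becomes cosine similarity.
    p = pred_emb / np.linalg.norm(pred_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = p @ r.T  # (n, m) pairwise cosine similarities

    # Precision: how well each predicted token is covered by the reference.
    precision = sim.max(axis=1).mean()
    # Recall: how well each reference token is covered by the prediction.
    recall = sim.max(axis=0).mean()
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

With orthogonal one-hot "embeddings", a prediction that reproduces two of three reference tokens gets precision 1.0 and recall 2/3, matching the intuition that overlap-style metrics should reward partial matches in both directions.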