Large language models (LLMs) can be leveraged to help with writing formulas in spreadsheets, but resources on these formulas are scarce, which impacts both the base performance of pre-trained models and limits the ability to fine-tune them. Given a corpus of formulas, we can use a(nother) model to generate synthetic natural language (NL) utterances for fine-tuning. However, for fine-tuning on this data to be beneficial, it is important to validate that the NL generated by the LLM is indeed accurate. In this paper, we provide empirical results on the impact of validating these synthetic training examples with surrogate objectives that evaluate the accuracy of the synthetic annotations. We demonstrate that validation improves performance over raw data across four models (two open-weight and two closed-weight). Interestingly, we show that although validation tends to prune the more challenging examples, it increases the complexity of the problems that models can solve after being fine-tuned on validated data.