Large language models (LLMs) have demonstrated their prowess in generating synthetic text and images; however, their potential for generating tabular data -- arguably the most common data type in business and scientific applications -- remains largely underexplored. This paper demonstrates that LLMs, used as-is or after traditional fine-tuning, are severely inadequate as synthetic table generators. Due to the autoregressive nature of LLMs, fine-tuning with random column-order permutation works against modeling functional dependencies and leaves LLMs unable to model conditional mixtures of distributions (key to capturing real-world constraints). We show how LLMs can overcome some of these deficiencies by being made permutation-aware.
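To make the permutation issue concrete, here is a minimal sketch of how random order permutation typically enters tabular fine-tuning: each table row is serialized as text with its columns in a fresh random order, so the autoregressive model must learn to predict every column conditioned on arbitrary subsets of the others. The `serialize_row` helper and the example row are hypothetical illustrations, not the paper's implementation.

```python
import random

def serialize_row(row: dict, rng: random.Random) -> str:
    """Serialize one table row as text, shuffling column order per example.

    Under random order permutation, the same row yields many textual
    variants, forcing the autoregressive model to handle every possible
    conditioning order -- including orders that put a dependent column
    before the columns it functionally depends on.
    """
    cols = list(row.keys())
    rng.shuffle(cols)  # a new random column order for each training example
    return ", ".join(f"{c} is {row[c]}" for c in cols)

# Hypothetical row from a tabular dataset.
row = {"age": 42, "income": 55000, "city": "Berlin"}
rng = random.Random(0)
for _ in range(3):
    print(serialize_row(row, rng))
```

Each printed line presents the same row in a different column order; a permutation-aware approach would instead control which orders the model sees so that functional dependencies are modeled in a consistent direction.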