With the rise of large language models (LLMs) for flexibly processing information as strings, a natural application is regression, specifically by preprocessing string representations into LLM embeddings as downstream features for metric prediction. In this paper, we provide one of the first comprehensive investigations into embedding-based regression and demonstrate that LLM embeddings as features can be better for high-dimensional regression tasks than traditional feature engineering. This regression performance can be explained in part by LLM embeddings of numeric data inherently preserving Lipschitz continuity over the feature space. Furthermore, we quantify the contribution of different model effects, most notably model size and language understanding, which, surprisingly, we find do not always improve regression performance.