Large language models (LLMs) have shown remarkable success in language modelling, driven by scaling laws in model size and in the hidden dimension of the model's text representations. Yet we demonstrate that compressed representations of text can yield better performance in LLM-based regression tasks. In this paper, we compare the performance of embedding compression across three signal-to-noise contexts: financial return prediction, writing quality assessment, and review scoring. Our results show that compressing embeddings, in a minimally supervised manner using an autoencoder's hidden representation, can mitigate overfitting and improve performance on noisy tasks such as financial return prediction, but that compression reduces performance on tasks with strong causal dependencies between the input and target data. Our results suggest that the success of interpretable compressed representations, such as sentiment, may be due to a regularising effect.
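To make the setup concrete, the following is a minimal sketch of the compress-then-regress pipeline described above, assuming a single-bottleneck autoencoder trained purely on reconstruction and a ridge regressor fitted on the resulting codes; the dimensions, optimiser settings, and function names are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import Ridge

# Illustrative sizes (assumptions, not the paper's settings).
EMB_DIM, BOTTLENECK = 768, 32


class Autoencoder(nn.Module):
    """Single-bottleneck autoencoder; the hidden code is the compressed representation."""

    def __init__(self, d_in: int, d_code: int):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_code)
        self.decoder = nn.Linear(d_code, d_in)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(torch.relu(self.encoder(x)))


def compress_and_regress(X_train, y_train, X_test, epochs: int = 200):
    """Fit the autoencoder on embeddings alone (no regression targets),
    then regress the target on the bottleneck codes."""
    ae = Autoencoder(X_train.shape[1], BOTTLENECK)
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    X = torch.as_tensor(X_train, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(ae(X), X)  # reconstruction loss; labels never used here
        loss.backward()
        opt.step()
    with torch.no_grad():
        Z_train = torch.relu(ae.encoder(X)).numpy()
        Z_test = torch.relu(
            ae.encoder(torch.as_tensor(X_test, dtype=torch.float32))
        ).numpy()
    # Downstream regression on the compressed codes.
    reg = Ridge().fit(Z_train, y_train)
    return reg.predict(Z_test)
```

Note that the autoencoder never sees the regression targets, so the bottleneck acts purely as a capacity constraint on the features, which is consistent with the regularising effect the abstract attributes to compressed representations.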