In this study, the output of a large language model (LLM) is treated as an information source generating an unbounded sequence of symbols drawn from a finite alphabet. Given the probabilistic nature of modern LLMs, we adopt a probabilistic model in which the LLM samples from a fixed distribution, so that the source is stationary. We compare the per-word entropy of this source to that of natural language, written and spoken, as represented by the Open American National Corpus (OANC). Our results indicate that the word entropy of such LLMs is lower than that of natural language in both its written and spoken forms. The long-term goal of such studies is to formalize intuitions about information and uncertainty in language model training, in order to assess the impact of training an LLM on LLM-generated data, in particular texts from the World Wide Web.
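For reference, the per-word entropy compared here is the entropy rate of a stationary word source. A minimal statement of the standard definition, given under the stationarity assumption above (this formulation is ours, not quoted from the study), is

\[
H = \lim_{n \to \infty} \frac{1}{n}\, H(W_1, W_2, \dots, W_n),
\]

where $W_1, W_2, \dots$ is the word sequence over a finite vocabulary and $H(W_1, \dots, W_n)$ denotes the joint Shannon entropy of the first $n$ words; stationarity guarantees that the limit exists.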
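As an illustration only, the following is a hypothetical sketch, not the estimator used in this study: a plug-in unigram estimate of per-word entropy from a tokenized corpus. The function name and the toy input are our own.

import math
from collections import Counter

def unigram_entropy_bits_per_word(words):
    # Plug-in (maximum-likelihood) unigram estimate in bits per word.
    # The marginal unigram entropy upper-bounds the entropy rate, since
    # conditioning on context can only reduce entropy; dependencies
    # between words are deliberately ignored here.
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy usage; an actual comparison would tokenize OANC text and LLM output.
print(unigram_entropy_bits_per_word("the cat sat on the mat the cat".split()))

Such a plug-in estimate is also biased low on finite samples, which is one reason corpus-scale data such as the OANC matters for a meaningful comparison.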