Large language model (LLM) tokenizers act as structured compressors: by mapping text to discrete token sequences, they determine token count (and thus compute and context usage) and the statistical structure seen by downstream models. Despite their central role in LLM pipelines, the link between tokenization, compression efficiency, and induced structure is not well understood. We empirically demonstrate that tokenizer training scale redistributes entropy: as training data grows, the token stream becomes more diverse in aggregate (higher unigram entropy) yet markedly more predictable in-context (lower higher-order conditional entropies), indicating that tokenization absorbs substantial short-range regularity, although these gains degrade under train-test domain mismatch. To ground these observations, we first benchmark (i) pretrained GPT-family tokenizers as black-box compressors across various domains, and (ii) learned tokenizers across configurations spanning vocabulary size, training scale, and domain. Next, we study tokenization as a transform for universal compression and introduce a compression-aware BPE variant. Finally, we adopt a channel lens and introduce capacity-utilization metrics to analyze tokenizer behaviour and outline implications for downstream modeling. Taken together, our results expose trade-offs between compression, induced structure, and robustness under domain shift, and motivate principled, compression-aware tokenizer design.
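The entropy redistribution described above can be made concrete with empirical estimates over a token stream. The sketch below (an illustrative toy, not the paper's measurement pipeline) computes unigram entropy and a first-order conditional entropy from plug-in frequency counts; the example stream and function names are our own assumptions.

```python
import math
from collections import Counter

def unigram_entropy(tokens):
    """Shannon entropy (bits) of the empirical unigram distribution."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def conditional_entropy(tokens, order=1):
    """Plug-in estimate of H(X_t | X_{t-order}, ..., X_{t-1}):
    entropy of the next token given its preceding `order`-gram context."""
    ctx_counts = Counter()
    joint_counts = Counter()
    for i in range(order, len(tokens)):
        ctx = tuple(tokens[i - order:i])
        ctx_counts[ctx] += 1
        joint_counts[(ctx, tokens[i])] += 1
    total = sum(ctx_counts.values())
    h = 0.0
    for (ctx, _), jc in joint_counts.items():
        p_joint = jc / total          # P(context, next token)
        p_cond = jc / ctx_counts[ctx]  # P(next token | context)
        h -= p_joint * math.log2(p_cond)
    return h

# A stream with diverse unigrams but strong short-range regularity:
# unigram entropy is maximal, yet the next token is fully determined.
stream = ["a", "b", "a", "b", "a", "b", "a", "b"]
print(unigram_entropy(stream))         # 1.0 bit
print(conditional_entropy(stream, 1))  # 0.0 bits
```

On this toy stream the gap between the two estimates is the "absorbed" short-range regularity; the abstract's claim is that larger tokenizer training scale widens this gap on in-domain text while the conditional-entropy reduction weakens under domain shift.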