Recent advances in large language models (LLMs) have significantly enhanced their reasoning capabilities. However, LLMs continue to struggle with basic character-level tasks, such as counting letters in words, a weakness rooted in their tokenization process. While existing benchmarks have exposed this weakness through basic character operations, such failures are often dismissed as lacking practical relevance. Yet many real-world applications, such as navigating text-based maps or interpreting structured tables, rely heavily on precise sub-token understanding. To this end, we introduce SubTokenTest, a comprehensive benchmark that assesses sub-token understanding through practical, utility-driven tasks. Our benchmark comprises ten tasks across four domains and isolates tokenization-related failures by decoupling performance from complex reasoning. We conduct an extensive evaluation of nine advanced LLMs. Additionally, we investigate the impact of test-time scaling on sub-token reasoning and explore how character-level information is encoded within the models' hidden states.
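To make the tokenization issue concrete, the following minimal sketch shows how a subword tokenizer obscures character identity. It assumes the `tiktoken` library and the `cl100k_base` encoding, neither of which is specified in the paper; the point is only that the model receives opaque token ids rather than letters, so a letter can straddle piece boundaries.

```python
# Minimal illustration (assumed setup: tiktoken with the cl100k_base
# encoding; not part of the paper's method) of why character-level
# tasks are hard: the model sees token ids, not individual letters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["strawberry", "tokenization"]:
    ids = enc.encode(word)
    # Decode each token id back to its surface string to show the split.
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in ids]
    # Words are typically split into several subword pieces, so counting
    # a letter (e.g. the 'r's in "strawberry") requires sub-token access.
    print(f"{word!r} -> {pieces}; {word.count('r')} occurrence(s) of 'r'")
```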