Counting should not depend on what is being counted; more generally, any algorithm's behavior should be invariant to the semantic content of its arguments. We introduce WhatCounts to test this property in isolation. Unlike prior work that conflates semantic sensitivity with reasoning complexity or prompt variation, WhatCounts is atomic: count the items of a given semantic type in an unambiguous, delimited list with no duplicates, distractors, or required reasoning steps. Frontier LLMs show over 40% accuracy variation depending solely on what is being counted: cities versus chemicals, names versus symbols. Controlled ablations rule out confounds. The gap is semantic, and it shifts unpredictably after small amounts of unrelated fine-tuning. LLMs do not implement algorithms; they approximate them, and the approximation is argument-dependent. As we show with an agentic example, this has implications beyond counting: any LLM function may carry hidden dependencies on the meaning of its inputs.
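To make the task design concrete, the following is a minimal sketch of how an atomic counting probe of this kind could be constructed. The item pools, function name, and prompt wording are illustrative assumptions, not the benchmark's actual materials: the point is only that the list is delimited, duplicate-free, distractor-free, and varies solely in the semantic type of its items.

```python
import random

# Hypothetical item pools for two semantic types (illustrative only,
# not the benchmark's actual lists).
POOLS = {
    "cities": ["Paris", "Lagos", "Osaka", "Lima", "Oslo",
               "Quito", "Cairo", "Perth"],
    "chemicals": ["benzene", "ethanol", "acetone", "toluene",
                  "glycerol", "xylene", "phenol", "hexane"],
}

def make_prompt(semantic_type: str, n: int, seed: int = 0):
    """Build one atomic counting prompt: an unambiguous, comma-delimited
    list with no duplicates or distractors. The gold answer is simply n;
    only the semantic type of the items changes between conditions."""
    rng = random.Random(seed)
    # Sampling without replacement guarantees no duplicates.
    items = rng.sample(POOLS[semantic_type], n)
    prompt = (f"How many items are in this list? "
              f"List: {', '.join(items)}. Answer with a single number.")
    return prompt, n
```

Because every prompt pair is matched on length, delimiter, and instruction, any accuracy difference between the `cities` and `chemicals` conditions can only be attributed to the semantic content of the list items.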