Integrating Chain-of-Thought (CoT) reasoning into Semantic ID-based recommendation foundation models (such as OpenOneRec) often paradoxically degrades recommendation performance. We identify the root cause as textual inertia from the General Subspace: verbose natural-language reasoning dominates decoding and causes the model to neglect the critical Semantic IDs. To address this, we propose a training-free Inference-Time Subspace Alignment framework. By compressing reasoning chains and applying bias-subtracted contrastive decoding, our approach mitigates ungrounded textual drift. Experiments show that this effectively calibrates inference, allowing foundation models to leverage reasoning without sacrificing ID-grounded accuracy.
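To make the decoding step concrete, below is a minimal sketch of what bias-subtracted contrastive decoding over Semantic ID tokens could look like. All names here (`contrastive_sid_logits`, `alpha`, `semantic_id_mask`) are illustrative assumptions, not the paper's actual API: it contrasts CoT-conditioned logits against a reasoning-free pass that stands in for the textual bias, then restricts decoding to the Semantic ID vocabulary.

```python
# Hypothetical sketch of bias-subtracted contrastive decoding over Semantic
# ID tokens; names and the toy setup are assumptions for illustration only.
import torch

def contrastive_sid_logits(
    cot_logits: torch.Tensor,        # logits conditioned on the (compressed) reasoning chain
    plain_logits: torch.Tensor,      # logits from a reasoning-free pass (the textual bias)
    semantic_id_mask: torch.Tensor,  # bool mask: True for Semantic ID vocabulary entries
    alpha: float = 0.5,              # strength of the bias subtraction
) -> torch.Tensor:
    """Subtract the reasoning-free distribution from the CoT-conditioned one,
    then restrict decoding to Semantic ID tokens."""
    adjusted = cot_logits - alpha * plain_logits
    # Mask out non-ID tokens so verbose text cannot dominate decoding.
    return adjusted.masked_fill(~semantic_id_mask, float("-inf"))

# Toy usage: a 12-token vocabulary whose last 4 entries are Semantic IDs.
torch.manual_seed(0)
vocab_size = 12
cot = torch.randn(vocab_size)
plain = torch.randn(vocab_size)
mask = torch.zeros(vocab_size, dtype=torch.bool)
mask[-4:] = True
next_token = contrastive_sid_logits(cot, plain, mask).argmax().item()
print(f"decoded Semantic ID token index: {next_token}")
```

The subtraction penalizes tokens whose probability is driven by the generic textual prior rather than by the reasoning chain, which is one plausible reading of how the framework keeps decoding ID-grounded.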