Large Reasoning Models (LRMs) have emerged as a powerful advancement in multi-step reasoning tasks, offering enhanced transparency and logical consistency through explicit chains of thought (CoT). However, these models introduce novel safety and reliability risks, such as CoT-hijacking and prompt-induced inefficiency, which existing evaluation methods do not fully capture. To address this gap, we propose RT-LRM, a unified benchmark designed to assess the trustworthiness of LRMs. RT-LRM evaluates three core dimensions: truthfulness, safety, and efficiency. Beyond metric-based evaluation, we further introduce the training paradigm as a key analytical perspective for investigating the systematic impact of different training strategies on model trustworthiness. To this end, we design a curated suite of 30 reasoning tasks from an observational standpoint. We conduct extensive experiments on 26 models and derive several valuable insights into the trustworthiness of LRMs. For example, LRMs generally face trustworthiness challenges and tend to be more fragile than Large Language Models (LLMs) when encountering reasoning-induced risks. These findings uncover previously underexplored vulnerabilities and highlight the need for more targeted evaluations. In addition, we release a scalable toolbox for standardized trustworthiness research to support future advancements in this important field. Our code and datasets will be open-sourced.