Large Language Models (LLMs) often produce explanations that do not faithfully reflect the factors driving their predictions. In healthcare settings, such unfaithfulness is especially problematic: explanations that omit salient clinical cues or mask spurious shortcuts can undermine clinician trust and lead to unsafe decision support. We study how inference- and training-time choices shape explanation faithfulness, focusing on factors practitioners can control at deployment. We evaluate three LLMs (GPT-4.1-mini, LLaMA 70B, LLaMA 8B) on two datasets, BBQ (social bias) and MedQA (medical licensing questions), and manipulate the number and type of few-shot examples, the prompting strategy, and the training procedure. Our results show that (i) both the quantity and quality of few-shot examples significantly affect model faithfulness; (ii) faithfulness is sensitive to prompt design; and (iii) the instruction-tuning phase improves measured faithfulness on MedQA. These findings offer insights into strategies for enhancing the interpretability and trustworthiness of LLMs in sensitive domains.