Backdoors are hidden behaviors that are triggered only once an AI system has been deployed. Bad actors looking to create successful backdoors must design them to avoid activation during training and evaluation. Since the data used in these stages often contains only information about events that have already occurred, one component of a simple backdoor trigger could be the model recognizing that its input lies in the future relative to when it was trained. Through prompting experiments and by probing internal activations, we show that current large language models (LLMs) can distinguish past from future events, with probes on model activations achieving $90\%$ accuracy. We train models with backdoors triggered by a temporal distributional shift; they activate when the model is exposed to news headlines beyond their training cut-off dates. Fine-tuning on helpful, harmless, and honest (HHH) data does not work well for removing simpler backdoor triggers but is effective on our backdoored models, although this distinction is smaller for the larger-scale model we tested. We also find that an activation-steering vector representing a model's internal representation of the date influences the rate of backdoor activation. We take these results as initial evidence that, at least for models at the modest scale we test, standard safety measures are enough to remove these backdoors. We publicly release all relevant code (https://github.com/sbp354/Future_triggered_backdoors), datasets (https://tinyurl.com/future-backdoor-datasets), and models (https://huggingface.co/saraprice).
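The linear-probe result above can be illustrated with a minimal sketch. This is not the paper's actual pipeline: it substitutes synthetic vectors for real residual-stream activations (which would come from a forward pass over past- and future-dated headlines) and uses an assumed separation along a single "temporal" direction, fitting a standard logistic-regression probe on top.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d = 64  # toy hidden size; real LLM activations are far wider

# Stand-ins for layer activations on "past" vs "future" headlines,
# assumed to differ along one direction (a simplifying assumption).
direction = rng.normal(size=d)
past_acts = rng.normal(size=(200, d)) - 0.5 * direction
future_acts = rng.normal(size=(200, d)) + 0.5 * direction

X = np.vstack([past_acts, future_acts])
y = np.array([0] * 200 + [1] * 200)  # 0 = past, 1 = future
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Linear probe: logistic regression on the (mock) activations.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
```

With real activations, the same probe fit would replace the synthetic `past_acts`/`future_acts` arrays; the reported $90\%$ accuracy refers to probes trained on actual model activations, not this toy setup.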