Protein language models (PLMs) are becoming practical tools for de novo protein design, yet their dual-use potential raises safety concerns. We show that domain adaptation to specific taxonomic groups can elicit toxic protein generation, even when toxicity is not the training objective. To address this, we adapt Logit Diff Amplification (LDA) as an inference-time control mechanism for PLMs. LDA modifies token probabilities by amplifying the logit difference between a baseline model and a toxicity-finetuned model, requiring no retraining. Across four taxonomic groups, LDA consistently reduces predicted toxicity rate (measured via ToxDL2) below the taxon-finetuned baseline while preserving biological plausibility. We evaluate quality using Fréchet ESM Distance and predicted foldability (pLDDT), finding that LDA maintains distributional similarity to natural proteins and structural viability (unlike activation-based steering methods that tend to degrade sequence properties). Our results demonstrate that LDA provides a practical safety knob for protein generators that mitigates elicited toxicity while retaining generative quality.
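The core LDA operation described above can be sketched at a single decoding step. This is a minimal illustration, not the paper's implementation: the function name, the sign convention (amplifying the baseline-minus-finetuned gap to steer *away* from the finetuned behavior), and the amplification strength `alpha` are all assumptions for exposition.

```python
import numpy as np

def lda_logits(base_logits, tox_logits, alpha):
    """Hypothetical sketch of Logit Diff Amplification (LDA).

    Amplifies the logit gap between the baseline model and the
    toxicity-finetuned model so that tokens the finetuned model
    upweights are pushed down, steering generation away from the
    elicited (toxic) behavior. No retraining is required: only the
    two models' per-step logits are combined at inference time.
    """
    return base_logits + alpha * (base_logits - tox_logits)

# Toy 4-token vocabulary: the finetuned model upweights token 2.
base = np.array([2.0, 1.0, 0.5, -1.0])
tox = np.array([2.5, 0.5, 1.5, -1.0])

steered = lda_logits(base, tox, alpha=1.0)  # -> [1.5, 1.5, -0.5, -1.0]

# Convert steered logits to a sampling distribution (stable softmax).
probs = np.exp(steered - steered.max())
probs /= probs.sum()
```

Note that token 2, which the toxicity-finetuned model favors relative to the baseline, ends up with a lower steered logit than it had under the baseline alone; `alpha` acts as the "safety knob" the abstract refers to, trading off suppression strength against distributional drift from the base model.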