When querying a large language model (LLM), the context, i.e., personal, demographic, and cultural information specific to an end-user, can significantly shape the response of the LLM. For example, asking the model to explain Newton's second law with the context "I am a toddler" yields a different answer compared to the context "I am a physics professor." Proper usage of the context enables the LLM to generate personalized responses, whereas inappropriate contextual influence can lead to stereotypical and potentially harmful generations (e.g., associating "female" with "housekeeper"). In practice, striking the right balance when leveraging context is a nuanced and challenging problem that is often situation-dependent. One common approach to addressing this challenge is to fine-tune LLMs on contextually appropriate responses. However, this approach is expensive, time-consuming, and not controllable by end-users across different situations. In this work, we propose Context Steering (CoS), a simple training-free method that can be easily applied to autoregressive LLMs at inference time. By measuring the contextual influence in terms of token prediction likelihood and modulating it, our method enables practitioners to determine the appropriate level of contextual influence based on their specific use case and end-user base. We showcase a variety of applications of CoS, including amplifying the contextual influence to achieve better personalization and mitigating unwanted influence to reduce model bias. In addition, we show that CoS can be combined with Bayesian inference to quantify the extent of hate speech on the internet. We demonstrate the effectiveness of CoS on state-of-the-art LLMs and benchmarks.
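The modulation of contextual influence described above can be illustrated as a contrast between next-token logits computed with and without the user context. The sketch below is a minimal illustration under that assumption: the function names, the scalar `lam`, and the toy logit values are all hypothetical and not the paper's exact formulation.

```python
import math

def softmax(logits):
    """Convert logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def context_steer(logits_with_ctx, logits_without_ctx, lam):
    """Modulate contextual influence on next-token logits.

    lam = 0 ignores the context, lam = 1 recovers the ordinary
    contextual prediction, lam > 1 amplifies the context (stronger
    personalization), and lam < 0 pushes away from it (one way to
    mitigate unwanted influence).
    """
    return [
        base + lam * (ctx - base)
        for ctx, base in zip(logits_with_ctx, logits_without_ctx)
    ]

# Toy example over a 3-token vocabulary (values are made up):
base = [2.0, 1.0, 0.5]      # logits for the prompt alone
with_ctx = [0.5, 1.0, 3.0]  # logits for context + prompt

amplified = context_steer(with_ctx, base, lam=2.0)
probs = softmax(amplified)  # steered next-token distribution
```

Because the steering happens purely on inference-time logits, no gradient updates or fine-tuning are needed, and `lam` can be set per query by the practitioner.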