Large Language Models (LLMs) have exhibited remarkable proficiency in comprehending and generating natural language. Meanwhile, personalized LLM response generation holds the potential to offer substantial benefits to individuals in critical areas such as medicine. Existing research has explored memory-augmented methods that prompt the LLM with pre-stored, user-specific knowledge to generate personalized responses to new queries. We contend that such a paradigm is unable to perceive fine-grained information. In this study, we propose a novel \textbf{M}emory-\textbf{i}njected approach that uses parameter-efficient fine-tuning (PEFT) together with a Bayesian Optimisation search strategy to achieve \textbf{L}LM \textbf{P}ersonalization (\textbf{MiLP}).