Modern machine learning models are deployed in diverse, non-stationary environments where they must continually adapt to new tasks and evolving knowledge. Continual fine-tuning and in-context learning are costly and brittle, whereas neural memory methods promise lightweight updates with minimal forgetting. However, existing neural memory models typically assume a single fixed objective and homogeneous information streams, leaving users with no control over what the model remembers or ignores over time. To address this limitation, we propose a generalized neural memory system that performs flexible updates guided by learning instructions specified in natural language. Our approach enables adaptive agents to learn selectively from heterogeneous information sources, supporting settings such as healthcare and customer service, where fixed-objective memory updates are insufficient.
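To make the idea of instruction-conditioned memory updates concrete, the following is a minimal, hypothetical sketch. The class name `InstructionMemory` and the keyword-overlap relevance score are illustrative stand-ins for the learned components the abstract describes; a real system would use a trained model to score relevance against the instruction.

```python
# Hypothetical sketch: a memory that decides, per incoming item, whether
# to store or ignore it based on a natural-language learning instruction.
# The keyword-overlap score is a toy proxy for a learned relevance model.

class InstructionMemory:
    def __init__(self, instruction: str, threshold: float = 0.2):
        self.instruction_terms = set(instruction.lower().split())
        self.threshold = threshold
        self.store: list[str] = []

    def _relevance(self, item: str) -> float:
        # Fraction of the item's terms that overlap the instruction.
        terms = set(item.lower().split())
        if not terms:
            return 0.0
        return len(terms & self.instruction_terms) / len(terms)

    def update(self, item: str) -> bool:
        # Write the item to memory only if it matches the instruction.
        if self._relevance(item) >= self.threshold:
            self.store.append(item)
            return True
        return False


mem = InstructionMemory("remember patient allergy and medication details")
mem.update("patient reports a penicillin allergy")      # stored
mem.update("the waiting room was repainted last week")  # ignored
```

The point of the sketch is the control flow, not the scoring function: the same stream of items yields different memory contents under different instructions, which is what distinguishes this setting from fixed-objective memory updates.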