Modern knowledge workplaces increasingly strain human episodic memory as individuals navigate fragmented attention, overlapping meetings, and multimodal information streams. Existing workplace tools provide partial support through note-taking or analytics but rarely integrate cognitive, physiological, and attentional context into retrievable memory representations. This paper presents the Cognitive Prosthetic Multimodal System (CPMS), an AI-enabled proof-of-concept designed to support episodic recall in knowledge work through structured episodic capture and natural language retrieval. CPMS synchronizes speech transcripts, physiological signals, and gaze behavior into temporally aligned, JSON-based episodic records processed locally for privacy. Beyond data logging, the system includes a web-based retrieval interface that allows users to query past workplace experiences in natural language, referencing semantic content, time, attentional focus, or physiological state. We present CPMS as a functional proof-of-concept demonstrating the technical feasibility of transforming heterogeneous sensor data into queryable episodic memories. The system is designed to be modular, supporting operation with partial sensor configurations, and incorporates privacy safeguards for workplace deployment. This work contributes an end-to-end, privacy-aware architecture for AI-enabled memory augmentation in workplace settings.
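To make the data model concrete, the sketch below shows what one temporally aligned, JSON-based episodic record of the kind described above might look like. The field names (`episode_id`, `speech`, `gaze`, `physiology`) are illustrative assumptions for this sketch, not the system's actual schema.

```python
import json
from datetime import datetime, timezone

def build_episode(transcript, gaze_target, heart_rate_bpm, start, end):
    """Assemble one hypothetical episodic record that aligns speech,
    gaze, and physiological context on a shared time window.
    All field names are illustrative, not CPMS's real schema."""
    return {
        "episode_id": f"ep-{int(start.timestamp())}",
        "time": {"start": start.isoformat(), "end": end.isoformat()},
        "speech": {"transcript": transcript},
        "gaze": {"primary_focus": gaze_target},
        "physiology": {"heart_rate_bpm": heart_rate_bpm},
    }

# Example: a 30-minute meeting episode, serialized for local storage.
start = datetime(2024, 5, 6, 14, 0, tzinfo=timezone.utc)
end = datetime(2024, 5, 6, 14, 30, tzinfo=timezone.utc)
episode = build_episode(
    "Sprint planning: agreed to defer the API migration.",
    "shared whiteboard", 78, start, end,
)
record_json = json.dumps(episode, indent=2)  # processed and kept on-device
```

Keeping each episode as a self-contained JSON object supports the modularity the abstract claims: a missing sensor simply leaves its sub-object out, and a natural-language query layer can match against transcript text, time bounds, or physiological fields independently.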