The Personal AI landscape is currently dominated by "Black Box" Retrieval-Augmented Generation. While standard vector databases offer statistical matching, they suffer from a fundamental lack of accountability: when an AI hallucinates or retrieves sensitive data, the user can neither inspect the cause nor correct the error. Worse, "deleting" a concept from a vector space is mathematically imprecise, leaving behind probabilistic "ghosts" that violate true privacy. We propose Ruva, the first "Glass Box" architecture designed for Human-in-the-Loop Memory Curation. Ruva grounds Personal AI in a Personal Knowledge Graph, enabling users to inspect what the AI knows and to perform precise redaction of specific facts. By shifting the paradigm from Vector Matching to Graph Reasoning, Ruva ensures the "Right to be Forgotten." Users are the editors of their own lives; Ruva hands them the pen. The project and the demo video are available at http://sisinf00.poliba.it/ruva/.
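The inspection-and-redaction workflow the abstract describes can be sketched as follows. This is a minimal illustration, not Ruva's actual API: the class and method names are hypothetical, and a knowledge graph is reduced to a set of subject-predicate-object triples to show why graph-level deletion is exact, unlike approximate removal from a vector index.

```python
# Minimal sketch (illustrative, not the Ruva implementation): a personal
# knowledge graph stored as subject-predicate-object triples, where
# redacting a fact removes it exactly.
class PersonalKnowledgeGraph:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def inspect(self, subject):
        # "Glass Box": the user can list every fact stored about a subject.
        return {t for t in self.triples if t[0] == subject}

    def redact(self, subject, predicate, obj):
        # Precise redaction: the fact is removed entirely, leaving no
        # probabilistic "ghost" behind, unlike deletion in a vector space.
        self.triples.discard((subject, predicate, obj))


kg = PersonalKnowledgeGraph()
kg.add("user", "works_at", "Acme Corp")          # hypothetical example facts
kg.add("user", "allergic_to", "penicillin")
kg.redact("user", "works_at", "Acme Corp")
assert kg.inspect("user") == {("user", "allergic_to", "penicillin")}
```

The contrast with vector stores is that an embedding index supports only nearest-neighbor lookup, so "forgetting" means removing points whose residual influence may persist in learned representations; a triple store supports exact membership tests, so redaction is a set operation with a verifiable result.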