Personal AI assistants (e.g., Apple Intelligence, Meta AI) offer proactive recommendations that simplify everyday tasks, but their reliance on sensitive user data raises concerns about privacy and trust. To address these challenges, we introduce the Guardian of Data (GOD), a secure, privacy-preserving framework for training and evaluating AI assistants directly on-device. Unlike traditional benchmarks, the GOD model measures how well assistants can anticipate user needs (such as suggesting gifts) while protecting user data and autonomy. Functioning like an AI school, it addresses the cold-start problem by simulating user queries and employing a curriculum-based approach to refine the performance of each assistant. Running within a Trusted Execution Environment (TEE), it safeguards user data while applying reinforcement and imitation learning to improve AI recommendations. A token-based incentive system encourages users to share data securely, creating a data flywheel that drives continuous improvement. Specifically, users mine with their data, and the mining rate is determined by GOD's evaluation of how well their AI assistant understands them across categories such as shopping, social interactions, productivity, trading, and Web3. By integrating privacy, personalization, and trust, the GOD model provides a scalable, responsible path for advancing personal AI assistants. For community collaboration, part of the framework is open-sourced at https://github.com/PIN-AI/God-Model.
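The mining-rate mechanism described above can be sketched as a simple scoring function: the token-emission rate scales with the per-category evaluation scores the GOD model assigns to an assistant. This is a minimal illustrative sketch only; the function name, the uniform weighting, and the `[0, 1]` score normalization are assumptions, not the paper's actual specification.

```python
# Hypothetical sketch: mining rate as a function of per-category
# evaluation scores (shopping, social, productivity, trading, Web3).
# Weighting and normalization are illustrative assumptions.

CATEGORIES = ["shopping", "social", "productivity", "trading", "web3"]

def mining_rate(scores: dict, base_rate: float = 1.0) -> float:
    """Scale a base token-emission rate by the mean evaluation score.

    scores: per-category understanding scores in [0, 1], as assumed to be
    produced by the GOD evaluation; missing categories count as 0.
    """
    covered = [scores.get(c, 0.0) for c in CATEGORIES]
    mean_score = sum(covered) / len(CATEGORIES)
    return base_rate * mean_score

# Example: an assistant scoring well on shopping and productivity but
# weakly on trading earns a proportionally reduced mining rate.
rate = mining_rate({"shopping": 0.8, "social": 0.6, "productivity": 0.9,
                    "trading": 0.5, "web3": 0.7})
```

Here the mean score is 0.7, so tokens are emitted at 70% of the base rate; a richer design might weight categories by data volume or recency.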