Personal AI assistants (e.g., Apple Intelligence, Meta AI) offer proactive recommendations that simplify everyday tasks, but their reliance on sensitive user data raises concerns about privacy and trust. To address these challenges, we introduce the Guardian of Data (GOD), a secure, privacy-preserving framework for training and evaluating AI assistants directly on-device. Unlike traditional benchmarks, the GOD model measures how well assistants can anticipate user needs, such as suggesting gifts, while protecting user data and autonomy. Functioning like an AI school, it addresses the cold-start problem by simulating user queries and employing a curriculum-based approach to refine the performance of each assistant. Running within a Trusted Execution Environment (TEE), it safeguards user data while applying reinforcement and imitation learning to improve AI recommendations. A token-based incentive system encourages users to share data securely, creating a data flywheel that drives continuous improvement. By integrating privacy, personalization, and trust, the GOD model provides a scalable, responsible path for advancing personal AI assistants. For community collaboration, part of the framework is open-sourced at https://github.com/PIN-AI/God-Model.